Differences between xBase and SQL implementations
Message
To: Walter Meester, Hoogkarspel, Netherlands
Date: 21/12/1999 01:56:46
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID: 00305631
Message ID: 00306584
Views: 76
Walter,

I'm not going down the endless debate route here with you. My Christmas vacation has started and my main objective for the next week is to enjoy the time with my daughter, family and friends.

I'm sorry I shouted, but once I saw why the SQL results were so biased I had to point it out. Benchmarking code is a tricky task at best, and you have to remove as much of the ancillary code as possible. The ? command takes a large amount of time because it converts from internal to ASCII representation, which is historically one of the slowest things to accomplish. This particular test probably wasn't hurt too badly because it was all string data anyway.
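To keep the comparison honest, the elapsed time can be captured in a variable and only displayed after the measured block has finished. A minimal sketch (the variable names and the placeholder for the code under test are mine, not the code from the earlier post):

* Time the block without any ? output inside it.
LOCAL lnStart, lnElapsed
lnStart = SECONDS()

* ... the code being benchmarked goes here, with no ? commands ...

lnElapsed = SECONDS() - lnStart

* Report the timing once, after the measurement is done.
? "Elapsed:", lnElapsed, "seconds"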

If the SQL generates a copy of the data in 1/3 the time of the xBase, I have plenty of time left over to work with it further. I only did the INSERT to better equate the amount of work going on between the two things being compared. The fields are all there for the SQL too, just add them in; I'd be quite surprised if adding more fields to both makes the SQL slower.
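For reference, the general shape of the two variants being compared is roughly this (a sketch only: it assumes the testdata orders table is available, and the shipped_on filter and the cursor names crSql and crXbase are placeholders, not the exact code from the earlier post):

* SQL variant: one statement builds the result cursor.
SELECT * ;
   FROM orders ;
   WHERE NOT EMPTY(shipped_on) ;
   INTO CURSOR crSql

* xBase variant: build an empty cursor with the same structure,
* then copy the matching rows with an explicit loop and INSERT.
SELECT orders
LOCAL ARRAY laFields[1]
=AFIELDS(laFields)
CREATE CURSOR crXbase FROM ARRAY laFields
SELECT orders
SCAN FOR NOT EMPTY(shipped_on)
   SCATTER MEMVAR
   INSERT INTO crXbase FROM MEMVAR
ENDSCAN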

You also don't have to have all the indexes, or the DELETED() index. The initial tests, before I added the indexes, still had the SQL slightly outperforming the xBase with only partial optimization and with the output diagnostics of SYS(3054,11) turned on. I didn't re-run the tests without the additional indexes because the SQL was already faster; the additional indexes just made it as fast as possible to bound the test case.
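If anyone wants to repeat that, the diagnostic switch and the optional tag look roughly like this (a sketch; the query shown is only a placeholder and the tag name is just the usual convention):

* Show how Rushmore optimizes filters and joins while the query runs.
=SYS(3054, 11)
SELECT * FROM orders WHERE NOT EMPTY(shipped_on) INTO CURSOR crCheck
=SYS(3054, 0)    && diagnostics off again

* Optional tag that lets Rushmore optimize the implicit DELETED()
* filter while SET DELETED is ON.
SELECT orders
INDEX ON DELETED() TAG deleted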

You are quite free to continue using interpreted xBase commands in loops and relational links to move file pointers if it suits your purposes. You may also retest the code I posted in your network environment to see if the results are similar. I don't have your environment handy to do the test.

Me, well, I'll continue using SQL that uses optimized C++ code wherever it is the fastest solution to the task at hand.


>I really would appreciate it if you didn't shout. First of all, I did clearly mention that there is a lot to say about this test, and I did provide some means for all folks on the UT to test this, so NO, my test isn't worthless; see later on...
>
>>Item 1) A high fraction of the time spent is CPU time actually spent executing the ? command, not on the code you are really trying to benchmark.
>
>Nope, it isn't. Did you try this test by running it through a network where others have the table open? How then would you explain the measurements I've got?
>
>>Item 2) Your SELECT * is creating a result table that is 49 columns wide instead of the 5 fields your scan loop processes. So yes, you are asking the SQL engine to do one hell of a lot more work than you are asking the xBase DML to do.
>
>You're free to change the SQL statement. It may be unfair that I included all columns in the SQL SELECT statement, but they're all available within the xBase approach. Please change the SQL statement and try it again.
>
>>Item 3) Rushmore can't fully optimize the query because of missing indexes.
>
>This is another story; as we know, we shouldn't use more indexes than we actually need (the DELETED() tag thing), maybe gaining performance in one area but losing it in another. I've decided not to change a thing in the original tables. In my case I use fairly large queries constructed at runtime, where it would be a real pain to index all possible fields I want to query.
>
>And yet, this is an area where the VFP team could make some improvements, by making SEEKs in combination with FILTERs Rushmore optimizable, so that the same optimization method would be used in the xBase variant.
>
>
>Here you missed the point others have also missed (I know, I should have made this clear when I wrote this message): I don't want to create a copy of the data, I want to skip that part and just use the data from its data source itself. Here you create a SQL query with xBase commands, which is NOT the idea.
>
>For example: like I said, I use time-consuming SQL queries for a lot of reporting, either in Crystal Reports or the native report writer. In the case of the native report writer I don't have to put the data into a cursor; I can report from the original data source right away.
>
>As for Crystal Reports, I've got to write it into FoxPro 2.x format. SQL is not going to help me here either: first SQL writes its result into a VFP cursor, then I've got to write that into an FP 2.x table. If you look at the original test you'll see the similarities here.
>
>>You'll find that the SQL is 3.63 times faster than the xBase code! On my P2-300 notebook the averages over 3 runs were 0.145 seconds for the xBase and 0.040 seconds for the SQL.
>
>As I admitted, there is a lot to say about this. But let's look into this matter. First, I don't know if you rebooted your computer between each test. This is necessary to create reliable results in the case where a query is only executed once. The SQL SELECT command creates its own indexes and stores this and (maybe) other information at the OS level, so every time the query is executed again it *could* reuse this information, speeding up performance. The picture will be different in a multi-user environment, where the SQL optimizer could not (always?) use this information, as data could be changed by other users or by the user itself.
>
>Another matter is that the SQL query uses many more resources (the optimizing process, possibly creating intermediate results, and storing the results). Depending on your computer hardware this could be more or less of a problem. I've got many clients running an NT workstation with 32 MB (government).
>
>On an NT network, in this test the SQL variant has an advantage over the xBase variant because the NT server keeps the files open for a while (depending on registry settings) even after the client has closed them; when the client opens them again shortly afterwards, it gets a performance boost because the server doesn't have to open them again.
>
>>I added indexes on:
>>SELECT products
>>INDEX ON discontinu TAG discontinu
>>SELECT orders
>>INDEX ON shipped_on TAG shipped_on
>>and DELETED() on all tables to get full optimization. Even without fully optimizable queries the SQL is going to be faster.
>
>I thought we had skipped the DELETED() tag thing. Especially when there are no deleted records, and on a (slow) network, it can only burden performance, as the index has to be used in the Rushmore optimization process but can't help limit the result set.
>
>Regards,
>
>Walter,
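
As a footnote on the FoxPro 2.x hand-off described in the quoted text: the two-step path from a SQL result to a table Crystal Reports can read would look roughly like this (a sketch only; the cursor name crReport, the filter, and the output file name are placeholders):

* Step 1: the SQL SELECT writes its result into a VFP cursor.
SELECT * ;
   FROM orders ;
   WHERE NOT EMPTY(shipped_on) ;
   INTO CURSOR crReport

* Step 2: write that cursor out as a FoxPro 2.x table for Crystal Reports.
SELECT crReport
COPY TO report26 TYPE FOX2X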
df (was a 10-time MVP)

df FoxPro website
FoxPro Wiki site online, editable knowledgebase