Differences between xBase and SQL implementations
Message
From: Walter Meester
Date: 21/12/1999 07:08:22
Location: Hoogkarspel, Netherlands

General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID: 00305631
Message ID: 00306590
Views: 56
David,

>I'm not going down the endless debate route here with you. My Christmas vacation has started and my main objective for the next week is to enjoy the time with my daughter, family and friends.

I understand... I wish you a merry Xmas.

>I'm sorry I shouted, but once I saw why the SQL results were so biased I had to point it out. Benchmarking code is a tricky task at best and you have to remove as much of the ancillary code as possible. The ? command takes large amounts of time because it converts from internal to ASCII representation, which is historically one of the slowest things to accomplish. This particular test probably wasn't too bad because it was all string data anyway.

This applied to both approaches, so it would not matter.

>If the SQL generates a copy of the data in 1/3 the time of the xBase I have plenty of time left over to further work with it. I only did the INSERT to better equate the amount of work going on between the two things being compared.

I never said that the SQL was not better optimized; yes, it is... BUT it creates copies of the data instead of using the live data. This becomes a problem with large amounts of data (both table contents and index data), since it consumes a lot of resources.

>The fields are also all there for the SQL, just add them in; I'd be quite surprised if adding more fields to both makes the SQL slower.

Yep, try it. Like you said, including all fields does make it slower because it has to copy a larger amount of data to disk. Elsewhere in this thread I describe how to emulate a Cartesian product within xBase, which was ready to go within a fraction of a second. The SQL variant will likely not finish at all, because it uses too many resources and will probably try to cross the 2 GB file-size limit.
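A minimal sketch of that contrast (the table names are made up for illustration and are not from the original thread): the xBase form only moves record pointers over the live tables, while the SQL form has to materialize every row combination into a cursor on disk.

```foxpro
* Hypothetical free tables "parent" and "child" -- illustrative only.
USE parent IN 0
USE child  IN 0

* xBase-style Cartesian product: nested SCANs visit every
* (parent, child) combination without writing a single row anywhere.
SELECT parent
SCAN
   SELECT child
   SCAN
      * Both records are addressable here via parent.* and child.*
   ENDSCAN
ENDSCAN

* SQL equivalent: materializes RECCOUNT("parent") * RECCOUNT("child")
* rows into a result cursor, which on large tables can run out of
* resources or hit VFP's 2 GB file limit.
SELECT * FROM parent, child INTO CURSOR crossjoin
```

The xBase loop is "ready" immediately because no result set exists; the cost is paid per record as you navigate.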

>You also don't have to have all the indexes, or the deleted() index. The initial tests before I added the index still had the SQL slightly outperforming the Xbase with only partial optimization and with the output diagnostics of sys(3054,11) turned on. I didn't reperform the tests without the additional indexes because the SQL was already faster, and the additional indexes just made it as fast as possible to bound the test case.

The initial test (with a limited field selection) was about as fast as the xBase variant (locally somewhat faster, over a network somewhat slower).

The SQL variant tends to be quicker in cases where not too much data is involved (there is not too much to copy to disk), but when it comes to larger amounts of data, more complexity, etc., the xBase variants seem to be much faster. This is why I want the eight relational operators to apply to live data: it would mean much, much more performance.

>You are quite free to continue using interpreted xBase commands in loops and relational links to move file pointers if it suits your purposes. You may also retest the code I posted in your network environment to see if the results are similar. I don't have your environment handy to do the test.

>Me, well, I'll continue using SQL that uses optimized C++ code wherever it is the fastest solution to the task at hand.

I'll never disagree that the optimizer in SQL is doing a good job, especially when it doesn't concern massive amounts of data. But the problem with SQL queries is that they have to copy the result set to disk before the program can continue. In an awful lot of cases there is no problem with using the live data at all: it even brings advantages, like live editing and use of the underlying indexes (implicitly or explicitly). See the thread "Sluggish SQL", where Bruce has performance problems because the result set has to be written to disk. If Bruce can use this for a purpose where live data is not a problem, he reduces his query time from about two minutes to a fraction of a second (though I must admit that navigating through the result set is slower, because relations and filters have to be evaluated each time).
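The live-data alternative can be sketched like this (table, field, and index-tag names are hypothetical, assuming an index on the child table's key and one on the filtered field):

```foxpro
* Illustrative tables only; "customer"/"orders" and tag "custid"
* are assumptions for the sketch, not names from the thread.
USE orders   IN 0 ORDER custid   && assumes an index tag on custid
USE customer IN 0

* SQL approach: the entire result set is copied into a cursor
* (written to disk) before the program can continue.
SELECT c.name, o.amount ;
   FROM customer c JOIN orders o ON c.custid = o.custid ;
   WHERE o.amount > 1000 ;
   INTO CURSOR bigspenders

* Live-data approach: a relation plus a filter over the original
* tables. Nothing is copied, and edits go straight to the source.
SELECT customer
SET RELATION TO custid INTO orders
SELECT orders
SET FILTER TO amount > 1000
```

The trade-off is exactly the one conceded above: the filtered, related view is available instantly and stays live, but each navigation step re-evaluates the relation and filter.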

Best wishes for 2000

Walter,