Group by clause in vfp8
Message

To
19/04/2003 08:15:37
Walter Meester
Hoogkarspel, Netherlands
General information
Forum:
Visual FoxPro
Category:
Other
Miscellaneous
Thread ID:
00774269
Message ID:
00779584
Views:
35
>George,
>
>O.K. the following testing program:
>
>CREATE CURSOR x1 (Test I, group I)
>FOR nT = 1 TO 300000
>	INSERT INTO x1 VALUES (nT, nT % 1000)
>ENDFOR
>
>SET ENGINEBEHAVIOR 70
>nSec=SECONDS()
>SELECT Group, Test FROM x1 GROUP BY x1.group INTO CURSOR y
>WAIT WINDOW STR(SECONDS() - nSec, 6, 3)
>
>SET ENGINEBEHAVIOR 80
>
>nSec=SECONDS()
>SELECT Group, MAX(Test) Test FROM x1 GROUP BY x1.group INTO CURSOR y
>WAIT WINDOW STR(SECONDS() - nSec, 6, 3)
>
>On my P4 2 GHz, it shows that the second query is about 10% slower than the first one (0.541 sec vs. 0.592 sec). Measurable? Yes. Significant? I don't think so.

This is bad science. You've only included one field in your test, and you don't consider the implications of more than one. In my situation there are multiple fields. You may conclude that, under your test, the difference isn't significant. What you don't consider is multiple calls to the function(s) along with multiple aggregates.

You consider the best case, I the worst.
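(For readers following along outside VFP: the behavioral change under debate - SET ENGINEBEHAVIOR 70 letting a non-aggregated column ride along in a GROUP BY, versus 80 requiring an explicit aggregate such as MAX() - can be sketched in Python. The cursor shape mirrors the test program quoted above; the Python stand-in is an illustration, not VFP itself.)

```python
from collections import defaultdict

# Same shape as the test cursor: 300,000 rows of (test, group), group = test % 1000
rows = [(n, n % 1000) for n in range(1, 300001)]

# ENGINEBEHAVIOR 70 style: the non-aggregated column takes its value from
# whichever row the engine happens to emit for the group - here, the last
# row seen. Order-dependent and ambiguous.
loose = {}
for test, grp in rows:
    loose[grp] = test  # overwritten on every row of the group

# ENGINEBEHAVIOR 80 style: the aggregate is explicit and deterministic.
strict = defaultdict(int)
for test, grp in rows:
    strict[grp] = max(strict[grp], test)

# In this data set both happen to agree, because rows arrive in ascending
# order of test - which is exactly why relying on physical row order is fragile.
print(loose[1], strict[1])
```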

>Is this a representative test? No, because it's a very simple query containing no joins, and the data comes from memory (a cursor) instead of a table stored on disk or a network drive.

Who brought up joins? You're changing the subject.


>When applying this same test where the data is stored on a network drive and is used by more than one user (so no read buffering occurs on the client), I found that the difference dropped to less than 5 percent on a 100 Mbps full duplex network. When doing this same test on a 10 Mbps full duplex network, the difference was below 1 percent and almost not measurable anymore.

Again, you're considering a "best case," not the worst. However, the point isn't whether or not there's additional overhead. There is, and you admit it. The point is: why bother to introduce it at all? You don't know what changes might be required in the future, and neither do I.

>Given the first test on my 2 GHz machine, I see an overhead of the MAX() function of about 0.05 seconds per 300,000 selected records. This time is nothing compared to the additional time needed for doing joins, processing WHERE clauses, and processing the actual query. In the test I also disregarded the time needed to open the tables addressed in the query.

Again, you're attempting to complicate the argument by introducing elements that weren't involved in the original question. Further, the time needed to open tables in a VFP environment should be reasonably consistent. Neither has anything to do with my original response.

>WHAT THE HELL ARE WE TALKING ABOUT !?!!

You've consistently responded to my posts and consistently tried to shift the argument when you've been unable to debunk my statements. You tell me what we're talking about, then ask me what we are.

>Conclusion: in a production environment there is no significant overhead in using the MAX() function instead of relying on the last physical row.

You haven't demonstrated this. I never brought up the subject of the last physical record (you did). I pointed you to a post of mine that gave a thorough example and the reason for my original response.

Walter, I don't know what your problem is, and further, I don't care. I don't know if you've got some sort of problem with me, or a problem admitting when you're wrong. Just do me the favor of not trying to prove yourself right by convoluting the original post and response.
George

Ubi caritas et amor, deus ibi est