Group by clause in vfp8
Message
From: Walter Meester
Hoogkarspel, Netherlands
Date: 24/04/2003 04:22:15

General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00774269
Message ID: 00781020
Views: 54
George,

>I was going to ignore this post and let the whole thing die. However, better (or worse) judgment compels me to respond. Here's the question:

>How do you think that the interpreter manages this? You have a function, you have numerous records which may have values greater or less than what the function was designed to return. The answer is the values must be individually compared. This is why there is additional overhead when using any aggregate function.

That's correct, but you're confusing the terms execution and interpretation. See below.

>Now if you think this is incorrect, just look at an SQL Server execution plan. It will show that a table scan (without an index) must be performed.

Yes, I never said that this was not the case.

>Since, however, you claim to know about p-code, let me ask you this question: Why can VFP 6.0, 7.0 and 8.0 run under any of the runtime (6.0, 7.0 and 8.0) libraries without modification, provided no properties or methods have the same name as a new property or method?

That's easy: because the P-code is largely upwards compatible. IOW, the P-code itself has not changed for functions available in previous versions. For new functions a new P-code is introduced, and for additions (new parameters or clauses) new P-code 'flags' are introduced.

I think there is a misunderstanding here about what execution and interpretation mean in the context of this issue.

Interpretation means reading the next command line from the program, setting some environment variables (keeping track of the last line number, handling messages, etc.), identifying the command's clauses and parameters, looking up its internal C/C++/assembly entry point, and executing it, after which the result is returned. The interpreted command can have subcommands (expressions) that also have to be evaluated. However, because of the nature of the main command (re-evaluating its subcommands), VFP does less than full interpretation: since the MAX() function in a SQL SELECT command was interpreted once, VFP stores all static data belonging to that interpretation step (such as the C/C++/assembly code entry point) for subsequent use.
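To make the caching idea concrete, here is a minimal Python sketch of the mechanism described above. It is an illustration of the general technique, not VFP's actual internals: the `BUILTINS` table and `CachedCall` class are hypothetical names invented for this example.

```python
# Hypothetical sketch (NOT VFP internals): an interpreter resolves a
# function name to its implementation once, then caches that entry
# point so re-evaluating the same expression skips the lookup step.
BUILTINS = {"MOD": lambda a, b: a % b, "MAX": max, "MIN": min}

class CachedCall:
    def __init__(self, name):
        self.name = name
        self.entry = None           # resolved entry point, filled on first use

    def __call__(self, *args):
        if self.entry is None:      # "interpretation": resolve the entry once
            self.entry = BUILTINS[self.name]
        return self.entry(*args)    # "execution": just run the cached entry

mod = CachedCall("MOD")
print(mod(7, 3))   # first call resolves, then executes -> 1
print(mod(8, 3))   # subsequent calls reuse the cached entry -> 2
```

Only the first evaluation pays the lookup cost; every later pass through the loop pays execution cost alone, which is the distinction the paragraph above draws.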

Let's look at the following example:
nSec=SECONDS()
nT = 1
DO WHILE nT < 300000 
	=MOD(nT,1)=0
	nT = nT + 1 
ENDDO
nNotEmbTime=SECONDS() - nSec 
WAIT WINDOW "Not embedded: "+STR(nNotEmbTime, 6, 3)

nSec=SECONDS()
nT = 1
DO WHILE nT < 300000 AND MOD(nT,1)=0
	nT = nT + 1 
ENDDO
nEmbTime= SECONDS() - nSec
WAIT WINDOW "Embedded in DO WHILE: "+STR(nEmbTime, 6, 3)


nSec=SECONDS()
nT = 1
DO WHILE nT < 300000 
	nT = nT + 1 
ENDDO
nDoWhileTime= SECONDS() - nSec

? "Total Time MOD() function not embedded case", nNotEmbTime - nDoWhileTime 
? "Total Time MOD() function Embedded in DO WHILE case", nEmbTime - nDoWhileTime
The goal of the example is to compare a DO WHILE with MOD(nT,1) embedded in its condition to one where it is not embedded. Both loops iterate 300,000 times. If your assumption were true, the MOD() function would be fully interpreted on every pass in both cases, and you would expect the same performance from the embedded and the non-embedded variant.

In my tests (P4, 2 GHz) there is a difference of about 0.147 - 0.095 = 0.05 seconds (roughly 35% savings) between the two. That is the time the DO WHILE command saved on interpretation because the MOD() function was embedded. The 0.05-second difference is constant in my case: it doesn't change if I use another function like SIGN(), MAX() or MIN(), while the total execution time varies per function. The percentage saving therefore also varies depending on the function used.
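The measurement method above can be mirrored in Python, as a sketch: time a bare loop, a loop with the call as a separate statement, and a loop with the call folded into the condition, then subtract the bare-loop time from each. Note this only reproduces the *method*; CPython compiles a program to bytecode once, so it has no per-statement re-interpretation cost like the one measured here, and the two variants come out much closer together.

```python
import time

def bare_loop(n):
    # Baseline: loop overhead only, no MOD()-style call at all.
    t0 = time.perf_counter()
    i = 1
    while i < n:
        i += 1
    return time.perf_counter() - t0

def separate_call(n):
    # The modulo check evaluated as its own statement in the loop body.
    t0 = time.perf_counter()
    i = 1
    while i < n:
        _ = i % 1 == 0
        i += 1
    return time.perf_counter() - t0

def embedded_call(n):
    # The same check embedded directly in the loop condition.
    t0 = time.perf_counter()
    i = 1
    while i < n and i % 1 == 0:
        i += 1
    return time.perf_counter() - t0

N = 300_000
base = bare_loop(N)
print(f"not embedded: {separate_call(N) - base:.3f}s of call overhead")
print(f"embedded:     {embedded_call(N) - base:.3f}s of call overhead")
```

Subtracting the bare-loop baseline is what isolates the function-call cost from the loop bookkeeping, exactly as the `nDoWhileTime` subtraction does in the VFP sample.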

Well, in the case of SELECT MAX(), I cannot determine the percentage difference, because it is a SQL aggregate that has no direct equivalent usable in a DO WHILE loop; but if you take the results of the sample above, I'd expect an interpretation saving of about 0.05 seconds per 300,000 records.
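For scale, the extrapolated figure works out to a small fixed per-call overhead. This is just the arithmetic implied by the numbers above, not a new measurement:

```python
# Back-of-the-envelope: 0.05 s saved across 300,000 iterations
# implies a fixed per-call interpretation overhead.
saved_seconds = 0.05
iterations = 300_000
per_call_us = saved_seconds / iterations * 1e6   # convert to microseconds
print(f"{per_call_us:.3f} microseconds saved per call")  # -> 0.167
```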

Walter,