Message
From:  02/01/2018 07:24:14
To:  01/01/2018 15:16:19
General information
Forum:  Visual FoxPro
Category:  Coding, syntax & commands
Title:  Milliseconds
Environment versions
Visual FoxPro:  VFP 9 SP2
OS:  Windows 10
Miscellaneous
Thread ID:  01656863
Message ID:  01656881
Views:  814
>>GetTickCount usually is good enough for SQL queries, xBase calc calls and finding unoptimised locates.
>>But there already is the optimizing output ;-)
>
>To all, thanks for your input.
>It is obvious that it comes down to the timing resolution of VFP and Windows. So as long as the timing is consistent, I guess I will have to live with it. Besides, the count will be different on different machines. What I am experiencing is that multiple runs of my test cases result in very different timing results. I believe it may be caused by other Windows services running in the background.

Bolded: Correct, different machines will give you different counter results if the CPU frequency differs. But both machines will give you real timespans if the counts are corrected with the frequency measured in advance - see the code posted by Antonio for VFP usage. The only thing to check repeatedly is that the frequency does not change - it might drop if a previously "single" running task had thrown the CPU into single-core overdrive mode and a virus check or another service taxes the other cores later, forcing the "VFP core" back down to the normal level. Define ranges of frequency and map the number of runs falling into each range to get a guess at possibly wrong timings. If running on a "suspicious" machine, add some code at C level to get the frequency on each measurement and discard all measurements whose start and stop frequencies differ.
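For reference, here is a minimal sketch of that counter pattern in plain VFP (my own shorthand, not necessarily the code Antonio posted): measure the frequency in advance, take a counter reading before and after the code under test, and divide the difference by the frequency.

* Sketch only: QueryPerformanceCounter/-Frequency from VFP.
* Both APIs fill an 8-byte buffer with a 64-bit integer.
DECLARE INTEGER QueryPerformanceFrequency IN kernel32 STRING @lpFrequency
DECLARE INTEGER QueryPerformanceCounter   IN kernel32 STRING @lpCount

LOCAL lnFreq, lnStart, lnStop
lnFreq  = ReadQuad(.T.)      && counts per second, measured in advance
lnStart = ReadQuad(.F.)
* ... code under test ...
lnStop  = ReadQuad(.F.)
? "Elapsed seconds:", (lnStop - lnStart) / lnFreq

FUNCTION ReadQuad
    LPARAMETERS tlFrequency
    LOCAL lcBuf, lnLow, lnHigh
    lcBuf = REPLICATE(CHR(0), 8)
    IF tlFrequency
        =QueryPerformanceFrequency(@lcBuf)
    ELSE
        =QueryPerformanceCounter(@lcBuf)
    ENDIF
    * little-endian 64-bit value: low dword plus high dword * 2^32
    lnLow  = CTOBIN(SUBSTR(lcBuf, 1, 4), "4RS")
    lnHigh = CTOBIN(SUBSTR(lcBuf, 5, 4), "4RS")
    IF lnLow < 0
        lnLow = lnLow + 2^32
    ENDIF
    RETURN lnHigh * 2^32 + lnLow
ENDFUNC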

This was implemented in my framework, but it was nearly never necessary, as verified by sampling the frequency roughly once a minute: adding those complex per-measurement frequency checks added another 10% of runtime to the 6-12% already on top from continuous measuring. In "slow" runs the overhead sat at the lower border, as more data took more time to run between the timing calls, making the percentage more acceptable. Some changes in index structure or a bad maintenance change could add more than 50% runtime, so a continuous check on long-running tasks was the better compromise compared to instrumenting and running again only in case of trouble.
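That once-a-minute sampling can be kept very cheap. A rough sketch of the idea (hypothetical class and method names, not the actual framework code):

* Sketch with made-up names: re-query the frequency at most about once per
* minute and flag runs where it drifted, instead of checking every measurement.
DEFINE CLASS FreqWatch AS Custom
    nLastFreq = 0
    tLastSample = {/:}       && empty datetime = never sampled
    nSampleSecs = 60

    * .T. when it is time to pay for another QueryPerformanceFrequency call
    PROCEDURE TimeToSample
        IF EMPTY(This.tLastSample)
            RETURN .T.
        ENDIF
        RETURN (DATETIME() - This.tLastSample) >= This.nSampleSecs
    ENDPROC

    * Record a fresh reading; returns .F. if the frequency drifted since last time
    PROCEDURE Record
        LPARAMETERS tnFreq
        LOCAL llStable
        llStable = (This.nLastFreq = 0 OR This.nLastFreq = tnFreq)
        This.nLastFreq = tnFreq
        This.tLastSample = DATETIME()
        RETURN llStable
    ENDPROC
ENDDEFINE

Runs where Record() returns .F. get culled from the analysis; everything else runs with no extra API calls between samples.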

Stability of measurements might have depended a lot on the fact that my runs were done in VMs and I had complete control over the physical and VM setup as well, cleaning the .vdi of "internal fragmentation" before important runs. So the effort to create the checks in my framework was not wasted, as it allowed me to cull "bad runs" from the analysis.

Italics: If timing dbf access, be suspicious of extra index hops due to a different RECCOUNT(), the location of the key in the cdx, the fragmentation status of dbf/cdx/dbt (less of an issue on SSD) and disc caching eliminating physical disc access. A visual check of Task Manager plus Resource Monitor set on disc is most of the time good enough to see whether Windows acts up and might screw up your timings.
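To take at least VFP's own caching out of the equation between runs, something along these lines can help (a sketch; "customers" and the query are made-up examples, and the Windows file cache still has to be watched via Resource Monitor):

* Sketch with a hypothetical table and query: flush and purge VFP's own cache
* before a timed run, and optionally show the Rushmore optimization level.
USE customers SHARED
FLUSH                       && write pending buffers to disc
=SYS(1104, "customers")     && purge VFP's memory cache of data and index pages
=SYS(3054, 11)              && show Rushmore optimization for filters and SQL
LOCAL lnStart
lnStart = SECONDS()
SELECT COUNT(*) FROM customers WHERE city = "Berlin" INTO CURSOR crDummy
? "Elapsed:", SECONDS() - lnStart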

It helps a lot if you conduct the measurements with a repeat count large enough to be relevant for basic statistics like the arithmetic mean and variance, perhaps also a regression of time spent on data size. Throwing min/max into it helped as well, but to a lesser degree.
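For illustration only (plain VFP, nothing framework-specific, and the timing values are made up), the basic numbers are cheap to get once the individual run times are collected:

* Sketch: mean, variance, min and max over a set of collected timings (seconds).
* laTimes would be filled by the measuring loop; the values here are invented.
DIMENSION laTimes(5)
laTimes(1) = 0.412
laTimes(2) = 0.398
laTimes(3) = 0.455
laTimes(4) = 0.401
laTimes(5) = 0.672        && an outlier worth investigating

LOCAL lnN, lnMean, lnVar, lnMin, lnMax, lnI
lnN    = ALEN(laTimes)
lnMean = 0
FOR lnI = 1 TO lnN
    lnMean = lnMean + laTimes(lnI)
ENDFOR
lnMean = lnMean / lnN

lnVar = 0
lnMin = laTimes(1)
lnMax = laTimes(1)
FOR lnI = 1 TO lnN
    lnVar = lnVar + (laTimes(lnI) - lnMean)^2
    lnMin = MIN(lnMin, laTimes(lnI))
    lnMax = MAX(lnMax, laTimes(lnI))
ENDFOR
lnVar = lnVar / (lnN - 1)        && sample variance

? "runs:", lnN, "mean:", lnMean, "variance:", lnVar, "min:", lnMin, "max:", lnMax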