SEEK(), INDEXSEEK() or KeyMatch() or SELECT-SQL?
Message
From:
12/04/2005 10:25:38
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Environment versions
Visual FoxPro: VFP 8 SP1
OS: Windows XP SP2
Miscellaneous
Thread ID: 01002645
Message ID: 01003732
Views: 27
This message has been marked as having helped answer the initial question of the thread.
Hi Nadya,
>
>Would you please elaborate more on different methods of testing code and finding slow pieces? Which books would you recommend?

Mostly articles (I don't keep link lists). I subscribed to a few email newsletters from the Java and .NET worlds, and sometimes there are interesting articles, ranging from performance measurement and tricks to n-tier topics and database issues. Reading the headlines and between 10 and 20% of the articles keeps me "informed" on a broader scope.

There is profiling by sampling, by coverage, and by self-defined milestone measurement. IMHO the coverage profiler is best used to give hints on where to place your own measuring hooks. Then you must decide which parts you want to trace: time spent is the biggest issue (and at what resolution/cost), but sometimes snapshots of memory usage are helpful, or the number of objects created, the number of work areas open, the CPU time spent in the thread...
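As a rough illustration, such a milestone hook in VFP could look like the sketch below; the perflog table, its fields and the hook name are made-up examples, not anything from a real app:

* Minimal milestone hook. The free table perflog.dbf with fields
* cMilestone C(40) and nSeconds N(12,6) is a made-up example.
PROCEDURE LogMilestone
    LPARAMETERS tcMilestone, tnStart
    LOCAL lnElapsed
    lnElapsed = SECONDS() - tnStart    && SECONDS() has roughly millisecond resolution and wraps at midnight
    INSERT INTO perflog (cMilestone, nSeconds) VALUES (tcMilestone, lnElapsed)
ENDPROC

* Usage around a suspect block:
LOCAL lnStart
lnStart = SECONDS()
* ... code you want to measure ...
LogMilestone("CustomerQuery", lnStart)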

If you google for the QueryPerformance* API functions you can find some interesting insights, since they are nearly always used by people interested in performance.
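For completeness, those two Win32 functions can be declared and called from VFP roughly as below; the Buf2Num helper is my own made-up name, not part of any library:

* QueryPerformanceCounter/Frequency return 64-bit values by reference;
* from VFP they can be received in an 8-byte string buffer and decoded.
DECLARE INTEGER QueryPerformanceFrequency IN kernel32 STRING @ lpFrequency
DECLARE INTEGER QueryPerformanceCounter   IN kernel32 STRING @ lpCount

LOCAL lcFreq, lcStart, lcStop, lnSeconds
lcFreq  = REPLICATE(CHR(0), 8)
lcStart = REPLICATE(CHR(0), 8)
lcStop  = REPLICATE(CHR(0), 8)

QueryPerformanceFrequency(@lcFreq)
QueryPerformanceCounter(@lcStart)
* ... code to measure ...
QueryPerformanceCounter(@lcStop)

lnSeconds = (Buf2Num(lcStop) - Buf2Num(lcStart)) / Buf2Num(lcFreq)
? "Elapsed:", lnSeconds, "seconds"

* Decode a little-endian 8-byte buffer into a numeric value (helper name is made up).
FUNCTION Buf2Num
    LPARAMETERS tcBuffer
    LOCAL lnValue, lnI
    lnValue = 0
    FOR lnI = 8 TO 1 STEP -1
        lnValue = lnValue * 256 + ASC(SUBSTR(tcBuffer, lnI, 1))
    ENDFOR
    RETURN lnValue
ENDFUNC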

In MSDN there is at least one good article on performance issues for .NET, going right down to the timing of code, cache access, memory access and so on. It doesn't translate 1:1, but the speed barriers we encounter (memory/current record vs. searching locally vs. searching in remote files) still have similarities. If you are using COM / Remoting / web services, the basic rules apply ("chunky, not chatty"), but that borders on the self-evident. There is also a newsgroup in dotnet.framework dedicated to performance.

Back to VFP:
For long-running processes, just watching the Task Manager can give you some insights (there was this case of a PC with a reinstalled NT4 where the drives were not using DMA; just watching the kernel line compared to the others showed the area where the machine was overstressed). If you have done a few coverage runs on the same machine you have an idea of the processing speed and the "typical" lag introduced by profiling. If it is huge, you know that there is too much code running; if it is minimal, you know that you probably have mostly slow single lines (queries, LOCATEs and so on).
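To keep that profiling lag contained, you can switch the coverage log on only around the stretch you suspect; a quick sketch (the log file name and path are made up):

* Log coverage only around the suspect stretch of code.
SET COVERAGE TO c:\temp\slowpart.log ADDITIVE
* ... suspect code ...
SET COVERAGE TO     && stop logging; feed the log to the Coverage Profiler later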

>BTW, I'm running code with local database.
No problem if the deployment scenario is identical. Otherwise check again when running on a deployment-typical setup <bg>. Test on a RAM disk / local HD / 100 Mbit network share / 10 Mbit network share / WAN to get estimates of possible bandwidth bottlenecks...
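One rough way to get such estimates is to time the same pull against copies of the data in each location; the paths and the "orders" table below are made-up examples:

* Time pulling the same table from different locations.
LOCAL lnI, lnStart, lcTable
LOCAL laPaths(3)
laPaths(1) = "r:\ramdisk\data\"
laPaths(2) = "c:\local\data\"
laPaths(3) = "\\server\share\data\"

FOR lnI = 1 TO ALEN(laPaths)
    lcTable = laPaths(lnI) + "orders.dbf"
    lnStart = SECONDS()
    * NOFILTER forces a real result cursor, so the data actually travels over the wire
    SELECT * FROM (lcTable) INTO CURSOR crTest NOFILTER
    ? laPaths(lnI), SECONDS() - lnStart, "seconds"
    USE IN SELECT("crTest")
ENDFOR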

From what I read I get the impression that you have a similar interest in "good and fast" code, but perhaps tend to look from too narrow a perspective. Also keep in mind that premature optimization is one of the roads to hell paved with good intentions ;-)

I'd still recommend including your measuring hooks early, but only to get a "feel" for the performance of the app / the interesting parts.

HTH

thomas