SQL performance

Message posted 11/10/2006 04:13:06

General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax

Environment versions
Visual FoxPro: VFP 9 SP1
OS: Windows XP SP2
Network: Windows XP
Database: Visual FoxPro

Miscellaneous
Thread ID: 01160628
Message ID: 01161115
Views: 18
Kevin,

>>"Any chance of sending the data ? Probably faster than guessing about selectivity and distributions."
>In a heartbeat, not real familiar with all the features of UT, what's the best way to get it to you?

RAR or ZIP it (RAR often compresses better, which helps with the download).
Upload it to some web space or FTP directory of yours, and send me the login data for that directory by mail.

>There are multiple queries, so I don't think the "special index" is a good route.
Thought so.

>I tried your modification of the INLIST and that didn't appear to help.
Probably the reading of at least 1,400 records is the culprit: since 1,400 is already the grouped result, the number of records actually read may be far greater...

>I'm getting the same results when I'm local or when I'm on the network (with the virgin cache). I was thinking of moving the EXE off the locally hosted web server, but then I'm running it on my local hd on my notebook and still getting these times, so I'm not sure that's going to solve the problem either.

You have very different pieces of hardware, so you should be seeing different timings! Are the EXE and the DBFs on the same box?

>
>I haven't defragged my HD in a while, there's currently 3.85 gb free out of 100 gb.

Could be really bad. In your shoes I'd run out and get a big USB disk for backup (and perhaps for testing as well!),
back up the whole hard disk (cheap security: $100 to $120 for 300 GB) and verify the backup overnight.
After that, delete at least 8 GB before even trying to defragment. Aim for 25 GB of free space before defragging if possible!

My personal favorite defragmenter is from O&O Software; other defraggers sometimes work only partially if they don't have at least 20% free space. It also depends on the number of files and the file sizes, but you are probably hurting yourself with so little free space. Compressing read-only directories sometimes helps if you really need all that data, but after compressing a normal folder the free space and data layout on the disk are a mess, so you will want to defrag in any case. You are probably already allocating space inside the MFT.
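To put the numbers above in perspective, here is a quick back-of-the-envelope check (plain Python, just arithmetic; the 20% figure is the rule-of-thumb defragmenter requirement mentioned above, the disk sizes come from the post):

```python
# Free-space arithmetic for the defrag advice above.
disk_gb = 100.0   # total disk size from the post
free_gb = 3.85    # current free space from the post

free_pct = free_gb / disk_gb * 100
print(f"current free space: {free_pct:.2f}%")   # far below the ~20% rule of thumb

# How much must be freed to reach ~20% free, and the 25 GB target suggested above:
needed_for_20pct = 0.20 * disk_gb - free_gb
needed_for_25gb = 25.0 - free_gb
print(f"free at least {needed_for_20pct:.2f} GB to reach 20% free")
print(f"free at least {needed_for_25gb:.2f} GB to reach 25 GB free")
```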

>
>The speed on cached data will drop to about 0 to 1 second from the 7 to 9 (you can probably feel my frustration <g>). And there's a series of queries to get the final results, and they have similar times, so it takes anywhere from 25 to 35 seconds to return the web page.

I am still unclear on why the virgin cache is so important to you. If these tables are accessed often in your web connect app, they will probably stay cached by the OS and you will get better turnaround. If they are almost never called (and thereby fall out of the cache), then perhaps for those few instances return a message: "long-running query, please wait...". If the web page DOESN'T get faster when called again, you should investigate that; the potential speed-up from caching is huge. Getting an additional 2 GB or more of RAM is cheaper than manual optimization <g>.

Since even grouping a set down to a 1,400-record result does not hurt on cached data, the bottleneck is not the amount of data VFP has to write but only the amount it has to physically read.

>The bad sector issue I think is eliminated by the fact I'm getting very similar results on multiple computers, one of which is my laptop, the others are a Windows 2003 server and a Novell server (whatever the latest version is) running on 100mb LAN with the latest CAT cabling utilizing switches.
>The Firm table's CDX is 1,235kb, with a 5,241kb FPT and 10,030kb DBF
>The SRO table's CDX is 12,225kb with a 82,494 FPT and 72,055 DBF

Via LAN you have to move most of the 13.5 MB of CDX, 1,400 records of the SRO table, and 1,400 memo-field accesses at speeds of 5 to 10 MByte/s; it should be markedly faster coming from a local ATA/SATA drive. Don't access the tables via LAN if possible; you lose many of the benefits VFP has in web scenarios.
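A rough transfer-time estimate for the figures above (a sketch: the 5 to 10 MByte/s effective LAN throughput and the ~13.5 MB combined CDX size come from the post; the ~50 MByte/s local rate is an assumed typical ATA/SATA figure, not from the post):

```python
# Back-of-the-envelope transfer times for the CDX data discussed above.
cdx_mb = 13.5              # combined CDX size from the post

lan_rates = (5.0, 10.0)    # MByte/s, effective 100 Mbit LAN range from the post
local_rate = 50.0          # MByte/s, assumed typical local ATA/SATA throughput

for rate in lan_rates:
    print(f"LAN at {rate:4.1f} MB/s: {cdx_mb / rate:.2f} s just for the CDX")
print(f"local disk at {local_rate:.0f} MB/s: {cdx_mb / local_rate:.2f} s")
```

Even before counting the per-record memo reads, the index traffic alone accounts for seconds of the observed latency on the LAN.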

>
>To measure the SQL speed, I've written a routine that I call immediately before and immediately after the SQL Select, similar this below:

Haven't checked thoroughly, but it looks good; you seem to be consistently measuring apples to apples.
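The measurement approach described (snapshot the clock immediately before and immediately after the statement) can be sketched like this in Python; the wrapper name and the stand-in workload are placeholders, not the poster's actual VFP routine:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn and report its wall-clock duration, mirroring the
    before/after snapshots described in the post."""
    start = time.perf_counter()              # snapshot immediately before
    result = fn(*args, **kwargs)
    duration = time.perf_counter() - start   # snapshot immediately after
    print(f"{label}: {duration:.3f} s")
    return result, duration

# Hypothetical usage with a stand-in for the SQL SELECT:
result, secs = timed("BUILDPREVIOUSORDERS_FirstSelect",
                     lambda: sum(range(1_000_000)))
```

Wrapping every query the same way is what keeps the comparison apples to apples: the same clock, the same two snapshot points, for each SELECT.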
>
>Which gives me an HTML file containing each SQL's analysis in the below format:
>
>Current User: JJB
>User Type: SP
>ALIAS: CURSROORDERS
>Records Returned: 1432
>DateTime: 10/10/2006 12:03:49
>Method: ARFS.BUILDPREVIOUSORDERS
>Line No: 212
>Duration: 7
>FiltersApplied: lcSelection= sro.Case_id ='0000006913'
>Select: BUILDPREVIOUSORDERS_FirstSelect
>SELECT sro.firm_id as Firm_ID, sro.sro_id as sro_id,
>sro.case_id as case_id, MAX(sro.client) as Client,
>MAX(sro.srref) as SRRefNo
>FROM Analyzer!sro
>WHERE &lcSelection AND INLIST(sro.sro_status,10,20,30,33,40) AND sro.type = "I" GROUP BY 1, 2, 3 INTO CURSOR curSROOrders readwrite
>Using index tag Case_id to rushmore optimize table sro
>Using index tag Sro_status to rushmore optimize table sro
>Using index tag Type to rushmore optimize table sro
>Using index tag _del to rushmore optimize table sro
>Rushmore optimization level for table sro: full
>
>Sorry for the long post, and again, thanks for any help. Hopefully the knowledge I gain on the first select will help in the others.

Don't be sorry; it saves me far more time than guessing would! You might also describe the layout of hardware, EXE and tables for the web setup this optimized query is targeted at, and why that server cannot be counted on to cache the tables.

regards

thomas