File size vs. performance

General information
Forum: Visual FoxPro
Category: Other

Environment versions
Visual FoxPro: VFP 8 SP1
OS: Windows XP SP2
Network: Windows 2003 Server

Miscellaneous
Thread ID: 01256701
Message ID: 01256748
Views: 10
>Couple of questions:
>
>1) You say the program is used in several states. Is each using its own local server, or pulling the data over the net?

Pulling over the net, so I know a certain amount of performance is lost there.

>
>2) You say the code is converted FP2.5 code. Have you updated the program to use SQL commands?

It does use SQL commands where it can.
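
For what it's worth, here is the shape the conversion takes where it has been done. This is a sketch only; the detail table and its caseno/body fields are made-up names:

* Old FP2.5 pattern: set an order, seek, and walk the matching rows
SELECT detail
SET ORDER TO TAG caseno
SEEK m.lcCase
SCAN REST WHILE caseno = m.lcCase
    * ... build the HTML from the memo field ...
ENDSCAN

* VFP replacement: one Rushmore-optimizable statement
SELECT caseno, body ;
    FROM detail ;
    WHERE caseno = m.lcCase ;
    INTO CURSOR crDetail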

>
>3) How have you determined you are already optimized? The indexes are the most critical factor for speed.

Indexing is being saved for another fight. Right now, I'm trying to get archival of old data through.
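
The archival step itself is small. Here is a minimal sketch, assuming a detail table with a posted date field (both names hypothetical, as is the archive path) and that the table can be opened exclusively for the PACK:

ldCutoff = GOMONTH(DATE(), -24)     && e.g. keep two years online
USE detail EXCLUSIVE                && PACK requires exclusive use
COPY TO archive\detail_old FOR posted < m.ldCutoff
DELETE FOR posted < m.ldCutoff
PACK                                && rewrites the .dbf and .fpt, reclaiming space
USE

The PACK matters as much as the COPY: memo blocks are not reused in place, so the .fpt does not shrink until the table is rewritten.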


>
>Bob
>
>>No problem on the precision - vague in, vague out
>>
>>Here's what we've got.
>>
>>This system is, essentially, Fox 2.5 code compiled in VFP 8. It generates HTML for display. Most of the detail data is kept in memo fields, and no, I can't redesign the data tables. To avoid the 2 GB barrier, the detail files are split into 200,000-record 'sub-files'.
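
(Since the cap is per record count but the hard limit is bytes in the .fpt, a quick headroom check per sub-file may be worth having; a sketch, where the detail01, detail02, ... naming is an assumption:)

lnFile = 1
DO WHILE .T.
    lcDbf = "detail" + PADL(ALLTRIM(STR(m.lnFile)), 2, "0")
    IF ADIR(laFpt, lcDbf + ".fpt") = 0
        EXIT                        && no more sub-files
    ENDIF
    * laFpt[1,2] is the file size in bytes; the wall is 2^31 - 1
    ? lcDbf + ".fpt:", laFpt[1,2], "bytes,", 2147483647 - laFpt[1,2], "to go"
    lnFile = lnFile + 1
ENDDO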
>>
>>Buffering is turned off.
>>
>>This application is used in several states, and the clients are complaining about time-outs on certain functions. The code is as optimized as it's going to get, so the only thing left is data-size optimization, IMHO.
>>
>>Since the original designer of the system has to approve changes like this, I was hoping to find something besides "in my experience" or "this operation takes 0.2 seconds on my test data but up to 5 minutes on the production data, and that's as fast as I can get it" to convince him.
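
One way past "in my experience" is numbers from the production boxes themselves; a minimal harness, where SlowFunction and the detail alias stand in for whichever call the clients see time out:

lnStart = SECONDS()                 && seconds since midnight, ms resolution
DO SlowFunction
? "Elapsed:", SECONDS() - m.lnStart, "sec against", RECCOUNT("detail"), "rows"

Run it at a few data sizes and the size-versus-time curve argues for itself; a SET COVERAGE TO <logfile> before the call will additionally log per-line timings to hand over.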
>>
>>
>>>>Can someone point me to anything that gives any kind of data about VFP table size versus application performance?
>>>
>>>Nope, that depends on too many factors to give a valid prediction: number, speed, and distribution of disks; cache memory; hardware layout; tables across the LAN; number of workstations accessing; xBase/View type of access; buffering used or not; heavy or light index use... and there are more.
>>>
>>>Also, premature optimization is bad: do you have actual performance problems? If so, go scientific and run an experiment. Or measure, think, and measure again.
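
In VFP 8 the engine will also report directly whether Rushmore is optimizing a given query, via the SYS(3054) ShowPlan setting (the SELECT below reuses the hypothetical names from the earlier sketches):

=SYS(3054, 11)      && ShowPlan on: report filter and join optimization
SELECT COUNT(*) FROM detail WHERE caseno = "00123"
=SYS(3054, 0)       && ShowPlan off

Output along the lines of "Rushmore optimization level for table detail: none" points at a missing or unusable index for that WHERE clause.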
>>>
>>>But I'd argue for a strategy to make certain the 2 GB barrier is not reached with those memo files. If you are not doing multiuser data entry, splitting the table or saving off old data could help as well. In a heavy multiuser scenario, think about client/server. But if the app is written with large data sizes in mind, the speedup is not very large unless you can force major data parts into a large RAM file cache.
>>>
>>>Sorry to be that imprecise
>>>
>>>thomas
"You don't manage people. You manage things - people you lead" Adm. Grace Hopper
Pflugerville, between a Rock and a Weird Place