Slowness on Windows 2012 Servers
Message
From: 29/01/2020 21:53:02
To: 29/01/2020 17:04:08

General information
Forum: Visual FoxPro
Category: Troubleshooting - Miscellaneous
Thread ID: 01672834
Message ID: 01672843
Views: 73
>- Your environment, e.g. servers virtualized vs. bare-metal, desktops virtualized vs. bare-metal, networking/switchgear
>- What you've looked at so far, e.g. have you found any bottleneck(s)? The slow tasks - are they limited by CPU, disk I/O, memory or network?
>- What version of VFP are you using?
>
>Update: ISTR Thomas Ganss saying VFP can slow down significantly if there is a lot of available RAM (contrary to what you might expect). I think his recommendation was to limit VFP RAM usage via SYS( 3050 ) to 511 MB or less (not 512 MB), and that 256 MB works fine for almost all cases.
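The capping described in that quote could be sketched like this (a hedged sketch: 256 MB shown as an example value; SYS(3050) takes the limit in bytes and rounds it internally):

```foxpro
* Cap VFP's buffer memory for foreground and background operation.
* The limit is passed in bytes; 256 MB shown here as an example.
SYS(3050, 1, 256 * 1024 * 1024)   && foreground (application has focus)
SYS(3050, 2, 256 * 1024 * 1024)   && background (application minimized/inactive)
? SYS(3050, 1)                    && query the current foreground setting
```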

Definitely true, but there are more factors at play ;-) The described slowdown happens - but not always.
If the running code uses lots of SQL with multiple joins that create intermediate tables (pipe the output of SYS(3054) through "my" SYS(3092); this happens when fields from more than 2 different "alias joins" end up in the result set), it is better to give VFP more memory, as intermediate tables/cursors or repeatedly used source tables will more often stay in RAM and the total SQL join will be faster - sometimes by A LOT. Plain filtering, independent of JOIN or WHERE syntax, does not benefit that drastically, as long as enough memory for the Rushmore bitmaps is available, index optimization is fully applied, and it is not the same table being queried over and over again.
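Checking whether a join is fully Rushmore-optimized can look roughly like this (a sketch assuming VFP 9; table and field names are placeholders, and capturing the ShowPlan text via SET ALTERNATE is my assumption, not the SYS(3092) approach mentioned above):

```foxpro
* Turn on SQL ShowPlan: SYS(3054, 11) reports both filter and
* join optimization for the next SELECT statement.
SET ALTERNATE TO showplan.txt
SET ALTERNATE ON
SYS(3054, 11)                       && ShowPlan on
SELECT c.name, o.total ;
    FROM customers c ;
    JOIN orders o ON o.cust_id = c.cust_id ;
    WHERE o.total > 1000 ;
    INTO CURSOR curHits
SYS(3054, 0)                        && ShowPlan off
SET ALTERNATE OFF
SET ALTERNATE TO
```

Look for "fully optimizable" in the output; anything partial or non-optimizable points at a missing or unusable index.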

There is a second exception to the rule: sometimes memory gets fragmented (Calvin has some articles and a tool for visually inspecting memory on his blog) by memory leaks that cannot be compacted during garbage collection. This can create situations where Rushmore bitmaps can no longer be established in RAM (no gap between blocked memory regions large enough for the bitmap, which must be contiguous), resulting in a ***drastic*** slowdown if you run into it - comparable to running on a track vs. "running" in a swimming pool, armpit-high in water.
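A rough before/after check of how much memory VFP is holding can be sketched like this (my assumption on the diagnostics: SYS(1001) reports memory available to VFP's memory manager, SYS(1016) memory used by user objects, and SYS(1104) purges cached memory):

```foxpro
* Snapshot memory use, purge VFP's caches, snapshot again.
? "Memory manager total:", SYS(1001)
? "Used by user objects:", SYS(1016)
SYS(1104)                           && purge memory cached by VFP
? "Used after purge:    ", SYS(1016)
```

If "used" stays high after the purge across a long-running session, leaked blocks may be pinning the address space and fragmenting it.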

If (BIG IF!!!) the above happens in the program, the slowdown will be experienced EARLIER if less memory is used - the flip side being that giving more RAM might exempt particular fragmentation/data-run combinations from experiencing the slowdown at all.

If the VFP code uses only SQL without creating temp tables, and/or traditional xBase (NB: the old xBase JOIN syntax is untested, as it is not used in my code...), each garbage collection is faster, as only a smaller memory area has to be checked and compacted. With more memory assigned, allocation also involves checking more RAM for the "best" position. As long as memory for the given task is "ample enough", you are on the plus side using less RAM. Since basic xBase with Rushmore filtering was created for the memory sizes of DOS and DOS extenders, old-style file handling can often run within 8 to 32 MB even with the larger file sizes encountered today, as running and comparing old-style data munging under VFP and FPW/FPD shows. Object orientation in VFP taxes memory much more than non-pointered or simple (i.e. string) pointered memory access, so the sweet spot was often between 128 MB and 384 MB for the VFP programs I tested - most with medium to heavy framework code interwoven.
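Finding that sweet spot empirically could be sketched as follows (hedged: RunRepresentativeTask is a placeholder for your own workload, and purging caches between runs is my assumption for fair timing):

```foxpro
* Probe candidate SYS(3050) limits and time a representative task.
LOCAL lnMB, lnStart
FOR lnMB = 64 TO 512 STEP 64
    SYS(3050, 1, lnMB * 1024 * 1024)    && cap foreground buffer memory
    SYS(1104)                           && purge caches so runs start cold
    lnStart = SECONDS()
    DO RunRepresentativeTask            && placeholder: your typical workload
    ? lnMB, "MB:", SECONDS() - lnStart, "seconds"
ENDFOR
```

Run it a few times and against realistic data volumes; a single pass can be skewed by OS-level file caching.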

regards

thomas

//upd: Another area where more RAM is beneficial is read-only access to small tables sporting few indexes with low selectivity, where the result set is a high percentage of RECCOUNT() of the base table. Reading such tables into pure RAM cursors (no index, table scan in RAM) vs. standard access via many .cdx-based bitmaps - which then still reads most of the base table for the data needed in the result - can be drastically faster on old HDs (where physical heads have to be moved in both operations) and on remote data. This does not happen often in my code, as I often use 1 or 2 "bag" lookup tables, "logically half denormalized" with differing column counts. If there is a repeated need to read many records "document style", I try to cache the result as an object or array structure. Code not employing such effort is often better served with more memory and caching such data in local in-memory cursors, each having a footprint of a few KB (typically a RECSIZE() of 30 to a few hundred bytes if many char or memo fields are used) and a RECCOUNT() of 2 to low 3 figures.
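Pulling such a small, read-mostly lookup table fully into RAM can be sketched like this (table and field names are placeholders of my own, not from the post):

```foxpro
* Materialize a small lookup table as a RAM cursor; NOFILTER forces
* a real cursor instead of a filtered view of the base table.
SELECT * FROM lookup_codes ;
    INTO CURSOR curCodes NOFILTER

* For repeated "document style" reads, an array cache is lighter still.
SELECT code, descr FROM lookup_codes INTO ARRAY laCodes
? ALEN(laCodes, 1), "rows cached"
```

The cursor is scanned sequentially in memory, so the low-selectivity .cdx bitmaps never come into play.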