Windows 10 and memory
Message
From: 12/04/2018 03:14:15
To: 11/04/2018 17:50:29

General information
Forum: Visual FoxPro
Category: Troubleshooting

Environment versions
Visual FoxPro: VFP 9 SP2
OS: Windows Server 2012 R2
Network: Windows Server 2012 R2
Database: Visual FoxPro
Application: Desktop
Virtual environment: VMWare

Miscellaneous
Thread ID: 01659138
Message ID: 01659331
Views: 64
There is no general, linear rule that can be applied to all VFP programs.
Assuming that physical memory as seen from the OS (which may be virtualized) is backed 1-to-1 by real RAM chips, most programs using only xBase-style coding do best with a very low memory setting: it minimizes garbage-collection time, and there is seldom a need for more RAM, as the Rushmore bitmaps are already in place and are seldom or never created and destroyed.
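
The memory setting in question is VFP's buffer memory size, set via SYS(3050). A minimal sketch for an xBase-style app, assuming a low cap set in the main program (the 192/128 MB values are placeholders, not a recommendation):

* Sketch: cap buffer memory low at startup of an xBase-style app.
=SYS(3050, 1, 192 * 1024 * 1024)   && foreground buffer (app active), size in bytes
=SYS(3050, 2, 128 * 1024 * 1024)   && background buffer (app inactive), size in bytes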

When you use SQL, the benefit of more RAM for new temporary cursors, indices and bitmaps outweighs the time spent in garbage collection, and that trade-off often helps the user experience: the wait for a specific join is minimized, whereas the time added by garbage collection is minuscule and happens so often that it does not register with the user.

Even pure data-crunching programs will have different RAM settings for the fastest processing time: SQL-heavy, staging-table approaches are best served by settings above 1 GB, whereas SEEK/LOCATE-heavy DBF handling is often fastest in the 128-384 MB range (that is, if many large tables are involved; the same program run against data sets one tenth the size may have a lower "fastest" setting than the best setting for the large data sets).

As processing time tends to fall more than linearly when working on smaller data sets, I usually only work out the best setting for the largest data set ;-)

If you want to look deeper, the speed of the hard disk also plays a role: garbage-collection time will be stable, or differ only minimally, between machines, whereas the benefit of more RAM is more marked on systems using a single HDD than on multiple-HDD/SSD setups, especially if table access is optimized for a specific disc layout (imagine reading from discs 0 and 1 while writing to disc 2, or splitting DBF and CDX that sit on the same drive so the CDX goes onto an SSD or RAM disc, since index files are accessed more often, and more often with the random I/O those devices excel at). Often the hardest part is defining the benchmark mix for programs heavy on user interaction, as only the time needed for those specific interactions should be measured ;-)
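
As a sketch of the split-drive idea (the table, tag and R:\ path are made up; R:\ stands for whatever SSD or RAM disc is available), a non-structural CDX can be created away from the DBF:

* Hypothetical: keep a non-structural index file on the faster drive.
USE orders EXCLUSIVE
INDEX ON order_id TAG order_id OF ("R:\idx\orders_fast.cdx")
USE
* Later, open the table together with the external index file:
USE orders INDEX ("R:\idx\orders_fast.cdx") ORDER TAG order_id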

If you have a typical script that can be timed (better: one that logs the time needed for the different tasks!), stepping through settings in 64 or 128 MB increments will be sufficient. Again, expect the "general" time needed to increase minimally with each added RAM step, whereas sometimes larger chunks of an individual task's processing time can be avoided if another processing step can be moved from disk to RAM. Run the tests across the whole RAM range, especially if indices/cursors are created often or bulk appends/write-outs happen frequently.
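
A minimal sketch of such a timing loop (RunWorkload.prg stands in for your own repeatable task; the 64 MB step and the memtest.log name are just placeholders):

* Time one representative workload at a range of buffer sizes.
LOCAL lnMB, lnStart, lcLog
lcLog = ""
FOR lnMB = 64 TO 1024 STEP 64
    =SYS(3050, 1, lnMB * 1024 * 1024)   && foreground buffer, size in bytes
    lnStart = SECONDS()
    DO RunWorkload                      && hypothetical: your own task script
    lcLog = lcLog + TRANSFORM(lnMB) + " MB: " + ;
        TRANSFORM(SECONDS() - lnStart) + " s" + CHR(13) + CHR(10)
ENDFOR
=STRTOFILE(lcLog, "memtest.log")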

Also consider setting the RAM high for some tasks, if heavy SQL processing happens at certain times (a new form performing many CursorAdapter-based cursor creations involving heavy joins), and setting it lower again for normal use once the cursors are created. Nice settings for that pattern are [3, 4, 5, 6] * 64 MB for user interaction and [8-24] * 64 MB for SQL.
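
A sketch of that raise-then-lower pattern (the byte values and where you hook it in are assumptions, e.g. around a form's heavy data load):

=SYS(3050, 1, 16 * 64 * 1024 * 1024)   && ~1 GB while the heavy joins / CA fills run
* ... create the CursorAdapter-based cursors, run the heavy SELECTs ...
=SYS(3050, 1, 4 * 64 * 1024 * 1024)    && back to ~256 MB for user interaction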






>The link Al Doman shared seems to indicate that the 1/3 rule is no longer a good rule of thumb. It seems that best performance is to keep this value under 512MB. I have not yet had time to test the lowered values (right now, it is set to 1024MB).
>
>>>
>>>I am going to check out having too much memory assigned to the foreground memory - I think I have it set to about 1024MB. See my other thread replying to Naoto.
>>
>>No idea if this is still appropriate today, but back when Mac Rubel did lots of experimentation on this stuff, he found that about 1/3 of physical memory worked best.
>>