Vfp not the only App in Troubled with more RAM
Message
From: 24/07/2007 07:09:29
To: Al Doman, M3 Enterprises Inc., North Vancouver, British Columbia, Canada (23/07/2007 15:17:21)
General information
Forum: Visual FoxPro
Category: Other
Environment versions
OS: Vista
Miscellaneous
Thread ID: 01242544
Message ID: 01243104
Views: 41
Hi Al
>So, the main reason you want to be able to use more RAM is for a larger disk cache? Some thoughts:

Yup, I ran some rather time-consuming but clearly interpretable tests: the same application under different amounts of OS-visible RAM (via /MAXMEM in boot.ini) and with different SYS(3050) settings, measured separately for different types of data crunching.
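For anyone who wants to repeat this: the /MAXMEM switch is just appended to the boot entry in boot.ini (the path and the 192 MB value below are illustrative):

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (192 MB)" /fastdetect /MAXMEM=192
```

Reboot, and the OS only sees that much RAM, which makes the cache-size comparisons repeatable.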

As this app is mostly xBase with some Rushmore-dependent code, it works best with between 96 and 192 MB of RAM, which gets elevated for housekeeping tasks. I am fairly certain the base findings are still correct (intensive data gathering in FPW, VFP6 and VFP8 SP1; no extended testing under VFP9, as we had those few Rushmore issues corrected even before - only a handful of DELETED() indexes were left, which were morphed into binary).
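For reference, the SYS(3050) tuning mentioned above is just two calls; the byte values below are illustrative, picked from the 96-192 MB range that worked for us:

```foxpro
* SYS(3050, 1, nBytes) caps VFP's foreground buffer memory,
* SYS(3050, 2, nBytes) caps the background buffer memory.
SYS(3050, 1, 128 * 1024 * 1024)   && 128 MB while VFP has focus
SYS(3050, 2,  64 * 1024 * 1024)   && 64 MB while another app has focus
```

Calling SYS(3050, 1) with no size argument returns the current setting, so you can log it alongside each timing run.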

>- are there any disk controllers with large on-board hardware caches?
No - they are usually faster only when rearranging requests, which happens more often in multi-user database scenarios. That info comes from some database tests between SQL backends (usually PostgreSQL, MySQL/MyISAM, MySQL/InnoDB and one unnamed participant) and some database tests on TomsHW or Anandtech. The better fault security that battery-backed controller caches offer is irrelevant for us - in any case of error we just start the process again.
The typical workstation profile is better (and cheaper) served by oodles of ordinary RAM.

>- what about a separate disk subsystem (e.g. SAN) with fast connection (Gbit or FC)?
We have GBit, but running across TCP/IP is much slower. We also tested NetBEUI under NT, but all in all it is faster to copy the setup data (2-8 GB) locally from the zipped store (needed for backup purposes anyhow) and crunch away locally. FC not tested, but I doubt it is faster than JBOD'ed local disks. Again we go for many cheaper disks, spreading the different data accesses (for instance, all index files are on a different disk from the data files) and separate disks for system and temp files.
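The temp-file split mentioned above can be set in CONFIG.FPW; the drive letter and path below are illustrative:

```foxpro
* CONFIG.FPW fragment - put VFP's work files on a separate physical disk
* from the data and index files
TMPFILES = E:\vfptemp
SORTWORK = E:\vfptemp
EDITWORK = E:\vfptemp
PROGWORK = E:\vfptemp
```

That keeps SORT/INDEX scratch I/O from competing with the table reads on the same spindle.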

>- as a cheap(er) alternative to above, how about a 64-bit Linux system with lots of RAM running Samba, and again with a fast network connection?

Ummmh - if you mean across a physical LAN wire, I doubt performance is up to it. BUT if a quad-core 64-bit Linux box hosts Samba and is queried internally by VMs on the same box, most of the disk caching should happen on the 64-bit side, and I read at least once that although internal TCP/IP access goes across simulated 100 MBit LAN cards, the actual speed is much higher. Anecdotal, but another interesting option. Thx for the nudge into a corner I had not really considered before.

>As for host-OS disk-cache handling for VMs, that's a good question I don't know the answer to. Your best bet is probably to find a forum for VMWare etc. and ask there. My guess is it could work the way you want:
>
>- suppose a 64-bit Linux host OS w/8 or 16GB RAM
>- also suppose a single x86 VM configured to use 4GB RAM

Probably those VMs would be kept at 511 MB because of the memory problems in VFP. There was also an article on how to create minimal XP VMs - probably easy to find via googling.
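If the host is VMware, capping a guest at that size is one line in its .vmx file (a sketch, assuming VMware-style VM configs):

```
# .vmx fragment - keep the guest just under the 512 MB mark
memsize = "511"
```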

>Overall, an interesting approach for improving the performance of disk-bound x86 operations.

Thx - if I get clear results I'll post them.

regards

thomas