Performance improvements with SSD?
From: 25/03/2009 19:01:38
To: 25/03/2009 14:52:57
Forum: Visual FoxPro
Category: Other
Visual FoxPro: VFP 9 SP2
Thread ID: 01391449
Message ID: 01391524
Hi Al,

Thanks for this very comprehensive response! I will try to do it justice; my answers are after each of your points.

>1. What is CPU utilization on the client workstations? If close to 100% then nothing you do on the server or network side will speed things up any further

Good question; I have not checked this since I moved the indexes local.

>2. In addition to having local indices, would it be possible to have local data copies or subsets, so all processing is local on the client (i.e. take snapshots), and possibly just return updates to the server?

One of the first performance improvements I tried was taking a local copy of both the data and the indexes. The gains were nowhere near as good (still better than baseline, but not as good as what I get now).
I really think this is because my current setup splits the caching and the disk access across two PCs (client and server).
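
For anyone following along, this is roughly what that first attempt looked like (a minimal sketch; the paths and table name are made up):

* Sketch: snapshot a shared table (plus its structural CDX) to the
* local drive, then work against the local copy only.
USE \\server\data\orders.dbf SHARED
COPY TO c:\local\orders WITH CDX   && copies the DBF and rebuilds the structural index locally
USE IN orders
USE c:\local\orders.dbf EXCLUSIVE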

>2a. I assume you have TMPFILES in CONFIG.FPW pointed to a local folder on each client, not a network share

I did have TMPFILES=c:\temp, but then found that because users occasionally connect via a Citrix server, c:\temp resolved to the local C: drive while the application was running on the Citrix server, so the benefit of a local C: was lost entirely.
So, in the end, I removed all TMPFILES= settings from CONFIG.FPW, which means VFP falls back to the TEMP environment variable in Windows. This works fine for both Citrix and normal clients.
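
In case it is useful to anyone, here is the line I removed and a quick way to confirm what VFP actually ends up using:

* Removed from CONFIG.FPW (it broke the Citrix case):
* TMPFILES=c:\temp

* From the Command Window, SYS(2023) reports the temporary-file
* path VFP is actually using; with no TMPFILES setting it follows
* the Windows TEMP environment variable.
? SYS(2023)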

>3. If you have to share data from a server, make sure each client has a gigabit network connection. If you have a small group of users generating traffic to a dedicated server, on an otherwise general-purpose network with many other standard users, you might be a good candidate to install managed network switch(es), and segmenting your network to manage traffic more efficiently and reduce contention

Managed switches are a good idea; I will see how easy that is for my client to implement.

>4. If you find the server is still the bottleneck, the best way to improve disk performance is to eliminate disk accesses as much as possible. You do this by having a lot of RAM in your server so it can implement a large disk cache. If you have roughly 4GB data + index files, you would want at least 8GB RAM in your server, you may want to consider 16GB. This means going with a 64-bit OS. Large data files will easily overwhelm any CPU cache, so you will want the fastest front-side bus (FSB) speed and matching RAM speed that you can reasonably afford, in order to keep the CPU(s) fed as efficiently as possible

16 GB of RAM... hmm, I could set up a 4 GB RAM disk in that :)
OK, thanks for the tips; I will include this in my server recommendation.
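
On the client side there is also VFP's own buffer memory, which can be tuned with SYS(3050). The values below are illustrative only, not something I have benchmarked:

* Give VFP a larger buffer for data caching when RAM is plentiful.
* (Illustrative values; VFP is a 32-bit process, so stay well under 2 GB.)
SYS(3050, 1, 512 * 1024 * 1024)   && foreground buffer (application active)
SYS(3050, 2, 256 * 1024 * 1024)   && background buffer (application inactive)
? SYS(3050, 1)                    && report the current foreground setting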

>5. I haven't used SSDs myself, but I've been following their progress. As you've no doubt found, there are lots of reviews e.g. http://www.tomshardware.com/reviews/Storage,5/Internal-Storage,19/ . It seems they can improve disk performance a lot, especially for random read operations; write operations (to flash-based units) are much slower. There have been some reports of wear-leveling algorithms causing fragmentation over time, and leading to greatly reduced performance. If you're going to have a large disk cache, then SSDs would help mainly in the initial loading of the data from disk to cache, they wouldn't help that much after the cache is loaded, especially if write-behind caching is implemented, or you're not doing a lot of writes to the server. So, for loading a relatively small number of large files (i.e. sequential, rather than random reads) to cache, you may find the performance improvement of SSDs to be not worth their (currently very) high cost.

You say "very high cost", but not really: I can get five 64 GB disks for around $1000.
However, you are right that I don't need SSDs if I have a big enough cache on the server.

>6. You may find that 15 hungry clients with gigabit connections, reading from a high-performance server, may be able to overwhelm a single gigabit NIC in the server. In that case, you may want to install 2 or more gigabit NICs in the server, and aggregate them: http://en.wikipedia.org/wiki/Link_aggregation

I will get the server admins to check CPU and network utilisation now that my local index trick is in place; hopefully that will reveal any remaining bottlenecks.

>I don't know of any way to have 2 different files in the same Windows folder be present on 2 different physical drives. If you have a large disk cache, this should not be important. You can, of course, specify a .CDX file on a different network share, located on a different physical disk, from a .DBF table.

Hmm, you are correct, but that involves changing code in lots of places and makes the application more complex to maintain. I would also have to modify the overnight reindex routines to work with the new layout.
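
For reference, this is the kind of change your suggestion implies (a sketch with made-up paths); every place that opens the table would need to open the index too, and the reindex routines would need the same treatment:

* Build a non-structural CDX on a different share from the DBF
* (index creation needs exclusive use of the table):
USE \\server1\data\orders.dbf EXCLUSIVE
INDEX ON order_id TAG order_id OF \\server2\idx\orders.cdx
USE

* Every subsequent USE must open the index explicitly:
USE \\server1\data\orders.dbf SHARED
SET INDEX TO \\server2\idx\orders.cdx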

Thanks a lot for your answers, Al; I really appreciate the time you spent on them.
You have given me lots of food for thought, I will report back if I find anything interesting.

Cheers
Will
Will Jones