Performance improvements with SSD?
Message
From: 25/03/2009 14:52:57
To: 25/03/2009 13:45:23
General information
Forum: Visual FoxPro
Category: Other
Environment versions
Visual FoxPro: VFP 9 SP2
Miscellaneous
Thread ID: 01391449
Message ID: 01391472
Views: 98
>Hi All,
>
>General observations & questions really.
>
>Has anyone experimented with SSDs and VFP tables?
>
>I have a biggish system which is starting to strain a bit.
>Some general info:
>15 simultaneous users
>4 GB total across all tables & indexes.
>
>Just as an FYI
>All 15 users need to run a big job on the same day, which was really hitting the network and server hard - this job was taking up to 3 hours to complete (for each user).
>As part of my analysis, I found the indexes were being read 30 or 40 times for each .dbf read (when this was multiplied up to 15 users and thousands of .dbf reads, it became crazy).
>
>Therefore, I had the idea of taking a local filtered copy of certain indexes before starting any of the big processes (index on xx to yy for zz).
>This reduced the 3-hour jobs down to 10 minutes for all 15 users - bliss!
>Network and server contention was dramatically reduced.
>
>Next steps are to move the DB to a dedicated file server (at the moment it's on a shared file server with 200+ normal file-server users).
>
>I was also thinking that it may be a good idea to put some SSDs in the new server. I understand that drive seek latency is a big slowdown in physical drives. Could this make a difference?
>
>Another related question: does anyone know of a Windows / RAID solution to host a .cdx file on a different drive than the .dbf while it is still in the same folder? I guess this could help too.

You've already made a more-than-order-of-magnitude improvement - from 180 minutes down to 10. What I'd look at would be:

1. What is CPU utilization on the client workstations? If it's close to 100%, then nothing you do on the server or network side will speed things up any further.

2. In addition to having local indexes, would it be possible to have local data copies or subsets, so all processing is local to the client (i.e. take snapshots), and possibly just return updates to the server?
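To illustrate the snapshot idea, here's a minimal sketch; the table, field and variable names (orders, branch, m.lcBranch) are placeholders, not from your system:

```
* Pull only the rows this client needs into a table in the local
* temp directory (SYS(2023) returns the temporary file path)
SELECT * FROM orders WHERE branch = m.lcBranch ;
	INTO TABLE (SYS(2023) + "\orders_local")

* ... run the heavy processing against orders_local, entirely
* on the client's own disk ...

* then write only the changed rows back to the shared table
```

The point is that during the long-running job, every read and seek hits the local drive instead of the wire and the shared server.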

2a. I assume you have TMPFILES in CONFIG.FPW pointed to a local folder on each client, not a network share.
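Something like this in each client's CONFIG.FPW (C:\Temp is just an example path; it must exist on that machine):

```
TMPFILES=C:\Temp
SORTWORK=C:\Temp
EDITWORK=C:\Temp
PROGWORK=C:\Temp
```

If any of these point at a network share, VFP's temporary/sort files generate a lot of avoidable network traffic.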

3. If you have to share data from a server, make sure each client has a gigabit network connection. If you have a small group of users generating traffic to a dedicated server, on an otherwise general-purpose network with many other standard users, you might be a good candidate for managed network switch(es), segmenting your network to manage traffic more efficiently and reduce contention.

4. If you find the server is still the bottleneck, the best way to improve disk performance is to eliminate disk accesses as much as possible. You do this by having a lot of RAM in your server so it can implement a large disk cache. If you have roughly 4GB of data + index files, you want at least 8GB of RAM in your server, and may want to consider 16GB; this means going with a 64-bit OS. Large data files will easily overwhelm any CPU cache, so you will want the fastest front-side bus (FSB) speed, and matching RAM speed, that you can reasonably afford, in order to keep the CPU(s) fed as efficiently as possible.

5. I haven't used SSDs myself, but I've been following their progress. As you've no doubt found, there are lots of reviews, e.g. http://www.tomshardware.com/reviews/Storage,5/Internal-Storage,19/ . It seems they can improve disk performance a lot, especially for random read operations; write operations (to flash-based units) are much slower. There have been some reports of wear-leveling algorithms causing fragmentation over time and leading to greatly reduced performance. If you're going to have a large disk cache, then SSDs would help mainly in the initial loading of the data from disk to cache; they wouldn't help that much after the cache is loaded, especially if write-behind caching is implemented or you're not doing a lot of writes to the server. So, for loading a relatively small number of large files (i.e. sequential, rather than random, reads) to cache, you may find the performance improvement of SSDs isn't worth their (currently very) high cost.

6. You may find that 15 hungry clients with gigabit connections, reading from a high-performance server, may be able to overwhelm a single gigabit NIC in the server. In that case, you may want to install 2 or more gigabit NICs in the server, and aggregate them: http://en.wikipedia.org/wiki/Link_aggregation

I don't know of any way to have 2 different files in the same Windows folder reside on 2 different physical drives. If you have a large disk cache, this should not be important. You can, of course, specify a .CDX file on a different network share (located on a different physical disk) from the .DBF table.
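For example, a non-structural .CDX can be created wherever you like with the OF clause; the paths and names below are placeholders:

```
* Open the shared table, then build the tag in a compound index
* file on a different share / physical disk
USE \\server1\data\customer.dbf SHARED
INDEX ON cust_id TAG cust_id OF \\server2\idx\customer.cdx

* Other sessions must open the index explicitly:
SET INDEX TO \\server2\idx\customer.cdx
```

One caveat: unlike the structural .CDX, a non-structural index is only maintained while it's open, so every session that updates the table must open it, or the tag goes stale.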
Regards, Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up