VFP Memory
Message
From: 17/01/2012 17:07:44
To: 17/01/2012 15:33:16
General information
Forum: Visual FoxPro
Category: Coding, syntax and commands
Title: VFP Memory
Environment versions
Visual FoxPro: VFP 9 SP2
Miscellaneous
Thread ID: 01533076
Message ID: 01533137
Views: 87
I have a client running a woodworking operation with a small LAN and SBS 2003 on a low-end Dell server (SC440). Downtime is expensive for them, so they spec'd redundant HDs in RAID 1 for the server.

Recently the RAID controller failed. Luckily the controller was SAS with SATA drives attached, so it was possible to plug one of the drives directly into a SATA port on the motherboard and the system "just worked", although with reduced performance on the single drive.

They decided to migrate to a new controller (replacement Dell SAS units are nearly impossible to find) and new drives for better performance. They expressed interest in SSDs for the server so I did some research.

From what I found, for servers it's not quite as simple as just dropping in enterprise-class SSDs:

1. Drive designs are still evolving rapidly. All makers - even Intel - are running into issues (e.g. http://communities.intel.com/thread/24205 ). IMO it's still not wise to host critical functions such as the system partition of a DC or SBS on a single SSD, regardless of make/model.

2. For small servers, redundancy usually requires hardware RAID. RAID is also handy if you want to build a largish volume out of several SSDs, whose capacities are typically pretty limited unless you want to spend big bucks on 512 GB - 1 TB units.

3. SSD controllers are getting better at wear leveling, to the point that even some of the better MLC drives are getting decent life. However, everything I've read says SSDs require periodic TRIM operations to maintain their performance and lifespan. Modern OSes like Win7/Server 2008 R2 support TRIM, but older ones like XP/Server 2003 do not. But here's the real kicker - there is no support for TRIM for drives attached to hardware RAID controllers. If you attach an SSD to a hardware RAID controller, you're dooming it to a life without TRIM. (There's a quick way to check what the OS side is doing - see the note after this list.)

4. My understanding is that some SSD-based storage appliances and SANs handle TRIM internally, so they get around that issue. But the price of those units is out of reach for many companies.

5. Some single modern SSDs (let alone RAID arrays of them) can saturate a SATA 3Gbps connection, so SATA 6Gbps (or better) is really a requirement these days to exploit the full performance of SSDs (some rough numbers are below, after this list). Unless your hardware is quite new, it's unlikely to support SATA 6Gbps, so a controller upgrade may be required even if you decide to use a single drive. I haven't seen any enterprise-grade add-on SATA 6Gbps controllers that aren't RAID controllers; and even if they are available, do their chipsets support TRIM passthrough?
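
Re: point 5, some rough back-of-envelope numbers (I'm going from the published interface specs here, so treat them as approximate): SATA uses 8b/10b encoding, so you divide the line rate by 10 to get usable bytes per second.

SATA 3Gbps:  3,000 Mbit/s / 10  =  ~300 MB/s usable
SATA 6Gbps:  6,000 Mbit/s / 10  =  ~600 MB/s usable

Current SATA SSDs are quoting sequential reads in the 400 - 550 MB/s range, so a single drive can already exceed what a 3Gbps link will deliver.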
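
And re: point 3, on a TRIM-aware OS (Win7/Server 2008 R2 or later) you can check from an elevated command prompt whether the OS is set to issue TRIM commands:

fsutil behavior query DisableDeleteNotify

A result of 0 means the OS will send TRIM, 1 means it won't. Keep in mind this only tells you what the OS is willing to do - it says nothing about whether a RAID controller in the path actually passes TRIM through to the drive, which is exactly the problem above. (And of course it doesn't help on XP/Server 2003, which have no TRIM support at all.)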

It would have been nice to tell my client to put two enterprise SSDs on a SATA 6Gbps RAID controller in RAID 1 - and be blown away by the boot speed <g>. However, with all the above in mind, I couldn't in good conscience recommend that. We ended up using a couple of WD RE4 2TB HDDs on an LSI MegaRAID 9240-4i. The server now boots SBS 2003 R2 in less than two minutes (it was taking 7+ minutes before), so we're pretty happy so far. SBS 2003 running Exchange and lots of other stuff is disk-chatty, and the server is noticeably more responsive than it was before the original controller failure.

>re: SSDs -- there are two levels of drives, desktop and server. The difference is mainly in how long they will last. You wouldn't want to put a desktop drive into a busy server, since its expected life would likely be around a year. With the new, good server SSDs, I think you can count on 5 years, which is longer than the average server HD life (3.5 years) that Google and independent researchers have established.
>
>Hank
>
>>>>Hi All,
>>>>
>>>>Several days or even weeks ago I read a post here which I think was from Hank Fay. It mentioned something about the maximum amount of memory that all running instances of VFP could collectively use being the 2 GB limit. I understand that VFP cannot reach beyond 2 GB. But why would this affect separately running instances of VFP on a single machine? If they shared the same DLLs then I could understand it. But what if they all had their own copies of the VFP DLLs? Could someone expand on this?
>>>
>>>Pretty sure you misinterpreted his post.
>>>It was about the possibility of patching VFP with the LARGEMEMORY [too lazy to Google] or something flag,
>>>which IN OTHER programs allows them to address 3 instead of 2 GB of RAM inside each Win32 process.
>>>So it is not collective, but the maximum "program" footprint in each instance; some memory is needed from the top as well,
>>>hence the 1 GB gain even with LARGEMEMORY. There are some programs that look for the correct location in an exe and patch it,
>>>but if Hank says it doesn't work, I would first look at other alternatives. SSDs can speed up previously disk-bound programs like crazy -
>>>this is the direction I went 18 months ago. Depending on the task I got a speedup of 20 - 60%, elevating single-core utilization from
>>>30~50% to 90~95%. If you are running VMs on big-RAM HW, check out the different modes of using that RAM:
>>>depending JUST on the configuration the speed can differ by a factor of 3 on the same HW.
>>>
>>>regards
>>>
>>>thomas
>>
>>
>>Hi Thomas, thanks for the reply. I am not actually solving a problem - I was just curious about the post and had forgotten at the time to ask for more information. Thanks for your input. SSD = Solid State Disk, yes?
Regards. Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up