SQL Server - High Performance Disk Subsystem
Message
From
23/03/2008 11:19:06

To
22/03/2008 20:03:58
General information
Forum:
Visual FoxPro
Category:
Client/server
Environment versions
Visual FoxPro:
VFP 9 SP1
OS:
Windows Server 2003
Miscellaneous
Thread ID:
01304529
Message ID:
01304711
Views:
22
Hi Al,

Just a few more thoughts, since we are in agreement about Pareto:
>>See above - get an idea of the MB moved.
>
>The VAR claims 20-30Mbit/sec traffic per workstation for SBO. I'm not completely sure if that's average or burst; I suspect the latter. The VAR's primary parameter for gauging SBO backend performance is the PerfMon counter Physical Disk:Average Disk Queue Length.
>
>According to an MS Technet Whitepaper http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx#EFRAE , ADQL should not exceed 2. We tracked this on the SBO backend while running various queries on client workstations. Most of the time the value is 0, as you might expect, while everything is idle. Small queries bump this value up to 30-50 for half a second or so. A large query could push the value over 350 for as long as several seconds. The VAR claims that as long as peaks don't exceed "40 or so" for any length of time, then performance is OK for SBO.
>
>The VAR claims that, on a number of the problem installs they fixed, the fix was increasing SBO backend performance to reduce ADQL. Again, I have to respect that.
>
>OTOH, the VAR isn't really too sure exactly how their recommended disk configuration helps SBO in particular. What they do know:
>
>- In some installs they are given complete control of the backend, they specify the backend hardware, set up SQL Server and SBO on it and deliver it as a package to the client. In these cases, they have always specified lots of disks as outlined below, and these installs "don't have problems". BTW they are an IBM hardware VAR as well so in those cases they spec IBM hardware.
>
>- In fixing "problem" installs, to reduce ADQL they recommend disk configurations as below, and the problems go away

The above argues for at least partial "server read" optimization of the app. I'm just pointing out the things that are "not clicking" in my mental picture.
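The ADQL figures quoted above can be sanity-checked with a quick sketch. This is a hypothetical helper, not part of SBO or PerfMon; the thresholds are the ones from this thread (the Technet guideline of ~2 sustained, and the VAR's looser "peaks under 40 or so" rule of thumb):

```python
# Hypothetical helper: classify Physical Disk "Avg. Disk Queue Length"
# samples against the two thresholds discussed in this thread.
# These limits are the quoted guidelines, nothing SBO-specific.

def classify_adql(sample: float,
                  technet_limit: float = 2.0,   # Technet whitepaper guideline
                  var_limit: float = 40.0) -> str:
    """Return a rough verdict for one PerfMon ADQL sample."""
    if sample <= technet_limit:
        return "ok"            # within Microsoft's sustained-load guideline
    if sample <= var_limit:
        return "burst"         # short spikes the VAR considers acceptable
    return "overloaded"        # e.g. the peaks over 350 seen on large queries

# Values like those observed on the SBO backend: mostly idle, small-query
# bumps of 30-50, and a large-query spike over 350.
samples = [0, 0, 35, 0, 350, 1]
print([classify_adql(s) for s in samples])
```

Sustained values matter more than single samples, so in practice you would average PerfMon readings over a window before classifying them.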

>Being a big believer in Pareto's Law, I have to ask myself what do we really need to fix the problems? That's why I'm trying to get some feedback here < g >

Having external consultants tweak the config is expensive as well - just throwing HW at the problem is actually a pretty good strategy <bg>.

>>>The server is currently a Dell PowerEdge 1800 w/two dual-core Xeons, 4GB RAM and 5x 15K SCSI hard drives on a PERC4/SC controller in a RAID5 array. There is only one logical volume, C:. This server is dedicated to SQL Server, running 32-bit W2K3 R2 and 32-bit SQL Server 2005. At time of purchase it was thought by everyone that this server would be more than adequate to support the user load.
>>
>>4 Gig on 4 cores seems to be the most pressing problem. As second step I'd NEVER create a "one big partition" layout.
>
>Re: the partition - we were specifically told by the VAR to set up the disks that way :-(

At least they are out of business now!

>>>- 4 disks in RAID10 for Windows and the swap file ( C: )
>>This causes one highly raised eyebrow - what is swapped here to need that setup ? Especially Raid10 for a swap file ?
>>Better image your boot partition and Raid0 the swap file, if necessary at all
>
>The swap file will get hammered in low memory conditions, so yes, a RAID0 at least will probably help. If the OS (without swap file) is put on yet another volume, for redundancy that would need to be RAID1 anyways. Add 2 more drives for RAID0 on the swap file, and you're up to 4 drives. That may be why RAID10 is recommended here.

Still doesn't fly with me, as you can configure the installation to use other disks for swap. These other disks will be used if the room on the first disk is not enough, or if the first swap disk has problems. I'm reasoning by analogy from client OS behaviour, but I doubt the server behaves differently.

>>>- 2 disks in RAID1 for SQL Server logs ( D: )
>>As Raid1 does not speed up writing, this only gets you security. Necessary ? Perhaps use 2 disks from Swap file to implement Raid10.
>
>Apparently SQL Server log writing is sequential, in their whitepaper Dell recommends that RAID1 is enough. Have you heard otherwise?

Check the number of bytes written. RAID1 will give you NO speedup on writes, and sometimes a speedup on reads with "clever" controllers. The amount of log data is smaller - it might be enough. Measure for bottlenecks later. My quick-draw answer is 3:5, and 2-disk RAID1 vs. 4-disk RAID10 gives a 1:2 "oof factor".
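That 1:2 factor is just spindle arithmetic. A minimal sketch, assuming idealized sequential writes and ignoring controller cache and stripe-size effects:

```python
# Back-of-envelope write-throughput comparison. Assumption: idealized
# sequential writes, no controller cache. RAID1 mirrors every write to
# both disks, so 2 disks deliver ~1 disk of write throughput; RAID10
# stripes across mirrored pairs, so 4 disks deliver ~2 disks' worth.

def raid_write_spindles(level: str, disks: int) -> float:
    """Effective number of spindles absorbing writes, idealized."""
    if level == "raid1":
        return disks / 2      # every write also hits the mirror
    if level == "raid10":
        return disks / 2      # stripe of mirrored pairs, same penalty
    if level == "raid0":
        return float(disks)   # pure striping, no redundancy
    raise ValueError(f"unhandled RAID level: {level}")

r1 = raid_write_spindles("raid1", 2)
r10 = raid_write_spindles("raid10", 4)
print(f"2-disk RAID1 vs. 4-disk RAID10 write factor: {r1}:{r10}")
```

So the RAID10 option roughly doubles log-write headroom, at the cost of two more disks - whether that's worth it depends on the measured log volume.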

>That was the first thought I had - the idea that the fastest disk I/O is no disk I/O, boost RAM for Windows disk cache and SQL Server working space. Any upgrade could conceivably be in 2 stages:
>
>1. Boost RAM and switch to 64-bit software. Maybe rearrange existing disks RAID at the same time
>2. If that's not good enough, then go with more disks

>>That should be more useful on write- or transaction-heavy usage. Is SBO that kind of app ?
>I don't think it's write-heavy. As for transactions, I don't know - do you know how to monitor that?

Ask the "smart" VAR, or check SQL Server Profiler traces.

>The Xeons seem to be Prescott/NetBurst: "PROCESSOR..., 80546K, 3.0G, 2M, XEON IRWINDALE..., 800, R0". So, lower performance than Core2, but I don't recall ever seeing the server even close to being CPU-bound. Especially since SBO is not true client-server, so a lot of the query grunt work is being done on the workstations. Have you seen any numbers on how these CPUs perform 64-bit vs. 32-bit in W2K3?

If it is REALLY Netburst, shoot the person who recommended such a server. Yes, the FSB will eat into scalability much earlier, as the CPU caches are smaller (which might become even more pressing in 64-bit). But ALL Opteron-based solutions give much better scalability than Netburst servers. Check older Tweakers or AnandTech reviews for comparisons. IF this is just a bogus argument to get a Core-based server, smile and agree.

>>You bet. How about installing a REALLY beefy server and run SBO on the server, controlled by MSTSC ? Factor out most of the physical network from the equation ?

>An interesting idea, but probably not for the reason you'd think < g >. The VAR guru ran EtheReal on the network for a while and found it very clean. All we did was to disable AppleTalk on some HP network printers that broadcast FF:FF packets. All nice clean wiring going to a Dell PowerConnect 2724 GB switch. 100Mbit to all workstations, which is more than adequate, and GBit to the various servers, again plenty. Everyone agrees the physical network is no problem at all.

Did he try for perfect storms, with the SBO app running on different workstations at the same time? 30 Mbit/s per workstation might create a backlog if a sizable amount of data is "planned" to travel down the wire. I don't know whether file cache thrashing might become part of the picture. Try a GBit net on two workstations.
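The perfect-storm arithmetic is quick to sketch. Using only the figures from this thread (the VAR's claimed 30 Mbit/s per-workstation bursts and the GBit link to the server):

```python
# Rough "perfect storm" arithmetic with the figures from this thread:
# up to 30 Mbit/s traffic per workstation (the VAR's claim, likely burst)
# and a 1 GBit uplink to the server. How many workstations bursting at
# the same moment would fill the server's link?

PER_WS_BURST_MBIT = 30     # VAR's claimed per-workstation traffic
SERVER_UPLINK_MBIT = 1000  # GBit link from the PowerConnect switch

ws_to_saturate = SERVER_UPLINK_MBIT // PER_WS_BURST_MBIT
print(f"~{ws_to_saturate} workstations bursting together fill the GBit uplink")
```

So a single workstation is nowhere near the limit, but if the app really does plan to move that much data per client, a few dozen simultaneous bursts could queue up on the server side even with a "clean" physical network.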

regards

thomas