Runaway UPDATE on Large Table
Message
From:
17/10/2002 16:30:49

General information
Forum: Microsoft SQL Server
Category: Other / Miscellaneous
Thread ID: 00711832
Message ID: 00712576
Views: 9
Thanks Michael. I had avoided indexing to keep the INSERTs fast (about 300,000 new rows per day), but it looks like I may have to bite that bullet.
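Concretely, I'm looking at something along these lines (just a sketch; the index name is my own):

-- Composite index on the join key and the filter column, as suggested
CREATE INDEX IX_PAC_CID_Sys ON PAC (CID, Sys)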
We are going to SCSI RAID 5 for the disk subsystem.
But I still have a nagging question: what about RAM? I have 4GB of RAM on the existing server, but since I'm running SQL Server Standard, only 2GB of it is addressable by SQL. Does that make sense when I'm anticipating tables as large as 60GB? I read somewhere that you should try to have enough RAM to hold your largest table. Have you had any experience with RAM requirements for tables this big?
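If I do stay on Standard, I assume the best I can do is cap the buffer pool at what the edition will actually use; a sketch with SQL Server 2000's sp_configure:

-- 'max server memory (MB)' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Standard Edition won't use more than ~2GB of buffer pool anyway
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE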

Carl

>Build an index on the 20-million-row table. Is it PAC? Make it a composite index on PAC.CID and PAC.Sys.
>
>You're getting killed by your disk subsystem. A queue length of 140!!! The rule of thumb is that it shouldn't go above 2 * (the number of disks); with a four-disk RAID 10, for instance, the ceiling would be about 8.
>
>Having the transaction log on a separate physical disk would also help a lot.
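>In SQL 2000 that means a detach/attach to relocate the .ldf; a sketch, with a made-up database name and paths:
>
>EXEC sp_detach_db 'MyDB'
>-- copy MyDB_log.ldf to its own physical disk, then re-attach:
>EXEC sp_attach_db 'MyDB', 'D:\Data\MyDB.mdf', 'E:\Logs\MyDB_log.ldf'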
>
>Be patient. This kind of thing doesn't happen in microseconds.
>
>-Mike
>
>>From Query Analyzer I attempted to run an UPDATE against a 13GB unindexed table (about 20 million rows) in a 30GB database containing only that table and a 400-row joining table. After 36 hours I cancelled the query. Here's the query:
>>
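>>-- Backfill PAC.Sys from the matching WORK row wherever it is still NULL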
>>UPDATE P
>>SET P.Sys = W.Sys
>>FROM PAC P INNER JOIN WORK W ON (P.CID = W.CID)
>>WHERE P.Sys IS NULL
>>
>>Specifics:
>>WIN2K Standard Server
>>SQL 2K Standard
>>4GB memory
>>Promise ATA RAID 10
>>Dual 1.8GHz Pentium processors
>>
>>
>>PerfMon samples:
>>Processor: between 1 and 3% utilized
>>Avg. Disk sec/transfer: between 0.008 and 0.800
>>Avg. Disk queue length: between 1 and 140
>>Avg. Disk sec/write: between 0.005 and 0.800
>>Avg. Disk sec/read: between 0.007 and 1.500
>>
>>From these numbers I infer that my disk subsystem is woefully inadequate, but I'm also wondering whether memory could be a problem, since memory usage sits at about 1.9GB during the attempted UPDATE.
>>I'm pretty green when it comes to tables of this size, but a very similar UPDATE ran successfully in about 40 minutes when the table (despite the lack of indexing) was about 1GB smaller.
>>
>>Any ideas about how I can resolve this without spending a ton of money on a big box, SQL Enterprise, and 64GB of memory? Will a SCSI subsystem be enough? If so, should I go with Fibre Channel, or would Ultra160 SCSI do?