Proper way to recycle record
Message
From: 20/01/2015 02:11:40
To: 19/01/2015 21:27:56
General information
Forum: ASP.NET
Category: Other
Environment versions
Environment: VB 9.0
OS: Windows 8.1
Network: Windows 2008 Server
Database: MS SQL Server
Application: Web
Miscellaneous
Thread ID: 01613891
Message ID: 01613948
Views: 39
>>Before moving to a recycling process, did you Google [sql server delete no log]?
>>
>>I haven't read any of the hits but what you're doing isn't unique so others may have found useful workarounds.
>
>Thanks, this search proved to be useful. I found some interesting topics.
>
>This page discusses some of that:
>
>http://www.sqlservercentral.com/Forums/Topic648742-338-1.aspx
>
>But there is one poster there who argues for the importance of a rollback. Let's say I do not want a rollback, because what I delete has to go no matter what.
>
>Basically, what I was hoping for was a way to delete without the transaction log being involved at all. At first, I tried to do it in one command, similar to what is being discussed on this page:
>
>https://social.msdn.microsoft.com/Forums/sqlserver/en-US/52e6be98-6165-4b1e-a926-5d5609ab8486/delete-large-number-of-rows-without-growing-the-transaction-log?forum=transactsql
>
>But that made the transaction log crawl. So I switched to selecting all the primary keys I need to delete and deleting the records one at a time. While this is safer, it is not the fastest way to do it.
>
>They recommend a batch of 2000 records. But the batch size that remains safe varies with the environment. If I have to delete two million records every 30 minutes, deleting them 2000 at a time will still take a lot of time. I remember not so long ago we were able to achieve that in a fraction of a second. lol
>
>...oh yes, there is TRUNCATE. But that wipes out the entire table. I wish there were something in the middle.

Based on the first thread you list, you still get logging unless you use workarounds that endanger backups or database integrity in the event of a failure. Batching deletes 500 or 2000 at a time will reduce the number of logged transactions by that factor (which could be a big win), but at the cost of having to execute the statement 4,000 or 1,000 times to delete 2 million rows. Have you tested that scenario? Maybe the time is quick enough.
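
For reference, the batched pattern those threads describe usually looks something like the T-SQL sketch below. The table name, WHERE predicate, and batch size are placeholders to adapt to your schema. One caveat: the log stays small only if the space from each committed batch can be reused, e.g. under the SIMPLE recovery model or with frequent log backups under FULL:

DECLARE @BatchSize int = 2000;  -- tune for your environment
DECLARE @Deleted   int = 1;

WHILE @Deleted > 0
BEGIN
    -- Each batch commits on its own, so the log only has to hold
    -- one batch's worth of work at a time.
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable  -- placeholder table name
    WHERE ExpiresOn < DATEADD(MINUTE, -30, GETUTCDATE());  -- placeholder predicate

    SET @Deleted = @@ROWCOUNT;
END;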

Actually, looking at your post again: if you need to delete 2 million rows every 30 minutes (i.e. 1,800 seconds), that's over 1,000 rows/sec (2,000,000 / 1,800 ≈ 1,111). At 2,000 rows per batch, that's about 1,000 DELETE statements per window, or one roughly every 1.8 seconds. It also implies that somewhere else you're adding at least that many rows per second.

To me that sounds like a lot of continuous IOPS. If that's the case, you're probably running on a strong server with a fast disk subsystem, and some effort may already have been expended in setting up the server and databases for high performance. You might want to ask whoever did that for their recommendations.

In this table, do you:

- Need to permanently retain certain rows, but periodically delete other rows that become invalidated, or

- Want to maintain a rolling log table of, say, the 10 million most recent rows (like Windows event logs)? If so, this may actually be a good candidate for row recycling, as long as the overhead of doing so isn't too high in your environment; see the sketch after this list.
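
If the second case applies, here is a minimal sketch of what row recycling could look like. It assumes a log table pre-populated once with a fixed number of slots and a one-row table that tracks the next slot to overwrite; all object names and the slot-counter approach are my own assumptions, not anything from your schema:

DECLARE @SlotCount  int = 10000000;  -- fixed capacity, chosen up front
DECLARE @Slot       int;
DECLARE @NewPayload nvarchar(4000) = N'example payload';  -- placeholder value

-- Atomically advance the write position and capture the slot to reuse.
UPDATE dbo.LogCursor  -- hypothetical one-row position table
SET @Slot = NextSlot = (NextSlot % @SlotCount) + 1;

-- Overwrite the recycled slot in place: one small logged UPDATE
-- instead of a DELETE followed by an INSERT.
UPDATE dbo.EventLog   -- hypothetical log table
SET EventTime = GETUTCDATE(),
    Payload   = @NewPayload
WHERE SlotId  = @Slot;

The appeal of updating in place is that the table never grows and the mass delete disappears entirely; the trade-off is the bookkeeping and committing to a fixed row layout.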
Regards, Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up