Optimizing deletion in batch
Message
From: 07/01/2014 01:51:01
To: 07/01/2014 01:33:55

General information
Forum: Microsoft SQL Server
Category: Database maintenance

Environment versions
SQL Server: SQL Server 2008
Application: Web

Miscellaneous
Thread ID: 01591061
Message ID: 01591447
Views: 47
This message has been marked as one that helped answer the initial question of the thread.
>To add more information to this: it does not seem to be related to growth of the log file. Tonight, the log file was shrunk and the delete command generated the same behavior. While it was executing, no one was able to insert into that table because of a deadlock.
>
>I really do not know what triggered this situation, which started happening about 10 days ago. We never had it before. I was able to delete a few thousand records every 15 minutes without impacting users on the Web site, who were generating hits and thus creating new records in the same table. The only thing I can see is that the number of records that have to be deleted every 15 minutes is bigger than before. This may create a situation where SQL Server is now forcing a lock on the table during that timeframe.
>
>Having said that, unless I can find the cause of all this, I need to find another technique for deleting those records. The only one I can see presently would be to delete them one by one in a For/Next loop. I still need to do some benchmark tests to see if the performance would be adequate and if this would avoid the deadlock situation. There is also the possibility of recycling existing records in real time, replacing instead of inserting, but I would rather not consider that, as it would create a primary key issue and break the chain of records.

Michel, generally, a "one by one, For/Next loop" is not recommended. (I won't say "never", but as a general rule, it's better to delete rows as a set-based operation.)
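One common middle ground between a row-by-row loop and one huge DELETE is a small, set-based batched delete: each batch is its own short transaction, so locks are released between batches and concurrent inserts can get through. A minimal sketch, assuming a table `dbo.MyTable` with a `CreatedDate` column used to find expired rows (both names are placeholders, not from the original thread):

```sql
-- Delete in batches of 5,000 rows. Each DELETE commits as its own
-- implicit transaction, keeping locks short-lived and making
-- escalation to a full table lock less likely.
DECLARE @BatchSize int = 5000;
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.MyTable
    WHERE CreatedDate < DATEADD(MINUTE, -15, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;

    -- Optional short pause to let concurrent inserts through.
    WAITFOR DELAY '00:00:01';
END
```

`DELETE TOP (n)` is supported on SQL Server 2008; the batch size is a tuning knob, and an index on the filtering column helps each batch find its rows without scanning the table.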

Just to throw out some additional possibilities....

- Can you identify the fragmentation level of the table/indexes? (I can give you some code to check for it, if you need it.) I'm not saying that's necessarily what's going on here, but I see many client databases that suffer serious neglect in the area of fragmentation.
- How often are the indexes rebuilt on the affected tables?
- How often are database statistics updated on the affected tables?
- Are the conditions for the deletes (and the concurrent queries that are locking up) effectively using indexes? (i.e., can you check the execution plans of all relevant queries to see whether they are as efficient as they could be?)
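For the first three points above, a quick way to check fragmentation and refresh statistics is the `sys.dm_db_index_physical_stats` DMV, available on SQL Server 2008. A sketch, again using the placeholder table name `dbo.MyTable`:

```sql
-- Average fragmentation and page count per index on one table.
SELECT
    i.name AS index_name,
    ps.index_type_desc,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id;

-- Refresh statistics on the same table if they look stale.
UPDATE STATISTICS dbo.MyTable;
```

A common rule of thumb is to reorganize indexes above roughly 10% fragmentation and rebuild above roughly 30%, though the right thresholds depend on the workload.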

These might or might not yield any useful information, but they're worth checking out.