Proper way to recycle records
Message
From: 20/01/2015 14:49:48
To: 20/01/2015 08:19:35
General information
Forum: ASP.NET
Category: Other

Environment versions
Environment: VB 9.0
OS: Windows 8.1
Network: Windows 2008 Server
Database: MS SQL Server
Application: Web

Miscellaneous
Thread ID: 01613891
Message ID: 01613976
Views: 33
>>Based on the first thread you list, you still get logging unless you use workarounds that endanger backups or database integrity in the event of a failure. Batching deletes 500 or 2000 at a time will reduce logged transactions by that factor (which could be a big win), but at the cost of having to execute the statement 4000 or 1000 times, respectively, to delete 2 million rows. Have you tested that scenario? Maybe the time is quick enough.
>>
>>Actually, looking at your post again: if you need to delete 2 million rows every 30 minutes (i.e. 1800 seconds), that's over 1000 rows/sec. That implies that somewhere else you're also adding at least that many rows per second.
>
>Yes, that is a scenario I would like to test. For now, we are at 8 hits per second. I delete them every 15 minutes, removing those older than 30 days. This is the incoming table, so it is the most active in the application and is constantly in use.
>
>>To me that sounds like a lot of continuous IOPS. If that's the case, you're probably running on a strong server with a fast disk subsystem, and some effort may have already been expended in setting up the server and databases for high performance. You might want to ask whoever did that for their recommendations.
>
>Yes, it is a very powerful environment. I just want to be sure our application's performance is maximized.
>
>>- Need to permanently retain certain rows, but periodically delete other rows that become invalidated, or
>
>It is the Hit table. Every 15 minutes, I remove everything older than 30 days. It receives incoming hits at the same time I delete. I moved the process to every 15 minutes because, when it executed hourly, it took too long. I will try removing 2000 records at a time, however.
>
>>- Want to maintain a log file of, say, the 10 million most recent rows (like Windows event logs)? If so, then this may actually be a good candidate for row recycling as long as the overhead to do so isn't too much in your environment
>
>The problem with recycling is that the locking mechanism needed to ensure each of those 8 hits per second selects its own record to reuse will end up being costly. This happens on every hit. The average response time is 0.6 seconds, and I do not want to jeopardize that. The recycling mechanism, with respect to the locking infrastructure, will have an impact on that.
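
As an aside on the batched-delete test mentioned above, a minimal T-SQL sketch of that pattern (the Hits table name and HitTime column are assumptions) looks like:

-- Delete expired rows 2000 at a time; each iteration commits a
-- small transaction, so log space can be reused between batches
-- (assuming simple recovery or regular log backups) instead of
-- holding 2 million deletes at once.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (2000) FROM Hits
    WHERE HitTime < DATEADD(DAY, -30, GETDATE());
    SET @rows = @@ROWCOUNT;
END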

Given that locking concern, there's another general approach possible. If you need to retain 30 days' worth of hits, you could use 32 separate hits tables:

HitsDay01
HitsDay02
...
HitsDay32

Initially you store "HitsDay01" as the name of the table your hits-recording function writes to. When the day changes, this value changes to "HitsDay02", and so on.
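
One way to derive the current table name, as a sketch (the epoch date '20150101' and the modulo arithmetic are my assumptions; any stable 1-to-32 rotation works):

-- Map today's date onto a repeating 1..32 cycle; the epoch is arbitrary.
DECLARE @slot INT =
    (DATEDIFF(DAY, '20150101', GETDATE()) % 32) + 1;
-- Build the table name, zero-padded to two digits.
DECLARE @table SYSNAME =
    'HitsDay' + RIGHT('0' + CAST(@slot AS VARCHAR(2)), 2);
-- The recording function then targets @table via dynamic SQL
-- (e.g. sp_executesql), or looks the name up from a config row.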

When you start on HitsDay31, you now have a full 30 days' worth of data. But you don't delete anything yet.

At the start of HitsDay32, you have a full 31 days' worth of data. Now you truncate HitsDay01 and you have 30 days' worth (02 to 31 inclusive), and you're adding new rows to 32.

The next day you truncate 02, so you have 30 days' worth of data (03 to 32 inclusive), and you're writing to 01.
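
The daily housekeeping then becomes a single TRUNCATE of the slot that has just aged out. A sketch, using the same assumed epoch and modulo arithmetic as above (a SQL Agent job would be a natural place for it):

-- Truncate the table that now holds 31-day-old data. TRUNCATE
-- deallocates whole pages instead of logging every row, which is
-- what avoids the heavy logging of large DELETEs.
DECLARE @old INT =
    (DATEDIFF(DAY, '20150101', GETDATE()) + 1) % 32 + 1;
DECLARE @sql NVARCHAR(200) =
    N'TRUNCATE TABLE HitsDay'
    + RIGHT('0' + CAST(@old AS VARCHAR(2)), 2);
EXEC sp_executesql @sql;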

Downsides:

1. Your hit tracking function has to know which table to write to

2. Operations looking at log entries will need to UNION all the tables. However, you could create a view that's a UNION ALL of all 32; it may not be that expensive. I have no idea whether indexing the individual tables would help an aggregate ORDER BY or WHERE clause in queries of that view.
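
For item 2, the view might look like this sketch (the view name AllHits is an assumption, and the middle members are elided):

-- One logical log spanning all 32 daily tables.
CREATE VIEW AllHits AS
    SELECT * FROM HitsDay01
    UNION ALL
    SELECT * FROM HitsDay02
    -- ... repeat for HitsDay03 through HitsDay31 ...
    UNION ALL
    SELECT * FROM HitsDay32;

One possibility worth checking: if each member table carried a CHECK constraint on its date range, SQL Server's partitioned-view optimization might let it skip irrelevant members for date-filtered queries, though those constraints would have to be updated as the slots rotate.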
Regards, Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up