I just wanted to pose this question to everybody to make sure our application is archiving as efficiently as possible, given that the process takes quite some time.
We have a master Deal table that holds core transaction information, plus a number of related child tables (accounting entries, etc.) that hold additional detail about each deal, linked through an internal system deal number key. We have an archive function that lets users create an archive database in a separate directory; it moves the old information into that new data set and removes the old data from production.
To accomplish this, we create the new DBC and verify the structures of the archive tables (I'm thinking of changing our code to use SDT for this step). Then we APPEND from each old table into the corresponding archive table, followed by a PACK on the new table to trim the FPT size if necessary. After the APPENDs are completed, we go through the old tables, DELETE the records in question, and PACK each one.
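For what it's worth, here's a rough sketch of that flow for one deal and one table pair. All names here (lcArchDir, ArchDeal, cDealNo, lcDeal) are hypothetical placeholders, and this assumes the archive table structures have already been created/verified in the new DBC as described above:

```foxpro
lcArchDir = "D:\Archive\"                 && hypothetical archive directory
CREATE DATABASE (lcArchDir + "Archive")   && new archive DBC

* Repeat the block below for the Deal table and each child table.
SELECT 0
USE (lcArchDir + "ArchDeal") EXCLUSIVE    && archive copy of the Deal table
APPEND FROM Deal FOR cDealNo = lcDeal     && copy the rows being archived
PACK                                      && rebuilds the FPT if memo bloat exists
USE

* Once all APPENDs succeed, remove the archived rows from production.
USE Deal EXCLUSIVE                        && PACK requires exclusive use
DELETE FOR cDealNo = lcDeal
PACK
USE
```

One thing the exclusive-use requirement implies: the production PACK step forces everyone out of those tables, which may be part of why the whole process feels slow.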
Does this procedure sound reasonable or is there a better / more efficient way to accomplish this archive?
Thanks.
Jon
Jon Rosenbaum
Devcon Drummer