Mike Yearwood
Toronto, Ontario, Canada
General information
Category:
Coding, syntax & commands
Hi Wayne
I agree with most of what you've written.
>I guess I don't get it... I still see no need for buffering - the concept of buffering is that you get a local copy of the data (that already exists), you work on it, update it, etc. Then you commit the changes to the underlying table. It is not for large insert operations.
>
>If you create memory variables and INSERT INTO ... FROM MEMVAR, you are essentially waiting until you have it the way you want before you create your archive set. Buffering is just overkill.
>
>The process you need to employ sounds like this...
>
>Loop through existing master table and identify records to archive
What he could do here is begin a transaction.
>For each record, summarize parent and child records into memory (either use select and/or m. assigns)
>Insert parent/child summary into archive (INSERT INTO __archive FROM MEMVAR)
>Mark the master table record as archived
Then end the transaction here. That should provide the safety he thinks buffering would provide.
>
>If the process fails anywhere - you just restart it and it picks up where it left off.
That would be even simpler, because if the transaction fails for one record, nothing is changed for that record, so it will simply be picked up again on the next restart. Right?
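As a rough sketch of what I mean, wrapping each record's archive steps in a transaction might look like this. All table, field, and variable names here (MasterTable, ChildTable, __archive, Archived, etc.) are hypothetical, just to show where BEGIN/END TRANSACTION and ROLLBACK would go:

```foxpro
* Loop through the master table and archive each un-archived record.
* Everything between BEGIN TRANSACTION and END TRANSACTION either
* commits as a unit or is rolled back, so a restart is safe.
SELECT MasterTable
SCAN FOR !Archived
    BEGIN TRANSACTION
    TRY
        * Summarize parent/child records into memory variables.
        SELECT SUM(Amount) FROM ChildTable ;
            WHERE ParentID = MasterTable.ID ;
            INTO ARRAY laSum
        SELECT MasterTable   && the SQL SELECT changed the work area
        m.ID    = MasterTable.ID
        m.Total = laSum[1]
        * Insert the summary row, then flag the master record.
        INSERT INTO __archive FROM MEMVAR
        REPLACE MasterTable.Archived WITH .T.
        END TRANSACTION
    CATCH
        * Nothing was committed for this record, and Archived is
        * still .F., so the next run picks it up automatically.
        ROLLBACK
    ENDTRY
ENDSCAN
```

Note the `SELECT MasterTable` after the SQL query: the query leaves its result as the current work area, so you have to reselect before the REPLACE.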