I need to run updates on some very large tables, 180M to 200M records. The first time the process runs, every row is updated. Subsequent runs touch about 20% of the set, but the set grows by about 10% per run.
My problem is with TempDB: it's growing larger than the disk space I can provide.
I've tried to use SELECT INTO as much as I can to minimize in-place updates.
The database is entirely batch loaded; there are no user transactions.
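For reference, the SELECT INTO pattern I'm using looks roughly like this (table and column names are simplified placeholders):

```sql
-- Rebuild the table with SELECT INTO instead of a huge in-place UPDATE.
-- SELECT INTO can be minimally logged under SIMPLE or BULK_LOGGED recovery,
-- which avoids much of the log pressure of updating every row.
SELECT  s.KeyCol,
        s.OtherCol,
        ISNULL(n.NewVal, s.Val) AS Val   -- apply the new value while copying
INTO    dbo.BigTable_New
FROM    dbo.BigTable AS s
LEFT JOIN dbo.NewValues AS n
        ON n.KeyCol = s.KeyCol;

-- Then swap the tables:
-- DROP TABLE dbo.BigTable;
-- EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
```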
Any ideas?
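One approach I've been considering is chunking the update into small batches so each transaction stays small and spills to TempDB stay bounded (again, names are placeholders):

```sql
-- Update in batches of 100K rows; under SIMPLE recovery the log space
-- can be reused between batches instead of growing without limit.
WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) t
    SET    t.Val = n.NewVal
    FROM   dbo.BigTable  AS t
    JOIN   dbo.NewValues AS n
           ON n.KeyCol = t.KeyCol
    WHERE  t.Val <> n.NewVal;   -- only touch rows that actually change

    IF @@ROWCOUNT = 0 BREAK;    -- done when no rows remain to update

    CHECKPOINT;                 -- allow log truncation between batches
END
```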
Software engineers are trained to read and understand code; they are not trained in mind reading. Document the purpose, not just the functionality.