We are trying to automate updating our client's database.
There is only one database, and none of the tables depend on being in it; that is, freeing a table changes nothing except that the table is no longer in the database. There are 66 tables; the largest has about 200 fields, and our larger customers have about 10,000 records in that table.
The scheme is to create a pseudo data dictionary from the library and ship it, along with the update code, to run on the clients' servers.
The code so far generates the client's pseudo data dictionary, compares it with the shipped dictionary, and logs any differences.
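To make the comparison step concrete, here is a minimal sketch of diffing a shipped dictionary against the client's. The table name, field names, and field-type strings (xBase-style `C(n)`/`N(n)`) are made up for illustration; your real dictionaries will have more attributes per field.

```python
# Sketch: compare a shipped pseudo data dictionary against the one
# generated on the client, and collect the differences to log.
# All names and type codes here are hypothetical.

def diff_dictionaries(shipped, client):
    """Return (table, change) tuples describing what the client's
    tables need in order to match the shipped definitions."""
    changes = []
    for table, fields in shipped.items():
        if table not in client:
            changes.append((table, "CREATE TABLE"))
            continue
        client_fields = client[table]
        for name, ftype in fields.items():
            if name not in client_fields:
                changes.append((table, f"ADD FIELD {name} {ftype}"))
            elif client_fields[name] != ftype:
                changes.append(
                    (table, f"ALTER FIELD {name} {client_fields[name]} -> {ftype}"))
        for name in client_fields:
            if name not in fields:
                changes.append((table, f"DROP FIELD {name}"))
    return changes

shipped = {"customer": {"id": "N(10)", "name": "C(40)", "email": "C(60)"}}
client  = {"customer": {"id": "N(10)", "name": "C(30)"}}
for table, change in diff_dictionaries(shipped, client):
    print(table, change)
```

The resulting change list is exactly what either update strategy below would consume.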
My main question is: what is the best way to handle the table updates?
1) Once any change is found, create the updated table under a temporary name, create all its indices, append the records from the current table, run any initialization, remove the current table from the database, rename it, rename the temp table to the current name, and add the updated table back to the database.
2) Issue an ALTER TABLE statement for each change logged, then run any initialization.
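Option (1) can be sketched as follows, using SQLite purely as a stand-in engine; your own database API will differ, but the sequence is the same: build the new structure under a temporary name, copy the data across, initialize new fields, then swap names. Table and field names are invented for the example.

```python
# Sketch of option (1): rebuild-and-swap, with SQLite standing in
# for the real engine. The old table is kept under a backup name
# until the new one is verified.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Acme')")

# 1. Create the updated structure under a temporary name, plus indices.
con.execute("CREATE TABLE customer_temp (id INTEGER, name TEXT, email TEXT)")
con.execute("CREATE INDEX idx_customer_temp_id ON customer_temp (id)")

# 2. Append the existing rows; the new field defaults (here to NULL).
con.execute("INSERT INTO customer_temp (id, name) SELECT id, name FROM customer")

# 3. Run any initialization for the new field.
con.execute("UPDATE customer_temp SET email = '' WHERE email IS NULL")

# 4. Swap names; the old table survives as customer_old until verified.
con.execute("ALTER TABLE customer RENAME TO customer_old")
con.execute("ALTER TABLE customer_temp RENAME TO customer")
con.commit()

print(con.execute("SELECT id, name, email FROM customer").fetchall())
```

Option (2) would instead turn each logged difference into a single ALTER TABLE statement, which is less code but leaves the table in an intermediate state between statements.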
I am mostly concerned with not corrupting the database or its tables. Of course, the very first step is to copy everything to a backup directory. What happens if the power is switched off in the middle of an update? I am also concerned about maintaining the update code. Will issuing ALTER TABLE commands cover all the table updates? Is (1) better when three or more fields change in a table? Am I totally off base? Are there any links that cover remotely updating tables?
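On the power-failure concern: if the engine supports transactional DDL (SQLite does; many xBase-style engines do not, which is worth checking for your engine), the whole rebuild can run inside one transaction, so a crash leaves either the old table or the new one, never a half-built mix. A sketch, with the crash simulated by an exception:

```python
# Sketch: crash-safe rebuild inside a single transaction, using
# SQLite as a stand-in. Names are hypothetical.
import sqlite3

def rebuild_customer(con, simulate_crash=False):
    """Rebuild `customer` in one transaction; a failure before
    COMMIT leaves the original table untouched."""
    con.execute("BEGIN")
    con.execute("CREATE TABLE customer_temp (id INTEGER, name TEXT, email TEXT)")
    con.execute("INSERT INTO customer_temp (id, name) SELECT id, name FROM customer")
    if simulate_crash:
        raise RuntimeError("power failure")
    con.execute("DROP TABLE customer")
    con.execute("ALTER TABLE customer_temp RENAME TO customer")
    con.execute("COMMIT")

con = sqlite3.connect(":memory:")
con.isolation_level = None          # manage transactions by hand
con.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Acme')")

try:
    rebuild_customer(con, simulate_crash=True)
except RuntimeError:
    con.execute("ROLLBACK")  # after a real crash, journal recovery does this

# The original table survives; the half-built temp table is gone.
print([r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()])
```

Without transactional DDL, the backup-directory copy plus a "swap names last" discipline is the usual fallback: the rename is the only step after which the old table is no longer authoritative.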
TIA,
Carol Dewar
Magram Computer Services