Jeff,
>I have a VFP application that needs to run at several different sites, with the data then merged together for combined reports. I need a better way to keep the primary keys from being duplicated. I am currently using a modified version of getid() to assign the keys now, but sometimes the users have to restore their system and forget to reset the key values, creating a real mess. What would be the "pro" way to handle this?
Some possibilities:
1) As for users forgetting to reset key values after a restore, do you mean that creates a mess because you will get duplicate IDs on records that have already been merged?
Other than that, I don't see where restoring would be a problem, as long as all the data files in your app are restored together.
2) You could assign each location a range of IDs that is widely separated from the others, and monitor the ranges periodically.
3) You could use a location code field in addition to the PK field, and make the PK a concatenated key: loc + id. This assumes a character-type ID.
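A minimal Python sketch of scheme 3, just to show the idea (the function name make_pk, the location codes, and the ID width are all illustrative assumptions, not part of any VFP library):

```python
# Sketch of scheme 3: a concatenated character PK of location code + ID.
# The helper name and formats below are hypothetical.

def make_pk(loc_code: str, next_id: int, id_width: int = 8) -> str:
    """Build a character PK like 'NY00000042' from a fixed-width
    location code plus a zero-padded sequential ID."""
    return f"{loc_code:<2}{next_id:0{id_width}d}"

# Keys from different sites can never collide, because the location
# prefix differs even when the numeric parts are identical.
pk_site1 = make_pk("NY", 42)   # 'NY00000042'
pk_site2 = make_pk("LA", 42)   # 'LA00000042'
```

The same idea works in VFP by padding the numeric part and concatenating the strings; the important design point is that the location code is fixed-width, so the keys sort and compare cleanly after the merge.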
4) If the data tables are small enough, you could use an integer field for the PK and add a large per-location offset to the next number in your ID-generation scheme. For example, location 1 would take the next ID and add 100 million to it, location 2 would add 200 million, and so on.
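Scheme 4 can be sketched like this (a Python illustration; the offset size and the function name next_pk are assumptions for the example):

```python
# Sketch of scheme 4: integer PKs with a large per-location offset.
# The 100-million gap is the example figure from the text above.

LOCATION_OFFSET = 100_000_000  # gap between each site's ID range

def next_pk(location_number: int, next_local_id: int) -> int:
    """Location 1 gets 100_000_001, 100_000_002, ...; location 2
    gets 200_000_001, ...  Ranges cannot overlap until one site
    generates 100 million records."""
    return location_number * LOCATION_OFFSET + next_local_id

first_at_site1 = next_pk(1, 1)   # 100_000_001
first_at_site2 = next_pk(2, 1)   # 200_000_001
```

Note the trade-off: the ranges only stay safe as long as no site outgrows its 100-million slot, which is why this option is suggested for smaller tables.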