Pascal,
>When it comes to primary keys, I'm not in favor of supporting a !deleted() filter. Gendbc currently doesn't support a deleted() filter on a PK, and I don't think that was done without good reason. Some frameworks do use a deleted() filter on PKs, but that's just my personal opinion; I don't support the idea.
The entire idea of reusing a PK violates the basic concept of a PK: a PK is a true, unique identifier, not just at some given instant in time in some subset of the domain, but across the entire life and breadth of the domain of the PK. Like it or not, people who insist on reusing PKs don't understand what a PK is, or what it should be, and will end up missing large chunks of their posteriors at some point because of it.
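To make the "never reuse" rule concrete, here's a minimal sketch (my own illustration in Python, not anything from Gendbc or VFP) of a surrogate-key allocator that only ever moves forward: once a key has been handed out, it is never issued again, even if the row that held it gets deleted.

```python
# Hypothetical example: a monotonically increasing surrogate-key
# allocator. The "last_used" high-water mark would be persisted
# (e.g., in a one-row keys table) in a real system.

class KeyAllocator:
    """Hands out surrogate PKs that are unique for the life of the domain."""

    def __init__(self, last_used=0):
        self.last_used = last_used  # highest key ever issued

    def next_key(self):
        self.last_used += 1
        return self.last_used

alloc = KeyAllocator()
a = alloc.next_key()  # 1
b = alloc.next_key()  # 2
# Even if the row holding key 1 is deleted, the allocator never
# revisits it; the next caller gets 3, not a recycled 1.
c = alloc.next_key()  # 3
```

The point of the design is that deletion of a row has no effect on key generation at all, so a key remains a permanent identifier for whatever it once named.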
Physical record reuse is a different issue. I consider it a clear sign of brain damage (or, in the rare instances where you're reaching the physical file-size limits VFP supports, bad judgment in selecting the file system to handle the data, or at the very least unexpected data volume growth). In most cases, given the rapidly decreasing cost of reliable storage, record recycling represents a bad misallocation of system investment. Disk space is cheap, and the performance cost of recycling records far exceeds any long-term performance advantage from periodic physical reorganization. And the cost of screwing up the recycling logic is far, far higher still.
Why invest a great deal of time and effort, and needlessly increase the fragility of the file system, trying to avoid some probably beneficial periodic file reorganization, by using a scheme that's going to break the instant you stop using the native file system? It falls into the same category of reasoning that explains how smart we were in accepting a file system that we knew would break if we ever had a hard drive bigger than 768MB. Or how 640K was more memory than anyone would ever need!