>Hi Bob,
>That's a fair enough approach, but I'm surprised about the bloat from Deleted(). Rethinking this, Dragan's method might be much better than mine in many circumstances, because the deleted records can be 'clustered' or clumped together in the data set. In that case, taking a random point in the full file using reccount() might cause, say, a 90% hit rate on one particular deleted record, which defeats the purpose. Your scheme might also suffer from this.
>
>(Dragan's method took a random point amongst the deleted records only)
Well, I assumed that we have a tag on deleted(), since the deletion mark of the record is important here. In Bob's case, he'd probably have the index on lDeleted. My method could be faster if we had about 10% deleted records, but in his case his way is probably faster than mine. I'm just amazed that the record may already have been taken by someone else before the RLock() was placed... I mean this:
go int(rand() * lnRecCount) + 1   && randomly select a record (for most
if ldeleted and rlock()           && records, lDeleted = .t.)
   if ldeleted                    && it hasn't changed while we were getting
      lnReturnedRecNo = recno()   && the lock, so we will send back its
      exit                        && record # for recycling
   else
      unlock record recno()       && unlock JUST this one (it was lDeleted,
   endif                          && but by the time we got the lock,
endif                             && it wasn't!)
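For comparison, here is a minimal sketch of how I read Dragan's approach (the tag name DelMark and the variable names are my own assumptions): pick the random point among the deleted records only, so clustering of deletions in the full file can't bias the choice. It assumes SET DELETED OFF and an index tag built beforehand with INDEX ON DELETED() TAG DelMark:

set deleted off                  && deleted records must be visible
set order to tag DelMark         && index on deleted(), .t. keys together
count for deleted() to lnDeleted && how many candidates we have
if lnDeleted > 0 and seek(.t.)   && position on the first deleted record
   skip int(rand() * lnDeleted)  && jump to a random one among them only
   if deleted() and rlock()      && same race-condition check as above
      lnReturnedRecNo = recno()
   endif
endif

With the index in deleted() order, the skip walks only through deleted records, so every candidate is equally likely no matter how they are clumped in the physical file.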
It must be really crowded on his network if someone else can update the record between the first line and the end of the second line. I'm just wondering what I would do if I had such a situation... I know! I'd search back for this thread :)