SET REPROCESS TO
Message
From: 28/12/1998 16:20:24
To: Dragan Nedeljkovich (Online), 28/12/1998 15:45:31
Now officially retired
Zrenjanin, Serbia
General information
Forum: Visual FoxPro
Category: Coding, syntax & commands, Miscellaneous
Thread ID: 00169741
Message ID: 00170935
Views: 42
>>Hi Bob,
>>That's a fair enough approach, but I'm surprised about bloat from Deleted(). Rethinking this, Dragan's method might be much better than mine in many circumstances, because the deleted records can be 'clustered' or clumped together in the data set; in that case, taking a random point in the full file using RECCOUNT() might cause, say, a 90% hit rate on a particular deleted record. This defeats the purpose. Your scheme might also suffer from this.
>>
>>(Dragan's method took a random point amongst the deleted records only)
>
>Well, I assumed that we have a tag on deleted(), since the deletion mark of the record is important here. In Bob's case, he'd probably have the index on lDeleted. My method could be faster if we had about 10% deleted records, but in his case his way is probably faster than mine. I'm just amazed at the fact that the record may already have been handled by someone else before the RLock() was placed... I mean this:
>
>    go 1 + int(rand() * lnRecCount)  && randomly select a record (for most
>    if ldeleted and rlock()          && records, lDeleted = .T.)
>      if ldeleted                    && It didn't change while we were getting
>        lnReturnedRecNo = recno()    && the lock, so we will send back its
>        exit                         && record # for recycling.
>      else
>        unlock record recno()        && unlock JUST this one (it was lDeleted, but
>      endif                          && by the time we got the lock, it wasn't!)
>    endif
>It must really be crowded on his network if, between the first line and the end of the second line, someone else can update the record. I'm just wondering what I would do if I had such a situation... I know! I'd search back for this thread :)
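The snippet above is an instance of the check / lock / re-check pattern: test the deletion flag, try to take the record lock, then test the flag again under the lock, because another process may have recycled the record between the first test and the lock being granted. A minimal sketch of the same idea, translated into Python with threading locks standing in for RLock() (the record structure and the `find_recyclable` helper are illustrative inventions, not part of the original code):

```python
import random
import threading

# Each "record" carries a deletion flag and its own lock, loosely
# mimicking a table row plus RLOCK().
records = [{"deleted": True, "lock": threading.Lock()} for _ in range(10)]

def find_recyclable(records):
    """Return the index of a still-deleted record claimed for reuse, or None."""
    n = len(records)
    start = random.randrange(n)          # random starting point, like GO 1 + INT(RAND() * n)
    for offset in range(n):
        i = (start + offset) % n
        rec = records[i]
        # First check WITHOUT the lock: cheap filter, may be stale.
        if rec["deleted"] and rec["lock"].acquire(blocking=False):
            try:
                # Re-check UNDER the lock: another thread may have
                # recycled this record between the test and the acquire.
                if rec["deleted"]:
                    rec["deleted"] = False   # claim it for recycling
                    return i
            finally:
                rec["lock"].release()        # release whether claimed or not
    return None
```

The second `if rec["deleted"]` is the "one last check" being discussed: without it, two processes could both see the flag set, and the slower one would recycle a record the faster one had already claimed.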

Dragan
Glad you noticed that one last check. It may be required only on a theoretical level, but I was getting some index corruption that seemed to be related to the recycling of records, so I wanted to make sure there weren't any gaps on my side when I started testing. We can never be sure when Windows might terminate our time slice and give control to another process.

I hear you ask in advance what the solution was to the index corruption... Well, I can't actually prove anything yet, but it appeared that my problem went away when I turned on buffering (type 3, but it shouldn't matter). As with the above, the corruption only showed up during VERY HIGH contention: let's say 50 hits per second against the same table, spread over 6 processes, for example.

Bob