Reading and writing data, blocks, locks, etc
Message
General information
Forum:
Visual FoxPro
Category:
Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID:
00693459
Message ID:
00694584
Views:
18
>[Subtitle: How VFP's cache is mapped onto the network tables]
>
SNIP

Peter,
My apologies for being so late with a reply. My comments are in-line...
>
>I have some perceptions of the "caching" operations of VFP that I'd like to have shot at. Or better, to have worked out as to how it really is.
>It is easiest to describe a scenario as I perceive it:
>
>1.
>At the server we have a VFP table. It consists of, say, 4 KB blocks (clusters). At some moment in time, 100 blocks are in there. The last block just cannot hold a new record to be added.
>

I don't think we know whether VFP is programmed to fill whole "clusters" regardless of the length of the records (thus having records span clusters) or whether it allows only complete records per cluster (thus leaving unused bytes at the end of every cluster). My guess is that it always fills all clusters (except the last, as necessary), but it would be nice to know for sure.
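One way to probe this would be to compare the table's actual file size against the size computed from its own metrics; for a DBF the expected size is HEADER() + RECCOUNT() * RECSIZE() + 1 (the trailing 0x1A EOF marker). A minimal sketch, with a made-up table name:

   * Sketch: if the actual size equals the computed size, records are
   * packed back to back inside the file, with no padding at cluster ends.
   USE customers SHARED
   nComputed = HEADER() + RECCOUNT() * RECSIZE() + 1   && +1 for the 0x1A EOF marker
   nFiles = ADIR(aInfo, "customers.dbf")               && aInfo[1,2] = size on disk
   ? "Computed:", nComputed, "  Actual:", aInfo[1,2]
   ? IIF(nComputed = aInfo[1,2], "packed, no padding", "padding present")
   USE

Of course this only tells us how the bytes are laid out inside the file; how the OS maps those bytes onto disk clusters is a separate matter.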

>2.
>User A wants to change a record in block #100. He performs an RLock(), causing the block to be refreshed in his PC.
>Never mind preceding SEEKs etc.; the RLock() has to be performed in order to receive the latest version of the block. In the end it is about the latest version of the record, but the block is received in the meantime.
>Suppose 3,700 records are in the 100-block file, and this is about record #3,698.
>
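Just to have this step concrete for later reference, step 2 in code would be something like the following (the table name is made up):

   * Sketch: lock before reading, so the local cache is refreshed with
   * the latest committed contents from the server first.
   USE customers SHARED
   GO 3698
   IF RLOCK()
      * Whatever we read here reflects the on-disk (server) state.
   ELSE
      ? "Record #3,698 is locked by another user"
   ENDIF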
>3.
>User B wants to add a new record, and all he does is an Append Blank.
>This implies a lock on the header of the table, so that no two users can write to the header at the same time. Hence, the Append Blank from User B will result in incrementing the record count, in this case from 3,700 to 3,701.
>After the header is updated, it is unlocked again.
>

OK, but I think it is fair to assume that a blank record 3,701 is also written.
Also, you specified above that the last block just cannot hold a new record, so we should assume that a new cluster #101 and its EOF marker are also written. For purposes here, let's ASSUME that the new record spans cluster 100 and cluster 101. [I see later that you assume this too]
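Note that in code all of step 3 is the single command; the header lock, the count increment, and the write of the blank record (and, here, the new cluster) are all implicit:

   * Sketch: APPEND BLANK does the header lock/update invisibly.
   USE customers SHARED
   nBefore = RECCOUNT()
   APPEND BLANK                    && header locked, count 3,700 -> 3,701, blank record written
   ? "Record count went from", nBefore, "to", RECCOUNT()
   ? "New current record:", RECNO()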
>Now all my assumptions start ...
>
>4.
>User A performs a replace at record #3,698. He is not performing an unlock yet. Hence, the new data from record #3,698 will stay in User A's cache only. No fresh copy of the block is obtained, because it is not necessary; the record is already locked.
>It is here where I may be dead wrong.
>

We do not know for sure that a REPLACE command remains ONLY in the workstation's cache until an unlock is executed. A SET REFRESH may cause it to be written WHILE the lock is still held. There may be other factors that could cause the record to be re-written while a lock is still held. One thing is certain: other than some performance impact, it would be perfectly legitimate for a locked record to be re-written. It seems that a FLUSH would cause a re-write, don't you think?
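At least the FLUSH part can be exercised directly; a sketch of a replace made while the lock is still held, followed by an explicit FLUSH (table and field names made up):

   * Sketch: REPLACE while the record lock is still held, then ask VFP
   * to hand its buffers to the OS. Whether the bytes reach the server
   * before the UNLOCK is exactly the open question.
   USE customers SHARED
   GO 3698
   IF RLOCK()
      REPLACE balance WITH balance + 100   && sits (at least) in the local cache
      FLUSH                                && force a write while the lock is still held
      UNLOCK                               && the assumed commit point of step 4
   ENDIF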
>5.
>At the Append Blank of User B, User B will obtain a fresh copy of block #100, because its last portion contains the beginning of his new record #3,701. User B also receives block #101, because in it is the remainder of record #3,701 (the record contains blanks only at this moment).
>Note that User B will not obtain record #3,698 as User A has already changed it. User B will see the old contents of it (if he were to look).

But we cannot be certain of this. In any case, record #3,698 is still locked, so we will see what develops.

>Also note that User B is legitimately able to obtain a fresh copy of block #100 (as he just did), but that "fresh copy" means: as it is on the server's (virtual) disk.
>It is my statement that by, for example, performing an RLock() on record #3,699, all records from block #100 will be refreshed to their status on the server's (virtual) disk. A user could perform an RLock() upon any record in block #100 (say records 3,663 up to and including 3,700), except for record #3,698. That record will fail to get locked at this point in time, because User A has it locked for himself. And, because the lock won't succeed, the block will not be refreshed either.
>

Well, again, we don't know for sure. But I would use this situation as evidence that, to be perfectly up to date with all data, VFP must re-write records that are (still) under lock.
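The lock failure itself is easy to demonstrate from the second user's side; what code cannot show is whether the denied lock also skips the block refresh (table name made up):

   * Sketch: another user tries to lock #3,698 while User A still holds it.
   SET REPROCESS TO 1            && attempt the lock once instead of retrying
   USE customers SHARED
   GO 3698
   IF NOT RLOCK()
      * Lock denied - and, per the assumption above, no block refresh either.
      ? "Record #3,698 is locked by another user"
   ENDIF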

>Note that amid all this, VFP's Set Refresh activity is operating, and that might cause a refresh of the block after all. But again, the contents of record #3,698 as User A sees them right now will never be obtained: they're just not on the server's (virtual) disk.
>

To me this is the argument supporting that VFP DOES re-write records while still under lock.
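For reference, the interval in question is set like this; as I recall, the second number governs how often the local data buffers are re-read (values are only illustrative):

   SET REFRESH TO 5, 10   && redisplay a BROWSE every 5 seconds; refresh local buffers every 10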

>6. User B, who just appended the new record, performs an RLock() on record #3,701, for his upcoming replace.
>In between the Append Blank and the replace there is a point in time where it can all go wrong. That is, when no file locking (FLock()) is used, within this period the appended #3,701 can be obtained by someone else, and can be replaced by someone else. But this would be shooting the "system", and no logic whatsoever can be found in such an operation.

Keep in mind that if record #3,701 is involved in an FPD/FPW "screen" GET field or a VFP "form" entry field (at least with no or pessimistic buffering), the APPEND BLANK would immediately be followed by an intrinsic RLOCK() for the record in question. I'm with you in that I feel confident that VFP covers the situation, else there would be corruption all over the place. Possibly APPEND BLANK places an RLOCK() automatically and the documentation just doesn't say so.
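Closing the window explicitly costs one line, so I would not rely on an undocumented automatic lock. A sketch of the defensive pattern (field name made up):

   * Sketch: lock our own fresh record at once, so nobody else can grab
   * #3,701 between the Append Blank and the Replace.
   APPEND BLANK
   IF RLOCK()
      REPLACE custname WITH "new value"   && safe: we hold the lock
      UNLOCK
   ENDIF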
>
>7.
>At this stage, User A unlocks record #3,698 and because of that it will be flushed to the network's disk. "It" means the whole block #100, because transfer is at the block level.
>How will this physically work? It can hardly be true that some "dumb" file system is able to merge the new block #100 from User A with the current new block #100 as implied by User B. Or can it?
>Two options exist:
>a. The block is re-fetched to the PC first, and all the intelligence necessary is applied by VFP;
>b. The block is not re-fetched, and the file system performs a merge.
>I can only go for a.
>Assuming that a. is the case, it won't be allowed to receive record #3,701 because it is locked. Furthermore, what about #3,701 being spread over blocks #100 and #101? Will User A receive block #101 anyway, anticipating that it is now logically connected to #100? Would a Go Bottom imply the presence of #3,701 anyhow? I think so... But User A can't go ahead and replace #3,701, because it is locked by User B.
>

I really believe that some kind of "merging" is done by VFP. I believe that it has to be, as this is the only way that I can see for FP/VFP data to always be accurate, which it always is (except for the odd corruption instances that may be bugs in the processing logic).
As mentioned earlier, it may well be that locked records are re-written upon a replace or periodically for other reasons.
>8.
>Now User B performs the replace, and if the assumptions so far are right, it will imply a re-fetch of blocks #100 and #101 first. The data from #3,701 is now applied to both blocks #100 and #101.
>User B now performs an unlock for #3,701.
>
>9.
>As I assume it, User A is now, according to my statements at the end of #7, capable of replacing #3,701, overriding the contents just applied by User B. But again, this is not proper app logic, because User A should first RLock() #3,701. When he does that, he will obtain the fresh contents of blocks #100 and #101, record #3,701 being part of them.
>
>All proper working depends on the re-fetch of the data, even when the record concerned was already locked. I personally wonder whether this is true ...
>
I believe that this has to be true.
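So the proper sequence for User A in step 9 would be the usual lock-first discipline; under your assumption the RLock() both serializes access and re-fetches blocks #100 and #101 (field name made up):

   * Sketch: User A must lock #3,701 before touching it.
   GO 3701
   IF RLOCK()
      * User B's committed contents are visible here.
      REPLACE notes WITH "User A's change"
      UNLOCK
   ELSE
      ? "#3,701 is still locked by User B - try again later"
   ENDIF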
SNIP
I feel that the general integrity of data has been proven, if only by time alone.
This doesn't, however, preclude the existence of bugs, especially as concerns the more obscure situations that may arise.
An end-of-cluster situation presents at least four possible cases (see the sketch below):
1) There is room in the existing cluster for a whole new record and then some. Most frequent, I guess.
2) There is exactly room in the existing cluster for the whole new record and its EOF marker. Hardly frequent, I guess.
3) There is exactly room in the existing cluster for the whole new record BUT not for its EOF marker. Least frequent, I guess.
4) There is room for only a part of the new record, so a new cluster will also be required. Second most frequent, I guess.
Logic for the above may be impacted too by the cluster situation on related .CDX or .FPT files or by the sequencing of writes to the three different files.
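For what it's worth, which of the four cases the next append would hit can be computed from the table's own metrics, IF we assume the file is packed and mapped straight onto fixed-size clusters; purely a sketch of the arithmetic, using the 4 KB cluster of your scenario:

   * Sketch: classify the end-of-cluster case for the NEXT append.
   * Assumes a packed DBF mapped 1:1 onto 4,096-byte clusters.
   nCluster = 4096
   nStart   = HEADER() + RECCOUNT() * RECSIZE() + 1   && new record overwrites the old EOF byte
   nRecEnd  = nStart + RECSIZE() - 1                  && last byte of the new record
   nEofPos  = nRecEnd + 1                             && rewritten EOF marker
   nLastClu = CEILING(nStart / nCluster)              && cluster in which the append starts
   DO CASE
   CASE CEILING(nRecEnd / nCluster) > nLastClu
      ? "Case 4: the record itself spans into a new cluster"
   CASE MOD(nRecEnd, nCluster) = 0
      ? "Case 3: record fits exactly; the EOF marker spills over"
   CASE MOD(nEofPos, nCluster) = 0
      ? "Case 2: record plus EOF marker fit exactly"
   OTHERWISE
      ? "Case 1: record and EOF marker fit with room to spare"
   ENDCASE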

Can you think of some way to obtain the answers to the unknowns suggested in all of the above? It seems to me that this basic missing (I assume) information could be published by MS without jeopardizing its patents.

cheers