Auto inc skips
Message
 
To
15/12/2007 03:13:03
Al Doman (Online)
M3 Enterprises Inc.
North Vancouver, British Columbia, Canada
General information
Forum:
Visual FoxPro
Category:
Databases, Tables, Views, Indexing and SQL syntax
Title:
Other
Thread ID:
01274981
Message ID:
01276492
Views:
23
>>>>>>It looks like I'll have to go into more server details than I wanted - I know we have RAID but don't know the type, level, configuration... One of the network guys said that cache is good because it allows the server to ride out periods when there are more write requests than it can serve. I'd accept that as a good thing if the cache keeps track of pending writes and serves reads from the most recent value, not from the disk.
>>>>>
>>>>>That's exactly the way a write cache is supposed to work. Write caches are most reliable in the simplest systems, e.g. within a single computer (CPU write-through cache, local disk write cache, etc.).
>>>>>
>>>>>File-server databases like VFP put a lot of stress on cache subsystems. Some of the issues involved:
>>>>>
>>>>>- The server must coordinate write requests from multiple workstations that all have the same file open simultaneously. This requires management of file and record locks as well as cache coherency
>>>>>
>>>>>- Requests from workstations have to make it out of VFP (and any caching/buffering it may be doing), through the local OS caching/buffering (if any), to the network redirector (again with possible caching/buffering), then through the network to the server. So, there are several points of potential failure
>>>>>
>>>>>- Now add in unknowns like antivirus programs hooking into file system reads/writes and who knows what else (on both workstations and the server)
>>>>>
>>>>>I think in many cases write-behind caching gets a bad rap. Sure, as Neil points out there are some unusual cases where you can point to problems with specific equipment, drivers or implementations but I don't believe they're common - MS could not afford to turn on write caching by default if they were. With multi-user VFP apps I believe it's important to ensure workstations and network infrastructure are reliable before changing global server settings.
>>>>>
>>>>
>>>>If autoinc requires 2 round trips from the WS to the server (one to get the current value and a second to write the incremented value to the header), then it seems to me that network speed combined with the VFP data engine is the problem, not caching on the server.
>>>>If it is just one trip, then it is the cache on the server that fails. The speed of the network and the workstations' different levels of cache shouldn't matter.
>>>
>>>I'm not sure how you draw those conclusions - could you explain further?
>>>
>>
>>It is just speculation about a scenario that could produce duplicate autoinc values. Let's say users U1 and U2 each insert a record into the same file. The current autoinc next value in the header is 5.
>>
>>With 2 trips: U1 reads the value 5, but there is no lock on the file header and nothing in the server write cache, because it is just a read. U2 reads the same value 5 before U1 makes the second trip to write the incremented value 6, because U1's trips take long. In the second trip both users write 6 to the header, or to the cache.
>>Cache is not relevant, because both U1 and U2 end up with the same value anyway.
>>
>>With 1 trip: U1 locks the header, reads 5 and writes 6 to the write cache, and releases the lock.
>>- If U2 comes while the header is locked, it gets an "Attempting to lock..." message; it can't get any value, so no duplicate.
>>- If U2 comes after the lock is released, and 6 is still in the cache, then it could get a duplicate if it reads 5 from the disk instead of 6 from the cache.
>>The speed with which U2 comes to the server is irrelevant, but cache is.
>>
>>I don't expect that is actually how it works - but I am looking for a scenario in the hope it may allow me to focus on where the problem is, instead of looking at a huge fuzzy maze.
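
To make the "2 trips" case concrete, here is a rough sketch in code, using an ordinary one-row table (I'm calling it seq, with a field nextval) to stand in for the DBF header. This is only my illustration of the pattern, not a claim about how the VFP engine actually updates the header:

* Stand-in for the table header: one row holding the next autoinc value
CREATE TABLE seq FREE (nextval I)
INSERT INTO seq (nextval) VALUES (5)

* Trip 1: read only - no lock is taken and there is nothing for the
* server write cache to do yet
lnMine = seq.nextval                  && U1 reads 5; U2 can read the same 5 here

* ...network latency between the two trips...

* Trip 2: write the incremented value back to the "header"
REPLACE seq.nextval WITH lnMine + 1   && U1 writes 6, and so does U2

If the read and the write really are two separate trips with no lock held in between, the duplicate comes from the interleaving alone and the server cache never enters into it.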
>
>VFP9 help, "Autoincrementing Field Values in Tables", has a short description of what happens, but IMO it's not detailed enough to discuss what's really going on with locking, file I/O and caching. I will say that when a write cache is in place, its value is the only one client apps are allowed to see; none of them can read the disk "directly" - to allow that would mean chaos.

I agree that's the way it should be.
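
(Just to anchor the discussion in code: the field under discussion is a plain integer AUTOINC column, roughly as below. The table and field names are made up for illustration, and the SELECT is only a quick way to check whether duplicate keys have actually landed in a table.)

* Hypothetical table with an autoincrementing key
CREATE TABLE orders FREE (order_id I AUTOINC NEXTVALUE 1 STEP 1, descr C(30))

* Quick check for duplicate autoinc values already present
SELECT order_id, COUNT(*) AS cnt ;
	FROM orders ;
	GROUP BY order_id ;
	HAVING COUNT(*) > 1 ;
	INTO CURSOR dups
? "Duplicate keys found: " + TRANSFORM(_TALLY)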

>... So, if an autoincremented value successfully travels from the workstation to the server that's the only value that other workstations should see, regardless of whether it's been written to disk or not.


You seem to exclude cache failures and say that it all depends on values successfully traveling from the WS to the server. What counts as successful? When is the file header locked/unlocked? Could you give a scenario for getting an autoinc duplicate using your premises?
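
For what it's worth, the kind of locking I have in mind is the pattern below: hold the lock across the whole read-increment-write so no other station can read the old value in between. This is just my own sketch with a made-up one-row counter table (counter, with a field nextid); I'm not claiming this is what the engine does internally with the DBF header:

FUNCTION NextId
	LOCAL lnId
	IF !USED("counter")
		USE counter IN 0 SHARED       && one-row table: nextid I
	ENDIF
	SELECT counter
	GO TOP
	DO WHILE !RLOCK()                 && keep trying until we own the record lock
	ENDDO
	lnId = counter.nextid             && read...
	REPLACE counter.nextid WITH lnId + 1   && ...increment and write back...
	UNLOCK                            && ...then release; others now see lnId + 1
	RETURN lnId
ENDFUNC

If the engine holds the header lock like that, the only way U2 can get a stale 5 is if the cache hands it out after the lock is released - which is why I keep coming back to the one-trip versus two-trip question.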
Doru