Auto inc skips
To: Al Doman (Online), M3 Enterprises Inc., North Vancouver, British Columbia, Canada
Date: 18/12/2007 15:41:17
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexes and SQL syntax
Title: Miscellaneous
Thread ID: 01274981
Message ID: 01276512
Views: 29
>>>VFP9 help, "Autoincrementing Field Values in Tables", has a short description of what happens, but IMO it's not detailed enough to explain what's really going on with locking, file I/O and caching. I will say that when a write cache is in place, its value is the only one client apps are allowed to see; none of them can read the disk "directly", since allowing that would mean chaos.
>>
>>I agree that's the way it should be.
>>
>>>... So, if an autoincremented value successfully travels from the workstation to the server that's the only value that other workstations should see, regardless of whether it's been written to disk or not.
>>
>>You seem to exclude cache failures, and say that it all depends on values successfully traveling from WS to the server. What is successful? When is the file header locked/unlocked? Could you give a scenario for getting one autoinc duplicate using your premises?
>
>Just as a quick recap, a write cache is conceptually simple - it acts as a layer on top of the disk subsystem, mediating reads and writes to it. As long as a file is cached, then as far as local server processes or remote read/write requests from workstations are concerned, the file's contents in the cache are the file. The contents stored on-disk may be way "behind" the contents in the cache, and are only guaranteed to catch up when an OS-level flush is performed, most often at system shutdown.
>

This is how I understand it too.
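As a concrete anchor for the discussion, here is a minimal VFP9 sketch of the mechanism in question (table and field names are made up). The point is that the "next value" for an AUTOINC field lives in the .DBF header, which VFP has to lock, read, increment and rewrite on every INSERT - and that header traffic is exactly what the server's write cache mediates.

* Hypothetical table; the autoinc counter is kept in the .DBF header.
CREATE TABLE orders (id I AUTOINC NEXTVALUE 1 STEP 1, descr C(30))
INSERT INTO orders (descr) VALUES ("first")   && header counter becomes 2
INSERT INTO orders (descr) VALUES ("second")  && header counter becomes 3
? orders.id                                   && 2, the value just assigned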

>I don't know what you mean by "cache failure", but here are some possibilities that could cause problems:
>
>1. Cache bypass. As I mentioned above, if somehow a process bypasses a write cache and reads directly from disk, the data read there may be out of date. This should be impossible, but software perfection is (mostly) also impossible.
>
>2. File loaded in cache ("steady state"): There should only ever be one copy of a file in cache. If there somehow is more than one, and an "old" copy is accessed, this would be a serious problem. I'd consider the possibility of this occurring with a file duly loaded in cache to be extremely low to non-existent. Likewise if an incorrect offset were being read from the cache; that would likely return garbage rather than an out-of-date value, and AFAIK you're not seeing garbage.
>
>3. Process of a file being loaded into or dropped from cache. A disk cache is dynamic; the list of files it contains changes constantly, so an individual file may be loaded and later unloaded many times. I speculate that what happens is something like this:
>
>Load a file into cache:
>- ask the OS for exclusive use
>- once obtained, load its contents into the cache from disk
>- start intercepting file read/write requests
>- the cache retains exclusive use as long as the file is cached
>
>Unload a file from cache:
>- temporarily deny any write access to the file (more likely, return some sort of "busy" signal to requests, or buffer them somehow)
>- flush the cache contents to disk
>- stop intercepting file read/write requests
>- release the exclusive use lock
>
>I'd guess that what actually happens is a lot more complicated than this, because you can have scenarios such as a file shared by multiple users (e.g. a VFP table) being loaded into or unloaded from cache on the fly, and having to manage multiple pending read/write requests during those operations. If I had to guess, I'd consider this the most likely point of software failure at the server.
>
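Regarding the load/unload speculation above: from the application side, the only lever VFP gives us is to ask for a flush. A plain FLUSH only hands VFP's own buffers to the OS; VFP9 added a FORCE clause that additionally asks the OS to write its buffers through to disk. A small sketch (alias is made up, and whether the request is honored is up to the OS and driver stack):

USE orders SHARED
INSERT INTO orders (descr) VALUES ("test")
FLUSH FORCE   && VFP9: request an OS write-through, not just a handoff to the cache
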
>4. Hardware errors. Most hard drives have built-in RAM buffers; the disk subsystem controller may have buffer or cache RAM as well. Firmware bugs are possible. Firmware bugs in drives I'd consider very low probability as their buffers are simple. Firmware bugs in subsystem controllers I'd consider more likely, the more complex they are. Some controllers offer firmware updates to fix errata. It might be interesting to ask your network admins if there are any such updates available for your servers' controllers.
>
>5. System-level utilities intercepting or hooking into file system operations (e.g. antivirus "real-time" scanners, software firewalls). It's probably a good idea to make sure AV is not scanning any component of a VFP app, including temp files.
>
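On the AV point: VFP keeps its temporary work files in the directory returned by SYS(2023), so that path (along with the app's data directories) is a natural candidate for the scanner's exclusion list:

? SYS(2023)   && directory VFP uses for temp files; worth excluding from real-time scanning
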
>6. Network request/message handling issues. I still consider this the most likely cause of an autoinc issue. One simple scenario: suppose a workstation updates the value, and the update is written to the local VFP file cache, but VFP fails to forward it back to the server? Any other workstation would then pick up an "old" value from the server cache. Or, what if VFP forwards the write, but the workstation's network redirector doesn't properly send it to the server? Or the redirector sends the request, but the network is busy and the request times out? Or antivirus or an outgoing firewall prevents or corrupts the request? Or, worse yet, malware is actually present on a workstation? What if the VFP app, or the entire workstation, crashes in the middle of an update? The list goes on and on. Many workstations, times many possible points of failure per workstation, makes it seem to me that the problem most likely lies there.

If 6 is the issue, then it will create a lot more chaos than anything else; I don't believe you can count on the workstation and/or the network connection to guarantee unique values.
I believe my scenario fits 1, and that's what I meant by "cache failure".
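For what it's worth, the least a workstation can do is stop assuming its writes succeeded. With table buffering enabled, TABLEUPDATE() reports failure and AERROR() says why; a defensive sketch (alias and the handling are made up):

SET MULTILOCKS ON
USE orders SHARED
=CURSORSETPROP("Buffering", 5, "orders")  && optimistic table buffering
INSERT INTO orders (descr) VALUES ("widget")
IF !TABLEUPDATE(.T., .F., "orders")
   LOCAL laErr[1]
   =AERROR(laErr)
   * The write never made it; log the error and retry or alert the user.
   ? "Update failed:", laErr[2]
ENDIF
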
Doru