Auto inc skips
14/12/2007 00:33:02
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Title: Miscellaneous
Thread ID: 01274981
Message ID: 01275947
Views: 20
>>>>It looks like I'll have to go into more server details than I wanted - I know we have RAID but don't know the type, level, or configuration... One of the network guys said that cache is good because it allows the server to ride out periods when write requests arrive faster than it can service them. I'd accept that as a good thing if the cache keeps track of pending writes and serves reads from the most recent pending write, not from the disk.
>>>
>>>That's exactly the way a write cache is supposed to work. Write caches are most reliable in the simplest systems, e.g. within a single computer (CPU write-through cache, local disk write cache, etc.)
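
A minimal sketch of that read-your-writes behavior, as a hypothetical write-behind cache in Python (for illustration only; no real server is implemented this way, and the class and names are mine):

    class WriteBehindCache:
        # Writes are acknowledged immediately and flushed to disk later;
        # reads check the pending writes first, so they always see the
        # most recent data, never a stale disk block.
        def __init__(self, disk):
            self.disk = disk       # dict standing in for the disk: block -> data
            self.pending = {}      # writes acknowledged but not yet flushed

        def write(self, block, data):
            self.pending[block] = data    # fast path: no disk I/O here

        def read(self, block):
            # the most recent pending write wins over the disk copy
            return self.pending.get(block, self.disk.get(block))

        def flush(self):
            self.disk.update(self.pending)   # drain pending writes to disk
            self.pending.clear()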
>>>
>>>File-server databases like VFP put a lot of stress on cache subsystems. Some of the issues involved:
>>>
>>>- The server must coordinate write requests from multiple workstations that all have the same file open simultaneously. This requires management of file and record locks as well as cache coherency
>>>
>>>- Requests from workstations have to make it out of VFP (and any caching/buffering it may be doing), through the local OS caching/buffering (if any), to the network redirector (again with possible caching/buffering), then through the network to the server. So, there are several points of potential failure
>>>
>>>- Now add in unknowns like antivirus programs hooking into file system reads/writes and who knows what else (on both workstations and the server)
>>>
>>>I think in many cases write-behind caching gets a bad rap. Sure, as Neil points out there are some unusual cases where you can point to problems with specific equipment, drivers or implementations but I don't believe they're common - MS could not afford to turn on write caching by default if they were. With multi-user VFP apps I believe it's important to ensure workstations and network infrastructure are reliable before changing global server settings.
>>>
>>
>>If autoinc requires two round trips from the workstation to the server (one to get the current value, and a second to write the incremented value to the header), then it seems to me that network speed combined with the VFP data engine is the problem, not caching on the server.
>>If it is just one trip, then it is the cache on the server that fails. The speed of the network and the workstations' various levels of cache shouldn't matter.
>
>I'm not sure how you draw those conclusions - could you explain further?
>

It is just speculation about a scenario that could produce duplicate autoinc values. Let's say users U1 and U2 each insert a record into the same table. The current autoinc next value in the header is 5.

With 2 trips: U1 reads the value 5, but there is no lock on the file header and nothing in the server write cache, because it is just a read. Because U1's round trips are slow, U2 reads the same value 5 before U1 makes the second trip to write the incremented value 6. In their second trips both users write 6 to the header, or to the cache.
The cache is not relevant, because both U1 and U2 have the same value anyway.
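
To make the race concrete, here is a small Python simulation of the two-trip scenario. The names (header_next, insert_record) and the threading model are mine, purely for illustration - this models the race, not VFP's actual engine:

    # Two "workstations" each make trip 1 (read the next autoinc value)
    # before either makes trip 2 (write the increment back), so both
    # insert with the same value - no server cache involved at all.
    import threading
    import time

    header_next = 5    # stand-in for the autoinc next value in the header
    issued = []        # the value each workstation ends up using

    def insert_record():
        global header_next
        value = header_next        # trip 1: plain read, no lock taken
        time.sleep(0.1)            # slow network before the second trip
        issued.append(value)
        header_next = value + 1    # trip 2: write the incremented value

    u1 = threading.Thread(target=insert_record)
    u2 = threading.Thread(target=insert_record)
    u1.start(); u2.start()
    u1.join(); u2.join()
    print(issued, header_next)     # [5, 5] 6 -> duplicate autoinc values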

With 1 trip: U1 locks the header, reads 5, writes 6 to the write cache, and releases the lock.
- If U2 arrives while the header is locked, it gets an "Attempting to lock..." error; it can't get any value, so no duplicate.
- If U2 arrives after the lock is released, while 6 is still only in the cache, it could get a duplicate if it reads 5 from the disk instead of 6 from the cache.
The speed with which U2 reaches the server is irrelevant, but the cache is not.
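
Here is the matching sketch for the one-trip scenario. Again the names and the lock/cache model are hypothetical - the point is that the lock serializes the update, so the only way to hand out a duplicate is a read path that bypasses the still-unflushed cache:

    import threading

    lock = threading.Lock()    # stands in for the lock on the table header
    disk_next = 5              # next value as persisted on disk
    cache_next = None          # next value sitting in the write-behind cache

    def get_next(read_sees_cache):
        # one round trip: lock, read, write to cache, unlock
        global cache_next
        with lock:
            if read_sees_cache and cache_next is not None:
                value = cache_next    # coherent cache: newest pending write wins
            else:
                value = disk_next     # broken read path: stale disk copy
            cache_next = value + 1    # the write lands in the cache, not on disk
            return value

    print(get_next(True))     # U1 gets 5; the cache now holds 6
    print(get_next(True))     # coherent cache: U2 gets 6 - no duplicate
    print(get_next(False))    # read bypasses the cache: 5 again - duplicate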

I don't expect that this is actually how it works - but I am looking for a scenario in the hope that it will let me focus on where the problem is, instead of staring at a huge fuzzy maze.

>>
>>>A lot of servers these days have fast CPUs and tons of RAM, so they tend to be disk-bound in performance. Some disk subsystems such as RAID 5 tend to have relatively poor write performance so they really need write-behind caching. Turning it off affects all processes on the server, not just VFP writes. It may be that VFP use on a given server is only a few percent of what it's doing, and admins tend to take a dim view of penalizing the other 90-odd percent of the processes just to coddle one.
>>>
>>>In general I'm a big fan of caching. I believe that, fundamentally, caches are an elegant implementation of Pareto's Law.
>>
>>Well, if you are talking about fashion, then elegance is important. For database applications, an 80/20 success-rate distribution is a disaster.
>
>I wasn't talking about reliability or success, I meant performance improvement. Using a disk cache as an example:
>
>- the gold standard for "disk" storage these days is solid-state flash ( http://en.wikipedia.org/wiki/Solid_state_disk )
>- for small, intermittent writes that don't overwhelm the cache, the perceived performance of the disk subsystem (served from DDR RAM) may be even faster than flash
>- the cache size on a typical Win32 server may be on the order of 1GB, for a (conventional) disk subsystem on the order of 250GB. So, until the cache gets overwhelmed you get all of the benefit from RAM amounting to less than 1% of the disk capacity (1GB / 250GB = 0.4%)
>
>That's actually a lot better than Pareto's 80/20 rule of thumb, which is why I call it "elegant" ;)

I see - if you count only speed as "performance improvement", then your argument is... elegant. I just can't accept the idea that from time to time I must pay back all of that gain, with interest.
Doru