Why did FLUSH solve this problem?
Date: 10/05/2001 22:55:26
To: Al Doman, M3 Enterprises Inc., North Vancouver, British Columbia, Canada

General information
Forum: Visual FoxPro
Category: FoxPro 2.x, Miscellaneous
Thread ID: 00505050
Message ID: 00506282
Views: 31
>>SNIP
>>
>>Hi Al,
>>
>>I thought that I recognized that Compaq reference. Went to my (just recently finished) copy of "Inside SQL Server 7.0" and sure enough, there it was.
>>It states definitively that write-back caching *must* be disabled unless the controller is guaranteed to complete I/Os posted as I/O Complete ACROSS A FAILURE - and not just a power failure (they cite a single-bit parity error as well). Of course I presume this is in the context of SQL Server.
>>
>>Their reasoning is that some controllers have been found to "lie" (their term) about the I/O Complete, posting it when the data hasn't physically been written to disk. Seems to me that is the whole point of "write-back" caching.
>>The Compaq controller they cite apparently has on-board battery backup to preserve the data, and logic to write the stored data out first on resumption.
>>
>>I would think, since the danger here is strictly from a motherboard/power failure, that it would be safer and cheaper, *if* SQL Server detected a 'non-guaranteed' HD adapter, for SQL Server to modestly delay the "final consequence" of an I/O Complete to give it a chance to really be written. Possibly staying 1-behind for any drive or something like that (with a time limit of course).
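Just to make that "stay 1-behind" idea concrete, here is a purely hypothetical sketch - this is not anything SQL Server or any controller actually does, and all the names are invented. A drive's previous write is treated as final only when the next write on that drive completes, or when a time limit runs out:

* Hypothetical sketch of the "stay 1-behind" idea; invented names.
nPendingId = 0                 && write still awaiting confirmation
nPostedAt = 0                  && when that write was posted
nLimit = 2                     && seconds before we trust a lone write

DO IOComplete WITH 1           && write #1 posted - held back
DO IOComplete WITH 2           && completing #2 finalizes write #1
DO CheckTimeout                && would finalize #2 after nLimit secs

PROCEDURE IOComplete
    PARAMETERS nWriteId
    IF nPendingId # 0
        DO Finalize WITH nPendingId   && now safely "1 behind"
    ENDIF
    nPendingId = nWriteId
    nPostedAt = SECONDS()
ENDPROC

PROCEDURE CheckTimeout
    IF nPendingId # 0 AND SECONDS() - nPostedAt > nLimit
        DO Finalize WITH nPendingId   && time limit reached
        nPendingId = 0
    ENDIF
ENDPROC

PROCEDURE Finalize
    PARAMETERS nId
    ? "Write", nId, "now treated as durable"
ENDPROC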
>
>My understanding is that SQL Server is so entwined with the host OS that in many respects it no longer follows the application -> OS -> disk subsystem delegation model. I just get the impression that SQL Server makes use of un- or poorly-documented OS features to get a performance boost and stay ahead of Oracle and DB2 on TPC benchmarks. Then MS whines about possible hardware failures... when really, in the enterprise market, they should be concentrating on software-based fault tolerance. To be sure, performance is important, but reliability > performance.
>
>
>>IN ANY CASE...
>>
>>The issue at hand is categorically NOT related to failure/resumption of service without loss of data; the issue is why a FLUSH worked when an UNLOCK didn't.
>>In this case, based on what the "Inside SQL Server 7.0" book has to say, disabling the write cache (assuming it is otherwise designed/operating correctly) is not the solution.
>>
>>I don't know what is, but I do know that if this were a general 'problem' FP/VFP would have been dead long ago. I've got to suspect that the problem at hand is related to settings in VFP or the OS.
>
>The original problem *is* interesting, that FLUSH worked and UNLOCK didn't. AFAIK, FLUSH operates strictly on the local OS; it doesn't pass a message across the wire to the server requesting a FLUSH there as well. OTOH, UNLOCK *does* go back to the server, which controls the locks in the first place. Superficially, you'd expect UNLOCK to work and FLUSH not to.
>
>I would suspect a problem in the transport layer(s) between the workstation and server. Looking through the thread, I didn't see whether anyone asked if the server is Novell and whether the workstations are using the Novell network client. Another possibility is that both the server and workstation are W2K and there is some sort of issue with opportunistic locking.
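For reference, the shared-update pattern under discussion looks something like this minimal sketch; the table and field names are invented for illustration. The point is that UNLOCK releases a record lock managed at the server, while FLUSH only pushes the local workstation's buffered writes down to the OS:

* Minimal sketch of a shared update; invented table/field names.
SET EXCLUSIVE OFF
SET REPROCESS TO 5 SECONDS     && retry a contested lock for 5 seconds
USE customers SHARED

IF RLOCK()                     && lock the current record
    REPLACE balance WITH balance + 100
    UNLOCK                     && release the server-side record lock
    FLUSH                      && force local buffers out - this, not
                               && the UNLOCK, is what fixed the problem
ELSE
    WAIT WINDOW "Record is locked by another user"
ENDIF

USE

Given that split of duties, you would indeed expect the UNLOCK, not the FLUSH, to be the step the other machine notices.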

Indeed, it seems as if the solution is the opposite of what it should have been. Just for the sake of discussion I should point out that both of the machines in question were running Windows 98 in a very simple peer-to-peer network configuration.
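On a setup like that, one cheap way to narrow it down would be a hypothetical test along these lines (the share, table, and field names are invented): run UpdateSide on one Win98 peer and WatchSide on the other, then repeat with the FLUSH commented out and compare how long the change takes to show up.

* Hypothetical two-machine visibility test; invented names.
PROCEDURE UpdateSide
    USE \\peer\data\testtab SHARED
    IF RLOCK()
        REPLACE stamp WITH TIME()
        UNLOCK
        FLUSH                  && comment out to test without it
    ENDIF
    USE
ENDPROC

PROCEDURE WatchSide
    USE \\peer\data\testtab SHARED
    cSeen = stamp
    nStart = SECONDS()
    DO WHILE stamp == cSeen
        GO RECNO()             && move the pointer to force a re-read
    ENDDO
    ? "Change visible after", SECONDS() - nStart, "seconds"
    USE
ENDPROC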
Paul R. Moon
Business Software Solutions
paul@businessoftware.com