Header corruption and KB Q293638
Message
From
17/11/2001 14:58:14
 
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax, Miscellaneous
Thread ID: 00580276
Message ID: 00583150
Views: 30
This is most interesting, Mark.

Was the workaround recommended by MS or did you figure it out yourself?

Am I correct in interpreting that the workaround consists of a total of two new (and unused) fields, one character and one integer?

Do they appear at the beginning of the record, or anywhere as long as it's before the primary key?

In what order do you put those workaround fields? Is the character field field #1, or is the integer field field #1?

What length is used for the char. field?

Does this happen only outside of BEGIN/END TRANSACTION, only within transactions, or either way?

Do you have any idea as to whether the "dummy" fields could actually be legitimate data but non-key fields?

Inquiring minds.... and thanks

Jim Nelson


>There is a flaw in VFP tables/indexes when you are adding records at a high transaction rate (1 record every 2-3 seconds) from multiple sources simultaneously (and even from a single source), which will cause corruption of the primary key index and sometimes result in header corruption.
>
>
>From January 2001 to March 2001, I worked with Microsoft trying to figure out why the primary key (integer) field was always getting corrupted and, once in a while, the header. We looked at our network hardware, our WAN connections, our server hardware, the NT operating system, and our application coding.
>
>We found no permanent fix. We found a workaround: I created two dummy fields, one character and one integer, physically before the primary key field in the table, which capture the corruption. This has almost eliminated the problem and brought our data integrity to about 99%.
>
>We do weekly maintenance on the various tables to rebuild the primary keys on the high-volume tables. A typical high-volume table will reach the 2 GB limit in 5 months. We dump data out when tables reach 1 GB, since performance is greatly impacted.
>
>People will suggest that I move everything to SQL Server, Oracle, etc. I do use SQL Server for other parts of the application, but that is for another thread, and I can explain the various reasoning there when the time comes.
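
For reference, a minimal sketch of the table layout the workaround describes, as I understand it. The field names (cdummy1, idummy2, ikey, cdata) and widths are illustrative only and do not come from the original post; the point is simply that the two dummy fields sit physically before the primary key field:

```foxpro
* Assumes an open database container (the PRIMARY KEY clause
* requires a database table, not a free table).
CREATE TABLE highvol ( ;
    cdummy1 C(10), ;          && dummy character field -- absorbs corruption
    idummy2 I, ;              && dummy integer field -- absorbs corruption
    ikey    I PRIMARY KEY, ;  && the real primary key field
    cdata   C(50))            && remaining data fields go here

* Sketch of the weekly maintenance step: rebuild the primary key
* with exclusive use of the table.
USE highvol EXCLUSIVE
ALTER TABLE highvol DROP PRIMARY KEY
ALTER TABLE highvol ADD PRIMARY KEY ikey TAG ikey
USE
```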