Unique Key Violation
Message
From: 24/07/1999 17:48:14
To: Antonio Gubaira (Ingenieria Y Sistemas, Valencia, Venezuela), 24/07/1999 16:25:06
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Subcategory: Miscellaneous
Thread ID: 00245748
Message ID: 00245758
Views: 11
>That means that every time I delete a record a PACK must follow? Why is SET DELETED ON not hiding the duplicate key?

Because your primary key expression doesn't take the status of DELETED() into account. Remember that a record exists in the table, and hence in its indexes, until it is physically removed from the table by PACK.

You have several choices, all but one of them bad. The good one, AFAIC, is not to attempt to reuse a primary key at all. There are several strategies for creating a unique key for every record. The most common involves a separate table of numeric keys that you access each time you need a new key, bumping the value in the table so that the next access gets the next key. You need to lock the key table while grabbing the new key to prevent two users from allocating the same key at the same time. If the key needs only to be unique, and not sequentially assigned, the SYS(2015) function generates unique keys without a separate table - SYS(2015) returns a 10-character string that is, for all intents and purposes, unique.
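The key-table approach might be sketched something like this (the table name KEYTABLE and field NEXTKEY are hypothetical, not names from this thread):

```foxpro
* Sketch of a next-key routine, assuming a one-record table
* KEYTABLE with a single numeric field NEXTKEY. All names
* here are illustrative.
FUNCTION GetNextKey
    LOCAL lnKey
    SELECT 0
    USE keytable AGAIN ALIAS nextkeys
    * Lock the record so two users can't grab the same value
    DO WHILE !RLOCK()
        * keep retrying until the lock is granted
    ENDDO
    lnKey = nextkeys.nextkey
    REPLACE nextkeys.nextkey WITH lnKey + 1
    UNLOCK
    USE
    RETURN lnKey
ENDFUNC
```

If sequential values don't matter, the whole routine collapses to something like REPLACE mytable.ckey WITH SYS(2015) at insert time.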

The bad alternatives are to factor the DELETED() function into the primary key expression, or to use a filtered index. The first approach - extending the key by prepending or appending a character expression built from an IIF() based on DELETED() - is inherently flawed. It works only as long as you never delete and recycle the key more than once; the key's uniqueness is violated as soon as there are two occurrences of the key for which DELETED() returns the same result.
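To make the failure concrete, a sketch of the flawed expression (tag and field names are hypothetical):

```foxpro
* The flawed approach: fold DELETED() status into the key.
INDEX ON IIF(DELETED(), "D", "A") + ckey TAG pk_bad
* First cycle: live row indexes as "A"+ckey, deleted row as
* "D"+ckey - no collision, so it appears to work.
* Second cycle: delete the reused key again and you now have
* two rows that both evaluate to "D"+ckey, and the unique
* key violation is back.
```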

The second bad alternative is the filtered index itself; adding a FOR clause to the index expression prevents deleted records from being considered in the index. The problems come from the need to create the primary key programmatically (primary keys are added with the ALTER TABLE statement, which has no way of building in a FOR clause; you can fudge the DBC and DBF table headers, but that's just asking for trouble) and from the fact that a filtered index is never available for Rushmore optimization. That can hurt SQL SELECT statements that join two tables when no unfiltered index exists that otherwise duplicates the primary key.
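The closest you can get without header fudging is a filtered candidate key rather than a true primary key - a sketch, with a hypothetical tag name:

```foxpro
* A candidate key with a FOR clause; ALTER TABLE ... PRIMARY KEY
* cannot produce this, so it is only a stand-in for a real PK.
INDEX ON ckey TAG ck_live FOR !DELETED() CANDIDATE
* Deleted rows are excluded, so reusing their keys no longer
* fires a violation - but Rushmore will not use this filtered
* index to optimize joins on ckey.
```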

Unless there's a really compelling reason to reuse primary keys, I'd scrap the idea - it will make life much simpler and your database more portable to other environments. I've never found such a reason: my primary keys are surrogate keys, arbitrary values used to join tables rather than values that carry meaning as data in and of themselves.
EMail: EdR@edrauh.com
"See, the sun is going down..."
"No, the horizon is moving up!"
- Firesign Theater


NT and Win2K FAQ | cWashington WSH/ADSI/WMI site
MS WSH site | WSH FAQ Site
Wrox Press | Win32 Scripting Journal
eSolutions Services, LLC

The Surgeon General has determined that prolonged exposure to the Windows Script Host may be addictive to laboratory mice and codemonkeys