Level Extreme platform
Proper Way To Check For Duplicate Primary Key
Message
From: 23/06/2001 21:40:29
To: 23/06/2001 17:00:21
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID: 00522709
Message ID: 00522851
Views: 8
Hi Keith,

I would guess that it depends a lot on what can be assumed about the data source(s). Within each source, the uniqueness of primary and candidate keys should already be enforced. Adding a field identifying the source and treating it as part of the merged primary key should then be easy, and errors should be rare, so trapping the error should be more efficient than doing a SEEK for every record to be merged.

I would expect the errors to be an occasional blank record (depending on how records were added in the source system), or duplicates from attempting to process a source file a second time. I'd be testing for the first, counting the errors for duplicates, and bailing out if the count reached an arbitrary limit.

Of course, if you cannot make those assumptions about the source data, a different strategy would be required.
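To make the trap-the-error idea concrete, here is a minimal sketch in VFP. The table and field names (merged, src, cSource, cKey, cData) and the source tag "FILE1" are hypothetical, and 1884 is the "Uniqueness of index violated" error number; an ON ERROR handler is used since TRY/CATCH is not available in VFP 6/7:

```
* Merge src into merged, trapping uniqueness violations instead of
* doing a SEEK per record. Names here are illustrative only.
lnDupes  = 0
lnLimit  = 100                  && arbitrary bail-out threshold
lcOldErr = ON("ERROR")          && save any existing handler
ON ERROR lnDupes = lnDupes + IIF(ERROR() = 1884, 1, 0)

SELECT src
SCAN
    IF EMPTY(src.cKey)          && skip the occasional blank record
        LOOP
    ENDIF
    INSERT INTO merged (cSource, cKey, cData) ;
        VALUES ("FILE1", src.cKey, src.cData)
    IF lnDupes >= lnLimit       && likely a source file merged twice
        EXIT
    ENDIF
ENDSCAN

ON ERROR &lcOldErr              && restore the previous handler
```

The seek-first alternative would replace the handler with an IF !SEEK(...) test against the merged table's primary index before each INSERT; the point above is that when duplicates are rare, paying the error-handling cost occasionally beats paying the SEEK cost on every record.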

HTH

Geoff


>>>What is the most efficient way of handling this?
>>>Assuming I have a compound PKEY or CANDIDATE
>>>should I do a SEEK before modifying one of
>>>the components or is it more efficient to trap
>>>for the error?
>>>
>>>   ...kt
>>
>>I usually do both. I'm not sure it's the right approach (I've had several discussions here about this problem), but this is how it works in our system (single tier):
>>I have a special textbox class which checks the entered value for uniqueness (if it's not unique, the user can correct it immediately), and our form also has a mechanism to deal with errors at the save stage.
>>
>
>Hi Nadya,
>
>My situation is a bit different. I have a merge
>application where I need to merge 10K to 100K
>records at a time. I'm trying to find out if
>anyone has done any benchmarks that indicate
>which is actually the most efficient method of
>checking for duplicates.
>
>   ...kt