Deleting records using REPLACE
Message
From:        Cetin Basoz, Engineerica Inc., Izmir, Turkey
Date:        25/01/2000 05:28:35
To:          24/01/2000 16:12:57

General information
Forum:       Visual FoxPro
Category:    Coding, syntax and commands, Miscellaneous
Thread ID:   00321564
Message ID:  00322075
Views:       31
>>AFAIK you can't do it in one command. OTOH if I understood it correctly,
>>you're using recno() as PK (very dangerous if pack is also allowed). Then it's
>>dangerous to replace with negative recno() because negative recno()s are also
>>the recno()s used by buffered records. You could inadvertently duplicate them
>>(i.e.: you delete record 6 and change its PK to -6; if you have 7 records that
>>aren't tableupdated yet, they would have recno()s of -1 ... -7). If your PK
>>always gets unique values then it wouldn't be necessary to change them before
>>the delete.
>
>This matter needs some more explanation:
>
>This is the code that adds records to a certain table:
>IF SEEK(compound_keyvalue, "mytable", "myindex")
>   SELECT mytable
>   GATHER MEMVAR
>ELSE
>   INSERT INTO mytable FROM MEMVAR
>ENDIF
>
>As long as 'myindex' is in good condition, 'mytable' won't have duplicate compound_keyvalues. Though in the past I have had several situations where 'mytable' contained duplicate keys.
>
>So that's why I put a candidate index on the compound_keyvalue. The system will prevent adding a duplicate value to the table.
>
>*BUT* when a record is deleted, it should be possible to add a new record with the previously deleted key value. Due to the uniqueness constraint, though, VFP won't let me unless, before deleting, I force one of the fields of the compound index to a value that will never exist (in my case negative values won't occur).
>It's also true that packing a table changes the RECNO(), but since in this case the negative values are only used in deleted records, they don't appear in the packed file anymore ...
>
>My conclusion to this issue is as suggested by Ricardo:
>I don't like using filters on indexes, for various already-mentioned reasons, but I think in this example it is the most appropriate solution ...
>
>Thanks for your input!


Pascal,
When it comes to primary keys, I'm not in favor of a !DELETED() filter. GENDBC currently doesn't support a DELETED() filter on a PK, and I don't think that decision was made without a reasonable cause. Some frameworks do use a DELETED() filter on the PK; still, that's just my personal opinion, I don't support the idea.
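For reference, the filtered index being discussed would be declared roughly like this (table, field and tag names are illustrative):

* Illustrative only: a candidate tag with a FOR !DELETED() filter, so a
* deleted row's key no longer blocks inserting the same key again.
USE mytable EXCLUSIVE
INDEX ON field1 + field2 TAG cndlive FOR !DELETED() CANDIDATE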
What you described sounds like record recycling. It can be done without setting filters on the PK. Suppose you have a DELETED()-filtered "regular" index tag on the PK or candidate key expression. A simple, rough recycling routine:
* Buffered addition - before tableupdate()
lcExpr = myCompoundKeyFieldValues  && Value of the compound key just added (placeholder)
if indexseek(lcExpr, .f., "myAlias", "myFilteredRegularIndexTag")  && A deleted row carries this key; do not move pointer yet
   scatter memvar memo             && Grab the new record's values
   tablerevert(.f., "myAlias")     && Discard the new buffered entry
   indexseek(lcExpr, .t., "myAlias", "myFilteredRegularIndexTag")  && Now move pointer to the deleted row
   gather memvar memo              && Overwrite it with the new values - the row is recycled
   recall                          && Undelete the recycled row
endif
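The routine assumes two tags on the same key expression, roughly as below; names are illustrative, and since tag names are limited to 10 characters, a short name such as "recycle" would stand in for the "myFilteredRegularIndexTag" placeholder:

* Assumed index setup (illustrative field and tag names)
USE mytable EXCLUSIVE
INDEX ON field1 + field2 TAG cndkey CANDIDATE        && Enforces uniqueness, deleted rows included
INDEX ON field1 + field2 TAG recycle FOR DELETED()   && Deleted rows only - the recycling candidates

The unfiltered candidate tag keeps the uniqueness guarantee, while the filtered tag only serves as a fast lookup for rows worth recycling.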
Cetin