General information
Category:
Database, Tables, Views, Indexes and SQL syntax
Title:
Rejecting Duplicates during appends
I'm trying to import very large flat files (millions of records, .TXT) into indexed tables. The records need to be checked so that duplicates aren't imported. In the interest of speed, I was trying to avoid using low-level file commands to read and check one record at a time, and I don't want to import all the records into the table(s) and then remove duplicates afterwards.

So I created a candidate index and tried "APPEND FOR error()<>1884". This works only until the first duplicate is found, at which point the APPEND stops; the FOR clause behaves more like a WHILE. Does anyone know a way to get the APPEND to skip the duplicate and continue? Or perhaps a better way to approach this altogether? I am going to try the low-level code route anyway, but because the files are so large, I want to benchmark several approaches and then decide which is most time-efficient.
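For what it's worth, the "skip duplicates as you go" idea can be sketched outside of FoxPro: read the flat file sequentially, keep a set of keys already seen, and only pass through rows with new keys, so a duplicate is skipped rather than aborting the run. This is a hedged Python illustration of the technique, not VFP code; the file layout, key column, and function name are made up for the example.

```python
import csv
import io

def unique_rows(lines, key_fields):
    """Yield parsed rows, skipping any row whose key was already seen.

    This mirrors rejecting duplicates *during* the append instead of
    importing everything and de-duping in a second pass.
    """
    seen = set()
    for row in csv.reader(lines):
        key = tuple(row[i] for i in key_fields)
        if key in seen:
            continue  # duplicate key: skip this record and keep going
        seen.add(key)
        yield row

# Toy flat file: the third record duplicates the first record's key
# (the first column acts as the candidate key here).
flat = io.StringIO("1,alpha\n2,beta\n1,gamma\n3,delta\n")
kept = list(unique_rows(flat, key_fields=[0]))
# "1,gamma" is dropped because key "1" was already imported.
```

The set lookup is O(1) per record, so one sequential pass over the file scales to millions of rows; the trade-off is memory for the set of keys.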
Thanks,