General information
Category:
Databases, Tables, Views, Indexing and SQL syntax
Title:
Rejecting Duplicates during appends
I'm trying to import very large flat files (.TXT, millions of records) into indexed tables. The records need to be checked so duplicates aren't imported. In the interest of speed, I was trying to avoid using low-level file commands to read and check one record at a time, and I don't want to import all the records into the table(s) and then remove duplicates.

So... I created a candidate index and tried APPEND FOR ERROR()<>1884. This works only until a duplicate is found, at which point the APPEND stops; the FOR clause behaves more like a WHILE. Does anyone know a way I might get the APPEND to skip the duplicate and continue? Or maybe a better way to approach this issue? I am going to try the low-level code route anyway, but because the files are so large, I want to benchmark several approaches and then decide which is most efficient time-wise.
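One staging-based alternative that avoids both a record-by-record loop and a post-import cleanup pass: APPEND FROM the flat file into an unindexed cursor first (fast, since no index is maintained and no candidate key can reject rows), then move only the non-duplicate rows into the target with a single SQL SELECT. A minimal sketch, assuming a hypothetical target table MyTable with candidate key cKey and an SDF-format text file — the field names, widths, and file type are placeholders to adapt:

```foxpro
* Stage the raw file into an unindexed cursor; APPEND FROM is fast here
* because nothing can reject a row at this stage.
CREATE CURSOR csrStage (cKey C(10), cData C(50))   && hypothetical structure
SELECT csrStage
APPEND FROM mydata.txt TYPE SDF

* Keep one copy of each staged key and skip keys already in the target.
SELECT DISTINCT s.* ;
    FROM csrStage s ;
    WHERE s.cKey NOT IN (SELECT cKey FROM MyTable) ;
    INTO CURSOR csrClean

* Now the append cannot hit error 1884, so it runs to completion.
SELECT MyTable
APPEND FROM DBF("csrClean")
```

The SELECT DISTINCT handles duplicates within the import file itself, while the NOT IN subquery handles collisions with rows already in the table; whether this beats a SCAN loop with error trapping is exactly the kind of thing worth including in the benchmark.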
Thanks,