Hi, Greg.
>I'm trying to import very large (millions of records) flat files .TXT into indexed tables. The records need to be checked so duplicates aren't imported. In the interest of speed, I was trying to avoid using low-level file commands to read/check one record at a time and I don't want to import all the records into the table(s) then remove duplicates. So.... I created a candidate index and tried "APPEND FOR error()<>1884". This works only 'until' a dupe is found and the APPEND stops; more like a "WHILE". Anyway does anyone know a way I might get the APPEND to skip the duplicate and continue? Or maybe a better way to approach this issue? I am going to try the low-level code route anyway, but because the files are so large, I want to benchmark several ways, then decide which is most efficient time wise.
Maybe you can APPEND to a temp cursor, and then SCAN through it and append only the non-duplicate records to the real target table.
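A minimal sketch of that idea, assuming the target table is called MyTable with a candidate key tag "id" on an integer field — the table, field, and file names are placeholders, so adjust them to your own schema:

```foxpro
* 1. Pull the flat file into a cursor with no indexes,
*    so nothing gets rejected on the way in.
CREATE CURSOR csrImport (id I, name C(40))
SELECT csrImport
APPEND FROM data.txt TYPE DELIMITED

* 2. Make the candidate-key tag the controlling order on the target,
*    so SEEK() can test for an existing key.
SELECT MyTable
SET ORDER TO TAG id

* 3. Walk the cursor and insert only rows whose key
*    is not already in the target table.
SELECT csrImport
SCAN
    IF !SEEK(csrImport.id, "MyTable")
        SCATTER MEMVAR
        INSERT INTO MyTable FROM MEMVAR
    ENDIF
ENDSCAN
```

Because the duplicate check is a SEEK against the candidate-key tag rather than a trapped index violation, the loop never stops early the way APPEND FOR does. It should also benchmark well against the low-level route, since the SEEK is an index lookup rather than a sequential compare.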
Hope this helps,