>>>>>Our clients send us CSV files with demographics of the people they serve. The problem is that a few of the rows in the CSV have names and/or addresses with embedded commas in them. This trips up APPEND, and data ends up in the wrong columns.
>>>>>Can anyone suggest how to prevent this from happening at my end?
>>>>
>>>>Perhaps you should process such files line by line?
>>>
>>>Yes, I have considered that. But there are tens of thousands of rows across several files. The performance hit would be too great.
>>
>>One other possibility, I think, is to have a table with several extra dummy columns. If you import the file using the APPEND FROM command and find that a few of the extra columns got populated, then you know you need to re-process that file.
>
>How do you know which column has the commas?
You wouldn't know which column, but you'd know that something is wrong with that file if a few of the extra dummy columns got populated. That's the signal to re-process the file.
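An alternative to the dummy-column trick is to run a quick pre-pass with a real CSV parser before the APPEND: a parser that honors quoted fields handles embedded commas correctly, and even an interpreted-language pass over tens of thousands of rows is fast. A minimal Python sketch, assuming the expected column count and the file paths are placeholders you'd supply yourself:

```python
import csv

def clean_csv(src_path, dst_path, expected_cols):
    """Re-parse a CSV with Python's csv module (which honors quoted fields
    containing commas), write a normalized copy with every field quoted,
    and collect any rows that still split into the wrong number of columns
    so they can be fixed by hand."""
    bad_rows = []
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst, quoting=csv.QUOTE_ALL)  # quote every field
        for lineno, row in enumerate(csv.reader(src), start=1):
            if len(row) != expected_cols:
                # Embedded comma that was never quoted in the source:
                # can't be repaired mechanically, flag for review.
                bad_rows.append((lineno, row))
                continue
            writer.writerow(row)
    return bad_rows
```

Rows where the embedded comma was properly quoted pass through cleanly; rows where it wasn't end up in `bad_rows`, since no parser can tell a stray delimiter from a real one. You'd then APPEND the cleaned copy instead of the original.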
If it's not broken, fix it until it is.