General information
Category:
Coding, syntax and commands
Environment versions
Admittedly, what I've suggested is likely a poor approach: it will be SLOWER and use MORE memory AND disk space than what the original poster was doing, especially since the read loop runs iteratively and memo fields carry a noticeable processing overhead (even though they live only in a temporary file). But it does work around the problem as described, where long lines get "wrapped" onto separate lines when APPEND FROM imports a text file into a table. The main risk with wrapped lines is that the break probably doesn't fall on whitespace but in the middle of a "word" (a contiguous block of letters and digits), which would change how the resulting text is interpreted when re-exported back to a text file.
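A minimal sketch of the workaround described above, assuming a hypothetical input file named input.txt; a memo field holds each line so that nothing is truncated or wrapped on import:

```
* Read the file line by line into a cursor with a memo field.
* FGETS() accepts up to 8192 bytes per call, well past the
* 254-character limit of a character field.
CREATE CURSOR tmpLines (theLine M)
lnHandle = FOPEN("input.txt")
IF lnHandle < 0
   ? "Could not open input.txt"
   RETURN
ENDIF
DO WHILE NOT FEOF(lnHandle)
   lcLine = FGETS(lnHandle, 8192)   && one line per call, delimiters stripped
   INSERT INTO tmpLines (theLine) VALUES (lcLine)
ENDDO
= FCLOSE(lnHandle)
```

Because each record holds one complete line, re-exporting the cursor later preserves the original line boundaries instead of the wrapped ones APPEND FROM would have produced.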
>If there is lots of data, that will be SLOW and chew up memory and disk space fast.
>
>>Perhaps you can combine the techniques suggested earlier: use FGETS() to read the file line by line into a temporary table, but with a memo field rather than a character field. I do vaguely recall the original memory thread (though I can't remember exactly what you were trying to do). I suspect you ran into a problem if you tried to read the entire file in one step with FILETOSTR(). ALINES() could then split the result, but it risks failing if the file contains enough lines to push the array past its 65,000-element limit. Reading the file line by line with FGETS() avoids that limit.
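For contrast, here is a sketch of the one-shot approach the quoted reply warns about, again assuming a hypothetical input.txt; it is concise but fails once the line count exceeds VFP's array-element limit:

```
* One-shot read: loads the whole file into memory, then splits it.
* ALINES() fills an array with one element per line, so a file with
* more than 65,000 lines exceeds the array limit and raises an error.
lcText  = FILETOSTR("input.txt")
lnCount = ALINES(laLines, lcText)
? lnCount   && number of lines parsed (when the file is small enough)
```

The FGETS() loop trades this all-at-once convenience for bounded memory use and no ceiling on the number of lines.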