General information
Category:
Coding, syntax and commands
Environment versions
I'd only suggested using tables and memo fields because he'd stated he was having trouble processing the data otherwise. If it were me, I'd likely approach it with low-level file routines and process the file line by line. My initial impulse would be to implement the program as an AWK script.
>I've done the memo field approach. The application got so slow that it was unusable. Alas, it was many years ago, so I don't remember how we ended up implementing it.
>
>It should be possible to read the file in chunks using FREAD() and process each chunk. If you get part of a line at the end of a chunk, keep it around and prepend it to the next chunk.
>
>>Admittedly, what I've suggested is likely a poor approach (it IS going to be SLOWER and use MORE memory AND disc space than what the original poster was doing), but it does work around the problem as described, where long lines get "wrapped" onto separate lines when using APPEND FROM with a text file into a table. The main problem I see with "wrapped" lines is that long lines probably aren't being wrapped at whitespace, but in the middle of "word" items (contiguous blocks of letters and digits), which would likely change the interpretation of the text when it is re-exported back to a text file.
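The chunked-read idea quoted above, reading fixed-size blocks and carrying any trailing partial line over to the next block, can be sketched outside VFP as well. Here is a minimal Python illustration of the same technique; the function name and chunk size are arbitrary, not anything from the original discussion:

```python
def process_lines_in_chunks(path, chunk_size=64 * 1024):
    """Yield complete lines from `path` without loading the whole file.

    Hypothetical sketch of the approach described above: read a chunk,
    split on newlines, and keep any incomplete trailing line around so
    it can be joined with the start of the next chunk.
    """
    leftover = ""
    with open(path, "r", encoding="utf-8", newline="") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buffer = leftover + chunk
            lines = buffer.split("\n")
            # The last element may be an incomplete line; carry it over.
            leftover = lines.pop()
            for line in lines:
                yield line.rstrip("\r")
    # Emit any final line that had no trailing newline.
    if leftover:
        yield leftover.rstrip("\r")
```

Because only one chunk (plus a partial line) is held in memory at a time, arbitrarily long lines never get split at fixed column boundaries the way APPEND FROM wraps them.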