Vfp50 - import from a text file into vfp
Message
From
11/07/1997 08:26:21
Jp Steffen
Leadership Data Services
Des Moines, Iowa, United States
To
08/07/1997 13:26:03
General information
Forum:
Visual FoxPro
Category:
Other
Miscellaneous
Thread ID:
00038828
Message ID:
00039412
Views:
40
>>>>>I have several apps running that process tables of several million records. I chose FoxPro because it had the fastest db engine available. My apps regularly post several 100,000-record files in text format, using low-level file functions, to a VFP .dbf of 1.8 million records.
>>>>>
>>>>>"30,000 Records?" You won't have time to get a cup of coffee before VFP has processed your records, up, down, sideways and backwards!
>>>>>
>>>>>>I want to import a text file that has about 30,000 records
>>>>>>into vfp50. Can Visual FoxPro handle 30,000 or maybe 100,000
>>>>>>records? Thanks.
>>>>
>>>>Thanks, Steffen, for your answer. With your text file, are you using APPEND FROM to post to the VFP .dbf file? I want to move a text file (using low-level file
>>>>functions) to a VFP .dbf file. Could you show me some code? Thanks again. Rob
>>>
>>>If you really want to use low-level file functions to do this, you are either crazy or looking to pad your billable hours. What does the Help for APPEND FROM tell you to do?
>>
>>Call me crazy, Dave, but how do YOU append from a file without CR and LF at the end of each record? Keep in mind that the IBM mainframe world doesn't always conform to our neat little desktop ASCII standards! When APPEND FROM includes a RECORD LENGTH parameter which enables it to parse records without CR or CRLF, then I will save my clients the extra billable hours. Until then they are perfectly happy to PAY EXTRA to get the first record along with the 1.8 million records which follow it!!!
>>
>>JP
>
>APPEND FROM does the equivalent of parsing the strings for you. When it hits the last field in your .DBF structure, it assumes that the next however-many characters go to the next record. Unless you are downloading a file with a bunch of redefines, IBM still uses a fixed-length record. I can tell you from the experience of having to optimize data-loading procedures written by other people that one of the most inefficient ways to do it is low-level file manipulation. Why are you reprogramming Fox?

Ok guys! Sorry I was off the thread for a few days, but I wanted to answer Doris' question, "Why am I reprogramming Fox?" Well, Doris, you must be aware of some SET command or APPEND FROM argument which has as yet eluded me. Do me a favor and try this little test out. 1) Create a text file by opening a text editor and just typing into it WITHOUT inserting hard returns (no CRLF characters). Make sure to type a couple hundred characters in there for good measure. 2) Create a small .dbf table with three or four fields of two or three characters each. 3) Now use our highly esteemed APPEND FROM command: APPEND FROM test.txt SDF.
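For anyone who wants to reproduce that test, a minimal sketch in VFP follows (the file name, field names, and sample data are placeholders, not from the thread):

* 1) Build a test file with a couple hundred characters and no CR/LF anywhere
lcData = REPLICATE("ABC123XYZ", 25)     && 225 characters, no record terminators
lnH = FCREATE("test.txt")
= FWRITE(lnH, lcData)
= FCLOSE(lnH)

* 2) A small table with a few narrow character fields
CREATE TABLE test (f1 C(3), f2 C(3), f3 C(3))

* 3) The command under discussion
APPEND FROM test.txt SDF
? RECCOUNT()    && per JP's account, only the first record arrives

SDF format expects each fixed-length record to end with a carriage return, which is exactly what JP's mainframe feeds lack.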

Doris and/or Dave, what happened on your machine? Do you know of some undocumented argument that tells Fox to break the strings according to the character length of the .dbf table? Not that this would change my reasons for using low-level file functions: I do things like input audits, conditionally posting incoming tape data, and unpacking 'packed binary' data from its original EBCDIC format. APPEND FROM is limited. Yes, you're right, Doris, low-level file functions can be a big performance hit. Sometimes I use a language a little closer to the hardware. But in many cases Fox's low-level file functions allow me to read in, interpret, and audit incoming data as it would normally come to me from a standard 9-track tape.
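The low-level approach JP describes might look roughly like this (record width, field names, and the input file are made up for illustration; a real mainframe feed would also need EBCDIC-to-ASCII translation, which this sketch skips):

* Read fixed-length records (no CR/LF) with low-level file functions
#DEFINE REC_LEN 9                && bytes per record: 3 + 3 + 3
lnH = FOPEN("feed.txt")
IF lnH < 0
   ? "Cannot open input file"
   RETURN
ENDIF
DO WHILE !FEOF(lnH)
   lcRec = FREAD(lnH, REC_LEN)
   IF LEN(lcRec) < REC_LEN
      EXIT                       && short trailing read: audit or discard
   ENDIF
   * Slice the buffer by column position and post it
   INSERT INTO test (f1, f2, f3) ;
      VALUES (SUBSTR(lcRec, 1, 3), SUBSTR(lcRec, 4, 3), SUBSTR(lcRec, 7, 3))
ENDDO
= FCLOSE(lnH)

Reading by a fixed byte count, rather than by line terminators, is what lets this loop handle records with no CR/LF at all, and the per-record pass is where the input audits JP mentions would go.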

Dave, I never said you were a lazy programmer.

JP