Can this code be faster?
Message
From: Hilmar Zonneveld, Independent Consultant, Cochabamba, Bolivia, 27/10/2003 09:52:48
To: 27/10/2003 09:36:46
General information
Forum: Visual FoxPro
Category: Coding, syntax and commands, Miscellaneous
Thread ID: 00842920
Message ID: 00842931
Views: 16
In any case, I would consider the low-level file functions (LLFF) the most trustworthy alternative if you don't have an upper limit on the file size.

You could read the entire file into a single memory variable with FILETOSTR(), and processing might well be faster (especially if you use MLINE() and the _MLINE system variable) - but I haven't tried this. However, if the file can grow larger than the free RAM, this approach will start using virtual memory.
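If you want to experiment with that first idea, a rough, untested sketch might look like this. It assumes the same record layout as your FREAD() loop (4-byte header, 3-byte size field, VAL(size) bytes per record in total), and because the records are length-prefixed it walks the string with SUBSTR() rather than MLINE():

* Untested sketch: read the whole file at once, then walk it in memory.
* Assumes the same layout as the FREAD() loop: 4-byte header, 3-byte size,
* and VAL(size) bytes per record in total.
LOCAL lcAll, lnPos, lnLen, lnSize, lcHeader, lcData

lcAll = FILETOSTR(al3_file)        && entire file in one string
lnLen = LEN(lcAll)
lnPos = 1

DO WHILE lnPos <= lnLen
   lcHeader = SUBSTR(lcAll, lnPos, 4)
   lnSize   = VAL(SUBSTR(lcAll, lnPos + 4, 3))
   lcData   = CHRTRAN(SUBSTR(lcAll, lnPos + 7, lnSize - 7), '?', ' ')
   * ... same DO CASE dispatch on lcHeader as in your code ...
   lnPos = lnPos + lnSize          && jump to the start of the next record
ENDDO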

Another thing you could try is to read, say, a hundred thousand bytes at a time and then process the buffer in memory (for example with MLINE() and _MLINE). Your main loop would be a little more complicated, as in the sketch below, but it might be interesting to test whether this is faster or not.
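A chunked version might look something like this (again untested; the 100000-byte chunk size is only an example, and it again dispatches on the length-prefixed records with SUBSTR() rather than MLINE()):

* Untested sketch: read the file in large chunks and carry any partial
* record over to the next FREAD().
LOCAL lnHandle, lcBuffer, lcHeader, lcData, lnSize

lnHandle = FOPEN(al3_file)
lcBuffer = ''

DO WHILE .T.
   IF !FEOF(lnHandle)
      lcBuffer = lcBuffer + FREAD(lnHandle, 100000)
   ENDIF
   * Process only the complete records currently in the buffer.
   DO WHILE LEN(lcBuffer) >= 7
      lnSize = VAL(SUBSTR(lcBuffer, 5, 3))
      IF LEN(lcBuffer) < lnSize
         EXIT                      && incomplete record; read more first
      ENDIF
      lcHeader = LEFT(lcBuffer, 4)
      lcData   = CHRTRAN(SUBSTR(lcBuffer, 8, lnSize - 7), '?', ' ')
      * ... same DO CASE dispatch on lcHeader as in your code ...
      lcBuffer = SUBSTR(lcBuffer, lnSize + 1)
   ENDDO
   IF FEOF(lnHandle)
      EXIT                         && nothing left to read
   ENDIF
ENDDO
=FCLOSE(lnHandle)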

Regards,

Hilmar.

>I am pulling information from a file that can be of any size; we have not determined the average size of the file yet. The file starts with a header, and immediately following the header is the size of the data that belongs to that header. For instance, in the first line below, the header would be 1MHG and the next 167 characters hold the data for that header. Starting at position 168 will be 2TRG, position 173 following that will be 6CVA... as in the example below:
>
>1MHG167...
>2TRG172...
>6CVA273...
>5VEH272...
>2TRG172...
>6CVA273...
>5VEH272...
>3MTG102...
>
>1MHG is a message header. It occurs only once per message and is always at the start of the string. 2TRG starts a new transaction; there are many transactions per file.
>
>The speed seems OK at this point; however, the files that I am reading in could get extremely large. Is there any way to speed up the FREAD() processing? Right now I am processing the data one transaction at a time, and the file could contain 10,000 or more transactions. It works great functionally (I'm verifying that I read all the data in the file), but I want to be prepared in case the files get much larger, and optimize for speed.
>
>TIA,
>
>Simplified snippet:
>
>
>m.handle =  FOPEN(al3_file)
>m.sizeof = FSEEK(m.handle,0,2)
>=FSEEK(m.handle,0)
>
>DO WHILE !FEOF(m.handle)
>   m.header=FREAD(m.handle,4)
>   m.size = FREAD(m.handle,3)
>   m.data = FREAD(m.handle,VAL(m.size)-7)
>   m.data = CHRTRAN(m.data,'?',' ')
>   DO CASE
>     CASE m.header = '1MHG'             && beginning of message
>        DO p_1MHG WITH m.header+m.size+m.data
>     CASE m.header = '2TRG'             && new transaction
>        DO p_2TRG WITH m.header+m.size+m.data
>     CASE m.header = 'Something else'   && about 15-20 cases
>        * etc...
>     CASE m.header = '3MTG'             && end of message
>        * do something
>   ENDCASE
>ENDDO
>=FCLOSE(m.handle)
>
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)