Two loop or not to loop?
Message
From: 01/09/2004 13:58:50
To: 01/09/2004 11:06:05
General information
Forum: Visual FoxPro
Category: Coding, syntax and commands, Miscellaneous
Thread ID: 00938414
Message ID: 00938472
Views: 20
That's an interesting one, Jos.

Are you in a position to open the updateable tables EXCL? If so, that itself should make it significantly faster.
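
Something along these lines at the start of the run would do it (a minimal sketch only; the table names are placeholders, not your actual file names):

*-- Hypothetical: open the ten target tables exclusively before the import run.
*-- With exclusive use, VFP can buffer index and header updates instead of
*-- flushing and locking on every single write.
FOR lnFile = 1 TO 10
    lcTable = "datafile" + TRANSFORM(lnFile)   && placeholder naming scheme
    USE (lcTable) IN 0 EXCLUSIVE               && IN 0 = next free work area
ENDFOR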

Do you have some constraint on VFP's memory usage in effect? If so, that is likely hurting you in THIS case. Since you are not seeing significant CPU use, it follows (in my head) that you are doing real I/O, probably more or less as it happens. The idea is to keep as much data as possible held in cache so that the job becomes more CPU-intensive (by virtue of limiting the I/O).
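
If a cap is in place, something like this would show it and loosen it (the 512 MB figure is purely illustrative):

*-- Check the current foreground/background buffer memory settings.
? SYS(3050, 1)    && foreground memory limit, in bytes
? SYS(3050, 2)    && background memory limit, in bytes
*-- Illustrative only: give VFP roughly 512 MB of foreground buffer memory
*-- so table and index pages stay cached instead of being re-read from disk.
= SYS(3050, 1, 512 * 1024 * 1024)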

Let us know... ready to think harder on it.

cheers

>Hi All,
>
>Here is the situation:
>
>1) I receive packets of data throughout the day, 1000 records of data at a time. This data arrives as an array of 1000 rows.
>
>2) Each record (array row) must be written into up to 10 different files of similar structure but under different conditions.
>
>3) Every record (array row) is always written to the first file.
>
>4) Records (array rows) may or may not be written into the other 9 files. However, if a record does need to be written into the second file then it will also need to be written into all nine of the other files.
>
>5) Currently I have a master loop going through the array rows, 1 to 1000. Inside the loop I write the data into the first file. Then I determine if the data needs to be written into the other files.
>
>6) If the data needs to go into the other 9 files then I call another routine that writes the data into them using a 9-cycle loop, one cycle per file. Each cycle selects the file and then writes the data into it.
>
>7) The files contain on average 1 million records each. All the files have associated CDX files containing 2 indexes each.
>
>Steps 5 and 6 can be summarized by saying that I loop through the array of records, write each record into the 10 files in sequence, and then process the next array record.
>
>If I need to write every array record into all the files then this process goes quite slowly. The machine is fast: 3.2 GHz CPU, 1 GB of memory with plenty free, and everything runs locally, not over a network. Tested on WXP and W2K.
>
>However, I do not see the CPU being pushed at all. The above is not, apparently, a CPU-intensive loop according to Task Manager.
>
>Question: would it be faster to process all 1000 array records into file 1, then process all 1000 array records into file 2, then all 1000 array records into file 3, and so on? Or, phrased another way, is the above slow because I am looping through each array record and writing it into 10 files, one array record at a time? Would VFP run it faster if it could just deal with one table and index at a time?
>
>Obviously I could re-write the above to do the test myself, but that would be quite complex and I am hoping that there is already a known “best practice” solution. I wouldn’t want to re-program these routines if it would not help anyway.
>
>Thanks
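
For what it's worth, the two orderings you describe in steps 5 and 6 and in your question would look roughly like this (a rough sketch only; aRecs, WriteRecord and NeedsFanOut are placeholders for your array and your write/condition logic):

*-- Current approach (record-major): each array row touches all ten files.
FOR lnRow = 1 TO ALEN(aRecs, 1)
    = WriteRecord(@aRecs, lnRow, 1)             && file 1 always gets the record
    IF NeedsFanOut(@aRecs, lnRow)               && must it go to the other nine?
        FOR lnFile = 2 TO 10
            = WriteRecord(@aRecs, lnRow, lnFile)
        ENDFOR
    ENDIF
ENDFOR

*-- Alternative (file-major): finish all 1000 rows for one file before moving on,
*-- so each table and its CDX stay "hot" for a whole pass instead of being
*-- revisited ten times per row.
FOR lnFile = 1 TO 10
    FOR lnRow = 1 TO ALEN(aRecs, 1)
        IF lnFile = 1 OR NeedsFanOut(@aRecs, lnRow)
            = WriteRecord(@aRecs, lnRow, lnFile)
        ENDIF
    ENDFOR
ENDFOR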