Any way to optimize Append from for huge file?
Message
To
03/05/2000 20:02:30
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00365681
Message ID: 00365922
Views: 14
Hi Dore & Gar,

I've read your suggestions and I think I've found an easy solution. What we do right now is: create an empty database and table shell, create all the indexes, then APPEND FROM, then REPLACE ALL. It's very inefficient and takes ~2 hours.

My idea is to create an empty database shell, then add the result of the SQL into this database (maybe using the syntax Gar suggested), then just ALTER TABLE (add the remaining fields), then REPLACE ALL, then add the indexes. In theory it should work much faster, because now there is no need to append, right?
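
Roughly, in code, it would look something like this. This is only a sketch: the table, alias and field names (workfile, src, cust_id, total_amt, status_flag) are made up for illustration, not our real schema:

* Let the SQL SELECT create the work table directly, instead of appending into a shell
SELECT src.* ;
    FROM customers src ;
    INTO TABLE workfile

* Add the remaining (derived) columns afterwards
ALTER TABLE workfile ADD COLUMN total_amt N(12, 2)
ALTER TABLE workfile ADD COLUMN status_flag C(1)

* Fill the derived columns while there are still no index tags to maintain
SELECT workfile
REPLACE ALL total_amt WITH 0, status_flag WITH "N"

* Build the index tags last, in one pass over the finished data
INDEX ON cust_id TAG cust_id
INDEX ON status_flag TAG statflag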

Another idea, as I already described: create an empty database and table shell, but instead of APPEND FROM and REPLACE ALL, do INSERT INTO within a SCAN loop. It should be faster as well, but the first idea seems better.
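
The SCAN-loop variant would look something like this (again just a sketch with made-up names; sqlresult is assumed to be the table produced by the SQL step and workfile is the predefined target shell):

* Walk the SQL result and insert each record, filling the extra fields on the way
SELECT sqlresult
SCAN
    INSERT INTO workfile ;
        (cust_id, cust_name, total_amt, status_flag) ;
        VALUES ;
        (sqlresult.cust_id, sqlresult.cust_name, 0, "N")
ENDSCAN

(SCAN...ENDSCAN reselects its own work area on every pass, so the INSERT does not disturb the loop.)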

BTW, there is no SQL Server involved; these are all native VFP tables.
We have our normalized database, but the work file needs to be a denormalized table (all fields from all the related tables).

Thanks a lot for your advice; now it's time to try it and choose the best approach. As I said, this is not my code, so I'll let my colleague test it.


>It's probable that your REPLACE command is what is taking all of the time. My suggestion had to do with the fact that when you do a large APPEND FROM, all of the tags in the structural compound index file are updated with each record appended. Many times a combination of an APPEND FROM with no index tag updates followed by a full REINDEX will take less time. To do this, rename the .cdx file to something else, like tablname.was; when you next use the table you will have to allow VFP to rewrite the .dbf header, since there is no .cdx associated with the file. Do the APPEND FROM, then rename tablname.was back to tablname.cdx, then USE tablname INDEX tablname (this reassociates the .cdx with the .dbf), then REINDEX. It sounds like a pain but may save you some processing time. (A rough sketch of this sequence appears after the quoted messages below.)
>Good Luck.
>
>>Hi Dore,
>>
>>I'm not sure that I understand your advice; could you please elaborate a little more? This table (I mean the target shell) has lots of compound indexes.
>>
>>TIA
>>
>>>>Hi everybody,
>>>>
>>>>We're having this problem: our SQL produces ~1,000,000 records. The SQL runs fast. Then we need to place these records into a database with a predefined structure, indexes, etc. So first it prepares a shell database and table, and then uses the APPEND FROM command (running a DeNull program in between). This operation has already taken ~15 min. and it's not finished yet. What can you suggest to optimize it?
>>>>
>>>>Thanks in advance.
>>>
>>>Depending upon the number and complexity of the index tags in the structural compound index file, it might make sense to decouple the .dbf and .cdx files, do the APPEND FROM, relink the .dbf and .cdx, and REINDEX.
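
A rough sketch of the rename-and-REINDEX sequence suggested above. The table name mytable is a placeholder, and the SQL result is assumed to already sit in a table sqlresult.dbf:

* Make sure the table is closed before detaching its structural .cdx
IF USED("mytable")
    USE IN mytable
ENDIF
RENAME mytable.cdx TO mytable.was

* Opening the table now prompts VFP to rewrite the .dbf header, as described above
USE mytable EXCLUSIVE
APPEND FROM sqlresult            && bulk load with no index tags to maintain
USE                              && close the table again before relinking

* Put the index file back, reassociate it with the .dbf and rebuild all tags in one pass
RENAME mytable.was TO mytable.cdx
USE mytable INDEX mytable EXCLUSIVE
REINDEX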
If it's not broken, fix it until it is.

