TABLEUPDATE() with a large number of records
Message
From
27/08/2004 10:58:54

General information
Forum: Visual FoxPro
Category: Coding, syntax and commands, Miscellaneous
Thread ID: 00936296
Message ID: 00937013
Views: 22
Hi,

Thanks for everyone's input. I ran my code on the same dataset twice; the only difference was that the first run used no buffering and the second run used table buffering. Without buffering it took about 3 seconds; with buffering, on the exact same data, it took 5 seconds. On a really large dataset that difference could be hours, so it seems there is quite a big cost in time when doing large numbers of inserts and deletes with table buffering.

What I think I'll do is come up with a mechanism to make sure that only one person can run the archive process at a time, that no one can import internet data while data is being archived, and that the archive process cannot start if someone is currently importing internet data. That should take care of my worries. And if it fails halfway through for some reason, the archiving should just resume where it left off. Checking whether the archive record already exists before I add it (and deleting it if it's already there) should keep all the data together properly.
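
To make that concrete, here is a rough sketch of the restartable loop I have in mind. The table names (master, archive), the key field cKey, the flag lArchived and the archive fields are just placeholders for illustration, not my real structures:

* Rough sketch only - real table, field, and tag names will differ.
* Assumes master carries a logical lArchived flag and archive has an
* index tag cKey on the key field.
USE archive IN 0 ORDER cKey
USE master IN 0
SELECT master
SCAN FOR !lArchived
    * If a previous run died after writing the archive record but before
    * flagging the master record, remove the orphan so the archive never
    * ends up with a duplicate.
    IF SEEK(master.cKey, "archive", "cKey")
        SELECT archive
        DELETE
        SELECT master
    ENDIF
    * Placeholder summary fields - the real insert builds the
    * parent/child summary first.
    INSERT INTO archive (cKey, dPeriod, nTotal) ;
        VALUES (master.cKey, master.dPeriod, master.nTotal)
    REPLACE lArchived WITH .T. IN master
ENDSCAN

The "only one archive run at a time" part would sit outside this loop, probably as an exclusive flag in a small control table that the import routine also checks.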

Thanks for the suggestions!

Chris

>I guess you could use a transaction to cover the case where the record was written to the archive but the process crashed before updating the master table.
>
>But if you want to just do it... then:
>
>Scan MASTER for unarchived records
>    If the archive record exists (check for the existence of the key in the archive table)
>        Delete the archive record - because it shouldn't be there
>    EndIf
>    Create the archive record
>    Insert into archive
>    Update Master
>EndScan
>
>
>
>>Hi Wayne
>>
>>
>>I agree with most of what you've written.
>>
>>>I guess I don't get it... I still see no need for buffering - the concept of buffering is that you get a local copy of the data (that already exists), you work on it, update it, etc. Then you commit the changes to the underlying table. It is not for large insert operations.
>>>
>>>If you create memory variables and INSERT INTO ... FROM MEMVAR, you are essentially waiting until you have it the way you want before you create your archive set. Buffering is just overkill.
>>>
>>>The process you need to employ sounds like this...
>>>
>>>Loop through existing master table and identify records to archive
>>
>>What he could do here is begin a transaction.
>>
>>>For each record, summarize parent and child records into memory (either use select and/or m. assigns)
>>>Insert parent/child summary into archive (INSERT INTO __archive FROM MEMVAR)
>>>Mark the master table record as archived
>>
>>Then end the transaction here. That should provide the safety that he thinks buffering will provide.
>>
>>>
>>>If the process fails anywhere - you just restart it and it picks up where it left off.
>>
>>Which would be even simpler, because if the transaction fails for one record, nothing should have been changed. So it should be caught on the next restart. Right?
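
For anyone reading this later: the transaction-per-record version suggested above would look roughly like this (same placeholder names; note that BEGIN/END TRANSACTION only works on tables that belong to a database container):

* Sketch of the transaction-wrapped variant - placeholder names again.
SELECT master
SCAN FOR !lArchived
    SCATTER MEMVAR                && assumes archive shares master's field names
    BEGIN TRANSACTION
    INSERT INTO archive FROM MEMVAR
    REPLACE lArchived WITH .T. IN master
    END TRANSACTION
ENDSCAN

A real version would wrap the two writes in error handling and ROLLBACK on failure, so a record that fails leaves both tables untouched and simply gets picked up again on the next run.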