>Hi,
>
>I'm working on a mechanism that is going to archive a couple of large tables. (You may have read my question from yesterday.) A tremendous number of records would be updated via TABLEUPDATE() at the same time. I saw somewhere else in our software that a developer was committing the buffers every 100th record. That developer no longer works here, so I can't ask him about it. Would it be a good idea for me to do something similar? Also, why would it (or would it not) be a good idea to do it that way? We are using VFP 6 with the VMP (version 4) framework.
>
>Thanks,
>Chris
Chris,
I couldn't quite follow the whole scenario, but off the top of my head, a few ideas:
-Do you really need to buffer at all?
-Do you really need to use traditional Fox commands? As I understand it, you'd be archiving old entries in physical (entry) order. A simple low-level 'cut' from the big table might be as fast as flat-file copying and is, IMHO, worth considering if you're serious when you say it takes hours.
-Instead of checking the size or RECCOUNT() every Nth record (RECCOUNT() is practically useless here), can't you calculate the interval beforehand based on RECSIZE()?
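To make the batched-commit idea concrete, here is a minimal VFP 6 sketch, not the original developer's code: it commits table-buffered changes every N records, and sizes N up front from RECSIZE() instead of polling RECCOUNT() as it goes. The alias "bigtable", the `archived` field, and the 64 KB batch target are assumptions for illustration only.

```foxpro
SELECT bigtable
= CURSORSETPROP("Buffering", 5, "bigtable")   && 5 = optimistic table buffering

LOCAL lnBatch, lnCount
* Size the batch beforehand from the record width (third point above);
* 65536 bytes per commit is an arbitrary illustrative target.
lnBatch = MAX(1, INT(65536 / RECSIZE("bigtable")))
lnCount = 0

SCAN
    REPLACE archived WITH .T.        && stand-in for the real update
    lnCount = lnCount + 1
    IF lnCount % lnBatch = 0
        * Note: TABLEUPDATE() can move the record pointer, so you may
        * need to save RECNO() and reposition before continuing the SCAN.
        IF !TABLEUPDATE(.T., .T., "bigtable")
            = TABLEREVERT(.T., "bigtable")   && or resolve conflicts here
        ENDIF
    ENDIF
ENDSCAN
= TABLEUPDATE(.T., .T., "bigtable")   && commit whatever remains
```

Whether committing every Nth record helps depends mostly on memory pressure from the buffer and how long you can tolerate holding locks; per-batch commits trade a little throughput for bounded buffer size and smaller units of failure.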
Cetin