>Hi,
>
>I'm working on a mechanism that is going to archive a couple of large tables. (You may have read my question from yesterday.) A tremendous number of records would be TABLEUPDATEd at the same time. I saw elsewhere in our software that a developer was committing the buffers every 100th record. That developer no longer works here, so I can't ask him about it. Would it be a good idea for me to do something similar? Also, why would it (or would it not) be a good idea to do it that way? We are using VFP 6 with the VMP (version 4) framework.
>
>Thanks,
Chris,
I haven't read all your replies, but in either VFP (with table buffering) or SQL Server, trying to update a large number of records in one pass is inherently slow.
When you table buffer your records in VFP, you're basically doing the same thing as with SQL Server. In both instances, you're much better off working with the data as a set of records rather than processing the file record by record. In both cases, if you reduce the number of records retrieved, you'll get better performance.
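For what it's worth, the batched-commit pattern Chris described might look something like the sketch below. This is only an illustration, not the original developer's code: the table name, archive condition, and batch size of 100 are all assumptions, and error handling is reduced to a simple revert.

```foxpro
* Hypothetical sketch: commit table-buffered changes every 100 records
* so locks are held briefly and a failure loses at most one batch.
LOCAL lnCount
lnCount = 0

SELECT MyLargeTable                      && hypothetical alias, opened with
                                         && CURSORSETPROP("Buffering", 5)
SCAN FOR lArchiveFlag                    && hypothetical archive condition
    * ... copy the record to the archive table, mark it, etc. ...
    lnCount = lnCount + 1
    IF MOD(lnCount, 100) = 0
        IF !TABLEUPDATE(.T., .T.)        && commit all buffered rows, force overwrite
            = TABLEREVERT(.T.)           && on failure, discard this batch
        ENDIF
    ENDIF
ENDSCAN

* Commit whatever is left in the buffer after the loop.
IF !TABLEUPDATE(.T., .T.)
    = TABLEREVERT(.T.)
ENDIF
```

The trade-off is between buffer size and lock duration: committing every record maximizes round trips, while buffering everything holds a huge uncommitted buffer (and, inside a transaction, holds locks) until the very end. A periodic commit is a middle ground, though as George notes, reducing the number of records you touch in the first place helps more.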
George
Ubi caritas et amor, deus ibi est