I remember your post from yesterday. Now, how about something totally radical: why not install an instance of MSDE or SQL Server and archive the data to that? I think MSDE has a table size limit, but if it does, it is greater than VFP's.
The other developer may have been committing buffered records that frequently because of network instability. Either that, or the perceived speed may have been better when committing every 100th record instead of waiting until the end and committing them all at once, where the wait could have been substantial. Of course, the overall time would still be about the same. He may also have been checking the table size after every 100 records to see whether the 2GB limit had been reached, instead of using a calculation.
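In case it helps, here is a minimal sketch of that batched-commit pattern in VFP. This is not the departed developer's actual code; the alias "archive", the variable names, and the batch size of 100 are all illustrative assumptions.

```foxpro
* Sketch: commit table-buffered rows every 100th record instead of
* one big TABLEUPDATE() at the end. Assumes a table-buffered cursor
* named "archive" is already open and being appended to.
LOCAL lnCount
lnCount = 0
SCAN  && loop over the source records
    * ... append/update the buffered archive record here ...
    lnCount = lnCount + 1
    IF MOD(lnCount, 100) = 0
        IF !TABLEUPDATE(.T., .T., "archive")  && commit all buffered rows
            * handle the failure, e.g. log AERROR() or TABLEREVERT()
        ENDIF
        * This would also be the spot where he could have checked
        * RECCOUNT() * RECSIZE() against the 2GB table limit.
    ENDIF
ENDSCAN
TABLEUPDATE(.T., .T., "archive")  && commit the final partial batch
```

As noted above, this mainly changes the perception of speed (and keeps the uncommitted buffer small); the total elapsed time is about the same either way.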
>Hi,
>
>I'm working on a mechanism that is going to archive a couple of large tables. (You may have read my question from yesterday.) A tremendous number of records would be TABLEUPDATE()d at the same time. I saw somewhere else in our software that a developer was committing the buffers every 100th record. That developer no longer works here, so I can't ask him about it. Would it be a good idea for me to do something similar? Also, why would it (or not) be a good idea to do it that way? We are using VFP 6 with the VMP (version 4) framework.
>
>Thanks,
>Chris
Mark McCasland
Midlothian, TX USA