>Al,
>
>Sounds as if you solved it, but:
>
>I agree with the others that you want to get everything in shape before appending to SQL Server. In your shoes I'd use SELECT expressions into a new cursor, eg
>
>
>select EVL(DateTimeColX, .NULL.) as DateTimeColX, ...
>
>(but watch out if the very first value is blank or VFP will throw a "cannot determine field type" error)
>and then append from that cursor. Also, if rows are safe I'd look at the tablebuffering since SQL Server needs lots of room for database log if it's all one fat append.
>
>HTH, J
Yes, doing any data munging ahead of the upload to SQL Server is definitely the way to go. For some reason I thought a table-buffered remote view would act like a local cursor until TABLEUPDATE() was applied (i.e. not involve the back end). Clearly that's not the case.
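For anyone following along, here's a sketch of the SELECT-into-cursor staging step J suggested. Column, table, and cursor names are hypothetical; the CAST() wrapper is one way to pin the field type so a blank/NULL first value doesn't trigger the "cannot determine field type" error:

```
*-- Hypothetical names throughout; a minimal sketch, not my actual import code.
*-- EVL() swaps empty values for NULL; CAST() fixes each column's type up front
*-- so VFP doesn't have to infer it from the first row.
SELECT ;
    CAST(EVL(DateTimeColX, .NULL.) AS DateTime) AS DateTimeColX, ;
    CAST(EVL(CharColY, .NULL.) AS Varchar(50)) AS CharColY ;
  FROM MySourceTable ;
  INTO CURSOR csrStaging
```

Then append to SQL Server from csrStaging once everything is clean.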
Per your suggestion about table buffering, I switched the RVs to use optimistic row buffering (=3) instead of the default optimistic table buffering (=5). AFAIK those are the only two settings that are valid for RVs.
Results (elapsed time, mm:ss):
- Table buffering: 7:49
- Row buffering: 8:07
Those results are well within normal run-to-run variance, so for this scenario I don't think the buffering setting makes any difference.
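For reference, this is how I toggled the buffering mode between runs. The view alias "rv_Import" is hypothetical; the CURSORSETPROP() values are the standard ones (3 = optimistic row, 5 = optimistic table), and table buffering needs MULTILOCKS on:

```
*-- Hypothetical alias; assumes the remote view is already open in this data session.
SET MULTILOCKS ON                             && required for table buffering
CURSORSETPROP("Buffering", 3, "rv_Import")    && optimistic row buffering
* CURSORSETPROP("Buffering", 5, "rv_Import")  && optimistic table buffering

* ... append/modify rows ...

IF NOT TABLEUPDATE(.T., .T., "rv_Import")     && all rows, force
    AERROR(laErr)
    * inspect laErr for the failure details
ENDIF
```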
One thing that does make a (literally) big difference is the SQL Server recovery model for the databases. The prototype DBs I was given to work with were in FULL recovery. After some test import runs I started getting low-disk-space warnings on my dev VM, and found the three database log files had each ballooned to 19GB. I definitely don't need the point-in-time recovery that FULL provides, so I switched the DBs to SIMPLE recovery and DBCC SHRINKFILE'd the logs down to 1GB each (they could probably be smaller, but SQL Server seems happy at that size).
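In case it helps anyone else, the gist of what I ran, here shown via SQL pass-through from VFP (database name, log file logical name, and connection string are all hypothetical; I actually did it from SSMS):

```
*-- Hypothetical server/database/log names; a sketch of the recovery-model
*-- switch and log shrink, sent as pass-through T-SQL.
lnH = SQLSTRINGCONNECT("Driver={SQL Server};Server=MyDevVM;Trusted_Connection=Yes")
IF lnH > 0
    SQLEXEC(lnH, "ALTER DATABASE MyImportDB SET RECOVERY SIMPLE")
    * The logical log file name varies per database; check sys.database_files first.
    SQLEXEC(lnH, "USE MyImportDB; DBCC SHRINKFILE (MyImportDB_log, 1024)")  && target size in MB
    SQLDISCONNECT(lnH)
ENDIF
```

Note DBCC SHRINKFILE takes the target size in MB, so 1024 = the 1GB I settled on.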
Regards. Al
"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov
Neither a despot, nor a doormat, be
Every app wants to be a database app when it grows up