Thank you for such a quick reply.
I neglected to give the total picture. Sorry.
Code sample 1 - this was the first try:
* Table has 30,000 transaction records; the record pointer is at the top of the file.
BEGIN TRANSACTION
DO WHILE !EOF()
   * Test whether the summary record already exists on the server (SQLEXEC).
   IF <summary record not found>
      * Add summary records to the summary tables (3 of them),
      * then close those tables.
   ENDIF
   * Update the summary file on the server via SQLEXEC.
   * Insert the current record into the transaction file on the server.
   SKIP
ENDDO
IF <a failure occurred>
   ROLLBACK
ELSE
   END TRANSACTION
ENDIF
We could never reach the END TRANSACTION because we ran out of memory.
Note: we did not use TABLEUPDATE(), thinking that END TRANSACTION would take care of flushing any buffered records.
Code sample 2 - our second attempt. We put BEGIN and END TRANSACTION around only the insert and update for each record so the buffers would be flushed each time (or so we thought). This also fails, but it gets farther through the file.
* Table has 30,000 transaction records; the record pointer is at the top of the file.
DO WHILE !EOF()
   BEGIN TRANSACTION
   * Test whether the summary record already exists on the server (SQLEXEC).
   IF <summary record not found>
      * Add summary records to the totals tables, then close those tables.
   ENDIF
   * Update the summary file on the server via SQLEXEC.
   * Insert the current record into the transaction file on the server.
   END TRANSACTION
   SKIP
ENDDO
In this version, the BEGIN and END surround each record as it is posted instead of committing once after all 30,000 are posted. This got us further along in the file, but it still failed.
Does using TABLEUPDATE() inside the transaction have any effect on the buffers?
Is there a FLUSH-style command that can force the program to free memory and write all buffers to disk?
We thought END TRANSACTION would clear the buffers. I guess we thought wrong.
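In case it clarifies what we are asking, here is roughly what we are picturing for a third attempt: commit in batches and force a write between batches. The TABLEUPDATE()/FLUSH placement and the batch size of 500 are our guesses from the documentation, not anything we have tested:

* Sketch only - assumes the local tables are table-buffered.
lnBatch = 0
llOK = .T.
BEGIN TRANSACTION
DO WHILE !EOF() AND llOK
   * ... same SQLEXEC test, summary updates, and insert as above ...
   lnBatch = lnBatch + 1
   IF lnBatch >= 500              && arbitrary batch size
      llOK = TABLEUPDATE(.T.)     && write buffered rows before committing
      IF llOK
         END TRANSACTION
         FLUSH                    && force buffers to disk
         BEGIN TRANSACTION
         lnBatch = 0
      ENDIF
   ENDIF
   SKIP
ENDDO
* Commit (or roll back) the final partial batch.
IF llOK AND TABLEUPDATE(.T.)
   END TRANSACTION
ELSE
   ROLLBACK
ENDIF

Does that look like the right direction?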
Help......