>OK, here are some numbers. I had 64,152 records in an SDF-formatted file (~11 MB), at 362 bytes per record. Before each timing test, I started with an empty destination table. All of the times are in seconds.
>
>
>                        Run1   Run2   Run3   Run4   Average
>SDF Append              12.15  11.13  10.98  10.94  11.30
>SDF->DBF, DBF Append    13.10  13.52  12.44  12.88  12.99
>
>
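The averages in the table check out; a quick sanity check in Python (timings copied straight from the table above):

```python
sdf_runs = [12.15, 11.13, 10.98, 10.94]   # SDF Append
dbf_runs = [13.10, 13.52, 12.44, 12.88]   # SDF->DBF, then DBF Append

sdf_avg = sum(sdf_runs) / len(sdf_runs)   # ~11.30, as reported
dbf_avg = sum(dbf_runs) / len(dbf_runs)   # ~12.985, reported as 12.99

# The two-step conversion averages about 1.7 seconds slower per run.
slowdown = dbf_avg - sdf_avg
```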
Which one is which? Your program is a little slower, right?
>>>Well, I wrote a program that will do this. It still needs some tweaking, like checking for file existence and such. I don't think you'll see any speed improvement by doing this and then performing a DBF APPEND over doing an SDF APPEND to begin with, since there is some time involved in creating the DBF from the SDF file, but it was a good learning experience. Most of the info came from what everyone has posted in this thread, plus the help topic "Table File Structure". I think there may be a typo there: for the last run of reserved bytes in the field header portion, it says they span bytes 19-32, but since the offsets start at 0 and each field header is supposed to be 32 bytes, it should probably read bytes 19-31.
>>>
>>>The program can be found at:
>>>http://www.crosslink.net/~jcochran/jonathan/SDF_To_DBF.htm. I would appreciate any feedback anyone has to offer.
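For reference, here is how that 32-byte field subrecord lays out, and why 19-31 (not 19-32) must be right. This is a minimal Python sketch, not code from the program linked above; `field_subrecord` and its parameters are my own illustration, with offsets taken from the help topic as corrected:

```python
import struct

def field_subrecord(name: str, ftype: str, length: int,
                    decimals: int, displacement: int = 0) -> bytes:
    """Build one 32-byte DBF field subrecord (0-based offsets)."""
    rec = bytearray(32)
    rec[0:11] = name.encode("ascii").ljust(11, b"\x00")   # bytes 0-10: field name, NUL-padded
    rec[11] = ord(ftype)                                  # byte 11: field type ('C', 'N', ...)
    rec[12:16] = struct.pack("<I", displacement)          # bytes 12-15: displacement in record
    rec[16] = length                                      # byte 16: field length
    rec[17] = decimals                                    # byte 17: decimal places
    # byte 18: field flags (left 0 here)
    # bytes 19-31: reserved -- 13 bytes, which is exactly what makes the
    # subrecord 32 bytes; "19-32" would be 14 bytes, i.e. a 33-byte record.
    return bytes(rec)

sub = field_subrecord("NAME", "C", 30, 0)
```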
>>
>>The only theoretical speedup would come from using the LFs as delete marks and inventing an extra one-byte field to hold the CRs. In that case you wouldn't have to go record by record; the number of records could be taken as simply the file size divided by (record_length + 2), and the whole file could be appended at once. Still, I doubt you would gain any significant speed increase - the low-level file functions (LLFF) are about as fast as it gets.
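The record-count arithmetic is just integer division over the fixed record size. A small Python sketch of the idea (the function name is illustrative):

```python
def sdf_record_count(file_size: int, record_length: int) -> int:
    """Number of records in an SDF file, where every record is the
    fixed-width data followed by a CR+LF pair (record_length + 2 bytes)."""
    bytes_per_record = record_length + 2
    if file_size % bytes_per_record:
        raise ValueError("file size is not a whole number of records")
    return file_size // bytes_per_record

# e.g. 5 records of 360 data bytes each -> 5 * (360 + 2) = 1810 bytes
count = sdf_record_count(1810, 360)
```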
>>
>>Nice work, very well documented, and probably very fast. BTW, what's the speed gain compared with Append SDF?
If it's not broken, fix it until it is.