>I'm trying to dream up some simple tests to check this out and measure (basically) the impact of originally-laid (fragmented) versus defragged.
I am thinking about this.
I don't know whether it is better to start with a complete defrag or not; either way, when adding records the files will become fragmented (since information is added to both the DBF and the CDX).
How about creating a table and an index? Inserting 10,000 or perhaps 100,000 records should result in a nicely fragmented table.
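A minimal setup sketch along these lines might do; the table and field names are just placeholders:

CREATE TABLE testdata (id I, name C(30), amount N(10,2))
INDEX ON id TAG id

LOCAL i
FOR i = 1 TO 100000
   * Insert records one at a time, so the DBF and the
   * structural CDX grow (and interleave) together.
   INSERT INTO testdata (id, name, amount) ;
      VALUES (i, "Name " + TRANSFORM(i), RAND() * 1000)
ENDFOR

Inserting the records one by one, rather than with APPEND FROM, should give the DBF and the CDX plenty of opportunity to interleave on disk.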
Now, some ideas for the speed tests.
Sequential processing, with a SCAN. Do something simple, like copying a field to a variable each time.
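For instance (lcValue is just a throwaway variable):

LOCAL lcValue, lnStart
SELECT testdata
lnStart = SECONDS()
SCAN
   * Touch one field per record to force the pages to be read.
   lcValue = testdata.name
ENDSCAN
? "Sequential SCAN:", SECONDS() - lnStart, "seconds"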
SEEK tests in a loop.
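Perhaps something like this, seeking random keys against the tag created above:

LOCAL i, lnStart
SELECT testdata
SET ORDER TO TAG id
lnStart = SECONDS()
FOR i = 1 TO 10000
   * Random key in the range of inserted id values.
   SEEK INT(RAND() * 100000) + 1
ENDFOR
? "10,000 random SEEKs:", SECONDS() - lnStart, "seconds"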
A SELECT - SQL command that retrieves perhaps 100 records in total.
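For example, pulling a 100-record range into a cursor (the range itself is arbitrary):

LOCAL lnStart
lnStart = SECONDS()
SELECT * FROM testdata ;
   WHERE id BETWEEN 50000 AND 50099 ;
   INTO CURSOR tmpresult
? "SELECT of 100 records:", SECONDS() - lnStart, "seconds"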
Hmmm... multi-table SELECTs might be appropriate as well. This will require an additional table (or tables) in the setup phase.
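Assuming the setup phase also creates a small lookup table, say categories (catid I, catname C(20)) with catid values 0 to 99, a join test could look like this:

SELECT t.id, t.name, c.catname ;
   FROM testdata t ;
   INNER JOIN categories c ON MOD(t.id, 100) = c.catid ;
   WHERE t.id BETWEEN 50000 AND 50099 ;
   INTO CURSOR tmpjoin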
I consider the SELECT - SQL statements especially relevant, because I think they tend to be the most time-consuming.
Now, defragment your hard disk, and repeat the tests.
Hilmar.
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)