>I've planned the test files as follows:
>a) 3 separate tables of at least 100 meg each;
>b) 2 of those tables having Memo fields to make a .FPT too.
>c) All 3 to have 5 indexes, including 1 on DELETED() in each (the last mainly because a DELETE ALL / RECALL ALL may be helpful tests).
>d) All files have records in a 1-1-1 correspondence.
>e) Logic will roughly be:
>--- 3 CREATE TABLE commands in a row.
>--- INDEX ON statements for 5 fields (two INT, two CHR, and DELETED()) after selecting each table.
>--- INSERT a record to each table individually in a loop to create the desired size of tables.
>f) I will not keep timings for this part.
>
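For step (e), a minimal sketch of what I have in mind for the setup (field names, widths and the fill values are my own assumptions, not from your post):

    * create one table; repeat for test2 / test3
    CREATE TABLE test1 (iKey I, iVal I, cOne C(20), cTwo C(20), mNotes M)
    INDEX ON iKey TAG iKey
    INDEX ON iVal TAG iVal
    INDEX ON cOne TAG cOne
    INDEX ON cTwo TAG cTwo
    INDEX ON DELETED() TAG lDel
    * fill to the desired size in a loop
    FOR i = 1 TO nRecords
        INSERT INTO test1 (iKey, iVal, cOne, cTwo, mNotes) ;
            VALUES (i, RAND() * 100000, SYS(2015), SYS(2015), REPLICATE("x", 200))
    ENDFOR

SYS(2015) just gives a cheap unique string for the character fields; anything similar would do.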
>I've still got a way to go on the actual testing, but I do plan as you say - time run(s) as originally laid out, run a DEFRAG 'analyze' to get a report of the fragmentation of the files in question, do a DEFRAG (for real), then re-run the tests.
That seems appropriate to me. You probably don't need much convincing that each individual test should be timed separately.
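For the individual timings, something simple built on SECONDS() should be enough (the test body here is only a placeholder):

    t0 = SECONDS()
    * ... run one test here, e.g. the DELETE ALL / RECALL ALL pass ...
    ? "Test 1:", SECONDS() - t0, "seconds"

SECONDS() returns fractional seconds since midnight, so the difference gives an elapsed time with millisecond resolution, which should be plenty for tests of this size.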
>I'm thinking that I will also run some things before any timing runs. Things like generating a 10,000 entry SEEK list come to mind.
I assume that has to do with buffering, right?
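If it is about priming the buffers, an untimed warm-up pass might look something like this (iKey and nRecords are names I am inventing for an integer key tag and the table size):

    SELECT test1
    SET ORDER TO TAG iKey
    FOR i = 1 TO 10000
        SEEK INT(RAND() * nRecords) + 1
    ENDFOR

Running that once before the timed runs should put both levels on a more equal footing, so the first timed test isn't penalized for cold caches.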
The critical part, it seems to me, is selecting precisely the right speed tests; but I don't have further ideas on that right now.
Regards, Hilmar.
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)