> > It's just a matter of whether we want to code for that one-in-ten-thousand case.
>
> Dragan -
> I thought that's why we get the big bucks<s>. Inside the paradigm, all things are equal. A low-frequency failure (e.g. 1:10,000 or 1:100,000,000,000) enjoys the same franchise that a high-frequency failure does (e.g. 1:10). Should not both be treated as though they have the same likelihood of occurring?
> Are we talking programming principles or "best business practices" here? :)
I guess the answer to both questions (yours and mine) would depend on who you're working for. Long ago I learned a rule of thumb regarding speed: if it runs many times a day, squeeze out all the speed you can; if it runs daily, make it as fast as you can, unless that takes more time than you'll save; if it runs weekly or monthly, and it works, well, if you've really got nothing else to do, give it some speed.
Now, regarding robustness and stability: coding your wagons into a circle against something that may happen once in a leap year... it depends. In our particular case at hand (reindexing when the .cdx file header is fubar) it made sense back in the days of just free tables, i.e. Fox 2.x. Nowadays it's much more complicated: if a .cdx has crashed, what are the chances your .dbc is intact? If you rebuild the .cdx, what happens to the RI code which relies on it? And besides, what will teach the users to keep a backup at all times, if not a case like that?
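For what it's worth, in the free-table days that defensive rebuild was short enough to justify itself. A minimal sketch in VFP syntax, where the table name, tag names and index expressions are all illustrative; the point is that you must know the original expressions, because REINDEX trusts the very header we're assuming is damaged:

```
* Last-resort rebuild of a corrupt structural .cdx on a free table.
* "orders", the tag names and the expressions below are made up --
* substitute your own. DELETE TAG ALL discards the damaged index
* file; each tag is then recreated from its known expression.
USE orders EXCLUSIVE
DELETE TAG ALL
INDEX ON order_id TAG order_id
INDEX ON UPPER(cust_id) TAG cust_id
USE
```

With a .dbc in the picture none of this is that simple, since the RI code and the table's registration live in the container too, which is the point above: restoring from a backup beats rebuilding.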