> I'm wondering what table operations will cause the need to re-index (delete
> tag all, then index over again). I'm assuming a CDX which is always open
> when table has changes made. In the past, I've done this with Pack/Zap, but
> I've never really studied it...
>
> Any ideas?
I've studied it. As I mentioned some threads ago, I have lots of
installations in villages where the electricity is... well, the UPS
beeps every time someone makes coffee :(. Such things happen even in
new buildings while they are not completely finished - you never know
when some electrician will power off the whole building just to attach
a few more cables. And there are always users who reset the machine or
simply power it off.
So, all my apps include an indexing routine, which fires upon certain
errors on file open ("Record is out of range") or can be called by the
user. We recommend users run it after any irregular exit, or once a
week just in case. Besides, the indexing routine is usually very quick -
the only one that runs for more than a few seconds is the one indexing
a 14M table (among others) on a 486/40.
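The trigger side can be sketched roughly like this - a minimal ON ERROR
handler, where DoReindex is a hypothetical name for the generated
indexing routine (in FoxPro, error 5 is "Record is out of range"):

```foxpro
* Route index-related errors into the indexing routine.
* DoReindex is a hypothetical generated indexer.
ON ERROR DO ErrHandler WITH ERROR(), MESSAGE()

PROCEDURE ErrHandler
    PARAMETERS tnError, tcMsg
    DO CASE
    CASE tnError = 5          && "Record is out of range"
        DO DoReindex
    OTHERWISE
        * normal error handling goes here...
        WAIT WINDOW tcMsg
    ENDCASE
ENDPROC
```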
As for my experience with this, I've drawn up some guidelines:
- assume nothing about the .cdx - if you have to index the table, it's
probably because the .cdx is bad, so the table probably can't be opened
normally, and REINDEX may prove impossible
- check whether the table can be opened exclusively; if it can't, issue
a message and skip to the next table
- all the info the indexing routine needs should come from sources
other than the table itself (and/or its .cdx) - I do it by generating
the indexer, so it "knows" everything
- .cdxes can't be regenerated. Kill the old one and make a new one,
regardless of its state. Opening a table with a damaged .cdx may make a
nasty mish-mash of memory (some offsets may go a long way south, and
you never know if there's range checking for everything).
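A minimal sketch of one per-table step, following the guidelines above.
The table CUST and its tag expressions are hypothetical examples - a
real generated indexer would carry the actual field/tag list for every
table. The header patch assumes the standard DBF layout, where byte 28
holds the production-index flag:

```foxpro
* Rebuild CUST.CDX from scratch (sketch; names are hypothetical).
PROCEDURE RebuildCust
    * Kill the old .cdx while the table is closed - never open a
    * table together with a possibly damaged production index.
    IF FILE("cust.cdx")
        ERASE cust.cdx
    ENDIF
    * Clear the production-index flag (byte 28 of the DBF header),
    * or the table won't open normally without its .cdx.
    lnH = FOPEN("cust.dbf", 12)     && read/write, unbuffered
    IF lnH < 0
        * Couldn't get the file - probably in use elsewhere.
        WAIT WINDOW "cust is in use - skipping" NOWAIT
        RETURN .F.
    ENDIF
    = FSEEK(lnH, 28)
    = FWRITE(lnH, CHR(0))
    = FCLOSE(lnH)
    * Now open exclusively and index over again; INDEX ... TAG
    * creates a fresh .cdx and re-sets the production flag.
    USE cust EXCLUSIVE
    INDEX ON cust_id     TAG cust_id
    INDEX ON UPPER(name) TAG name
    USE
    RETURN .T.
ENDPROC
```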
I've had a routine for generating such routines ever since 1989, and
the VFP version has now gone into beta. Still working on it.