>>It will not break, but it will give you a millisecond here and there.
>>With ldCount = ldCount + 1, the accumulated error stays below roughly 499 milliseconds; the moment it reaches 500 ms, the value rounds to the wrong second and the index breaks.
>
>Thanks for the code snippet...here's another one
>
>
>lnTraps = 0
>ldCount = {^2001-01-01 12:00:00 AM}
>ldStart = ldCount  && keep the starting value; the Mod: line below needs it
>for j = 1 to 100000
>   prevldCount = ldCount
>   ldCount = ldCount + 1
>   if ttoc(prevldCount) = ttoc(ldCount)
>      ? j, prevldCount, ldCount, ;
>         (ldCount - ctot(ttoc(prevldCount)) - 1 + 0.00000)
>      ? "Mod: ", mod(ldCount - ldStart, 1) + 0.00000
>      lnTraps = lnTraps + 1
>   endif
>endfor
>? lnTraps
>
>This gets one hit, at iteration 92,186 (with a 500.02 ms error). When stored to the table, though, the duplicates occur between records 92,000 and 92,184; maybe the data changes slightly when it gets written to (and/or read back from) the table.
I think it gets rounded to three decimals (i.e. milliseconds), and the 92,000 range is about where the accumulated roundoff error first exceeds the 500 ms boundary, so the value starts rounding up to the wrong second.
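The drift is reproducible outside VFP, since it is ordinary IEEE-754 double arithmetic. Here is a minimal Python sketch of the mechanism, assuming the datetime is stored as a double-precision Julian day number (an assumption about VFP internals, but the magnitudes line up well with the numbers observed above):

```python
# Model a datetime as a double holding a Julian day number
# (assumed storage format; 2451910.5 = 2001-01-01 00:00).
T0 = 2451910.5
ONE_SECOND = 1.0 / 86400.0   # not exactly representable in binary

t = T0
prev_sec = 0
first_dup = None
dups = 0
for n in range(1, 100001):
    t += ONE_SECOND                    # each add loses a few microseconds
    sec = round((t - T0) * 86400.0)    # the value rounded to whole seconds
    if sec == prev_sec:                # same second twice => duplicate key
        dups += 1
        if first_dup is None:
            first_dup = n
    prev_sec = sec

drift_ms = ((t - T0) * 86400.0 - 100000) * 1000.0
# first_dup lands within a few steps of the 92,186 the post reports,
# dups is 1 (the single hit), and drift_ms is a bit past -500 ms.
print(first_dup, dups, round(drift_ms, 2))
```

At this magnitude (about 2.45 million days) the spacing between adjacent doubles is roughly 40 microseconds of time, so each one-second add drifts by a few microseconds; around iteration 92,000 the accumulated drift crosses half a second and the rounded value repeats.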
An alternative to a datetime key like this would be an integer timestamp, or a pair of timestamp integers:
index on bintoc(nJulianDateIntegerField)+bintoc(nSecondsAfterMidnight*1000) ... tag ...
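bintoc() works here because it produces a fixed-width binary string whose byte order matches numeric order, so the concatenated pair sorts chronologically with no floating point involved. A rough Python analogue of the same idea (the field names are illustrative, as above):

```python
import struct

def timestamp_key(julian_day: int, ms_after_midnight: int) -> bytes:
    """Pack (day, milliseconds) as big-endian unsigned 32-bit integers.

    Big-endian packing makes lexicographic byte order equal numeric
    order for non-negative values, which is what bintoc() relies on.
    """
    return struct.pack(">II", julian_day, ms_after_midnight)

# Later times always compare greater, byte for byte:
a = timestamp_key(2451911, 0)      # some day, midnight
b = timestamp_key(2451911, 500)    # same day, 500 ms later
c = timestamp_key(2451912, 0)      # next day
assert a < b < c
```

Since both components are exact integers, two timestamps 500 ms apart can never collapse into the same key the way the rounded doubles do.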
>Thanks for playing with me. :)
My curiosity purrs with satisfaction, being well fed :)