This matter has been discussed to the point of boredom before...! <bg> No offence intended.
A decimal fraction cannot, in general, be represented exactly in binary; it is simply impossible, so the system has to choose the best of several evils. If you want real precision, scale the data up by a large power of ten, do the arithmetic on whole numbers, and divide the end result by the same factor.
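The scale-up-then-divide idea looks the same in any language; here is a minimal sketch in Python rather than FoxPro (the scale factor of 10 is just an assumption for this one-decimal example):

```python
# Work in scaled whole numbers to avoid binary floating-point drift.
# SCALE = 10 assumes one decimal place of precision (illustrative only).
SCALE = 10

# In binary doubles, 0.1 + 0.2 does NOT equal 0.3 exactly.
print(0.1 + 0.2 == 0.3)        # → False

# Scale up to whole numbers, add, then divide back down once at the end.
a = round(0.1 * SCALE)         # 1
b = round(0.2 * SCALE)         # 2
total = (a + b) / SCALE
print(total == 0.3)            # → True
```

Only one rounding step happens at the final division, instead of one per intermediate operation.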
>Hi Everyone,
>The purpose of this code is to use math to determine how many decimal places are in a floating point number. I have used this algorithm when importing unknown/unpredictable data to help determine what data type and size I would like the field to be. The odd thing is that although the algorithm is mathematically sound, Fox does some creative math when I feed in numbers with a lot of decimal places. Here is the code:
>
>
>nValue = 234.8234721345
>LOCAL decPortion, iterationCtr
>
>iterationCtr = 0
>decPortion = nValue - INT(nValue)
>DO WHILE INT(decPortion) < decPortion
> decPortion = decPortion * 10
> iterationCtr = iterationCtr + 1
> ?decPortion
>ENDDO
>? iterationCtr
>
>
>... and the output starts like ...
>
>
>8.2347213450
>82.3472134500
>823.4721345000
>8234.7213450001
>82347.2134500008
>823472.1345000083
>8234721.4560000830
>
>
>You can run it & see how it ends from there, but the interesting part begins on the 4th line of output. All I am doing is multiplying the number by 10. Why does 823.4721345000 * 10 give me 8234.7213450001? I'm assuming it has SOMETHING to do with mathematical precision, but if I just type in
>?823.4721345000 * 10
>outside of this looping logic, it returns the CORRECT value without that '1' tacked onto the end of it. Does anyone have any thoughts or insights on this?
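For what it's worth, the same drift shows up in any language built on IEEE 754 doubles: 234.8234721345 has no exact binary representation, and every `* 10` rounds the result again, so the error compounds with each pass through the loop. A more robust way to count decimal places is to go through the number's text form instead of repeated multiplication; here is a sketch of that alternative in Python (not FoxPro), using the standard `decimal` module:

```python
from decimal import Decimal

def decimal_places(value: float) -> int:
    """Count decimal digits of a float via its shortest round-trip
    string, sidestepping repeated binary multiplication entirely."""
    # str(value) gives the shortest decimal string that round-trips;
    # normalize() strips trailing zeros so 42.0 counts as 0 places.
    exponent = Decimal(str(value)).normalize().as_tuple().exponent
    return max(0, -exponent)

print(decimal_places(234.8234721345))   # → 10
print(decimal_places(42.0))             # → 0
```

This counts the digits the number would print with, which is usually what you want when sizing a field for imported data; the multiply-by-10 loop asks the binary representation a question it cannot answer exactly.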