>This will probably end up as a discussion on definitions, and if so I'm out of here! :-)
>
>What I am saying is that any integer has a defined binary value, like 0=0, 1=1, 2=10, 3=11, 4=100 and so on. So for integers you don't need a 100-page standard. For decimals it does not work that way, so there are different standards, like the IEEE standards, to define how to express them in binary. I am far from an expert on this issue, but I know that no matter how you convert a decimal into binary, there is room for errors. On some systems you can specify the accuracy, let's say to 20 decimals, but internally most (all?) of those systems then simply multiply the values by 10^20, and divide the result by 10^20. You don't overcome the problem completely, but for most calculations you will get the mathematically correct value.
This is actually quite simple: any non-integer which can be expressed as n/10**m (for integers n and m) has a finite decimal representation. That's why we can write 1/2, 1/5, 3/8 etc. as numbers with a finite number of decimals: they can be expanded to have 10, 100, 1000 etc. as denominators. Thus 3/8 becomes (3*125)/(8*125) = 375/1000 = 0.375. Any other non-integer number has an infinite number of decimals.
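The rule above can be checked mechanically: a reduced fraction has a finite expansion in a given base exactly when every prime factor of its denominator divides that base. A small Python sketch (the function name `terminates_in_base` is my own):

```python
from fractions import Fraction
from math import gcd

def terminates_in_base(frac, base):
    """True if `frac` has a finite expansion in `base`, i.e. the reduced
    denominator has no prime factors beyond those of the base."""
    d = frac.denominator  # Fraction is already in lowest terms
    g = gcd(d, base)
    while g > 1:
        while d % g == 0:  # strip out all shared prime factors
            d //= g
        g = gcd(d, base)
    return d == 1

print(terminates_in_base(Fraction(3, 8), 10))   # True:  3/8 = 0.375
print(terminates_in_base(Fraction(1, 3), 10))   # False: 1/3 = 0.333...
```

Note that the same test in base 2 already shows the point of the whole thread: `terminates_in_base(Fraction(1, 10), 2)` is False, so decimal 0.1 has no finite binary form.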
In a binary system we don't have a base of 10, we have a base of 2. So the set of such numbers shrinks (but still exists) to those which can be expressed as n/2**m. The binary expression for 0.5 is 0.1; for 0.25 it is 0.01; and so on.
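You can generate those binary digits directly with the standard doubling method: repeatedly multiply the fraction by 2 and peel off the integer part. A sketch using exact `Fraction` arithmetic (the function name is my own):

```python
from fractions import Fraction

def binary_fraction(x, max_bits=24):
    """Expand a fraction in (0, 1) as a binary string by repeated doubling.
    Stops early if the expansion terminates; otherwise truncates at max_bits."""
    bits = []
    for _ in range(max_bits):
        x *= 2
        if x >= 1:
            bits.append('1')
            x -= 1
        else:
            bits.append('0')
        if x == 0:  # expansion has terminated
            break
    return '0.' + ''.join(bits)

print(binary_fraction(Fraction(1, 2)))   # 0.1
print(binary_fraction(Fraction(1, 4)))   # 0.01
print(binary_fraction(Fraction(3, 8)))   # 0.011
print(binary_fraction(Fraction(1, 10)))  # 0.000110011001100110011001... (never ends)
```

The last line shows why 0.1 cannot be stored exactly in any binary floating-point format: the pattern 0011 repeats forever.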
What the standards do is define the positioning of the sign, exponent and mantissa within the given number of bytes: which bit holds the sign of the number, how the exponent's sign is handled (in IEEE 754 the exponent is stored unsigned but offset by a fixed bias), how many bits the exponent occupies - and the remaining bits are the mantissa. None of this changes the principle above.
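For a concrete look at that layout, here is a sketch that pulls apart an IEEE 754 double (1 sign bit, 11 exponent bits biased by 1023, 52 stored mantissa bits) using Python's `struct` module:

```python
import struct

def decompose(x):
    """Split an IEEE 754 double into (sign, biased exponent, mantissa bits)."""
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))  # raw 64-bit pattern
    sign = bits >> 63                     # 1 bit
    exponent = (bits >> 52) & 0x7FF       # 11 bits, biased by 1023
    mantissa = bits & ((1 << 52) - 1)     # 52 stored bits (leading 1 is implicit)
    return sign, exponent, mantissa

# 0.375 = 1.1 (binary) * 2**-2, so the biased exponent is 1023 - 2 = 1021
# and the stored mantissa is a single leading 1 in the top of the 52 bits.
print(decompose(0.375))   # (0, 1021, 2251799813685248)
print(decompose(-2.0))    # (1, 1024, 0)
```

The bias trick mentioned above is visible here: an exponent of -2 is stored as the unsigned value 1021, so no separate sign bit for the exponent is needed.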