As already stated, floating point numbers on a PC represent real values as the product of a mantissa and 2 (it's binary) raised to a given exponent. An important consequence is that if a value cannot be written as an exact sum of powers of 2 within the range provided for the mantissa and exponent, the stored value will only be an approximation of the intended decimal value. Only those decimal numbers that are an exact sum of powers of 2 can be represented exactly.
For instance, the value 0.25 can be exactly represented in the single format as:
0 01111101 00000000000000000000000 = $3E800000

Mantissa = 00000000000000000000000 = 0; adding the implicit 1 gives 1
Exponent = 01111101 = 125; subtracting the bias of 127 gives -2
Sign     = 0, so the value is positive

The value is thus 1 * 2^-2 = 0.25.
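The decomposition above can be checked programmatically. The following is a minimal sketch in Python (used here purely for illustration; the helper name decode_single is made up for this example), which packs a value into the IEEE 754 single format and extracts the three fields:

```python
import struct

def decode_single(x):
    """Pack x as an IEEE 754 single and return (bits, sign, exponent, mantissa)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret the 4 bytes as an unsigned int
    sign = bits >> 31            # top bit
    exponent = (bits >> 23) & 0xFF   # next 8 bits, still biased by 127
    mantissa = bits & 0x7FFFFF       # low 23 bits, without the implicit 1
    return bits, sign, exponent, mantissa

bits, sign, exponent, mantissa = decode_single(0.25)
print(f"${bits:08X}")  # $3E800000
print(sign, exponent, mantissa)  # 0 125 0
```

For 0.25 this reproduces the breakdown in the text: sign 0, biased exponent 125, mantissa 0.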
In this context, many programmers confuse accuracy and precision when dealing with floating point values. Accuracy measures how close the floating point value is to its intended target. Precision measures how finely grained the value can be expressed. The more storage space, the greater the precision, so a double is more precise than a single. Floating point numbers are more precise than integer types (their granularity is finer; they can express fractions), yet they often lack accuracy: they hold a close approximation of a value rather than an exact representation.
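The accuracy point is easy to demonstrate. A short Python sketch (illustrative only; Decimal is used simply to print the exact value a double actually stores): 0.25 is a sum of powers of 2 and is stored exactly, while 0.1 is not, so the hardware stores only the nearest representable double:

```python
from decimal import Decimal

# 0.25 = 2^-2 is exactly representable, so the stored value is exact.
print(Decimal(0.25))  # 0.25

# 0.1 has no finite binary expansion; the stored double is only an approximation.
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
```

Both values are stored with the same precision (53 significant bits), but only the first is perfectly accurate.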
Next: Programming Guidelines