Floating-Point Precision Calculator
Compare float16, float32, and float64 precision, range, and machine epsilon.
See how many decimal digits each format reliably represents and where rounding errors appear.
Computers approximate real numbers using the IEEE 754 floating-point standard. Every format has three fields: a sign bit, an exponent, and a mantissa (also called the significand). The mantissa bits determine precision; the exponent bits determine range.
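One way to see those fields directly is to pack a value into its raw bit pattern and mask out the pieces. A minimal sketch using only Python's standard struct module (float32_fields is an illustrative helper, not part of any library):

    import struct

    def float32_fields(x):
        # Pack x as a big-endian float32, then reinterpret the same 4 bytes
        # as an unsigned 32-bit integer so the bit fields can be sliced out.
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        sign = bits >> 31                 # 1 sign bit
        exponent = (bits >> 23) & 0xFF    # 8 exponent bits, stored with a bias of 127
        mantissa = bits & 0x7FFFFF        # 23 stored mantissa bits
        return sign, exponent, mantissa

    print(float32_fields(1.0))    # (0, 127, 0): +1.0 * 2^(127 - 127)
    print(float32_fields(-2.5))   # (1, 128, 2097152): -1.25 * 2^(128 - 127)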
The three common formats (a short code check follows the list):
float16 (half precision): 1 sign + 5 exponent + 10 mantissa bits. Roughly 3 significant decimal digits. Used in machine learning for memory-efficient inference. Maximum value: 65,504.
float32 (single precision): 1 sign + 8 exponent + 23 mantissa bits. About 7 significant decimal digits. The default in most graphics and many scientific applications.
float64 (double precision): 1 sign + 11 exponent + 52 mantissa bits. About 15-16 significant decimal digits. The default in Python, R, MATLAB, and most numerical computing.
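NumPy reports the same layout and range through np.finfo, which is a convenient way to double-check the numbers above. A minimal sketch, assuming NumPy is installed:

    import numpy as np

    for dtype in (np.float16, np.float32, np.float64):
        info = np.finfo(dtype)
        # nexp / nmant are the exponent and stored-mantissa bit counts;
        # max is the largest finite value the format can hold.
        print(f"{dtype.__name__}: exponent bits={info.nexp}, "
              f"mantissa bits={info.nmant}, max={float(info.max):.6g}")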
Machine epsilon is the gap between 1.0 and the next representable number in that format. It equals 2^(-mantissa_bits). For float32, epsilon is about 1.19e-7 (2^-23). For float64, it is about 2.22e-16 (2^-52). Epsilon bounds the relative rounding error of a single operation: any contribution smaller than epsilon relative to the result is simply rounded away.
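Epsilon can also be measured rather than looked up: keep halving a candidate until adding it to 1.0 no longer changes the result. A small sketch, assuming NumPy for the three dtypes (machine_eps is an illustrative helper):

    import numpy as np

    def machine_eps(dtype):
        one = dtype(1.0)
        e = dtype(1.0)
        # Halve e while 1 + e/2 is still distinguishable from 1 in this format.
        while one + e / dtype(2.0) > one:
            e = e / dtype(2.0)
        return e

    for dtype in (np.float16, np.float32, np.float64):
        # The measured value matches np.finfo's reported eps for each format.
        print(dtype.__name__, machine_eps(dtype), np.finfo(dtype).eps)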
Why it matters. Subtracting two nearly-equal numbers causes catastrophic cancellation — you can lose most of your significant digits in a single operation. Adding a very small number to a very large one can cause the small number to vanish completely. These effects scale with machine epsilon and are the reason numerical analysts choose float64 for most scientific work.
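Both effects are easy to reproduce. A short sketch in float32, assuming NumPy:

    import numpy as np

    # Catastrophic cancellation: the true difference is 1e-8, but float32
    # cannot represent the operands finely enough to keep it.
    a = np.float32(1.00000001)
    b = np.float32(1.0)
    print(a - b)               # 0.0 -- the difference has been rounded away

    # Absorption: a small addend vanishes next to a large one, because the
    # spacing between adjacent float32 values near 1e8 is already 8.
    big = np.float32(1e8)
    small = np.float32(1.0)
    print(big + small == big)  # True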
Decimal precision is calculated as floor((mantissa_bits + 1) × log10(2)), where the +1 accounts for the implicit leading bit: 3 digits for float16, 7 for float32, and 15 for float64.
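Rechecking that formula against the bit counts reported above (a small sketch assuming NumPy and the standard math module):

    import math
    import numpy as np

    for dtype in (np.float16, np.float32, np.float64):
        nmant = np.finfo(dtype).nmant                      # stored mantissa bits
        digits = math.floor((nmant + 1) * math.log10(2))   # +1 for the implicit leading bit
        print(dtype.__name__, digits)                      # 3, 7, 15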