
float32 Precision Explorer

See exactly how and when float32 loses the ability to represent whole numbers — and why int32 never does.

Whole numbers stored as int32 are always exact. When you store them as float32 instead, they stay exact up to 16,777,216 — but above that, float32 starts rounding. Use the slider to see where and why.
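The cutoff is easy to verify yourself. A minimal Python sketch, using only the standard library's struct module to round-trip each whole number through a 4-byte IEEE 754 float:

```python
import struct

def to_float32(n: int) -> float:
    # Store n as a 4-byte IEEE 754 float ("f" format) and read it back.
    return struct.unpack("f", struct.pack("f", n))[0]

for n in (16_777_215, 16_777_216, 16_777_217, 16_777_218):
    stored = to_float32(n)
    note = "exact" if stored == n else f"rounded to {int(stored)}"
    print(f"{n:,} -> {note}")  # 16,777,217 is the first whole number that fails
```

Two different ints collapsing onto the same float32 value is exactly the silent equality bug: `to_float32(16_777_216) == to_float32(16_777_217)` evaluates to True.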

⚠️ Precision loss — float32 cannot store this exactly
You want 16,777,217, but float32 rounds it to 16,777,216. If you compare these two values as floats, they'll appear equal — a silent bug.
Slider range: 0 – 2,147,483,647 · ▲ precision limit at 16,777,216
What actually gets stored
- Your number (int32): 16,777,217 · ✓ exact (stored perfectly, always)
- float32 rounds to: 16,777,216 · ⚠ off by 1
- float32 step size here: ±2 (step = 2, so it jumps over 1 integer at a time)
Number line — zoomed in around your value
Green ticks ▲ = integers that int32 can store exactly. Blue ticks ▼ = values float32 can represent. Below the limit they line up. Above it, the blue ticks spread apart and integers fall in the gaps.
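The spacing of those blue ticks can be measured directly. A sketch that finds the next representable float32 by bumping the raw bit pattern by one (struct reinterprets the same 4 bytes as an unsigned int and back):

```python
import struct

def next_float32(x: float) -> float:
    # Next representable float32 above x: reinterpret the 4 bytes as an
    # unsigned 32-bit int, add 1, and reinterpret back as a float.
    (bits,) = struct.unpack("I", struct.pack("f", x))
    return struct.unpack("f", struct.pack("I", bits + 1))[0]

print(next_float32(1_000_000.0) - 1_000_000.0)    # 0.0625: room to spare between integers
print(next_float32(16_777_216.0) - 16_777_216.0)  # 2.0: the blue ticks are now 2 apart
```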
Bit-level breakdown
float32 (IEEE 754)
Divided into 3 fields. The mantissa (blue) is the precision budget: 23 stored bits, plus one implicit leading bit, cover integers exactly up to 2²⁴ = 16,777,216. Beyond that, the gap between representable values doubles every time the magnitude doubles.
Sign: positive · Exponent: scale factor (×2²⁴) · Mantissa: 23 bits — the precision budget
S: 0 · exponent (8 bits): 10010111 · mantissa (23 bits, the precision budget): 00000000000000000000000
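You can pull those three fields out of any float32 yourself. A sketch using struct and bit masks (the field names are mine, chosen to match the diagram):

```python
import struct

def float32_fields(x: float) -> tuple[int, int, int]:
    # Reinterpret the 4-byte float as an unsigned int, then mask out
    # the 1-bit sign, 8-bit exponent, and 23-bit mantissa.
    (bits,) = struct.unpack("I", struct.pack("f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7F_FFFF

sign, exponent, mantissa = float32_fields(16_777_216.0)
print(sign, f"{exponent:08b}", f"{mantissa:023b}")
# 0 10010111 00000000000000000000000
# exponent 151 = 127 (bias) + 24, mantissa all zero: the value is exactly 2^24
```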
int32 (Two's complement)
A direct binary count. Every integer gets its own unique bit pattern — there's no exponent trick, no rounding, no gaps.
Sign: positive/negative · Value bits: 31 bits — exact whole numbers up to 2.1 billion
S: 0 · value bits (31 — no rounding, ever): 0000001000000000000000000000001
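There is nothing to decode in the int32 pattern. A quick check in Python (Python's ints are arbitrary precision, so struct's "i" format stands in for a two's-complement int32 here; the pattern matches for non-negative values):

```python
import struct

n = 16_777_217
# Pack as a 4-byte two's-complement int ("i"), read the raw bits back as unsigned.
(bits,) = struct.unpack("I", struct.pack("i", n))
print(f"{bits:032b}")
# 00000001000000000000000000000001
# Bit 24 and bit 0 are set: 2**24 + 1, stored exactly with no exponent trick.
```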
Why does this happen?
Think of float32 like scientific notation: 1.something × 2^n. The leading 1 is implicit, and the "something" part has exactly 23 stored binary digits of precision. Below 16.8 million, those digits are enough to distinguish every consecutive integer. Above it, the numbers are so large that two neighboring integers would need a 24th fractional digit that doesn't exist — so float32 rounds them to the same value. int32 simply stores the full binary value of the number, so it never needs to round.
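The doubling of the gap is easy to watch. A sketch that measures the distance from each power of two to the next representable float32 (again by bumping the raw bit pattern):

```python
import struct

def ulp32(x: float) -> float:
    # Gap between x and the next representable float32 above it.
    (bits,) = struct.unpack("I", struct.pack("f", x))
    return struct.unpack("f", struct.pack("I", bits + 1))[0] - x

for power in range(24, 28):
    print(f"2^{power}: gap = {ulp32(float(2 ** power))}")
# 2^24: gap = 2.0
# 2^25: gap = 4.0
# 2^26: gap = 8.0
# 2^27: gap = 16.0
```

Every time the magnitude doubles, the same 23-bit budget is stretched over a range twice as wide, so the gap doubles with it.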