4-bit floating point FP4
by chmaynard on 4/18/2026, 5:21:02 PM
https://www.johndcook.com/blog/2026/04/17/fp4/
Comments
by: burnt-resistor
FP4 is 1:2:0:1 (other examples: binary32 is 1:8:0:23, 8087 extended precision is 1:15:1:63), where the fields S:E:l:M are:

S = sign bit present (vs. a magnitude-only, absolute-value format)

E = exponent bits (typically biased by 2^(E-1) - 1)

l = explicit leading integer bit present

M = mantissa (fraction) bits

The limitation of FP4 is that it lacks infinities, [sq]NaNs, and denormals, which restricts it to special purposes. There's no denying that it can be extremely efficient for particular problems, though.

If a more even distribution were needed, a simple fixed-point format like 1:2:1 (sign:integer:fraction bits) would be possible.
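For concreteness, here is a sketch of a decoder for an S:E:M layout with an implicit leading bit (the function name, and the IEEE-style convention of treating e = 0 as zero/subnormal, are my own assumptions, not from the comment above). With E = 2, M = 1, and bias 2^(E-1) - 1 = 1, it reproduces the eight non-negative FP4 values 0, 0.5, 1, 1.5, 2, 3, 4, 6:

```python
def fp4_decode(bits, E=2, M=1):
    """Decode an (1 + E + M)-bit float code with implicit leading bit.

    Layout (MSB to LSB): 1 sign bit, E exponent bits, M mantissa bits.
    Assumes an IEEE-style subnormal encoding when the exponent field is 0.
    """
    bias = 2 ** (E - 1) - 1
    s = (bits >> (E + M)) & 1            # sign bit
    e = (bits >> M) & ((1 << E) - 1)     # biased exponent field
    m = bits & ((1 << M) - 1)            # mantissa (fraction) field
    if e == 0:
        # Subnormal: implicit leading 0, fixed exponent 1 - bias
        val = (m / 2 ** M) * 2.0 ** (1 - bias)
    else:
        # Normal: implicit leading 1
        val = (1 + m / 2 ** M) * 2.0 ** (e - bias)
    return -val if s else val

# The eight non-negative FP4 (E2M1) codes:
print([fp4_decode(c) for c in range(8)])
# → [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```

The same function decodes larger formats by changing E and M (e.g. E=8, M=23 for binary32), since only the field widths and bias differ.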
4/18/2026, 9:16:59 PM
by: chrisjj
> Programmers were grateful for the move from 32-bit floats to 64-bit floats. It doesn’t hurt to have more precision

Someone didn't try it on GPU...
4/18/2026, 5:28:28 PM
by: ant6n
> In ancient times, floating point numbers were stored in 32 bits.

I thought in ancient times, floating point numbers used to be 80 bits. They lived in a funky mini stack on the coprocessor (x87). Then one day, somebody came along and standardized the 32- and 64-bit floats we still have today.
4/18/2026, 8:47:51 PM