Oddities of floating point: Negative zero
The infamous case of 0.1 + 0.2 is just one of the many oddities of the IEEE-754 floating point number formats. Computers handle integers quite easily, but many programs need floating point numbers, which computers handle far less gracefully.
Designing the floating point number formats involved a lot of compromises and trade-offs, leading to certain oddities, ranging from the merely curious to the outright annoying.
In this article, I’m discussing one of the merely curious oddities of the floating point number formats: negative zero.
Please try these examples out in your Web browser’s JavaScript console or a REPL for your favorite programming language, such as Scastie for Scala. I’m not sure whether everything here will work in Python.
Try dividing 1.0 by a small positive number like 256. The result will be something like 0.00390625, a number close to 0 but clearly distinct from 0.
Now try dividing 1.0 by positive infinity. I have a Scastie snippet for that. The result is 0.0. The JavaScript console in Firefox responds with 0. Division by infinity isn’t defined in ordinary arithmetic, but IEEE-754 treats infinity as a proper value and defines the quotient to be what the limit suggests: 0.
And now try dividing 1.0 by negative infinity. I also have a Scastie snippet for that. The result is −0.0. You can get the same result with −1.0 divided by positive infinity.
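The divisions above can be reproduced in any JavaScript console; here’s a sketch:

```javascript
// Dividing 1.0 by a small positive number gives a small positive result.
console.log(1.0 / 256);        // 0.00390625

// Dividing by positive infinity gives positive zero...
console.log(1.0 / Infinity);   // 0

// ...and dividing by negative infinity gives negative zero.
console.log(1.0 / -Infinity);  // -0
console.log(-1.0 / Infinity);  // -0
```

Note that `console.log` in modern browsers and Node.js displays negative zero as `-0`, so you can spot it without any extra tricks.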
There are lots of other ways to get negative zero, such as by dividing a negative subnormal number by a sufficiently large positive normal number (such as 2.0 in the case I looked at).
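One such underflow case can be checked in JavaScript, where `Number.MIN_VALUE` is the smallest positive subnormal double:

```javascript
// Number.MIN_VALUE is the smallest positive subnormal double, 2^-1074.
const smallestNegative = -Number.MIN_VALUE;

// Halving it underflows: the exact result, -2^-1075, lies halfway
// between -0 and -2^-1074, and round-half-to-even picks -0.
const result = smallestNegative / 2.0;

console.log(result);                 // -0
console.log(Object.is(result, -0));  // true
```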
Negative zero is the result of a trade-off in which the symmetry of having each floating point number have its negated counterpart represented in the format was preferred over having a single unambiguous representation for zero.
With the signed integer formats used in almost all computers today, there’s an asymmetry between the negative integers and the positive integers. For example, a signed byte can represent 128 distinct negative integers, from −128 to −1, but it can only represent 127 distinct positive integers, from 1 to 127, since the all zeroes bit pattern represents 0.
This leads to the annoying problem that the smallest negative number in an integer format multiplied by −1 is… itself.
For example, in a signed byte, −128 multiplied by −1 is −128, which is clearly wrong. (To verify this in a REPL for a programming language like Java, you might have to cast the result of the multiplication to byte, since the default integer data type is wide enough to represent both −128 and +128.)
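JavaScript has no byte type, but an `Int8Array` stores 8-bit two’s complement integers, so it can demonstrate the same wraparound; a sketch:

```javascript
// An Int8Array element behaves like Java's byte: 8-bit two's complement.
const bytes = new Int8Array(1);

bytes[0] = -128;
bytes[0] = bytes[0] * -1;  // the mathematical result, 128, doesn't fit,
                           // so storing it wraps around modulo 256

console.log(bytes[0]);     // -128: negating it gave back itself
```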
The commonly used integer formats use two’s complement. To get the negative of a number, toggle all the bits and then add 1. This works well enough for all the integers except the lowest available in the format, since toggling all the bits and adding 1 just leads back to the original number.
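The toggle-and-add-1 rule, and its lone exception, can be seen with JavaScript’s 32-bit bitwise operators (the helper name `negate32` is mine):

```javascript
// Two's-complement negation on 32-bit integers: toggle all bits, add 1.
// The `| 0` forces the result back into the signed 32-bit range.
const negate32 = (x) => (~x + 1) | 0;

console.log(negate32(5));            // -5
console.log(negate32(-128));         // 128
console.log(negate32(-2147483648));  // -2147483648: the lone exception
```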
This is not a problem in IEEE-754 floating point. The lowest finite number that can be represented in 64-bit floating point is −1.7976931348623157 × 10³⁰⁸, and multiplying that by −1 to get 1.7976931348623157 × 10³⁰⁸ is a simple matter of toggling the sign bit.
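A quick sanity check in JavaScript, where `Number.MAX_VALUE` is the highest finite double:

```javascript
// The lowest finite double is the negation of the highest one, so
// negating it round-trips cleanly, unlike the signed-byte case.
const lowest = -Number.MAX_VALUE;  // -1.7976931348623157e308

console.log(lowest * -1 === Number.MAX_VALUE);  // true
```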
Toggling the sign bit on the all-zeroes bit pattern likewise gives a valid value: sign bit set, biased exponent 0, and no bits on in the significand. That’s negative zero, essentially 0.0 multiplied by −1. It’s a number different from 0.0 only in its bit pattern and its textual representation.
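You can perform that sign-bit toggle by hand on the raw bit pattern, using a `DataView` over an 8-byte buffer; a sketch:

```javascript
// Toggling the sign bit of a double directly, via its bit pattern.
// An 8-byte buffer holds one 64-bit float; written big-endian, the
// sign bit is the top bit of the first byte.
const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);

view.setFloat64(0, 0.0);                    // all-zeroes bit pattern
view.setUint8(0, view.getUint8(0) ^ 0x80);  // flip the sign bit

const flipped = view.getFloat64(0);
console.log(Object.is(flipped, -0));  // true: it's negative zero
console.log(flipped === 0);           // true: still equal to zero
```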
Don’t worry, though, about negative zero breaking your program where it expects “positive” zero. The equality operators in the various programming languages (JavaScript has more than one) all regard −0.0 and +0.0 as equal.
Don’t begrudge IEEE-754 “wasting” one value on negative zero, given that the 32-bit floating point format “wastes” almost 17 million bit patterns on NaN (not a number) values, and the 64-bit format about 9 quadrillion, generally all collapsed to a single “canonical” NaN.
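You can see some of those wasted bit patterns with a `DataView`: any double whose exponent bits are all ones and whose significand is nonzero decodes to NaN, whatever the remaining bits say. A sketch (the payload values are arbitrary):

```javascript
// Any bit pattern with exponent bits all ones and a nonzero
// significand decodes to NaN, regardless of the payload bits.
const view = new DataView(new ArrayBuffer(8));

view.setBigUint64(0, 0x7ff8000000000000n);      // the usual "canonical" NaN
console.log(Number.isNaN(view.getFloat64(0)));  // true

view.setBigUint64(0, 0x7ff0000000000001n);      // a different NaN payload
console.log(Number.isNaN(view.getFloat64(0)));  // true

view.setBigUint64(0, 0xfff8deadbeef0001n);      // yet another one
console.log(Number.isNaN(view.getFloat64(0)));  // true
```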