Why 0.1 + 0.2 != 0.3
Read a book a hundred times and its meaning will reveal itself. - Chen Shou (Jin dynasty), Records of the Three Kingdoms, Book of Wei, Biography of Wang Su
When you enter 0.1 + 0.2 in a Python or JavaScript console, the result is not 0.3. How could this happen?
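For reference, a minimal check in Python (a JavaScript console prints the same value):

```python
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```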
It's quite an old question and there are already many answers in . But I still want to explain it a bit, because I want to (a) get a better understanding of the question, and (b) explain it in my own words.
Computers do math in binary, so let's convert 0.1 from decimal to binary manually with the method below. (Usually I just go to and type "0.1 to binary".)
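A sketch of that manual method: repeatedly multiply the fractional part by 2 and read off the integer part as the next binary digit. The helper name `to_binary_fraction` below is just for illustration:

```python
from fractions import Fraction

def to_binary_fraction(x: Fraction, digits: int = 30) -> str:
    """Convert a fraction in [0, 1) to a binary string by repeatedly
    multiplying by 2 and reading off the integer part as the next digit."""
    bits = "0."
    for _ in range(digits):
        x *= 2
        if x >= 1:
            bits += "1"
            x -= 1
        else:
            bits += "0"
    return bits

print(to_binary_fraction(Fraction(1, 10)))  # 0.000110011001100110011001100110
print(to_binary_fraction(Fraction(2, 10)))  # 0.001100110011001100110011001100
```

Both expansions never terminate: the block 0011 repeats forever, so any finite number of binary digits is only an approximation.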
After adding the two numbers in binary, depending on how many digits you keep, you will get results like the ones below.
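For instance, with float64's 53 significant bits, the exact values that actually get stored and added can be printed via Python's decimal module (a minimal sketch):

```python
from decimal import Decimal

# Decimal(x) shows the exact value of the float64 closest to x.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
```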
That's exactly the same output (except for the rounding) as the one from the console!
To conclude, humans use decimal while computers use binary for arithmetic. Decimal numbers are converted to binary for calculation and then converted back to decimal afterwards. Precision is lost during these conversions, which is why we see the odd output.
Written in the float64 format, this is how 0.1 is represented in memory:
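One way to inspect that in-memory representation is to dump the 64 bits with Python's `struct` module; the 1/11/52 split below is the standard sign/exponent/fraction layout of IEEE 754 double precision (a minimal sketch):

```python
import struct

# Pack 0.1 as a big-endian IEEE 754 double and show its 64 bits.
bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 0.1))
sign, exponent, fraction = bits[0], bits[1:12], bits[12:]

print(bits)      # 0011111110111001100110011001100110011001100110011001100110011010
print(sign)      # 0
print(exponent)  # 01111111011  (decimal 1019)
print(fraction)  # 1001100110011001100110011001100110011001100110011010
```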
Note that this depends on which number representation standard each language uses. For example, Python and JavaScript represent floating-point numbers using the double-precision format defined by the IEEE 754 standard, whereas C offers a wide variety of arithmetic types and the IEEE 754 double-precision format is not required by its standard. The platform also matters, e.g. float64 will still work on an 8-bit machine (implemented in software).
When I first read this, I asked myself: how are these largest and smallest numbers calculated? Luckily, with the handy online conversion tools, I got the answer.
Moving 1 hour clockwise equals moving 11 hours anticlockwise.
Below is an example, assuming numbers are 8-bit.
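A minimal sketch of such an example in Python, using 8-bit numbers and the arbitrary values 3 and 5:

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF: keep only the low 8 bits

def twos_complement(x: int) -> int:
    """8-bit two's complement of x, i.e. 2**8 - x (same as inverting the bits and adding 1)."""
    return (~x + 1) & MASK

minus_three = twos_complement(3)
print(f"{minus_three} = 0b{minus_three:08b}")  # 253 = 0b11111101, the 8-bit pattern for -3

# Subtraction becomes addition: 5 - 3 is computed as 5 + (-3), keeping the low 8 bits.
print((5 + minus_three) & MASK)                # 2
```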
Two's-complement integer representation eliminates the need for separate addition and subtraction units. It simplifies the design of the CPU, because one type of circuit can be used for both addition and subtraction.
It also conveys the idea of unification in software engineering.
Written in Düsseldorf, the day after Guyu (Grain Rain) in the Renyin year (2022)
So 0.1 in binary can be represented as $0.0001100110011\ldots_2$ (the block 0011 repeats forever); using scientific notation it's $1.1001100110011\ldots_2 \times 2^{-4}$. Similarly, $0.2 = 0.0011001100110011\ldots_2 = 1.1001100110011\ldots_2 \times 2^{-3}$.
But that's not yet how it is represented in a computer: IEEE 754 double precision, AKA float64, requires numbers to be formatted as below.
Described as a formula, it's $(-1)^{\text{sign}} \times 1.\text{fraction}_2 \times 2^{e - 1023}$.
For 0.1, sign equals 0 and e equals 1019. So it's $(-1)^{0} \times 1.1001100110011\ldots_2 \times 2^{1019 - 1023}$.
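Plugging those values back into the formula recovers the stored double exactly; here is a small Python check (the shifts and masks follow the 1-bit sign / 11-bit exponent / 52-bit fraction layout):

```python
import struct

raw = int.from_bytes(struct.pack('>d', 0.1), 'big')
sign     = raw >> 63                 # 1 sign bit
e        = (raw >> 52) & 0x7FF       # 11 bits of biased exponent
fraction = raw & ((1 << 52) - 1)     # 52 fraction bits

print(sign, e)                                                  # 0 1019
value = (-1) ** sign * (1 + fraction / 2**52) * 2 ** (e - 1023)
print(value == 0.1)                                             # True
```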
Referenced from
JavaScript represents numbers using the 64-bit floating-point format defined by the IEEE 754 standard, which means it can represent numbers as large as $\pm 1.7976931348623157 \times 10^{308}$ and as small as $\pm 5 \times 10^{-324}$.
Largest representable number: $(2 - 2^{-52}) \times 2^{1023}$ -> $\approx 1.7976931348623157 \times 10^{308}$
Smallest number without losing precision: $2^{-1022}$ -> $\approx 2.2250738585072014 \times 10^{-308}$
Smallest representable number: $2^{-1074}$ -> $\approx 5 \times 10^{-324}$
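These limits can be cross-checked from Python, where `sys.float_info` describes the platform's float64 (a quick sketch):

```python
import math
import sys

print(sys.float_info.max)  # 1.7976931348623157e+308
print(sys.float_info.min)  # 2.2250738585072014e-308  (2**-1022, smallest normal number)
print(sys.float_info.min * sys.float_info.epsilon)  # 5e-324  (2**-1074, smallest subnormal)

# Cross-check the "largest representable number" formula: (2 - 2**-52) * 2**1023.
largest = (2 - sys.float_info.epsilon) * math.ldexp(1.0, 1023)
print(largest == sys.float_info.max)  # True
```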
This part is largely referenced from an article from .
A negative number is represented by the two's complement of its absolute value, while a non-negative number is represented by its ordinary binary representation.
The two's complement of an $N$-bit number is defined as its complement with respect to $2^N$; the sum of a number and its two's complement is $2^N$.
It's called "two's" because it's based on 2, or more accurately, it's the $2^N$'s complement. Similarly there are 10's complement (decimal) and 12's complement (clock).
The two's complement $2^N - x$ can be written as $\left((2^N - 1) - x\right) + 1$, i.e. invert every bit of $x$ and then add 1.
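A quick numeric check of that identity in Python, with N = 8 and the arbitrary value 5:

```python
N = 8
x = 5

complement   = 2**N - x                   # definition: complement with respect to 2**N
flip_add_one = ((~x) & (2**N - 1)) + 1    # invert the low N bits, then add 1

print(complement, flip_add_one)   # 251 251
print(x + complement == 2**N)     # True
```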
-> Chapter 2 - Representing and Manipulating Information