To be picky about details I’d have to disagree with you – computers can do integer calculations in binary totally accurately, because every integer can be represented exactly in binary, up to the maximum that fits in the word size the computer uses (often 64 bits now). There is a direct one-to-one correspondence between a decimal integer and its binary representation.
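A quick sketch of that point in Python (the specific values are just illustrative): an integer round-trips through binary exactly, whereas a 64-bit float only has a 53-bit mantissa, so not every integer survives conversion to floating point.

```python
# Every integer has a unique binary representation, so integer
# arithmetic in a fixed-size word is exact as long as the result fits.
n = 123_456_789
assert int(bin(n), 2) == n   # decimal -> binary -> decimal round-trips exactly

# Contrast: an IEEE-754 double has only 53 mantissa bits, so an integer
# needing 54 bits is silently rounded when converted to floating point.
big = 2**53 + 1
assert float(big) == 2**53   # the +1 is lost in the conversion
```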

What binary cannot always do is represent fractional decimal numbers accurately, because there is no one-to-one correspondence between binary fractions and decimal fractions. Specifically, floating-point hardware stores such a number in binary as two parts – a mantissa and an exponent. The conversion shifts the number so that there is a single digit before the point, records the power of two needed for that shift in the exponent, and stores the shifted digits, converted to binary, in the mantissa.
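You can see the two parts directly by reinterpreting the 64 bits of an IEEE-754 double. This is a sketch with a hypothetical helper `fields` (1 sign bit, 11 exponent bits, 52 mantissa bits):

```python
import struct

def fields(x: float):
    # Reinterpret the 8 bytes of a double as a 64-bit unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign     = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # biased by 1023
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

# 0.5 is exactly 1.0 * 2**-1, so its mantissa bits are all zero ...
s, e, m = fields(0.5)
assert (s, e - 1023, m) == (0, -1, 0)

# ... but 0.1 has no finite binary expansion: it is stored as roughly
# 1.6 * 2**-4, with a 52-bit approximation of a repeating bit pattern.
s, e, m = fields(0.1)
assert e - 1023 == -4 and m != 0
```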

And binary fractions (1/2, 1/4, 1/8, 1/16, …) do not correspond directly to decimal fractions (1/10, 1/100, 1/1000, …). As a result, many decimal numbers pick up a rounding error at the end of their binary representation when converted – for a 64-bit double, typically around the 17th significant decimal digit.
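Python’s Decimal type can show the exact binary value that the literal 0.1 actually stores, with the error appearing around the 17th significant digit, while a true binary fraction like 0.125 converts exactly:

```python
from decimal import Decimal

# Decimal(float) expands the stored double exactly, digit for digit.
exact = Decimal(0.1)
assert str(exact).startswith('0.1000000000000000055511')

# 1/8 = 0.125 is a binary fraction, so it is represented exactly.
assert Decimal(0.125) == Decimal('0.125')
```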

As you have shown, this rounding at the end of the binary number can go up or down, which can produce inconsistent results depending on how a calculation is ordered, and in very complicated calculations the rounding compounds until the error becomes visible in the final result.
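A minimal illustration of both effects: the same three values summed in two different orders produce two different doubles, and repeated additions let the per-step rounding accumulate:

```python
# Each intermediate result is rounded independently, so the
# grouping of operations changes the final double.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
assert a != b            # a is 0.6000000000000001, b is 0.6

# Over many steps the small errors accumulate:
total = sum(0.1 for _ in range(1000))
assert total != 100.0
```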

I totally agree that storing certain numbers with decimal fractions can lead to inaccurate results, and that is why monetary amounts, for example, should always be stored as integers (e.g. a count of cents) and not as fractional decimal numbers, so that calculations on them stay accurate (within reason, but certainly more accurate than using decimal fractions). In fact, it is only division that can potentially introduce a rounding error in binary – addition, subtraction and multiplication of integers are always exact, as long as the result fits in the word size.

This problem is inherent to the way computers work in binary, and is nothing unique to Oracle – it affects all other databases in the same way. I’ve seen one application run into issues when it switched from Sybase to Oracle because one rounded up at the 17th digit and the other rounded down when converting, and the “test suite” was comparing outputs to the full 17 decimal digits. Neither database was “wrong”, because there is no absolute conversion between decimal numbers and binary numbers. I had to educate the development team on why binary storage of decimal numbers is not totally accurate, and why any “output value” comparison should be limited to a sensible number of significant digits.
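The money-as-integers idea can be sketched like this (the cent amounts are illustrative): integer arithmetic on cents is exact, the equivalent float arithmetic drifts, and division is the one place where an explicit rounding policy is needed:

```python
# Store money as an integer count of the smallest unit (cents here).
price_cents = 110               # $1.10
total_cents = price_cents * 3   # addition/multiplication of integers is exact
assert total_cents == 330

# The same calculation in binary floating point drifts:
assert 1.10 * 3 != 3.30         # 1.1 * 3 gives 3.3000000000000003

# Division is the one operation that can leave a remainder, so the
# application must decide what to do with the odd cents.
share, leftover = divmod(total_cents, 4)
assert (share, leftover) == (82, 2)
```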

Connor,

That’s fine if you want integers, as “int” is implicitly number(38,0) – but …

SQL> insert into t1 values(1/3);
SQL> insert into t1 values(2/3);
SQL> select * from t1;

N1
--------------------------------------------------
 .00000000000000000000000000000000000000000000000
1.00000000000000000000000000000000000000000000000

SQL> create table T ( x int );
Table created.

SQL> desc T
 Name                                            Null?    Type
 ----------------------------------------------- -------- ------------------
 X                                                        NUMBER(38)

SQL> insert into t select 1 - 1/3 * 3 from dual;
1 row created.

SQL> insert into t select 1 - 3 * 1/3 from dual;
1 row created.

SQL> column n format 9.99999999999999999999999999999999999999999
SQL> select * from t;

         X
----------
         0
         0