(To understand the title, see *this Wikipedia entry*)

The title could also be: *“Do as I say, don’t do as I do”*, because I want to remind you of an error that I regularly commit in my demonstrations. Here’s an example:

SQL> create table t (n number);

Table created.

Have you spotted the error yet? Perhaps this will help:

SQL> insert into t select 1 - 1/3 * 3 from dual;

1 row created.

SQL> insert into t select 1 - 3 * 1/3 from dual;

1 row created.

SQL> column n format 9.99999999999999999999999999999999999999999
SQL> select * from t;

                                           N
--------------------------------------------
 .00000000000000000000000000000000000000010
 .00000000000000000000000000000000000000000

2 rows selected.

Spotted the error yet? If not, then perhaps this will help:

SQL> select * from dual where 3 * 1/3 = 1/3 * 3;

no rows selected

SQL> select * from dual where 3 * (1/3) = (1/3) * 3;

D
-
X

1 row selected.

Computers work in binary, people (tend to) work in decimal. 10 = 2 * 5, and 5 (more precisely, dividing by 5) is something that a computer cannot do accurately. So when you do arbitrary arithmetic you should use some method to deal with tiny rounding errors.
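The same class of tiny error is easy to reproduce outside Oracle. A minimal sketch in Python (whose floats are IEEE-754 binary doubles, not Oracle's number type) showing both the error and one method of dealing with it:

```python
# 0.1 and 0.2 have no exact binary representation, so their sum
# is a nearby binary fraction rather than exactly 0.3.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004 -- not 0.3
print(a == 0.3)     # False

# One method of dealing with the tiny rounding error:
# round both sides to a sensible scale before comparing.
print(round(a, 10) == round(0.3, 10))   # True
```

The equality test fails for exactly the same reason the `3 * 1/3 = 1/3 * 3` predicate above returned no rows: two arithmetically equivalent expressions produced two slightly different stored values.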

In Oracle this means you ought to define all numbers with a precision and scale. Look on it as another form of constraint that helps to ensure the correctness of your data as well as improving performance and reducing wasted storage space.
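As an analogue of declaring a precision and scale (sketched with Python's `decimal` module rather than Oracle's number type; the `to_scale` helper is hypothetical, not an Oracle or library API): quantizing every value to a fixed scale makes the `1/3 * 3` residue disappear at the point of storage.

```python
from decimal import Decimal, ROUND_HALF_UP

def to_scale(value, scale=2):
    """Quantize to a fixed number of decimal places -- loosely
    emulating a column declared with precision and scale."""
    q = Decimal(10) ** -scale               # e.g. Decimal('0.01') for scale 2
    return Decimal(str(value)).quantize(q, rounding=ROUND_HALF_UP)

third = Decimal(1) / Decimal(3)             # 0.3333... to context precision
print(to_scale(third * 3))                  # 1.00 -- residue rounded away
print(to_scale(1) - to_scale(third * 3))    # 0.00
```

The point is the same as the "another form of constraint" argument above: by fixing the scale you decide where rounding happens, instead of letting an arbitrary 38-digit residue leak into comparisons.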

I am saved from this simply by being too lazy to type “number”, opting normally for “int” :-)

Comment by connormcdonald — November 4, 2015 @ 4:38 am GMT

Connor,

That’s fine if you want integers, as “int” is implicitly number(38,0) – but …

Comment by Jonathan Lewis — November 4, 2015 @ 7:42 am GMT

“Computers work in binary, people (tend to) work in decimal. 10 = 2 * 5, and 5 (more precisely, dividing by 5) is something that a computer cannot do accurately.”

To be picky about details I’d have to disagree with you – computers can do integer calculations in binary totally accurately, because every integer can be represented exactly in binary, up to the maximum possible in the number of bits in the word size being used by that computer (often 64 bits now). There is a direct one-to-one correspondence between an integer decimal number and its binary representation.

What binary cannot do accurately is represent real decimal numbers (fractions of integer numbers or otherwise), because there is not a one-to-one correspondence between fractional binary numbers and decimal numbers. Specifically, decimal numbers are represented in binary as two parts – a mantissa and an exponent. The conversion between decimal and binary fractional numbers involves shifting the decimal number so there is only one digit before the decimal point, recording the power involved as the exponent, and then converting that shifted number into binary.

And binary fractions (1/2, 1/4, 1/8, 1/16) do not correspond directly to decimal fractions (1/10, 1/100, 1/1000). This results in a rounding error at the end of the binary number for many real decimal numbers when converted – typically around the 17th decimal digit.
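This mismatch between binary and decimal fractions can be made visible directly. A small sketch (Python again, using the stdlib `fractions` module to expose the exact binary fraction a double actually stores):

```python
from fractions import Fraction

# Sums of binary fractions (1/2, 1/4, 1/8, ...) are representable exactly:
print(Fraction(0.75))   # 3/4 -- exact, since 0.75 = 1/2 + 1/4

# 1/10 is not, so the stored double is the nearest binary fraction --
# a denominator that is a power of two, not a power of ten:
print(Fraction(0.1))    # 3602879701896397/36028797018963968
```

That nearest-binary-fraction substitution is exactly the rounding "at the end of the binary number" described above, and it is where the error around the 17th significant digit comes from.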

As you have shown, this rounding at the end of the binary number can go up or down, which can produce inconsistent results depending on how a calculation is done; and in very complicated calculations the rounding error gets compounded until it can become visible in the final result.

I totally agree that storing certain numbers as decimal can lead to inaccurate results, and that is why monetary amounts for example should always be stored as integer numbers and not as decimal numbers, so that calculations are always accurate on them (within reason, but certainly more accurate than using decimal numbers). In fact, it is only division that can potentially result in a rounding error in binary – addition, subtraction and multiplication of integer numbers are always accurate.

This problem is inherent to the way computers work in binary, and is nothing unique to Oracle – it affects all other databases in the same way.

I’ve seen one application run into issues when it switched from using Sybase to using Oracle because one rounded up at the 17th digit and the other rounded down when converting, and the “test suite” was comparing outputs to the full 17 decimal digits. Neither database was “wrong” because there is no absolute conversion between decimal numbers and binary numbers. I had to educate the development team on why binary storage of decimal numbers was not totally accurate, and how any “output value” comparison should be limited to a sensible number of significant digits.

Comment by John Brady — November 9, 2015 @ 10:28 am GMT

[…] I should have used: cast(dbms_random.string('U',4) as varchar2(4)) and the column definition would have been a proper varchar2(4) from the start. (As noted in a comment from Ivica Arsov below, substr(…, 1, 4) would also have worked.) Of course I should have used cast() for the numerics as well so that I could declare them as things like number(8,0) rather than the slightly dangerous “number”. […]

Pingback by Quiz Night | Oracle Scratchpad — February 12, 2016 @ 8:52 am GMT