Oracle Scratchpad

January 15, 2018

Histogram Hassle

Filed under: Bugs,CBO,Histograms,Oracle,Statistics,Troubleshooting — Jonathan Lewis @ 1:01 pm GMT Jan 15,2018

I came across a simple performance problem recently that ended up highlighting a problem with the 12c hybrid histogram algorithm. It was a problem that I had mentioned in passing a few years ago, but only in the context of Top-N histograms and without paying attention to the consequences. In fact I should have noticed the same threat in a recent article by Maria Colgan that mentioned the problems introduced in 12c by the option “for all columns size repeat”.

So here’s the context (note – all numbers used in this example are approximations to make the arithmetic obvious). The client had a query with a predicate like the following:

    t4.columnA = :b1
and t6.columnB = :b2

The optimizer was choosing to drive the query through an indexed access path into t6, which returned ca. 1,000,000 rows before joining (two tables later) to t4, at which point all but a couple of rows were eliminated – typical execution time was in the order of tens of minutes. A /*+ leading(t4) */ hint to start the join order with t4 meant that Oracle used an index to acquire two starting rows and reduced the response time to the classic “sub-second”.

The problem had arisen because the optimizer had estimated a cardinality of 2 rows for the index on t6 and the reason for this was that, on average, that was the correct number. There were 2,000,000 rows in the table with 1,000,000 distinct values. It was just very unlucky that one of the values appeared 1,000,000 times and that was the value the users always wanted to query – and there was no histogram on the column to tell the optimizer that there was a massive skew in the data distribution.
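
The arithmetic behind that estimate of 2 is the standard no-histogram calculation, which is easy to sanity-check (a plain Python sketch of the optimizer's arithmetic, not Oracle code):

```python
# With no histogram the optimizer assumes a uniform distribution, so an
# equality predicate is estimated at roughly num_rows / num_distinct.
num_rows = 2_000_000
num_distinct = 1_000_000

estimated_rows = num_rows / num_distinct
print(estimated_rows)                     # 2.0 -- "correct on average"

# But the data is heavily skewed: one value accounts for half the rows, so
# for the value the users always queried the estimate is out by a huge factor.
actual_rows_for_popular_value = 1_000_000
print(actual_rows_for_popular_value / estimated_rows)   # 500000.0
```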

Problem solved – all I had to do was set a table preference for this table to add a histogram to this column and gather stats. Since there were so many distinct values and so much “non-popular” data in the table the optimizer should end up with a hybrid histogram that would highlight this value. I left instructions for the required test and waited for the email telling me that my suggestion was brilliant and the results were fantastic … I got an email telling me it hadn’t worked.

Here’s a model of the situation – I’ve created a table with 2 million rows and a column where every other row contains the same value but otherwise contains the rownum. Because the client code was using a varchar2() column I’ve done the same here, converting the numbers to character strings left-padded with zeros. There are a few rows (about 20) where the column value is higher than the very popular value.

rem     Script:         histogram_problem_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem     Last tested
rem       Fixed
rem       Broken, patch available
rem       Broken, patch available

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 2e4
)
select
        rownum  as id,
        case
                when mod(rownum,2) = 0
                        then '999960'
                        else lpad(rownum,6,'0')
        end     as bad_col
from
        generator       v1,
        generator       v2
where
        rownum <= 2e6    -- typing error, ends up with 2 rows per non-popular value.
;

Having created the data I’m going to create a histogram on bad_col – specifying 254 buckets – then query user_tab_histograms for the resulting histogram (from which I’ll delete a huge chunk of boring rows in the middle):


begin
        dbms_stats.gather_table_stats(
                ownname         => 'TEST_USER',
                tabname         => 'T1',
                method_opt      => 'for columns bad_col size 254'
        );
end;
/

select
        column_name, histogram, sample_size
from
        user_tab_cols
where
        table_name = 'T1'
;

column end_av format a12

select
        endpoint_number         end_pt,
        to_char(endpoint_value,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') end_val,
        endpoint_actual_value   end_av,
        endpoint_repeat_count   end_rpt
from
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = 'BAD_COL'
order by
        endpoint_number
;
COLUMN_NAME          HISTOGRAM             Sample
-------------------- --------------- ------------
BAD_COL              HYBRID                 5,513
ID                   NONE               2,000,000

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
         1  303030303031001f0fe211e0800000 000001                1
        12  3030383938311550648a5e3d200000 008981                1
        23  303135323034f8f5cbccd2b4a00000 015205                1
        33  3032333035311c91ae91eb54000000 023051                1
        44  303239373236f60586ef3a0ae00000 029727                1
      2685  3938343731391ba0f38234fde00000 984719                1
      2695  39393235303309023378c0a1400000 992503                1
      2704  3939373537370c2db4ae83e2000000 997577                1
      5513  393939393938f86f9b35437a800000 999999                1

254 rows selected.

So we have a hybrid histogram, we’ve sampled 5,513 rows to build the histogram, we have 254 buckets in the histogram report, and the final row in the histogram is end point 5513 (matching the sample size). The first row of the histogram shows us the (real) low value in the column and the last row of the histogram reports the (real) high value. But there’s something very odd about the histogram – we know that ‘999960’ is the one popular value, occurring 50% of the time in the data, but it doesn’t appear in the histogram at all.
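
As an aside, the END_VAL column is Oracle's numeric encoding of the (character) endpoint: for a string column the leading bytes are the ASCII codes of the leading characters, followed by a numeric rounding tail, which is why the exact string is reported separately in endpoint_actual_value. A quick sketch (Python, just to decode the hex printed above) shows the first endpoint really is ‘000001’:

```python
# Decode the leading bytes of an ENDPOINT_VALUE (printed in hex above).
# For a character column the first few bytes are the ASCII codes of the
# leading characters; the remainder is a numeric rounding tail, which is
# why ENDPOINT_ACTUAL_VALUE is the column to trust for the exact string.
def leading_chars(end_val_hex, n=6):
    raw = bytes.fromhex(end_val_hex)
    return raw[:n].decode('ascii')

print(leading_chars('303030303031001f0fe211e0800000'))   # 000001
```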

Looking more closely we see that every bucket covers a range of about 11 (sometimes 9 or 10) rows from the sample, and the highest value in each bucket appears just once; but the last bucket covers 2,809 rows from the sample with the highest value in the bucket appearing just once. We expect a hybrid histogram to have buckets which (at least initially) are all roughly the same size – i.e. “sample size”/”number of buckets” – with some buckets being larger by something like the amount that appears in their repeat count, so it doesn’t seem right that we have an enormous bucket with a repeat count of just 1. Something is broken.
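
The “about 11” and the giant final bucket are consistent with each other once you notice that roughly half the sampled rows must have held the popular value, and (as we are about to see) they have all been folded into that one final bucket. A quick check of the arithmetic (plain Python):

```python
sample_size = 5_513
buckets     = 254
last_bucket = 5_513 - 2_704      # rows covered by the final bucket: 2,809

# A naive expectation would be sample_size / buckets rows per bucket ...
print(round(sample_size / buckets, 1))                        # 21.7

# ... but the ~2,800 sampled copies of the popular value all ended up in the
# final bucket, leaving the other 253 buckets to share the remainder:
print(round((sample_size - last_bucket) / (buckets - 1), 1))  # 10.7 -- "about 11"
```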

The problem is that the sample didn’t find the low and high values for the column – although the initial full tablescan did, of course – so Oracle has “injected” the low and high values into the histogram fiddling with the contents of the first and last buckets. At the bottom end of the histogram this hasn’t really caused any problems (in our case), but at the top end it has taken the big bucket for our very popular ‘999960’ and apparently simply replaced the value with the high value of ‘999999’ and a repeat count of 1.

As an indication of the truth of this claim, here are the last few rows of the histogram if I repeat the experiment but, before gathering the histogram, delete the rows where bad_col is greater than ‘999960’. (Oracle’s sample is random, of course, and has changed slightly for this run.)

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
      2641  3938373731371650183cf7a0a00000 987717                1
      2652  3939353032310e65c1acf984a00000 995021                1
      2661  393938393433125319cc9f5ba00000 998943                1
      5426  393939393630078c23b063cf600000 999960             2764

Similarly, if I inserted a few hundred rows with a higher value than my popular value (in this case I thought 500 rows would be a fairly safe bet as the sample was about one in 360 rows) I got a histogram whose penultimate bucket captured the popular value, so the problem of the final bucket being hacked to the high value was much less significant:

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
      2718  393736313130fe68d8cfd6e4000000 976111                1
      2729  393836373630ebfe9c2b7b94c00000 986761                1
      2740  39393330323515efa3c99771600000 993025                1
      5495  393939393630078c23b063cf600000 999960             2747
      5497  393939393938f86f9b35437a800000 999999                1
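
The “fairly safe bet” above is simple binomial arithmetic: with roughly one row in 360 being sampled, each of the 500 higher-value rows independently has about a 1-in-360 chance of being picked, so the chance that at least one of them makes it into the sample works out as follows (a quick Python check):

```python
# Probability that a roughly 1-in-360 sample picks up at least one of the
# 500 rows inserted above the popular value.
sample_rate = 1 / 360
high_rows = 500

p_at_least_one = 1 - (1 - sample_rate) ** high_rows
print(round(p_at_least_one, 2))        # 0.75
```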

Bottom line, then: if you have an important popular value in a column and there aren’t very many rows with a higher value, you may find that Oracle loses sight of the popular value as it fudges the column’s high value into the final bucket.


I did consider writing a bit of PL/SQL for the client to fake a realistic frequency histogram, but decided that that wouldn’t be particularly friendly to future DBAs who might have to cope with changes. Luckily the site doesn’t gather stats using the automatic scheduler job and only rarely updates stats anyway, so I suggested we create a histogram on the column using an estimate_percent of 100. This took about 8 minutes to run – for reasons that I will go into in a moment – after which I suggested we lock stats on the table and document the fact that when stats are collected on this table it’s got to be a two-pass job – the normal gather with its auto_sample_size to start with, then a 100% sample for this column to gather the histogram:

begin
        dbms_stats.gather_table_stats(
                ownname          => 'TEST_USER',
                tabname          => 'T1',
                method_opt       => 'for columns bad_col size 254',
                estimate_percent => 100,
                cascade          => false
        );
end;
/

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
       125  39363839393911e01d15b75c600000 968999                0
       126  393834373530e98510b6f19a000000 984751                0
       253  393939393630078c23b063cf600000 999960                0
       254  393939393938f86f9b35437a800000 999999                0

129 rows selected.

This took a lot longer, of course, and produced an old-style height-balanced histogram. Part of the time came from the increased volume of data that had to be processed, and part of it came from a surprise (which also appeared, in a different guise, in the code that created the original hybrid histogram).

I had specifically chosen the method_opt to gather for nothing but the single column. In fact whether I forced the “legacy” (height-balanced) code or the modern (hybrid) code, I got a full tablescan that did some processing of EVERY column in the table and then threw most of the results away. Here are fragments of the SQL – old version first:

select /*+
            no_parallel(t) no_parallel_index(t) dbms_stats
            cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
            xmlindex_sel_idx_tbl no_substrb_pad
       */
       count("ID"), sum(sys_op_opnsize("ID")),
       count("BAD_COL"), sum(sys_op_opnsize("BAD_COL"))
from
       "TEST_USER"."T1" t

select /*+
           full(t)    no_parallel(t) no_parallel_index(t) dbms_stats
           cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
           xmlindex_sel_idx_tbl no_substrb_pad
       */
       ...
from
       "TEST_USER"."T1" t  /* ACL,TOPN,NIL,NIL,RWID,U,U254U*/

The new code applied the substrb() functions only to bad_col, but every other column in the table was still subject to a to_char(count()) call.
The old code applied count() and sys_op_opnsize() to every column in the table.

This initial scan was a bit expensive – and disappointing – for the client since their table had 290 columns (which means intra-block chaining as a minimum) and had been updated so much that 45% of the rows in the table had to be “continued fetches”. I can’t think why every column had to be processed like this, but if they hadn’t been that would have saved a lot of CPU and I/O since the client’s critical column was very near the start of the table.


This problem with the popular value going missing is a known issue, for which there is a bug number, but there is further work going on in the same area which means this particular detail is being rolled into another bug fix. More news when it becomes available.

Bear in mind that this problem also appears for Top-N (aka Top-Frequency) histograms – where both the lowest and highest buckets may be replaced with a bucket that reports the low-value and high-value for the column with a repeat-count of 1.

Update (Jan 2018)

This is now fixed under bug number “25994960: CARDINALITY MISESTIMATE FROM HYBRID HISTOGRAM” with a patch (of the same number) for 12.1.


  1. […] that identifies columns where you expect to see Frequency or (pace the buggy behaviour described in a recent post) a Top-N histograms. The biggest problem I have is that I keep forgetting the exact syntax I need […]

    Pingback by Column Stats | Oracle Scratchpad — January 18, 2018 @ 2:22 pm GMT Jan 18,2018 | Reply

  2. […] plans if the a very popular value was the highest value captured in the sample.  There is a separate blog note on this problem – the problem will be fixed in 12.2 with a probable backport to […]

    Pingback by 12c Histograms pt.3 | Oracle Scratchpad — March 12, 2018 @ 8:00 am GMT Mar 12,2018 | Reply

  3. […] plans if the a very popular value was the highest value captured in the sample.  There is a separate blog note on this problem – the problem will be fixed in 12.2 with a probable backport to […]

    Pingback by 12c Histograms pt.2 | Oracle Scratchpad — March 12, 2018 @ 8:05 am GMT Mar 12,2018 | Reply

  4. […] patch in place – the only safe histograms in 12c are frequency histograms. I’ve written a separate blog note describing this […]

    Pingback by Upgrades | Oracle Scratchpad — September 6, 2018 @ 5:51 pm BST Sep 6,2018 | Reply

  5. […] histogram – a nice addition to the available options and one that (ignoring the bug for which a patch has been created) supplies the optimizer with better information about the data than the equivalent height-balanced […]

    Pingback by Hybrid Fake | Oracle Scratchpad — October 10, 2018 @ 3:12 pm BST Oct 10,2018 | Reply

  6. […] Hybrid/Top-N problem – a bug, fixed in 12.2 with a patch for 12.1. […]

    Pingback by Faking Histograms | Oracle Scratchpad — October 15, 2018 @ 1:37 pm BST Oct 15,2018 | Reply

  7. […] producing hybrid histograms (and that’s not just if you happen to get the patch that fixes the top-frequency/hybrid bug relating to high […]

    Pingback by Upgrade thread | Oracle Scratchpad — October 23, 2018 @ 7:50 pm BST Oct 23,2018 | Reply

  8. Hi Jonathan, we still have a lot more issues to handle with the histograms. I guess the concept of having popular/non-popular values is itself questionable. A non-popular value to the CBO can be considered a popular value to the application. What if I have a few non-popular values which weren’t recorded in the histogram, but play a crucial role for an application expecting an index range scan all the time, yet get a FTS due to the wrong estimation by the CBO?

    Comment by Anonymous — April 14, 2021 @ 10:16 pm BST Apr 14,2021 | Reply

    • Anonymous,
      Thanks for the comment.
      I think you may be using the word popular in two different ways here.

      To the CBO “popular” means “a value that appears in a large number of rows”.
      To your application “popular” seems to mean “this is a value that people are often querying”.

      It has always been a common problem that the “interesting” data (from the users’ perspective) and the “average” data (as the CBO crunches the numbers) look very different, leading to execution plans that are sensible for the average case but bad from the users’ perspective.

      It’s a little hard to imagine exactly what your data might look like – and I do wonder if increasing the number of buckets from the default 254 to the maximum, and setting an explicit larger sample size for the histogram might help. But maybe you have something like this:

      a) a small number of VERY frequently used values
      b) a larger number of frequently used values
      c) a small number of rare values – which are the values that users are likely to query.

      ** “small” and “larger” are very vague and flexible terms in this argument

      With a little bad luck you may have a histogram that captures all of category (a) and some of category (b), but misses a lot of category (b) – and category (c) doesn’t have a hope of being captured ever, they’ll just be hidden in the buckets somewhere.

      In this case the arithmetic would end up with one number for the average number of rows covering both category (c) and the missed part of category (b) – and maybe that’s quite a large number, fairly close to the frequency (or half the frequency) of the average category (b) value – hence the tablescan rather than index access.

      More buckets would possibly increase the precision.
      A higher sample size might increase the number of distinct values found, hence reduce the result of the “average non-popular frequency” calculation.

      Ultimately you may simply have to fake a histogram that gives Oracle enough information to get a good estimate of the rare values that the users like (e.g. fake a frequency histogram –

      Jonathan Lewis

      Comment by Jonathan Lewis — April 23, 2021 @ 4:41 pm BST Apr 23,2021 | Reply
