Here’s one I’ve just discovered while trying to build a reproducible test case – that didn’t reproduce because an internal algorithm has changed.

If you upgrade from 12c to 18c and have a number of **hybrid** histograms in place you may find that some execution plans change, because the algorithm for producing hybrid histograms has changed (and that's not just if you happen to get the patch that fixes **the top-frequency/hybrid bug** relating to high values).

Here’s a little test to demonstrate how I wasted a couple of hours trying to solve the wrong problem – first a simple data set:

```sql
rem
rem     Script:         18c_histogram_upgrade.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem

drop table t2 purge;
execute dbms_random.seed(0)

create table t2(
        id  number(8,0),
        n20 number(6,0),
        n30 number(6,0),
        n50 number(6,0),
        j2  number(6,0)
)
;

insert into t2
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum, 20) + 1                     n20,
        mod(rownum, 30) + 1                     n30,
        mod(rownum, 50) + 1                     n50,
        28 - round(abs(7*dbms_random.normal))   j2
from
        generator v1
where
        rownum <= 800 -- > comment to avoid WordPress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns j2 size 13'
        );
end;
/
```

I've created a skewed data set which (as we will see) has 22 distinct values, and gathered a histogram of 13 buckets on the j2 column. This will be a hybrid histogram – but different versions of Oracle will produce different histograms (even though the data set is the same for both versions):

```sql
select
        j2, count(*)
from
        t2
group by
        j2
order by
        j2
;

select
        endpoint_value                                                            value,
        endpoint_number,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) bucket_size,
        endpoint_repeat_count
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'J2'
order by
        endpoint_value
;
```

Here's the data set from 12.2.0.1 and 18.3.0.0 (it's identical in both versions):

```
        J2   COUNT(*)
---------- ----------
         1          1
         8          3
         9          1
        10          5
        11          4
        12          8
        13         14
        14          9
        15         11
        16         22
        17         34
        18         31
        19         36
        20         57
        21         44
        22         45
        23         72
        24         70
        25         87
        26        109
        27         96
        28         41

22 rows selected.
```

And here are the histograms – 12.2.0.1 first, then 18.3.0.0:

```
     VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT
---------- --------------- ----------- ---------------------
         1               1           1                     1
        15              56          55                    11
        17             112          56                    34
        18             143          31                    31
        19             179          36                    36
        20             236          57                    57
        21             280          44                    44
        22             325          45                    45
        23             397          72                    72
        24             467          70                    70
        25             554          87                    87
        26             663         109                   109
        28             800         137                    41

13 rows selected.
```

```
     VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT
---------- --------------- ----------- ---------------------
         1               1           1                     1
        15              56          55                    11
        17             112          56                    34
        19             179          67                    36
        20             236          57                    57
        21             280          44                    44
        22             325          45                    45
        23             397          72                    72
        24             467          70                    70
        25             554          87                    87
        26             663         109                   109
        27             759          96                    96
        28             800          41                    41

13 rows selected.
```

Both histograms have 13 buckets as requested; both are hybrid histograms as expected.

But why does 12c have the value 18 when 18c doesn't, and why does 18c have the value 27 when 12c doesn't?

That’s the second time in two weeks I’ve had reproducible test cases not reproducing – thanks to an 18c upgrade.

### Update (See comments)

I had completely forgotten that a **previous defect** in the construction of hybrid (and Top-N) histograms had been addressed in 18.3 but needed a fix in 12.2 and a backport patch in 12.1.0.2.

Since the defect could "lose" a popular value in order to ensure that both the low and high values were captured in the histogram, it's not surprising that the fix could result in one of the popular values in a histogram disappearing after the upgrade, even when the gather had used a 100% sample. Quite possibly the algorithm used to ensure the presence of the high value has a cascading effect down the histogram that can affect which popular values get into the histogram with repeat counts.
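To see exactly which popular value moved, here's a trivial check (a Python sketch comparing the endpoint values from the two histogram listings above – nothing to do with Oracle's internal code):

```python
# Endpoint values from the 12.2.0.1 and 18.3.0.0 histograms listed above.
ep_12c = {1, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28}
ep_18c = {1, 15, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28}

print(ep_12c - ep_18c)   # {18} - captured by 12c, folded away by 18c
print(ep_18c - ep_12c)   # {27} - captured by 18c, folded away by 12c
```

In this data set 27 appears 96 times and 18 only 31 times, so it's 12c (pre-fix) that has folded away the far more popular value – consistent with the description of the defect.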

I think I’m going to have to grit my teeth and patch a 12.1.0.2, or update a 12.2.0.1 with exactly the right patch-set to find out.

**[It has now been confirmed by Nigel Bayliss that this is a side effect of the fix for bug 25994960.]**

My guess: if you sort the values by count:

```
 1   1
 9   1
 8   3
11   4
10   5
12   8
14   9
15  11
13  14
16  22
18  31
17  34
```

you see that both versions left out (consolidated) the smaller values and kept the larger ones, up to 16.

18c also kicked out 18 (the next bigger one).

Possibly the 18c algorithm makes sure the first and last buckets each hold a single value (1 and 28), which forces 27 to get its own bucket, and 18 was sacrificed as the smallest.
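A quick data-driven check of that guess (a Python sketch using the counts and endpoint values from the listings in the post – not anything Oracle actually executes):

```python
# Frequencies of j2 from the "group by" output in the post.
counts = {
    1: 1, 8: 3, 9: 1, 10: 5, 11: 4, 12: 8, 13: 14, 14: 9, 15: 11,
    16: 22, 17: 34, 18: 31, 19: 36, 20: 57, 21: 44, 22: 45, 23: 72,
    24: 70, 25: 87, 26: 109, 27: 96, 28: 41,
}

# Endpoint values recorded by each version's histogram.
ep_12c = {1, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28}
ep_18c = {1, 15, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28}

# Values that neither version kept as an endpoint, ordered by frequency.
folded = sorted(set(counts) - ep_12c - ep_18c, key=counts.get)
print([(v, counts[v]) for v in folded])
# [(9, 1), (8, 3), (11, 4), (10, 5), (12, 8), (14, 9), (13, 14), (16, 22)]
```

Everything folded by both versions has a count of 22 or less (15, with a count of only 11, survives in both presumably because it happens to fall at the end of a range bucket); the two versions then part company only at 18 (count 31) and 27 (count 96).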

regards,

Racer I.

Comment by Racer I. — October 24, 2018 @ 9:09 am GMT

Racer I.

I hadn’t tried checking to see what could make the results vary, but I’ve just traced 12c and 18c and they both use the same SQL query to generate the initial results – which means they both initially report the same “popcnt” of 5 and “popfreq” of 434; so there must be some deliberate secondary processing that reduces the popular count to 4 with a frequency of 338.
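If popcnt is counting values whose frequency exceeds num_rows/num_buckets (that's my assumption about the rule, not something visible in the trace), the arithmetic fits exactly (a Python sketch on the counts from the post):

```python
# Frequencies of j2 from the "group by" output in the post.
counts = {
    1: 1, 8: 3, 9: 1, 10: 5, 11: 4, 12: 8, 13: 14, 14: 9, 15: 11,
    16: 22, 17: 34, 18: 31, 19: 36, 20: 57, 21: 44, 22: 45, 23: 72,
    24: 70, 25: 87, 26: 109, 27: 96, 28: 41,
}

num_rows, num_buckets = 800, 13
threshold = num_rows / num_buckets               # about 61.5 rows per bucket

# Values frequent enough to deserve their own bucket by this simple test.
popular = {v: c for v, c in counts.items() if c > threshold}
print(sorted(popular), sum(popular.values()))    # [23, 24, 25, 26, 27] 434

# 12c's secondary processing apparently discards 27 (count 96):
print(len(popular) - 1, sum(popular.values()) - counts[27])   # 4 338
```

So the initial popcnt of 5 and popfreq of 434 correspond to the values 23 to 27, and dropping 27 gives exactly the 4 / 338 that 12c ends up with.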

The defect in earlier versions involved losing the low and high values in certain circumstances, so your thoughts about reserving slots for the low and high values would make sense from that perspective.

~~I'm waiting to see how the problem is finally handled, though, since the defect is still in this version of 18~~. Problem fixed in 18.3.0.0, so the different behaviour in this case may be a consequence of that fix.

Comment by Jonathan Lewis — October 24, 2018 @ 3:24 pm GMT

Hi,

I can’t quite square any single algorithm with merging 28 (size 41) before 18/17/19 (31/34/36).

Revised guess:

Both algorithms set aside a bucket each for the first and last values (1 and 28).

Both form X intermediate buckets for all the other values.

The difference:

12.2: X = 12 (13 - 1, so both 18 and 27 live), followed by merging the last value (27) into the end bucket (28).

18c: X = 11 (13 - 2, so 18 folds and 27 lives), with no final merging.

regards,

Comment by Racer I. — October 24, 2018 @ 12:22 pm GMT

[…] data in both versions of Oracle, I got different hybrid histograms (hence my complaint in a recent post.) The rest of this covers 18c in detail, but if you’re running 12c there are a couple of […]

Pingback by Join Cardinality – 4 | Oracle Scratchpad — October 25, 2018 @ 9:10 am GMT

[…] could be even less than that bucket size defined by the (F.2). You can see that if you look at the blog post by Jonathan Lewis. In this example, the bucket size calculated as same in the both version of […]

Pingback by Understanding Hybrid Histogram | Chinar Aliyev`s blog — November 6, 2018 @ 11:19 am GMT