Oracle Scratchpad

December 23, 2020

19c tweak

Filed under: CBO,Hints,Oracle,Upgrades — Jonathan Lewis @ 2:15 pm GMT Dec 23,2020

There are times when an upgrade makes a big difference to performance because an enhancement to the optimizer code path changes the costing of a plan that was always possible, but badly costed. I noticed an example of this while testing the code in the email I mentioned in last month’s posting on the “Incremental Sort” that Postgres 13 can do. Here’s a model to create some data and demonstrate the principle – the code is a modified (reduced) version of the code published by Phil Florent describing the feature.

rem
rem     Script:         fetch_first_postgres.sql
rem     Author:         Phil Florent
rem     Dated:          6th Nov 2020
rem
rem     Last tested
rem             19.3.0.0        Uses index descending unhinted at low cost
rem             18.3.0.0        Used index desc efficiently if hinted, but high cost
rem             12.2.0.1        Used index desc efficiently if hinted, but high cost
rem

create table lancers(dtl timestamp, idg integer not null, perf integer);

insert into lancers(dtl, idg, perf)
with serie(i) as (
        select 25e4 from dual
        UNION ALL
        select i - 1 from serie where i > 1
)
select
        current_timestamp - (i / 1440),
        trunc(dbms_random.value * 1e5 + 1),
        case
                when dbms_random.value <= 0.001 then 50000 + trunc(dbms_random.value * 50000 + 1) 
                else trunc(dbms_random.value * 50000 + 1) 
        end
from serie
/

execute dbms_stats.gather_table_stats(user,'lancers',method_opt=>'for all columns size 1')

create index perf_i1 on lancers(perf, dtl);
alter table lancers modify (perf not null, dtl not null);

This is the basic statement I want to execute – but in some versions of Oracle it will have to be hinted to produce the execution plan I want to see.

select  
        idg, perf 
from  
        lancers 
order by
        perf desc  
fetch first 5 rows only
/

If you check the order by clause and the definition of the index perf_i1 you’ll see that Oracle could (in principle) walk the index in descending order, stopping after just 5 rows, to produce the result.
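In the versions that won’t take this path unaided the statement has to be hinted; the hint below matches the text you’ll see embedded (commented out in the 19c test, active in the others) in the SQL reported above each plan:

select  /*+ index_desc(lancers(perf, dtl)) */
        idg, perf
from
        lancers
order by
        perf desc
fetch first 5 rows only
/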

But here are the execution plans from 19.3.0.0, 18.3.0.0, and 12.2.0.1 – pulled from memory, and reporting the rowsource execution statistics (enabled by the gather_plan_statistics hint) to show you what actually happened – starting with the newest:

19.3.0.0: (unhinted)
--------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |     8 (100)|      5 |00:00:00.01 |       9 |
|*  1 |  VIEW                         |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|     8   (0)|      5 |00:00:00.01 |       9 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |      5 |     3   (0)|      5 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

You can see an index_desc() hint in the output, but it has been commented out. The key feature to note is that the optimizer has found the path I was hoping to see, and it’s a low-cost path, although there is one oddity in the plan – the E-rows (cardinality estimate) for the table access doesn’t allow for the stopkey and, since there are no predicates in the query, reports the 250K rows that exist in the table.

For 18.3.0.0 I had to include the hint, and you’ll see why:

18.3.0.0 (hinted with index_desc)
---------------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)



Again we see the plan is possible, but the optimizer’s cardinality estimate for the hinted index scan is 250K rows – the full size of the index, and it has allowed for that in the cost of the query. So the cost of this plan is high and in the absence of the hint the optimizer would have used a full tablescan with sort.

Finally we get down to 12.2.0.1 – and I’ve shown the hinted and unhinted plans.

12.2.0.1 (hinted index_desc)
-----------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)


12.2.0.1 Unhinted
------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 1374242431

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      1 |        |  1102 (100)|      5 |00:00:00.24 |     822 |       |       |          |
|*  1 |  VIEW                    |         |      1 |      5 |  1102  (10)|      5 |00:00:00.24 |     822 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|         |      1 |    250K|  1102  (10)|      5 |00:00:00.24 |     822 |  2048 |  2048 | 2048  (0)|
|   3 |    TABLE ACCESS FULL     | LANCERS |      1 |    250K|   132  (16)|    250K|00:00:00.13 |     822 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

As you can see, 12.2.0.1 and 18.3.0.0 behave exactly the same way when hinted – the path is acceptable, but the cost is high. Consequently when I remove the hint the optimizer switches to using a full tablescan with sort because its cost is lower (thanks, in part, to the pattern in the data) than the indexed access path.

Summary

Two thoughts to take away from this note.

  • First, there were two possible execution plans for the same query and the optimizer in versions below 19c was picking the one that was clearly a bad idea. The presence of alternatives, though, means that the patterns in the data, the index definition and statistics (especially the clustering_factor), the number of rows to fetch, and various other optimizer settings could leave you in the unlucky position that the optimizer’s arithmetic is on the boundary between the two plans and it switches randomly between them from day to day.
  • Secondly, when you upgrade to 19c the optimizer seems to be more likely to pick the indexed access path for a query like this – and that will probably be a good thing, but in a few cases it might turn out to be a bad thing.

December 21, 2020

Why Why Why Why?

Filed under: Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 2:24 pm GMT Dec 21,2020

Here’s a little puzzle – and if you don’t see the answer almost immediately you do need to read this note. The question comes from a thread on the Oracle Groundbreakers’ Forum –

“Why I am getting 0020 instead of 2020 for below query?”

select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual.

Instead of sysdate if I give date like ‘5-dec-2020’ it’s giving correct. Any ideas why iso value for sysdate year is coming as 0020?

There are many things that people do wrong with dates and many ways in which Oracle tries to help, so I’ll start with a general-purpose reminder: Oracle supplies a “date” datatype; if you want to store dates, compare dates, or do date arithmetic, make sure you are using the date datatype.

(Technically, of course, Oracle’s date datatype is actually a “date with time to nearest second” type, so there are a couple of details you need to remember to avoid running into surprises that are due to the presence of the time component – for example you might use a constraint like “check(date_col = trunc(date_col))” to enforce date-only values for a column.)
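Here’s a quick sketch of that constraint in action – the table and constraint names are invented for the demonstration:

create table date_only_demo (
        date_col        date,
        constraint date_only_ck check (date_col = trunc(date_col))
);

insert into date_only_demo values (trunc(sysdate));     -- succeeds
insert into date_only_demo values (sysdate);            -- raises ORA-02290 unless the time happens to be exactly midnight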

Sysdate is a date, and the function to_date() expects its first parameter to be a character string; so Oracle implicitly converts sysdate to a character type with the to_char() function before it does anything else and it uses the session’s nls_date_format parameter to supply the formatting string. On my instance this parameter has the value ‘DD-MON-RR’ (an option created in an attempt to work around the “Y2K” problem – which some of you will remember).
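You can check your session’s current setting with a simple query:

select  value
from    nls_session_parameters
where   parameter = 'NLS_DATE_FORMAT'
;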

So sysdate was converted (on the day the question was asked) to the character string ‘10-DEC-20’, and when the to_date() function call tried to convert this back to a date type using the explicitly supplied format dd-mm-yyyy Oracle used leading zeros to pad the 2-digit year to a 4-digit year, which made the year 0020 – exactly what the OP saw.
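To make the hidden step visible, here’s the OP’s expression with the implicit to_char() written in explicitly – assuming the default ‘DD-MON-RR’ format (and note that the parse back relies on Oracle’s tolerant date handling, which lets the ‘mm’ element accept ‘DEC’):

select
        to_char(
                to_date(
                        to_char(sysdate, 'DD-MON-RR'),  -- the conversion Oracle supplies implicitly
                        'dd-mm-yyyy'
                ),
                'iyyy'
        )
from    dual
/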

Solution

To correct this code, take out the call to to_date().

SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
0020

SQL> select to_char(sysdate,'iyyy') from dual;

TO_C
----
2020

If you can’t correct the code then you might be able to work around the error by setting the nls_date_format to a more appropriate value. In fact the nls_date_format is one of those parameters that you probably ought to change from its default value the moment you set up your database. It’s just rather scary to do so if you’re running a system that has been around for a few years and may (accidentally) include some code that depends on the default setting to get the right answers or best execution plans.

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL>  alter session set nls_date_format='dd-Mon-rrrr';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_date('01-Dec-20','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-2020

SQL> select to_date('01-Dec-50','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-1950

I’ve included a couple of examples using the ‘rrrr’ (or ‘RRRR’) format for the year component. Oracle introduced the ‘RR’ format over 20 years ago as a best-guess workaround to the Y2K problem. The two-character ‘RR’ format means values between 50 and 99 imply the previous century, and values between 00 and 49 imply the current century (as shown in the last two simpler examples). It’s an option that should have been deprecated within a few years of its introduction and desupported soon after, but the terrible inertia of the IT juggernaut means it’s still around. If you’re worried about the impact of changing your nls_date_format from ‘dd-mon-RR’ to ‘dd-Mon-yyyy’ you may feel a little safer switching to ‘dd-Mon-RRRR’ – whatever you do, though, you’ll almost certainly find examples where the code misbehaves because of the side-effects of the change in formatting.

One tiny detail you might have noticed in the original posting is that the user tested their code with the literal value ‘5-Dec-2020’, and got the result they wanted even though the format they had used to convert from character to date was ‘dd-mm-yyyy’. Oracle tries quite hard to cope with date conversions, as Connor McDonald pointed out several years ago.

While we’re on the topic of conversion it’s worth revisiting my comment about the date type including a time component. I’ve often seen expressions like to_date(to_char(sysdate,’dd-mon-yyyy’)) being used to ensure that a date that might include a time component is reduced to a “date-only” value (although that really means the time component is “00:00:00”). If you need only the date component it’s far better to use trunc(date_expression) than this double type-conversion; there’s probably not a lot of saving in terms of simple CPU usage, but (a) you might as well take it and (b) you might give the optimizer a chance of producing a better cardinality estimate and hence a better execution plan.
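As a side-by-side sketch of the two approaches:

-- the double type-conversion (avoid):
select to_date(to_char(sysdate,'dd-mon-yyyy'),'dd-mon-yyyy') from dual;

-- the simpler, optimizer-friendly alternative:
select trunc(sysdate) from dual;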

Summary

  • sysdate is a date type, don’t try to use to_date() on it.
  • to_char(date_expression) will use the nls_date_format value if you haven’t supplied an explicit format string so you should always include an explicitly chosen format expression in the call.
  • The nls_date_format defaults to a value that is bad on two counts: it expects a 2-digit year and uses RR rather than YY. You should be using four digits for the year, and the only argument for using RRRR is if you are temporarily in transition from RR to YYYY.

To misquote Napoleon (the pig, Animal Farm): “4 Y’s good, 2 Y’s bad”. And it’s not going to change to “4 Y’s good, 2 Y’s better”.

October 21, 2020

Column Groups

Filed under: extended stats,Oracle,Statistics — Jonathan Lewis @ 12:14 pm BST Oct 21,2020

Here’s an odd little detail about the statistics of column groups. At first glance it’s counter-intuitive but it’s actually an “obvious” (once you’ve thought about it for a bit) consequence of the approximate_ndv() algorithm for gathering stats.

I’ll present it as a question:

I have a table with two columns: flag and v1. Although the columns are not declared as non-null, neither holds any nulls. If there are 26 distinct values for flag, and 1,000,000 distinct values for v1, what’s the smallest number of distinct values I should see if I create the column group (flag, v1)?

The question is, of course, a little ambiguous – there’s the number of distinct values that the column (group) holds and the number that a fresh gather of statistics reports it as holding. Here are the stats from a test run of a simple script that creates, populates and gathers stats on my table:

select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

COLUMN_NAME                      NUM_DISTINCT
-------------------------------- ------------
FLAG                                       26
ID                                    1000000
V1                                     999040
SYS_STUQ#TO6BT1REX3P1BKO0ULVR9         989120

There are actually 1,000,000 distinct values for v1 (it’s a varchar2() representation of the id column), but the approximate_ndv() mechanism can have an error of (I believe) up to roughly 1.3%, so Oracle’s estimate here is a little bit off.

The column group (represented by the internal column definition SYS_STUQ#TO6BT1REX3P1BKO0ULVR9) must hold (at least) 1,000,000 distinct values – but the error in this case is a little larger than the error in v1, with the effect that the number of combinations appears to be less than the number of distinct values for v1!

There’s not much difference in this case between actual and estimate, but the test demonstrates the potential for a significant difference between the estimate and the arithmetic that Oracle would do if the column group didn’t exist. Nominally the optimizer would assume there were 26 million distinct combinations (26 × 1,000,000) – though in this case, since I had created only 1M rows in the table, the optimizer would sanity-check that 26M down to 1M.
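If you want to check the true figures behind those estimates, a query like the following (potentially expensive on production-sized tables) will report them:

select
        count(distinct v1)      v1_ct,
        count(*)                combination_ct
from    (
        select distinct flag, v1 from t1
        )
/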

So, although the difference between actual and estimate is small, we have to ask the question – are there any cases where the optimizer will ignore the column group stats because of a sanity check that “proves” the estimate is “wrong” – after all it must be wrong if the num_distinct is less than the num_distinct of one of the components? Then again maybe there’s a sanity check that only ignores the column group if the estimate is “wrong enough”, but allows for small variations.

I mention this only because an odd optimizer estimate has shown up recently on the Oracle-L mailing list, and the only significant difference I can see (at present) is that a bad plan appears for a partition where this column group anomaly shows up in the stats, but a good plan appears when the column group anomaly isn’t present.

Footnote:

If you want to recreate the results above, here’s the model I’ve used (tested on 19.3.0.0 and 11.2.0.4):

rem
rem     Script:         column_group_stats_5.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             11.2.0.4
rem 

execute dbms_random.seed(0)

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        chr(65 + mod(rownum,26))        flag,
        rownum                          id,
        lpad(rownum,10,'0')             v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6   -- > comment to avoid WordPress format issue
order by
        dbms_random.value
/


select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns(v1, flag) size 1'
        );
end;
/
 
select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

Footnote 2:

As an interesting little statistical quirk, if I defined the column group as (flag, v1) rather than (v1, flag) the estimate for the column group num_distinct was 1,000,000.

October 10, 2020

Interval Oddity

Filed under: CBO,Execution plans,Oracle,Partitioning — Jonathan Lewis @ 2:51 pm BST Oct 10,2020

Interval partitioning is a popular strategy for partitioning date-based data. It’s an enhanced variant of range partitioning that allows you to define a starting partition and an interval that should be used to derive the high values for all subsequent partitions – and Oracle doesn’t even have to create intervening partitions: if you insert data that goes far beyond the current partition it automatically creates exactly the right partition (with the correct high_value and correctly inferred lower boundary) for the incoming data and behaves as if the intervening partitions will become available when they’re needed at some later point in time. So no need for DBAs to work out actual partition high_values, no need to ensure that all the partitions you need out into the future have been pre-created, and no more batch processes crashing with Oracle error ORA-14400: inserted partition key does not map to any partition.

But there’s a surprising difference between traditional range partitioning and the newer interval partitioning that will increase CPU usage in some cases and may (though I haven’t yet investigated this in sufficient detail to create an example) lead to variations in execution plans.

To demonstrate the difference I’m going to create two tables with the same structure and content then run the same query against them and show you the resulting execution plans. Here’s the code to create and populate the tables:

rem
rem     Script:         interval_or_range.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem


create table t_interval(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
interval (numtoyminterval(1,'MONTH'))
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy'))
        )
;

create table t_range(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy')),
                partition start_p2 values less than (to_date('01-Feb-2020','dd-mon-yyyy')),
                partition start_p3 values less than (to_date('01-Mar-2020','dd-mon-yyyy')),
                partition start_p4 values less than (to_date('01-Apr-2020','dd-mon-yyyy')),
                partition start_p5 values less than (to_date('01-May-2020','dd-mon-yyyy')),
                partition start_p6 values less than (to_date('01-Jun-2020','dd-mon-yyyy')),
                partition start_p7 values less than (to_date('01-Jul-2020','dd-mon-yyyy')),
                partition start_p8 values less than (to_date('01-Aug-2020','dd-mon-yyyy')),
                partition start_p9 values less than (to_date('01-Sep-2020','dd-mon-yyyy'))
        )
;

insert into t_range select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue  
;

insert into t_interval select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_RANGE',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_INTERVAL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve used composite partitioned tables in this example but the same anomaly appears with simple partitioning – which you can test by commenting out the “subpartition by ..” lines. The two tables have partitions defined to hold a month’s data. I’ve inserted a few rows into most of the partitions, and ensured that I haven’t accidentally attempted to insert data that falls outside the legal range of the table with the predefined partitions.

To show that the tables are nominally identical here’s the list of partitions with their high values and number of rows:

column table_name       format a15
column partition_name   format a15
column high_value       format a80

break on table_name skip 1

select
        table_name, partition_name, num_rows, high_value
from
        user_tab_partitions
where
        table_name in ('T_INTERVAL','T_RANGE')
order by
        table_name, partition_name
;


TABLE_NAME      PARTITION_NAME    NUM_ROWS HIGH_VALUE
--------------- --------------- ---------- --------------------------------------------------------------------------------
T_INTERVAL      START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10722              30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10723              29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10724              31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10725              30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10726              31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10727              30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10728              31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10729              28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

T_RANGE         START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P2                30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P3                29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P4                31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P5                30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P6                31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P7                30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P8                31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P9                28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


18 rows selected.

With this setup we can now run a simple query against the two tables using a where clause that is an exact match for the partition definition for the July data:

set serveroutput off

select 
        count(*) 
from 
        t_range 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

select 
        count(*) 
from 
        t_interval 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

I’ve left in two versions of the date predicates – the “ANSI” style, and a strictly formatted “to_date()” style that is still the approach used most commonly in Oracle systems. The effect on the execution plans is the same for both variations of the date declaration, and here are the two execution plans – reported from an instance of 19.3.0.0:

---------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |         |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|         |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|   3 |    TABLE ACCESS FULL    | T_RANGE |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
---------------------------------------------------------------------------------------------------


------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |            |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|            |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|*  3 |    TABLE ACCESS FULL    | T_INTERVAL |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(("ORDER_DATE">=TO_DATE(' 2020-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DATE"<TO_DATE(' 2020-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

It’s quite possible that you won’t notice the difference between these plans, even though I’ve put them directly one below the other; and it’s very likely that most people wouldn’t think about the difference if they didn’t have the two plans available at the same time to compare.

If you can’t spot the difference it’s because I’ve deliberately arranged them in a way that might fool you into not looking carefully enough.

If you’ve spotted the difference it’s probably because you’ve examined the Predicate Information section. There isn’t one for the t_range example but there is one for the t_interval example – and it was a little sneaky of me to make it easy for you to assume that the one set of predicates I showed was common to the two plans.

Since the plans have been pulled from memory (v$sql_plan) it seems likely that they are truthful and the query based on the interval partitioning is actually checking every row in the partition against two date values. In my example that’s not going to make much difference to performance, but in a data warehouse with millions of rows per partition there’s scope for a noticeable increase in CPU between the two queries – especially since the block reads are likely to be direct path, bypassing much of the usual (CPU-intensive) buffer cache activity.

Interestingly when I checked the 10053 trace files for the two queries the CPU cost for the pair was identical. Normally you expect to see at least a little CPU cost (even if it’s only tens of units out of millions) for checking a predicate. This raises the question – does the optimizer allow a cost for the t_range table for an event that isn’t going to happen, or is it reporting an event that isn’t going to happen for the t_interval table? [Edit: easy enough to check by doing the test on a simple heap table that clones the data from that one partition, and checking the calculated cost with and without the predicate]
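Here’s a sketch of that check – the clone table name is invented, and the partition-extended syntax picks out the July partition:

create table t_clone as
select  *
from    t_interval partition for (date '2020-07-15')
/

execute dbms_stats.gather_table_stats(user,'t_clone',method_opt=>'for all columns size 1')

explain plan for
select  count(*)
from    t_clone
/

-- repeat with the date predicates added:
explain plan for
select  count(*)
from    t_clone
where   order_date >= date '2020-07-01'
and     order_date <  date '2020-08-01'
/

Comparing the costs reported by dbms_xplan.display() for the two statements should show whether the predicate check contributes any CPU cost.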

There’s a further ramification to this anomaly, relating to the question on the Oracle-L list server that prompted the investigation. What happens on Exadata where the predicates can be offloaded to storage?

The original question wasn’t actually about the appearance (or not) of the predicates; it was about an unexpected cardinality estimate for a query involving two equi-partitioned tables, so this note has gone completely off-topic from the question; but another strange detail about the predicates showed up when I suggested the creation of a column group on the join columns. A (redundant) predicate that had been offloaded to storage stopped being offloaded; here, from the original posting with the original tables, are the two different Predicate Information sections that appeared – the first without the column group, the second after the column group had been created and its stats collected:

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - storage("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Predicate Information (identified by operation id):
---------------------------------------------------
   4 -
access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

Notice how the storage() predicate that appears at operation 8 in the first set has disappeared from the second even though operation 6 manages to retain its storage() predicate throughout.

I’ve modelled a further example of odd behaviour using two pairs of tables – one pair using range/hash partitioning, the other interval/hash partitioning. I won’t reproduce it here, but the problem of redundant predicates appearing and then disappearing gets even stranger.

I haven’t yet produced an example where the unexpected predicate behaviour has affected the costs or cardinalities – but I’ve only spent a couple of hours playing around with well-formed examples: it’s possible that in badly formed examples (e.g. with statistical inconsistencies) the side effect could mean that two notionally identical queries produce different numbers and different plans because they end up with different predicates in the final transformed query.

Footnote

Following an email from David Kurtz, it occurred to me that I should have made it clear that the disappearance of predicates on the partition key is expected behaviour when the predicates are clearly synchronised with the partition boundaries. The behaviour for the interval partitioning is the oddity; the behaviour for the “normal” range partitioning is the standard.

July 10, 2020

Recursive WITH upgrade

Filed under: ANSI Standard,CBO,Execution plans,Oracle,Subquery Factoring,Upgrades — Jonathan Lewis @ 4:19 pm BST Jul 10,2020

There’s a notable change in the way the optimizer does cost and cardinality calculations for recursive subquery factoring that may make some of your execution plans change – with a massive impact on performance – as you upgrade to any version of Oracle from 12.2.0.1 onwards. The problem appeared in a question on the Oracle Developer Community forum a little while ago, with a demonstration script to model the issue.

I’ve copied the script – with a little editing – and reproduced the change in execution plan described by the OP. Here’s my copy of the script, with the insert statements that generate the data (all 1,580 of them) removed.

rem
rem     Script:         recursive_with_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem
rem     Notes:
rem     https://community.oracle.com/thread/4338248
rem
rem     The upgrade to 12.2.0.1 made this query much slower (on 15,000 rows)
rem     Setting OFE to 12.1.0.1 is a first possible fix for the issue.
rem     The scale is too small to see much difference in this case
rem

drop table test_folder purge;

create table test_folder(
        fldr_key                number(16,0)            not null        enable,                 
        fldr_id                 varchar2(255 byte)      not null        enable,                 
        fldr_desc_tx            varchar2(255 byte),                     
        par_fldr_key            number(16,0),                   
        seus_key                number(16,0)            not null        enable,                 
        fldr_private_flg        varchar2(1 byte)        not null        enable,                 
        last_updt_dt            date                    not null        enable,                 
        last_upby_seus_key      number(16,0)            not null        enable,                 
        lock_seq_nbr            number(9,0) default 0   not null        enable,                 
        content_guid            raw(16),                
        constraint test_folder_pk primary key (fldr_key)                
)       
;              

-- list of insert statements

alter table test_folder add constraint test_folder_fk  
        foreign key (par_fldr_key) references test_folder(fldr_key)
;  
  
create or replace force editionable view test_folder_vw (fldr_key) as   
with rec_path(fldr_key)  as (
        select  tf.fldr_key  
        from    test_folder tf where tf.par_fldr_key is null  
        union all  
        select  tf.fldr_key  
        from    test_folder tf, rec_path  
        where   rec_path.fldr_key = tf.par_fldr_key
)  
select fldr_key  
from rec_path   
; 

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'TEST_FOLDER',
                method_opt  => 'for all columns size 1'
        );
end;
/


select * from test_folder_vw where fldr_key = -41;  

I’ve run the test 3 times: first in 12.2.0.1 with no tweaking; then in 12.2.0.1 with the hint /*+ optimizer_features_enable(‘12.1.0.2’) */; and finally in a genuine 12.1.0.2 environment. In all three cases I enabled rowsource execution stats (‘alter session set statistics_level = all’) and pulled the plans from memory – with the following results:

First, the base result from 12.1.0.2:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.03 |     604 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.03 |     604 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.03 |     604 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      68 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.02 |     536 |  1696K|  1696K| 1488K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|*  6 |     TABLE ACCESS FULL                     | TEST_FOLDER |      8 |   1419 |  11352 |00:00:00.01 |     536 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

Two points to note, in particular. First that the hash join has the recursive with pump as its first (build table) child and the table access full of test_folder as its second child (probe table); secondly that there is no value given for E-Rows for the recursive with pump.

Now the 12.2.0.1 plan:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |   2524K|      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |   2524K|   1419 |00:00:00.01 |      23 |  1743K|  1743K| 1632K (0)|
|   5 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  6 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
|   7 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

There are three changes to notice in this plan – which (for the OP) was much slower than the 12.1.0.2 plan. First, the order of the hash join has changed: the recursive with pump is now the second child (probe table) in the join (and again shows no value for E-Rows); secondly, Oracle has introduced an extra operation – the buffer sort (reuse), populated by the table access full – as the build table; thirdly (presumably the point of the buffer sort (reuse) operation) the number of buffer visits has dropped from a few hundred to a couple of dozen.

Finally let’s check what happens if we set the OFE (optimizer_features_enable) to 12.1.0.2 while running 12.2.0.1:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.01 |      23 |  1797K|  1797K| 1573K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|   6 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  7 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   7 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

In these conditions the recursive with pump has gone back to being the build table (first child); but it’s worth noting that the 12.2 buffer sort (reuse) is still in place – saving us a few hundred buffer gets (and, for a bigger table, possibly a number of disc reads). Downgrading the optimizer_features_enable has given us the plan we needed, but in the process we’ve got an example showing that hacking the parameter isn’t a guarantee that we will get exactly the path we used to get in the older version.

The story so far.

It seems that we can address the performance problem that the OP had by setting the optimizer_features_enable back to the older version – possibly through a hint embedded in the SQL, perhaps through an SQL Baseline or SQL Patch. Maybe we’ll have to have a logon trigger that sets the parameter for particular users or, worst case scenario, maybe we’ll have to set the parameter at the system level. Given how undesirable the last option could be it would be nice to know exactly what is causing the change in plan.
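For reference, the least intrusive forms of that first fix would be something like:

alter session set optimizer_features_enable = '12.1.0.2';

-- or, embedded in the problem statement:
select  /*+ optimizer_features_enable('12.1.0.2') */
        *
from    test_folder_vw
where   fldr_key = -41
;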

As a basic clue – if the order of tables in a hash join reverses itself this usually means that the byte (not row) estimates have changed. The optimizer will use the table with the lower byte count as the build table in a hash join. So the recursive with pump – whose row and byte estimates don’t appear – must have produced larger numbers in 12.2.0.1.

A change in the 12.2 plan that I haven’t yet mentioned is the E-rows for the hash join; it’s gone up from 640 (12.1.0.2) to 2.5 million! So let’s repeat the tests with the CBO (10053) trace file enabled and see if we can find a number like 2524K appearing as a join estimate in the trace file. Having created the two trace files (in 12.2.0.1, one with the OFE set backwards) I executed the following grep commands against them:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc

I’d set the tracefile_identifier to ‘ofe’ and ‘base’ respectively for the 12.1.0.2 and 12.2.0.1 tests, and here are the results:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
1166:Join Card - Rounded: 640 Computed: 639.941176

grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc
1195:Join Card - Rounded: 640 Computed: 639.941176
1391:Join Card - Rounded: 2544 Computed: 2543.865546
1576:Join Card - Rounded: 10112 Computed: 10111.865546
1737:Join Card - Rounded: 40193 Computed: 40193.075630
1898:Join Card - Rounded: 159759 Computed: 159758.731092
2059:Join Card - Rounded: 635008 Computed: 635008.462185
2220:Join Card - Rounded: 2524023 Computed: 2524023.394958
2269:Join Card - Rounded: 2524023 Computed: 2524023.394958

That’s an interesting clue. Something seems to be calculating a larger and larger value in the 12.2.0.1 trace, starting with the hash join cardinality that appeared in 12.1.0.2, growing by a factor of nearly 4 each time, and ending with the hash join cardinality we saw in the 12.2.0.1 plan.

Taking a closer look at the content of the 12.2.0.1 trace file it turned out that every stage in that escalation was Oracle recalculating the cost and cardinality of joining test_folder (the table) and rec_path (the “with” subquery) using the figures from the previous join calculation as the new base figures for rec_path. In effect the optimizer was calculating the cost of following the recursive subquery down to its 7th level of recursion.

Side note: in agreement with my comment about the smaller (in bytes) rowsource being used as the build table, the initial join order started as (test_folder, rec_path) in the first two iterations, but switched to (rec_path, test_folder) from the 3rd iteration onwards.

So we’ve identified the mechanics that cause the change in plan; the question now is: why 7 iterations to the final cost? (Briefly I did a quick check to see how many circles of hell there were in Dante’s Inferno – but it’s 9, or 10 depending how you count.) A quick check of v$parameter (and the x$ tables for the hidden parameters) revealed the following:

Name                                     Value
------------------------------------ ---------
_recursive_with_branch_iterations            7
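For reference, one common form of the query for reading a hidden parameter (it needs access to the x$ structures, so effectively has to be run from the SYS account):

select
        i.ksppinm       name,
        v.ksppstvl      value
from
        x$ksppi         i,
        x$ksppcv        v
where
        v.indx = i.indx
and     i.ksppinm = '_recursive_with_branch_iterations'
;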

Setting this parameter to 1 in the session, or adding the hint /*+ opt_param(‘_recursive_with_branch_iterations’ 1) */ to the query, resulted in the 12.1.0.2 plan appearing in 12.2.0.1 – and this is a much less intrusive way of getting the plan we’re interested in than setting the entire OFE back to 12.1.0.2. One might even set the parameter in the spfile (after receiving approval from Oracle Corp., of course) given how precisely targeted it is – knowing, too, that it doesn’t switch off that nice little buffering trick.
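Spelling out the two less intrusive options:

alter session set "_recursive_with_branch_iterations" = 1;

-- or, statement by statement:
select  /*+ opt_param('_recursive_with_branch_iterations' 1) */
        *
from    test_folder_vw
where   fldr_key = -41
;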

Summary

From 12.2 onwards the optimizer does recursive recosting of recursive “with” subqueries. This means the cost and cardinality estimates of a plan may change and the impact may cause a significant change in performance – it certainly did for the OP.

The change seems to be driven by the hidden parameter _recursive_with_branch_iterations, which was introduced in 12.2.0.0 with a default value of 7. Setting this parameter to 1 reverts the optimizer to the pre-12.2 behaviour. If you run into a problem of recursive “with” subqueries changing plans and causing performance problems on an upgrade from pre-12.2 to a newer version of Oracle then it would be worth investigating this parameter as the least intrusive way of reverting back to the original plans.

Update (July 2020)

I’ve just been prompted to check MOS for any references to the hidden parameter – and discovered a note that was published in September 2018 and updated in Feb 2019. It’s amazing how easy it can be to find an answer on MOS when you already know what the answer is ;) Oracle 12.2.0.1 CBO calculating high cost/CPU for queries with recursive sub-query (Doc ID 2443466.1)

This gives two workarounds to the problem of a change in cost in 12.2 – set the optimizer_features_enable to 12.1.0.2, or set the hidden parameter to 1. It references two bugs (one a duplicate of the other, both apparently unpublished):

  • Bug 23515289 : PERFORMANCE REGRESSION OBSERVED WITH RECURSIVE WITH SERIAL PLAN
  • Bug 24566985 : UPG: QUERY PERFORMANCE ON ALL_TSTZ_TABLES 160 TIMES SLOWER THAN 11.2.0.4

and the Permanent Fix for the problem is to install the patch for Bug 24566985 on 12.2.0.1.


July 9, 2020

Execution Plans

Filed under: Execution plans,extended stats,Histograms,Oracle,Performance,Problem Solving,Statistics,Troubleshooting — Jonathan Lewis @ 4:54 pm BST Jul 9,2020

This is an example from the Oracle Developer Community of using the output of SQL Monitor to detect a problem with object statistics that resulted in an extremely poor choice of execution plan.

A short time after posting the original statement of the problem the OP identified where he thought the problem was and the general principle of why he thought he had a problem – so I didn’t have to read the entire execution plan to work out a strategy that would be (at least) a step in the right direction of solving the performance problem.

This note, then, is just a summary of the five minutes I spent confirming the OP’s hypothesis and explaining how to work around the problem he had identified. It does, however, give a little lead-in to the comments I made to the OP in order to give a more rounded picture of what his execution plan was telling us.

So here’s the top half of the plan (which is the first subquery of a “concatenation”) with the first few predicates:

===============================================================================================================================================================================================================================  
| Id |                 Operation                  |            Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity |             Activity Detail             |  
|    |                                            |                            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |               (# samples)               |  
===============================================================================================================================================================================================================================  
|  0 | SELECT STATEMENT                           |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  1 |   CONCATENATION                            |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |          |                                         |  
|  2 |    FILTER                                  |                            |         |      |     12191 |     +4 |     1 |     864K |      |       |         |       |     0.03 | Cpu (4)                                 |  
|  3 |     FILTER                                 |                            |         |      |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  4 |      NESTED LOOPS                          |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.02 | Cpu (3)                                 |  
|  5 |       NESTED LOOPS                         |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.07 | Cpu (8)                                 |  
|  6 |        NESTED LOOPS                        |                            |     241 | 251K |     12232 |     +4 |     1 |      26M |      |       |         |       |     0.05 | Cpu (6)                                 |  
|  7 |         NESTED LOOPS                       |                            |    5407 | 233K |     12242 |     +4 |     1 |      86M |      |       |         |       |          |                                         |  
|  8 |          MERGE JOIN CARTESIAN              |                            |       1 |   35 |     12242 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
|  9 |           TABLE ACCESS BY INDEX ROWID      | REF1                       |       1 |    3 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 10 |            INDEX RANGE SCAN                | REF1_PK                    |       1 |    2 |     12242 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 11 |           BUFFER SORT                      |                            |      84 |   32 |     12242 |     +4 |     1 |     1000 |      |       |         |  104K |          |                                         |  
| 12 |            TABLE ACCESS BY INDEX ROWID     | STAGE                      |      84 |   32 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 13 |             INDEX RANGE SCAN               | STAGE_IDX1                 |      84 |    4 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 14 |          PARTITION RANGE ITERATOR          |                            |    8292 | 232K |     12232 |     +4 |  1000 |      86M |      |       |         |       |          |                                         |  
| 15 |           TABLE ACCESS STORAGE FULL        | TAB1                       |    8292 | 232K |     12245 |     +1 |  1000 |      86M | 103M | 521GB |   1.96% |    7M |    51.81 | gc buffer busy acquire (1)              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: cache buffers chains (1)         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (1196)                              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (2) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | reliable message (5)                    |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (2827)  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell smart table scan (1977)            |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | read by other session (304)             |  
| 16 |         PARTITION RANGE ITERATOR           |                            |       1 |   12 |     12191 |     +4 |   86M |      26M |      |       |         |       |     0.42 | Cpu (51)                                |  
| 17 |          TABLE ACCESS BY LOCAL INDEX ROWID | TAB2                       |       1 |   12 |     12191 |     +4 |   86M |      26M |   4M |  28GB |         |       |    32.14 | gc cr grant 2-way (20)                  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc cr request (2)                       |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc remaster (6)                         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (319)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (4) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: gc element (2)                   |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (3563)  |  
| 18 |           INDEX RANGE SCAN                 | TAB2_IX1                   |     166 |    3 |     12210 |     +2 |   86M |      26M |   1M |  11GB |         |       |    15.17 | Cpu (292)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1557)  |  
| 19 |        INDEX UNIQUE SCAN                   | MTD_PK                     |       1 |    1 |     12242 |     +4 |   26M |      26M |  292 |   2MB |         |       |     0.17 | Cpu (20)                                |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1)     |  
| 20 |       TABLE ACCESS BY INDEX ROWID          | REF2                       |       1 |    2 |     12191 |     +4 |   26M |      26M |    7 | 57344 |         |       |     0.11 | Cpu (13)                                |  
| 21 |      TABLE ACCESS BY INDEX ROWID           | CONTROLTAB                 |       1 |    1 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 22 |       INDEX UNIQUE SCAN                    | CONTROLTAB_PK              |       1 |      |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 23 |     MINUS                                  |                            |         |      |       102 |     +4 |    25 |        3 |      |       |         |       |          |                                         |  
| 24 |      TABLE ACCESS BY INDEX ROWID           | CUST_ORG_PK                |       1 |    3 |       942 |     +4 |    25 |       10 |      |       |         |       |          |                                         |  
| 25 |       INDEX UNIQUE SCAN                    | MC_PK                      |       1 |    2 |       942 |     +4 |    25 |       25 |      |       |         |       |          |                                         |  
| 26 |      SORT UNIQUE NOSORT                    |                            |       1 |    4 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 27 |       TABLE ACCESS BY INDEX ROWID          | REF1                       |       1 |    3 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 28 |        INDEX RANGE SCAN                    | REF1_PK                    |       1 |    2 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  


Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter( EXISTS ( (SELECT /*+ INDEX_RS_ASC ("CUST_ORG_PK" "MC_PK") */ "CUST_ID" FROM "USER1"."CUST_ORG_PK"  "CUST_ORG_PK" 
               WHERE "CUST_ID"=:B1 AND "CUST_ORG_PK"."INDICATR"='Y') MINUS (SELECT /*+ INDEX_RS_ASC ("REF1" "REF1_PK") */ 
               TO_NUMBER("VAL") FROM "USER1"."REF1" "REF1" WHERE "PUSER"='ZZZ' AND "EDATE" .ge. TRUNC(SYSDATE@!) AND TO_NUMBER("VAL")=:B2  
               AND "SDATE" .le. TRUNC(SYSDATE@!))))  
   3 - filter( EXISTS (SELECT /*+ INDEX_RS_ASC ("CONTROLTAB" "CONTROLTAB_PK") */ 0 FROM  "USER2"."CONTROLTAB" "CONTROLTAB" WHERE
              "CONTROLTAB"."CNTRLID"=9999 AND  NVL("CONTROLTAB"."STATUS",'F')='S'))  
   9 - filter("REF1"."EDATE" .ge. TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE" .le. TRUNC(SYSDATE@!))  
  13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')  

Note: various inequality symbols changed to .le. / .ge. to avoid WordPress format issue.

The first thing to note is that the “Time (active)” shown at the top line is about 12,000 seconds – so it’s a long running query. Your next observation – before you look at the shape of the plan – might be to note that operations 15, 17 and 18 between them record thousands of seconds of time, mostly I/O time but including 1,200 seconds of CPU time. This might draw your eye to the part of the plan that tells you what you are doing at these operations and why you are doing it.

Looking at the detail – operation 15 is a full tablescan that is the child of a partition range iterator (operation 14), and that iterator is the second child of a nested loop join (operation 7). Why is the optimizer so out of balance that it thinks a table scan of (possibly) multiple partitions of a partitioned table is a good candidate for the second child of a nested loop?! The answer comes from the first child – the Merge Join Cartesian at operation 8 has been given a cardinality estimate of just one row. When the cardinality estimate is one for the first table in a join then it doesn’t matter whether Oracle uses a nested loop join or a hash join: whatever happens next is only supposed to happen once.

Unfortunately when we compare “Rows (Estim)” and “Rows (Actual)” for the operation we see that the Merge Join Cartesian produced 1,000 rows – so the partition tablescan was going to happen 1,000 times (which we can confirm from the “Execs” column of operation 14). As a first step, then, we need to ensure that the optimizer gets a better estimate of the number of rows produced by the Cartesian join. So let’s look at its child operations.

  • Operation 9 (Table Access of REF1) is predicted to return one row – with “Rows (actual)” = 1.
  • Operation 11 (Buffer Sort of data from STAGE) is predicted to return 84 rows – with “Rows (actual)” = 1,000

Since the number of rows produced by a Cartesian join should be the product of the number of rows of the two inputs this tells us that the optimizer’s estimate of the number of rows from REF1 has been rounded up to 1 from a very small fraction (less than 1/84). If we can correct this then we may get Oracle to change the awful nested loop to an acceptable hash join. Even if we can’t fix this mis-estimate we may be able to do something that improves the estimate for STAGE to something sufficiently large that it will trigger the switch to a hash join. So let’s look at the predicates for these two tables.

REF1 predicates

   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  

The index range scan is based on an access predicate (with no filter predicate), so it looks like there’s a nearly perfect three-column index on REF1, but the optimizer can’t use the number of distinct keys in the index to get a good estimate of cardinality because one of the predicates is range-based. So the arithmetic will look at the three predicates separately and multiply up their selectivities. (It’s possible, of course, that this might be the first three columns of a 4, or more, column index.)
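
Sketching the standard “independent predicates” arithmetic (ignoring rounding and any sanity checks the optimizer might apply):

estimated rows ~=  num_rows(ref1)
                 * 1 / num_distinct(puser)
                 * 1 / num_distinct(name)
                 * selectivity(sdate <= trunc(sysdate))

with a further adjustment for the “edate” filter predicate when the optimizer works out the table-level cardinality.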

It’s a reasonable guess that the number of distinct combinations of (puser, name) will be much smaller than num_distinct(puser) * num_distinct(name) – so one strategy that might help increase the table’s cardinality estimate is to create extended statistics on the column group (puser, name).

Another reasonable guess is that the number of distinct values for the two columns is (relatively) small, with some skew to the distribution (name = ‘CODE’ looks particularly susceptible to being a commonly occurring value) – so perhaps we need a histogram on one or both of the columns (which would then require a histogram to be created on the column group as well if we wanted the optimizer to use the column group). We’d also have to make sure that the queried values didn’t fall outside the known low/high values for the columns if we wanted the column group to be used.

STAGE Predicates

13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')

This is the access(-only) predicate for the index stage_idx1, and there are no filter predicates when we reach the table. So stage_idx1 might be a two-column index on the table that we are using completely, or it may be an index with more columns that we are using only incompletely. We can see that the cardinality estimate is out by a factor of 12 (84 predicted, 1,000 actual) so if this is the complete index (which would allow Oracle to use the distinct_keys value to estimate cardinality) there must be an uneven data distribution in the values; but if this is just the first two columns of a longer index then we might benefit from extended stats (viz: another column group) on this pair of columns.

Again, even if we create a column group, or take automatic advantage of the distinct_keys figure, the predicate STAT=’I’ (is that state, status?) looks a little dangerous – status columns tend to have a small number of distinct values with a significant skew to the distribution of values – so we may want to create a histogram on the STAT column, which would then require us to create a histogram on the column group if we also wanted the column group stats to have an effect.
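
A minimal sketch of the sort of calls involved – the table and column names are taken from the OP’s plan, ‘size 254’ asks for a histogram, and the bracketed column list creates the column group extension if it doesn’t already exist (in a real system you’d normally merge this into your standard method_opt so that you don’t lose stats on the other columns):

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'REF1',
                method_opt => 'for columns (puser, name) size 254'
        );

        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'STAGE',
                method_opt => 'for columns (name, stat) size 254'
        );
end;
/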

What Happened Next?

I made the suggestions about column groups and histograms to the OP – without the level of detail that I’ve given in the explanations above – and got the following response:

You are spot on.

There does exists frequency histogram on column NAME(having 14 distinct value) and STAT(having 7 distinct values) of table STAGE. Also there already exists a frequency histogram on column PUSER and height balanced histogram on column NAME of table REF1. But still they were not helping the existing query.

By creating a column group on both on ref1 (puser, name) and stage(name, stat) with histogram for both i.e. ‘FOR COLUMNS SIZE 254’. The merge Cartesian removed automatically from the path by optimizer and its doing a hash join now with TAB1 finishing the query in quick time.

Summary

When the cardinality (rows) estimate for an operation drops to one (which might mean much less than one and rounded up) then all hell can break loose and the choice of parent operation – and its cardinality estimate – might be insanely stupid, leading to a catastrophically poor execution plan.

Very low cardinality estimates are often the result of multiplying individual column selectivities to produce an estimated selectivity that is unrealistic (much too small) when compared with the actual number of distinct combinations that exist in the table. In such cases creating a column group, (possibly with a histogram) may be all you need to do to get a better cardinality estimate and a vastly improved execution plan.

 

June 17, 2020

Lower Cost Ignored

Filed under: CBO,Oracle — Jonathan Lewis @ 3:49 pm BST Jun 17,2020

This is an update on a post I wrote nearly 10 years ago describing how the optimizer could choose to ignore a lower cost indexed access path and use a higher cost index if the lower cost were based on guesswork. The original article used (select {constant} from dual) as a way of supplying an “unpeekable bind” in a predicate, but that mechanism stopped working in 11gR2, hence the update.

The update also goes into a little more detail about event 38036, which can be used to modify this behaviour by defining a “cut-off” percentage at which Oracle will switch back to using the lower cost path.

We start with the code to generate the data – including, in this case – a table that I can query to supply “hidden constants” to the optimizer:

rem
rem     Script:         index_range_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id 
        from dual 
        connect by 
                rownum <= 10000 -- > comment to avoid wordpress format issue
)
select
        rownum                  id,
        9 + mod(rownum-1,3)     scattered,
        trunc(rownum / 500)     clustered,
        lpad(mod(rownum,2),10)  ind_pad,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 40000 -- > comment to avoid wordpress format issue
;

create index t1_equi  on t1(scattered, ind_pad, clustered);
create index t1_range on t1(clustered, scattered);

create table driver(name varchar2(20), value number(4,0));
create index drv_i1 on driver(name);

insert into driver values('Forty',40);
insert into driver values('FortyOne',41);
commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'DRIVER',
                method_opt       => 'for all columns size 1'
        );
end;
/


The driver table models a pattern that I see fairly frequently of a table holding a set of “constants” that are subject to infrequent changes. (An alternative strategy is to have a PL/SQL package of constants and a function to return a value when supplied with a name.)
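
The PL/SQL alternative might look something like the following minimal sketch (the package and function names are invented for illustration):

create or replace package driver_constants as
        function value_of(p_name in varchar2) return number;
end driver_constants;
/

create or replace package body driver_constants as

        function value_of(p_name in varchar2) return number is
        begin
                -- hard-coded here; in real life the values might be read
                -- from the driver table once and cached in package variables
                case p_name
                        when 'Forty'    then return 40;
                        when 'FortyOne' then return 41;
                        else return null;
                end case;
        end value_of;

end driver_constants;
/

A predicate could then read: clustered between driver_constants.value_of(‘Forty’) and driver_constants.value_of(‘FortyOne’) – and, as far as the optimizer is concerned, the function result is just as much of a guess as the scalar subquery result.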

The key columns in the table are

  • scattered: 3 distinct values evenly but randomly scattered across the table – not a column you would index separately
  • clustered: 500 consecutive rows each for 80 distinct values, so very well clustered data

There are two indexes on the main data table

  • t1_range (clustered, scattered) — roughly (80 * 3 =) 240 distinct keys
  • t1_equi (scattered, ind_pad, clustered) — roughly (3 * 2 * 80) = 480 distinct keys

If I execute the query: “select * from t1 where clustered between 40 and 41 and scattered = 10” Oracle uses the t1_range index to return an estimated 496 rows at a cost of 36. The estimates aren’t too far out since in my case the query returned 334 rows in 26 buffer visits (when my SQL*Plus arraysize was 500).

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |   496 |  8432 |    36   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1       |   496 |  8432 |    36   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_RANGE |   496 |       |     5   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CLUSTERED">=40 AND "SCATTERED"=10 AND "CLUSTERED".le.41)
       filter("SCATTERED"=10)

But what happens if we decide to keep the start and end values for clustered in the driver table:


set autotrace traceonly explain

select 
        /*  index(t1 t1_range) */
        small_vc
from
        t1
where
        scattered = 10
and     clustered between (select value from driver where name = 'Forty')
                  and     (select value from driver where name = 'FortyOne')
;

set autotrace off



You’ll notice the “hint that isn’t a hint”. I’ve tested two versions of the query – one without a hint and one where I inserted the necessary “+” to turn the comment into a hint. Here are the execution plans (with “<=” edited to “.le.” in the Predicate Information):


=======
Default
=======
-------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |         |    33 |   561 |    67   (3)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1      |    33 |   561 |    63   (4)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                    | T1_EQUI |    60 |       |    56   (4)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER  |     1 |    11 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | DRV_I1  |     1 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER  |     1 |    11 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | DRV_I1  |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SCATTERED"=10 AND "CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER"
              WHERE "NAME"='Forty') AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE
              "NAME"='FortyOne'))
       filter("CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='Forty')
              AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='FortyOne'))
   4 - access("NAME"='Forty')
   6 - access("NAME"='FortyOne')

=============
Hint t1_range
=============
--------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |    33 |   561 |    10   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1       |    33 |   561 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                    | T1_RANGE |    60 |       |     2   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER   |     1 |    11 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | DRV_I1   |     1 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER   |     1 |    11 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | DRV_I1   |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='Forty')
              AND "SCATTERED"=10 AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE
              "NAME"='FortyOne'))
       filter("SCATTERED"=10)
   4 - access("NAME"='Forty')
   6 - access("NAME"='FortyOne')

The big difference occurs at operation 2. You should notice two things – first, the estimate of rows is wrong by a factor of 10, but that’s not the point I want to chase. Secondly, by default the optimizer has selected the plan using the (far) more expensive index. This is a deliberately coded choice, and if you have access to MOS then you should check Document ID 4112254.8: “CBO may not choose best index for single table access with unpeeked bind/function”.

Rerun the test (unhinted) after issuing:

alter session set events '38036 trace name context forever, level 10'

You will find that the optimizer automatically takes the path using the lower cost index. For event levels of 10 or higher Oracle will pick the lower cost index; for levels of 9 or less the optimizer will stick with the higher cost index.

The question, of course, is what’s special about the value 10. Given the clue in the MOS document, and the costs from the plans above, and allowing for some fiddling with the arithmetic we can note the following:

  • Cost of using the expensive index for the table access is 63 (from the 10053 trace it’s actually 62.525996)
  • Cost of using the cheaper index for the table access is 6 (again, from the 10053 trace, 6.037011)
  • Express 6.037011 / 62.525996 as a percentage and you get 9.655 which rounds to 10.

Probably not a coincidence! (Though it might take a few more tests to decide whether it’s round() or ceiling().)

So if the optimizer is picking the wrong index, and the arithmetic for the one it should be using is dictated by guessing, then calculate the ratio of the two costs for the table access, express it as a percentage, round up (adding a little margin if you want to), and set the event to that level.

I’m not really sure how much use you could, or should, make of this event. Possibly it’s one of those things you use temporarily to allow you to generate an SQL Baseline (the event setting doesn’t get captured as an “opt_param()” hint, unlike other fiddles of this type that you might use). Maybe there are a few special cases in batch jobs where the optimizer is a little inconsistent about which index to pick, the event is relevant, and it could be set through an “alter session” command. It’s nice to know it’s there – but it may be hard to use in practice.
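
If you did take the baseline route it might look something like this sketch – the sql_id and plan_hash_value are placeholders that you’d have to identify after running the query with the event set:

alter session set events '38036 trace name context forever, level 10';

-- run the problem query here so that the preferred plan is in the cursor cache

declare
        m_loaded pls_integer;
begin
        m_loaded := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&m_sql_id',
                plan_hash_value => &m_plan_hash_value
        );
end;
/

alter session set events '38036 trace name context off';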

Footnote

If you were wondering about the cardinality estimates of 60 rowids from the index and 33 rows from the table – they are the effect of the standard “guesses for unpeeked binds” that the optimizer uses. For a bounded (between) range on an index the selectivity is 0.0045, for a table it’s 0.0025; in this case we then have to include a factor of 1/3 in the arithmetic to allow for the “scattered=10” predicate:

40,000 * 0.0045 * 1/3 = 60

40,000 * 0.0025 * 1/3 = 33.33….

 

June 5, 2020

Analytic cost error

Filed under: CBO,Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 1:06 pm BST Jun 5,2020

Here’s a surprising costing error that was raised on the Oracle Developer Forum a few days ago. There’s a glitch in the cost attributed to sorting when an analytic over() clause – with its corresponding “window sort” operation – makes a “sort order by” operation redundant. Here’s a script to generate the data set I’ll use for a demonstration, with a template for a few queries I’ll be running against the data.


rem
rem     Script:         window_sort_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                    id,
        cast(lpad(rownum,30,'0') as varchar2(30)) vc30,
        cast(lpad(rownum,65,'0') as varchar2(65)) vc65,
        lpad('x',100,'x')                         padding
from
        generator
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

select
--      row_number() over (order by vc65) rn1,
--      row_number() over (order by vc30) rn2,
--      vc30,
--      vc65,
        id
from
        t1
-- order by
--      vc65
--      vc30
/


I’m (optionally) using the row_number() analytic function over the entire data set, and for each row_number() that I include in the select list Oracle will have to sort the data; I’ve also got an (optional) order by on the two columns that appear in the row_number() functions, and that may introduce some sorting as well. Here, for example, are a few of the queries I might run:


prompt  ===========================================
prompt  Select vc30, order by vc30
prompt  ===========================================

explain plan for
select 
        vc30,
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ==========================================
prompt  Select row_number over vc30 - no ordering
prompt  ==========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ===========================================
prompt  Select row_number over vc30 - order by vc65
prompt  ===========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

In the first query we select and sort vc30 so (approximately) we sort 10,000 rows x 30(-ish) bytes for 300K of sort space. In the second query we generate the row_number() based on sorting vc30 – the size of the output is much smaller (it’s only 10,000 numbers between 1 and 10,000) but to generate those numbers we do have to select and sort vc30, so the workload (predicted and actual) will probably be similar to that of the first query. In the final query we have to select and sort vc30 to generate the row_number() but we also have to select (without reporting) and sort vc65 in order to report the results in the right order – so we should expect the workload to be roughly 3 times the size (approximately 10,000 * (30 + 65) bytes). Here, from 12.2.0.1, are the execution plans (with a little cosmetic tidying):

===========================================
Select vc30, order by vc30
===========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  SORT ORDER BY     |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

==========================================
Select row_number over vc30 - no ordering
==========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

===========================================
Select row_number over vc30 - order by vc65
===========================================
-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  SORT ORDER BY      |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

As expected, the execution plans are roughly consistent with the estimates I gave for the volume of data – the costs of the query with “order by vc30” and the query with over(order by vc30) agree with each other, and the increased load of ordering by vc65 while selecting row_number() over (order by vc30) shows up clearly.

So let’s see what the plan looks like when we select row_number(over vc30) and then order by vc30. If the optimizer is smart it will recognise that it’s possible to adopt a strategy that allows it to take advantage of the sorting from the over() clause to avoid a separate sort order by:


explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|    42   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|    42   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|    42   (5)|
----------------------------------------------------------------

The plan shows us that Oracle has used the 10gR2 “order by elimination” feature to bypass the need for a “sort order by” operation because it knows the data will be arriving in the right order from the “Window Sort” operation. Unfortunately it also shows us that Oracle has lost the cost of doing the Window Sort!

Let’s try another experiment – let’s generate two different row_number() columns, with and without ordering:


prompt  =====================================
prompt  Select Both row_numbers - no ordering
prompt  =====================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ======================================
prompt  Select Both row_numbers order by vc30
prompt  ======================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ============================================
prompt  Select Both row_numbers order by vc65
prompt  ============================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

You’ll notice that I’ve included a request for the projection information in the plans for these examples so that you can see what columns are passed up from each operation to its parent. Again, though, we’ll start by focusing on just the costs:


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC65"[VARCHAR2,65], "VC30"[VARCHAR2,30],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC30")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC65")[22]
   2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22],
       "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The first query – without an “order by” clause – reports a cost of 463; add an order by clause and the cost drops to 253 (and the “order by” clause doesn’t appear as a sort order by operation in the plan). The cost differential between the ordered and “unordered” plans, by the way, is 210 (and from there down to the base tablescan is another 211) – and here’s another way to see that number (+/- 1) appearing:


explain plan for
select
        vc65,
        id
from
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   986K|       |   253   (3)|
|   1 |  SORT ORDER BY     |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
------------------------------------------------------------------------

The cost of the query with no “order by” clause is basically the cost of a table scan plus two sorts of (vc65, vc30, plus a few bits). When you add in an “order by” clause the optimizer discards the “order by” clause and then subtracts one of the sort costs as well.
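
Putting rough numbers to that claim (a sketch – the precise figures appear in the trace file extracts below):

        tablescan cost:                     42
        cost of one WiF sort (approx):     210.5    (206 of I/O cost plus CPU)

        no "order by":    42 + 210.5 + 210.5  ~=  463
        with "order by":  42 + 210.5          ~=  253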

CBO trace file

Every time I say something about 10053 (CBO) trace files I feel compelled to remind everyone that I rarely look at them, and then it’s usually because I think I know there’s a bug and where I’ll find it in the trace. That’s exactly the case here.

I’m expecting to see two differences in the trace files between the “no order” query and a query where I’ve added in an “order by” clause. One difference is that one trace file will have an “OBYE” (eliminate order by) comment that won’t be in the other trace; the other is that one trace file will have one more “cost for SORT” calculation than the other.

So here are the relevant bits – first from the query without the order by clause:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 463.384707  card: 10000.000000  bytes: 1010000.000000
***********************

And from one of the plans with an order by:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE performed.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 252.512458  card: 10000.000000  bytes: 1010000.000000
***********************

As you can see, the first (no order) trace file has two sort calculations under WiF sort, (Window Function?) while the second (order by) trace file reports “OBYE performed” and then loses one of its WiF sorts.

Note: If I had ordered by ID I would have seen two calculations of cost under the heading of WiF sort and a third calculation below that with the heading ORDER BY sort. Unfortunately when I ran the test to check this I also found that the OBYE report said: “OBYE performed” when it wasn’t relevant and there wasn’t an “order by” available for elimination.

Interestingly I tested to see if I could change the optimizer’s behaviour by adding the no_eliminate_oby(@sel$1) hint to the “order by” queries, but the GENERAL PLANS section didn’t change, even though the trace file reported: “OBYE: OBYE bypassed: hinted”, and the “Hint Report” from the Oracle 19c execution plan acknowledged the hint as legal and used.

Summary

If you have an “order by” clause in a query block that includes analytic functions and the optimizer decides that it can eliminate the “order by” and rely on the side effect of an analytic over() clause you may find that the cost of the query block is reduced by the cost of one of the Window Sort operations. (Technically this might lead to cases where the optimizer then made some poor choices in overall shape of the execution plan – though such cases might be very rare given that this costing error doesn’t affect the cardinality estimates.)

Lagniappe

In the last set of tests I added in the projection information as a simple example of a case where it can help you understand a little more of what the plan is supposed to achieve.  If you examine the last two query plans carefully (select both row_number() values and order by vc30 / vc65 respectively) Operation 2 of the first plan reports:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

while operation 2 of the second plan reports:

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

It’s not until you look at these two lines that the plans show any differences – operations 1 and 2 simply say “Window Sort” without giving any clue about which window sort is for which over() clause. The projection information, though, tells you which way around the over() clauses operated – in the first query the over(order by vc65) is applied to the result of the tablescan first, while in the second query it’s the over(order by vc30) that is applied first.

Lagniappe 2

There’s another little oddity you might spot when you look at the projection information and think about the WiF sort costs from the unordered query. The columns passed from operation 3 to operation 2 are:

 3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The columns passed from operation 2 to operation 1 are one of:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

Operation 2 sorts the data from operation 3, and operation 1 sorts the data from operation 2 – but the columns arriving from operation 2 have an extra row_number()[22] added to them. So when you look in the trace file at the two WiF sort calculations why do they both show:

Blocks to Sort: 150 Row size: 122 Total Rows: 10000

Shouldn’t one of them show a Row size that’s (at least) 22 longer than the other?

 

April 27, 2020

Execution Plans

Filed under: Execution plans,Oracle — Jonathan Lewis @ 11:57 am BST Apr 27,2020

In a recent blog note I made the point that there is a very simple rule (“first child first”) for reading execution plans if the query (as written or after transformation by the optimizer) consists of a single “query block”. However, if you have a plan that is reporting multiple query blocks you have to be careful that you identify the boundaries of the individual query blocks and manage to link them together correctly.

In this note I’m going to work through an example that appeared on the Oracle Developer Forum a couple of years ago where someone produced a query that may have fooled them into misreading the execution plan. It’s a very good example of a production plan that is sufficiently complex to be interesting and make a couple of useful points but still produces a plan that is short enough to pick apart in just a couple of pages of text.

The initial question was about an oddity (actually a bug) in the optimizer’s cardinality estimate for one of the plan operations, but my answer to the question produced a follow-up question which said:

“Okay so it’s an error, was wondering why for one of the subqueries it’s using a hash join (7,8) and the other a nested loop.”

Here’s the query that produced this question. Before jumping straight to the plan it’s a good idea to take a close look at the query, take note of any optimizer strategies you might see, any features of the query that might make a big difference to the optimizer strategies, and any details that might mean the plan doesn’t do what you might expect.

Since the Oracle version can have a big impact on what the optimizer can do, I’ll point out that this was running on 12.1.0.2


select  /*+ gather_plan_statistics */ 
        * 
from 
        dwh_prod.ip_spells a
where 
        not exists (
                select  1
                from
                        dwh_prod.ip_episodes e
                inner join 
                        dwh_prod.ip_diagnosis d 
                on
                        d.episode_id = e.episode_id
                where
                        a.spell_id = e.spell_id
                and     (
                           substr(d.diagnosis_code,1,1) = 'C'
                        or substr(d.diagnosis_code,1,3) between 'D37' and 'D48'
                        or substr(d.diagnosis_code,1,1)  = 'V'
                        or d.diagnosis_code = 'Z511'
                )
        )
and     exists (
                select  1
                from
                        dwh_prod.ip_episodes e
                left join 
                        dwh_prod.ip_diagnosis d 
                on
                        d.episode_id = e.episode_id
                where
                        a.spell_id = e.spell_id
                and     (
                            e.episode_order = '1'
                        and substr(d.diagnosis_code,1,1) <> 'O'
                        and d.diagnosis_sequence = 1
                        and e.main_specialty_code not in ('501','560','610')
                        )
                )
and     substr(a.admission_method_code,1,1) = '2'
and     a.admission_dttm >= to_date('01-jan-2011', 'dd-mon-yyyy')
and     administrative_category_code = '01'

Point to note:

  • It’s a fairly simple query – one table, with two correlated subqueries.
  • The correlating predicate for the first (not exists) subquery is “a.spell_id = e.spell_id”
  • The correlating predicate for the second (exists) subquery is the same “a.spell_id = e.spell_id”
  • Both subqueries are joins between two tables, and the two tables are the same in both cases.
  • The first subquery is an “inner join”, the second subquery is a “left join”
  • There’s an error in the code – the “d.diagnosis_sequence = 1” predicate in the second subquery!
    • the predicate “d.diagnosis_sequence = 1” will eliminate any ip_episode (e) rows that have been preserved by the outer join
    • so the optimizer will automatically treat the outer join as an inner join
  • We might see either subquery running as a filter subquery
  • We might see either subquery unnested into an inline view – with, or without, “pushed predicate”
  • We might then see complex view merging turn the “exists” subquery into a semi-join and/or the “not exists” into an anti-join

That last comment is something that makes it easy to jump to conclusions while reading the plan and head off in the wrong direction. So let’s take a careful look at the execution plan – which is an actual run-time plan reporting the rowsource execution statistics:


SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format=>'ALLSTATS LAST'));

Plan hash value: 2721384176
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |      1 |        |     50 |00:00:06.13 |     283K|  94413 |       |       |          |
|*  1 |  FILTER                               |                    |      1 |        |     50 |00:00:06.13 |     283K|  94413 |       |       |          |
|*  2 |   HASH JOIN ANTI                      |                    |      1 |    156 |     51 |00:00:05.70 |     282K|  94247 |   143M|  7865K|  143M (0)|
|   3 |    PARTITION RANGE ALL                |                    |      1 |  15592 |    425K|00:00:00.65 |   79553 |      0 |       |       |          |
|*  4 |     TABLE ACCESS FULL                 | IP_SPELLS          |     24 |  15592 |    425K|00:00:00.58 |   79553 |      0 |       |       |          |
|   5 |    VIEW                               | VW_SQ_1            |      1 |    530K|    464K|00:00:04.74 |     203K|  94247 |       |       |          |
|*  6 |     HASH JOIN                         |                    |      1 |    530K|    464K|00:00:04.65 |     203K|  94247 |    26M|  3954K|   34M (0)|
|*  7 |      TABLE ACCESS FULL                | IP_DIAGNOSIS       |      1 |    528K|    464K|00:00:03.12 |     109K|      0 |       |       |          |
|   8 |      TABLE ACCESS FULL                | IP_EPISODES        |      1 |   2491K|   2495K|00:00:00.44 |   94253 |  94247 |       |       |          |
|   9 |   NESTED LOOPS SEMI                   |                    |     51 |      1 |     50 |00:00:00.36 |     514 |    166 |       |       |          |
|* 10 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_EPISODES        |     51 |      1 |     51 |00:00:00.15 |     229 |     92 |       |       |          |
|* 11 |     INDEX RANGE SCAN                  | IDX02_IP_EPISODES  |     51 |      1 |     76 |00:00:00.13 |     153 |     51 |       |       |          |
|* 12 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_DIAGNOSIS       |     51 |   2688K|     50 |00:00:00.21 |     285 |     74 |       |       |          |
|* 13 |     INDEX RANGE SCAN                  | IDX01_IP_DIAGNOSIS |     51 |      5 |    174 |00:00:00.21 |     153 |     74 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NOT NULL)
   2 - access(A.SPELL_ID=ITEM_1)
   4 - filter((SUBSTR(A.ADMISSION_METHOD_CODE,1,1)='2' AND A.ADMISSION_DTTM>=TO_DATE(' 2011-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND ADMINISTRATIVE_CATEGORY_CODE='01'))
   6 - access(D.EPISODE_ID=E.EPISODE_ID)
   7 - filter((SUBSTR(D.DIAGNOSIS_CODE,1,1)='C' OR SUBSTR(D.DIAGNOSIS_CODE,1,1)='V' OR (SUBSTR(D.DIAGNOSIS_CODE,1,3)>='D37' AND
              SUBSTR(D.DIAGNOSIS_CODE,1,3)<='D48') OR D.DIAGNOSIS_CODE='Z511'))
  10 - filter((E.EPISODE_ORDER=1 AND E.MAIN_SPECIALTY_CODE<>'501' AND E.MAIN_SPECIALTY_CODE<>'560' AND
              E.MAIN_SPECIALTY_CODE<>'610'))
  11 - access(E.SPELL_ID=:B1)
  12 - filter((D.DIAGNOSIS_SEQUENCE=1 AND SUBSTR(D.DIAGNOSIS_CODE,1,1)<>'O'))
  13 - access(D.EPISODE_ID=E.EPISODE_ID)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan
   - 6 Sql Plan Directives used for this statement

Take note, by the way, that we’ve been told that the plan was “adaptive” and we’ve used 6 SQL Plan Directives, which would (probably) have been telling the optimizer to use dynamic sampling of the data in various ways to get to the best possible plan.
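
If you want to see what those directives were you can (given suitable privileges) query the directive views. Here’s a sketch of the sort of query I mean – it assumes the three tables are in the connected schema:

select
        d.directive_id, d.type, d.state, d.reason
from
        dba_sql_plan_directives  d,
        dba_sql_plan_dir_objects o
where
        o.directive_id = d.directive_id
and     o.owner        = user
and     o.object_name in ('IP_SPELLS','IP_EPISODES','IP_DIAGNOSIS')
;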

Given that the query starts as three separate query blocks and uses two of the tables twice it would have been nice to see the Query Block / Object Alias information for this query as that would have made it just a little easier to connect the plan to the query.
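
If you can re-execute the statement (or find it in the library cache) that information is easy to get with the alias format option of dbms_xplan – a sketch, with &m_sql_id standing in for the real SQL_ID:

select * from table(dbms_xplan.display_cursor('&m_sql_id', null, 'allstats last alias'));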

At first sight we can spot that we’ve got a hash join anti at operation 2 and a nested loop semi at operation 9 and, given my earlier comments about how the optimizer can treat subqueries, it would be easy to jump to the conclusion that the anti represented the “not exists” subquery and the semi was the “exists” subquery and work onwards from there. However operation 1 is a filter, and when we check the Predicate Information for operation 1 it holds the slightly cryptic content “filter (IS NOT NULL)” – this is an example of how filter subqueries “go missing” from the predicate information when you pull the execution plan from memory. Operation 1 is a multi-child filter with operation 2 and operation 9 as its child operations; for each row it receives from operation 2 it calls operation 9. If we could see the full predicate information for operation 1 we would see that it was the existence subquery. The nested loop semi isn’t the result of Oracle executing the existence subquery as a semi-join, it’s a semi-join that has appeared for some other reason – that we will pursue in a little while.
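
It’s easy to demonstrate the “missing subquery” effect: the full predicate survives in the plan_table but not in the in-memory plan. A minimal sketch, borrowing two of the tables above and blocking the unnest so that a filter predicate has to appear:

explain plan for
select  a.*
from    ip_spells a
where   exists (
                select  /*+ no_unnest */
                        null
                from    ip_episodes e
                where   e.spell_id = a.spell_id
        );

select * from table(dbms_xplan.display);
-- the Predicate Information shows the full text: filter( EXISTS (SELECT /*+ NO_UNNEST */ ... ))
-- run the query and pull the plan from memory with display_cursor() and the
-- same predicate loses the subquery text, e.g. filter( IS NOT NULL)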

Let’s examine operation 2 (and its descendants) in detail. It’s a hash join so its first child will be used as the build table and its second child will be used as the probe table. The first child is a full tablescan (operation 4) reading every partition (operation 3) of the range-partitioned IP_SPELLS table – the only table in the main body of the query. The second child is a view called VW_SQ_1 – an example of an internally named view that can appear when Oracle unnests, but doesn’t merge, a subquery – and it represents the body of the “not exists” subquery. Oracle has used a hash join with IP_DIAGNOSIS as the build table and IP_EPISODES as the probe table.
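
We never get to see the original query text, but working backwards from the Predicate Information the statement probably looked something like the following. This is a hedged reconstruction – the select list is invented and the exact shape of the subqueries is a guess:

select
        a.*                     -- select list unknown
from
        ip_spells a
where
        substr(a.admission_method_code,1,1) = '2'
and     a.admission_dttm >= to_date('2011-01-01','yyyy-mm-dd')
and     a.administrative_category_code = '01'
and     not exists (
                select  null
                from    ip_episodes e, ip_diagnosis d
                where   e.spell_id   = a.spell_id
                and     d.episode_id = e.episode_id
                and     (
                            substr(d.diagnosis_code,1,1) in ('C','V')
                        or  substr(d.diagnosis_code,1,3) between 'D37' and 'D48'
                        or  d.diagnosis_code = 'Z511'
                        )
        )
and     exists (
                select  null
                from    ip_episodes e, ip_diagnosis d
                where   e.spell_id            = a.spell_id
                and     e.episode_order       = 1
                and     e.main_specialty_code not in ('501','560','610')
                and     d.episode_id          = e.episode_id
                and     d.diagnosis_sequence  = 1
                and     substr(d.diagnosis_code,1,1) <> 'O'
        )
;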

To make things a little easier to comprehend I’ve created tables and indexes that let me emulate this query and plan, using the alias format option to report the query block names and fully qualified aliases that were finally used. Then I’ve edited the output to put the query block names and aliases beside the operations they refer to (rather than leaving them in a block under the body of the plan) and I’ve inserted line-breaks to help highlight the separate query blocks. This is the result:

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Query Block  / Alias                |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |                                     |
|*  1 |  FILTER                               |                    | SEL$2B969259                        |
|*  2 |   HASH JOIN ANTI                      |                    |                                     |
|   3 |    PARTITION RANGE ALL                |                    |                                     |
|*  4 |     TABLE ACCESS FULL                 | IP_SPELLS          | SEL$2B969259 / A@SEL$4              |

|   5 |    VIEW                               | VW_SQ_1            | SEL$8D33959D / VW_SQ_1@SEL$F49409E0 |
|*  6 |     HASH JOIN                         |                    | SEL$8D33959D                        |
|*  7 |      TABLE ACCESS FULL                | IP_DIAGNOSIS       | SEL$8D33959D / D@SEL$1              |
|   8 |      TABLE ACCESS FULL                | IP_EPISODES        | SEL$8D33959D / E@SEL$1              |

|   9 |   NESTED LOOPS SEMI                   |                    | SEL$2B0A33EA                        |
|* 10 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_EPISODES        | SEL$2B0A33EA / E@SEL$3              |
|* 11 |     INDEX RANGE SCAN                  | IDX02_IP_EPISODES  | SEL$2B0A33EA / E@SEL$3              |
|* 12 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_DIAGNOSIS       | SEL$2B0A33EA / D@SEL$3              |
|* 13 |     INDEX RANGE SCAN                  | IDX01_IP_DIAGNOSIS | SEL$2B0A33EA / D@SEL$3              |
----------------------------------------------------------------------------------------------------------

As I’ve said before – if an execution plan is too complex to read in one go you can pick virtually any line of the plan and examine that line and its descendants in isolation as a way of getting started. Now I want to point out that if you can see the final query blocks this easily then any operation that starts a query block is a useful starting point for reading part of the plan, because each query block has been optimized separately, and once you’ve understood how a block operates you can replace it (mentally) with a one-line “here’s a rowsource”. So we might separate this plan into pieces as follows:


----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Query Block  / Alias                |
----------------------------------------------------------------------------------------------------------
|   5 |    VIEW                               | VW_SQ_1            | SEL$8D33959D / VW_SQ_1@SEL$F49409E0 |
|*  6 |     HASH JOIN                         |                    | SEL$8D33959D                        |
|*  7 |      TABLE ACCESS FULL                | IP_DIAGNOSIS       | SEL$8D33959D / D@SEL$1              |
|   8 |      TABLE ACCESS FULL                | IP_EPISODES        | SEL$8D33959D / E@SEL$1              |
----------------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Query Block  / Alias                |
----------------------------------------------------------------------------------------------------------
|   9 |   NESTED LOOPS SEMI                   |                    | SEL$2B0A33EA                        |
|* 10 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_EPISODES        | SEL$2B0A33EA / E@SEL$3              |
|* 11 |     INDEX RANGE SCAN                  | IDX02_IP_EPISODES  | SEL$2B0A33EA / E@SEL$3              |
|* 12 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_DIAGNOSIS       | SEL$2B0A33EA / D@SEL$3              |
|* 13 |     INDEX RANGE SCAN                  | IDX01_IP_DIAGNOSIS | SEL$2B0A33EA / D@SEL$3              |
----------------------------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name               | Query Block  / Alias                |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |                                     |
|*  1 |  FILTER                               |                    | SEL$2B969259                        |
|*  2 |   HASH JOIN ANTI                      |                    |                                     |
|   3 |    PARTITION RANGE ALL                |                    |                                     |
|*  4 |     TABLE ACCESS FULL                 | IP_SPELLS          | SEL$2B969259 / A@SEL$4              |
|   5 |    Rowsource "not exists"             | VW_SQ_1            | SEL$8D33959D / VW_SQ_1@SEL$F49409E0 |
|   9 |   Rowsource "exists"                  |                    | SEL$2B0A33EA                        |
----------------------------------------------------------------------------------------------------------

  • The first sub-plan is the unnested “not exists” subquery – which is a very simple plan to analyze.
  • The second sub-plan is the “exists” subquery – which is also a very simple plan to analyze.
  • The third sub-plan (with the substitutes for the other two query blocks) says: do a hash (anti) join between IP_SPELLS and the “not exists” rowsource then, for each row produced, run the “exists” rowsource to see if it produces a match, and forward the row if it does.

In this particular case the shape of the two subsidiary query blocks, and the pattern that we use to stitch the pieces together, makes it look as if “first child first” applies across the whole plan. This, I think, is a key reason why people have had difficulty interpreting complex plans in the past. If every plan you’ve previously examined looks as if “first child first” always works you’re going to be baffled by a plan where the combination of query blocks seems to break that rule.

Putting the first pieces together, this is what happens as the FILTER operation calls its first child:

  • Operation 1 (FILTER) calls operation 2 (hash join anti) which calls operation 3 (partition range all) which calls operation 4 (table access full of IP_SPELLS)
  • Operation 4 is the first operation to generate a rowsource, which it passes up to operation 3.
  • Operation 3 immediately passes the rowsource on up to operation 2 (making operation 3 the second operation to generate a rowsource)
  • Operation 2 creates its build table then calls operation 5 (VIEW – its second rowsource) to supply the probe table.
  • Operation 5 calls operation 6 (hash join) which calls its first child, operation 7 (table scan full of IP_DIAGNOSIS)
  • Operation 7 is the third  operation to generate a rowsource, which it passes up to operation 6 to use as its build table.
  • Operation 6 creates its build table from the rowsource then calls operation 8 (table access full of IP_EPISODE)
  • Operation 8 is the fourth operation to generate a rowsource, which it passes up to operation 6 as its probe table
  • Operation 6 probes its build table as rows arrive from operation 8, and passes join results up to operation 5
  • Operation 6 is the fifth operation to generate a rowsource
  • Operation 5 immediately passes the rows up to its parent (operation 2), making it the sixth operation to generate a rowsource.
  • Operation 2 is an “anti” hash join, so as each row arrives from operation 5 it probes its build table and if it finds a match it marks the row in the build table as one to be discarded. When operation 2 has exhausted the input from operation 5 it can pass any unmarked rows up to its parent (operation 1), making it the seventh operation to generate a rowsource.

So we’re now at operation 1 with an incoming rowsource which is all the rows in IP_SPELLS where a certain type of match in the IP_DIAGNOSIS and IP_EPISODE tables does not exist. This is how the order looks so far if we strip the plan to a minimum and edit in an “order” column:

----------------------------------------------------------------------------
| Id  | Operation                             | Name               | Order |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |       |
|*  1 |  FILTER                               |                    |       |
|*  2 |   HASH JOIN ANTI                      |                    |     7 |
|   3 |    PARTITION RANGE ALL                |                    |     2 |
|*  4 |     TABLE ACCESS FULL                 | IP_SPELLS          |     1 |
|   5 |    VIEW                               | VW_SQ_1            |     6 |
|*  6 |     HASH JOIN                         |                    |     5 |
|*  7 |      TABLE ACCESS FULL                | IP_DIAGNOSIS       |     3 |
|   8 |      TABLE ACCESS FULL                | IP_EPISODES        |     4 |
----------------------------------------------------------------------------

So now we come to the part of the plan that matches the “exists” subquery. For each row that operation 1 (FILTER) receives from its first child it calls operation 9 to run the “exists” subquery, which is a correlated subquery with a join between IP_EPISODES and IP_DIAGNOSIS. So why does Oracle use a nested loop join in the subquery rather than a hash join and why, in particular, does that nested loop turn into a semi-join? What’s the difference between the “not exists” and the “exists”?

Imagine that for every row in IP_EPISODES there are 10 matching rows in IP_DIAGNOSIS. When we check for “not exists” correlated to IP_EPISODES we will have to join to every single matching IP_DIAGNOSIS row, because it might be the 10th row which matches some extra filter predicate on IP_DIAGNOSIS and so makes the test fail. So the join is going to be a “high volume” join (hence a hash join is likely to become more appropriate). On the other hand, when we test for “exists” we may get lucky and only need the first IP_DIAGNOSIS row for an IP_EPISODES to prove existence – so it makes more sense to adopt a nested loop that can stop on the first match – and that, in fact, is why the nested loop in this case is a nested loop semi: it’s a code path specially designed to stop early. (In fact it’s also capable of using the same caching mechanism as scalar subquery caching, so it can be even more efficient than just “stop on first match” – it can even stop before trying because “I’ve done that one before”.)

So for each row that operation 1 (FILTER) receives from its first child it calls its second child (operation 9) to see if it will return a  row.

  • Operation 9 (nested loop semi) calls operation 10 (table access by rowid) which calls operation 11 (index range scan)
  • Operation 11 will be the eighth operation to return a rowsource, containing rowids, to operation 10
  • Operation 10 will be the ninth operation to return a rowsource, containing row data, to operation 9
  • For each row it receives operation 9 will call its second child (operation 12 (table access by rowid)) which calls operation 13 (index range scan)
  • Operation 13 will be the tenth operation to return a rowsource, containing rowids, to operation 12
  • Operation 12 will be the eleventh operation to return a rowsource, containing row data, to operation 9
  • Operation 9 will stop on the first row it receives and pass it up to operation 1, making operation 9 the twelfth operation to return a rowsource.
  • Operation 1, if it receives a row from operation 9, will pass the current row to the client interface, making it the thirteenth operation to return a rowsource.

So the final ordering is:

----------------------------------------------------------------------------
| Id  | Operation                             | Name               | Order |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                    |       |
|*  1 |  FILTER                               |                    |    13 |
|*  2 |   HASH JOIN ANTI                      |                    |     7 |
|   3 |    PARTITION RANGE ALL                |                    |     2 |
|*  4 |     TABLE ACCESS FULL                 | IP_SPELLS          |     1 |
|   5 |    VIEW                               | VW_SQ_1            |     6 |
|*  6 |     HASH JOIN                         |                    |     5 |
|*  7 |      TABLE ACCESS FULL                | IP_DIAGNOSIS       |     3 |
|   8 |      TABLE ACCESS FULL                | IP_EPISODES        |     4 |
|   9 |   NESTED LOOPS SEMI                   |                    |    12 |
|* 10 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_EPISODES        |     9 |
|* 11 |     INDEX RANGE SCAN                  | IDX02_IP_EPISODES  |     8 |
|* 12 |    TABLE ACCESS BY INDEX ROWID BATCHED| IP_DIAGNOSIS       |    11 |
|* 13 |     INDEX RANGE SCAN                  | IDX01_IP_DIAGNOSIS |    10 |
----------------------------------------------------------------------------

Numbers

As a closing point it’s worth picking up a couple of numbers.

One number that raised the original question is the E-Rows of 2688K at operation 12 – it’s obviously wrong given that it’s supposed to be the number of rows you get from the table after getting (an estimated) 5 rowids from the index range scan at operation 13. Odd little glitches like this seem to appear from time to time as new internal features are tweaked and odd bits of code aren’t made totally consistent. It’s a pity that it’s there, but if an odd cardinality glitch hasn’t been echoed into the cost (which could result in a change in execution plan) it’s not really important. Since I created a model of this query to get at the query block names I’ve also run the model against 19.3, and the error is still present in that version.

You will note that the E-rows at operation 2 is 156 – when you compare it with the 15,592 for operation 3 you can see that it’s a classic 1% guess (in this case for the effects of a “not exists” subquery).

It’s also significant that the E-Rows for operation 3 is only 15,592 when the A-Rows is 425K: the error is fairly large, and if there’s a performance problem with this query this error might be the first point of investigation. Maybe we need a histogram on administrative_category_code, or need to do something to help Oracle with the (1% guess from) substr(admission_method_code,1,1) – but we may have some trouble because you can’t mix and match virtual columns and column groups in 12.1.0.2.
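
As a sketch of the sort of thing I mean (the name of the virtual column is invented, and the details would need testing on the real system):

alter table ip_spells add (
        admission_method_first_char
                generated always as (substr(admission_method_code,1,1)) virtual
);

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'ip_spells',
                method_opt => 'for columns size 254 administrative_category_code admission_method_first_char'
        );
end;
/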

 

January 13, 2020

Collections

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 2:31 pm GMT Jan 13,2020

This is a note I drafted in September 2015 and only rediscovered a couple of days ago while searching for something I was sure I’d written about collections and/or table functions. The intention of collections and table functions is that they should behave like tables when you use them in a query – but there are cases where a real table and something cast to a table() aren’t treated the same way by the optimizer – and this 4-year old note (which is still valid in 2020 for 19c) is one of those cases.

 

There was a question – with test case – on Oracle-L recently [ed: now more than 4 years ago] about the behaviour of a query that changed plans as you switched from using a global temporary table to a collection – why was Oracle doing something inefficient with the collection. The answer was: “Bad luck, it’s a limitation in the optimizer”.  (Sub-text: collections are a pain).

The test case was short and simple so I thought I’d post it – with an h/t to Patrick Jolliffe who presented the problem, and Timur Akhmadeev and Stefan Koehler who explained the problems.

Here’s the script (with a little cosmetic editing) to create the necessary objects and data:

rem
rem     Script:         collections.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2015
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create or replace type number_table is table of number;
/

create table test_objects as select * from all_objects;
create /* unique */ index test_objects_idx on test_objects(object_id);

exec dbms_stats.gather_table_stats(null, 'test_objects');

create global temporary table gtt_test_objects (object_id number);
insert into gtt_test_objects values (1);


In this example I’ve created a type which is a simple table of number. In a more general case you might create a simple object type, and then a type that was a table of that object type, then you might create a function that returned a variable of that table type, or a function that was declared to return the table type “pipelined” and uses the “pipe row” instruction in the code to return one value of the simple object type at a time. Whichever variation you used you could then use the table() operator to tell Oracle to treat the content of the table type as if it were a relational table. (In recent versions of Oracle the table() operator is redundant).
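
For anyone who hasn’t seen the “more general case” before, here’s a minimal sketch of an object type, a table of that type, and a pipelined function – the names are invented and it’s not the code used in this note:

create or replace type id_row_type as object (id number);
/

create or replace type id_table_type as table of id_row_type;
/

create or replace function pipe_ids(i_count in number)
return id_table_type pipelined
as
begin
        for i in 1..i_count loop
                pipe row (id_row_type(i));
        end loop;
        return;
end;
/

select id from table(pipe_ids(3));      -- the table() operator is optional in recent versions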

Here’s the first query, which uses the global temporary table in an “IN” subquery, followed by its execution plan – again with a little cosmetic editing and the addition of query block names across the board:


prompt  ==================================
prompt  Query using global temporary table
prompt  ==================================

select  
        /*+ qb_name(main) */ 
        null 
from    (
        select
                /*+ qb_name(inline) */
                distinct object_id 
        from    test_objects
        ) 
where   object_id in (
                select 
                        /*+
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        gtt_test_objects.object_id
                from
                        gtt_test_objects 
        )
;


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                  |      1 |        |      0 |00:00:00.01 |       5 |       |       |          |
|   1 |  VIEW                 | VM_NWVW_1        |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   2 |   SORT UNIQUE NOSORT  |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   3 |    NESTED LOOPS       |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   4 |     SORT UNIQUE       |                  |      1 |      1 |      1 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      TABLE ACCESS FULL| GTT_TEST_OBJECTS |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|*  6 |     INDEX RANGE SCAN  | TEST_OBJECTS_IDX |      1 |      1 |      0 |00:00:00.01 |       2 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"="GTT_TEST_OBJECTS"."OBJECT_ID")

As you can see I’ve set statistics_level to all, and used dbms_xplan.display_cursor() to pull the actual execution plan from memory. This plan tells us that the optimizer unnested the IN subquery to generate a unique set of values and used that unique set to drive a nested loop join into the test_objects table (with an index-only probe). Moreover, before this step, the optimizer used complex view merging and cost-based query transformation to postpone the “distinct” of the original query so that the join could happen before the distinct. The E-Rows at operation 5 also tells us that the optimizer “knew” that there was only one row in the GTT – it took note of my cardinality() hint.

Now we replace the gtt_test_objects table with the collection – casting it to a table() and giving Oracle the same cardinality() hint – as follows:


select 
        /*+ 
                qb_name(main)
--              no_use_hash_aggregation(@sel$1)
        */ 
        null
from    (
        select  
                /*+ inline */
                distinct object_id 
        from    test_objects
        )
where   object_id in (
                select 
                        /*+ 
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        column_value object_id
                from
                        table(number_table(1)) gtt_test_objects
        )
;

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.08 |     132 |       |       |          |
|   1 |  MERGE JOIN SEMI                        |                  |      1 |      1 |      0 |00:00:00.08 |     132 |       |       |          |
|   2 |   SORT JOIN                             |                  |      1 |  56762 |      1 |00:00:00.08 |     132 |  1470K|   606K| 1306K (0)|
|   3 |    VIEW                                 |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |       |       |          |
|   4 |     HASH UNIQUE                         |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  4122K|  2749K| 3418K (0)|
|   5 |      INDEX FAST FULL SCAN               | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
|*  6 |   SORT UNIQUE                           |                  |      1 |      1 |      0 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"=VALUE(KOKBF$))
       filter("OBJECT_ID"=VALUE(KOKBF$))

The second plan is completely different. The optimizer has unnested the subquery to produce a join, but instead of using the unique set of values that it generated from the collection to drive a nested loop it’s decided to do a merge semi-join, which has entailed an expensive fast full scan of the test_objects_idx index to acquire all the key values first.

I tried to make the optimizer use the collection to drive a nested loop, adding some carefully targeted hints to force the join order and dictate a nested loop join with pushed predicate: but the optimizer wouldn’t push the “obvious” join predicate and continued to do an index fast full scan and sort of the test_objects_idx. If you’re interested here are the hints and the resulting plan:

/*+
        qb_name(main)
        leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
        use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)
        push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
*/

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.03 |     132 |       |       |          |
|   1 |  NESTED LOOPS                           |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   2 |   SORT UNIQUE                           |                  |      1 |      1 |      1 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|*  4 |   VIEW                                  |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   5 |    SORT UNIQUE                          |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  2604K|   728K| 2314K (0)|
|   6 |     INDEX FAST FULL SCAN                | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_ID"=VALUE(KOKBF$))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 6 (U - Unused (1))
---------------------------------------------------------------------------
   0 -  SEL$102722C0
         -  qb_name(subq)

   1 -  SEL$8969F1C9
         -  leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
         -  qb_name(main)

   1 -  SEL$8969F1C9 / from$_subquery$_001@MAIN
         U -  push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
         -  use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)

   5 -  SEL$1
         -  inline

In the previous post we had a “NOT IN” subquery against a collection/pipelined table function that couldn’t even be unnested (even in 19c); in this example we have an IN subquery that does unnest but then can’t drive a nested loop efficiently because the optimizer won’t push the collection values into the distinct view, and won’t do complex view merging to avoid having to do that join predicate pushdown. Collections and table functions() just don’t play nicely with the optimizer!

In fact this plan also shows one of those “generic” approaches in the optimizer that allows a human operator to see a special case that could have been further optimized: if the optimizer had used a sort unique rather than a hash unique at operation 4 then the sort join at operation 2 would have been redundant – with an overall reduction in memory and CPU usage that I managed to get in a separate test by adding the hint /*+ no_use_hash_aggregation(@sel$1) */ to the query. (Since operation 6 is also a sort unique the merge join semi could, in principle, have become a merge join with no risk of producing duplicates – but the semi-join code path is probably a little more efficient, anyway, and a balance has to be struck between the risk of introducing complexity for a special case and the potential frequency and scale of the benefit it might produce.)

Conclusion

You can often see collections and table functions behaving very like tables when you use them in the from clause of queries – but there are some restrictions on the transformations that the optimizer can use when your query isn’t using “real” tables.

Footnote

There are many ways that you can play around with this starting model to investigate where the boundaries might be. For example, if I make the index on test_objects unique the plan changes to a simple nested loop driven by the unnested collection (there’s no longer a non-mergeable view in the way). If I eliminate the distinct from the original query the same thing happens (for the same reason). If I force the join order to start with the collection (using the leading() hint) but don’t hint a nested loop Oracle produces (at least in my case) a hash join with a Bloom filter that minimised the memory and CPU requirement.
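
(The unique index variation is just a matter of uncommenting the word in the original script – as a sketch:)

drop index test_objects_idx;
create unique index test_objects_idx on test_objects(object_id);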

I mentioned at the start that Timur Akhmadeev and Stefan Koehler supplied explanations for what was going on behind the scenes. Critically Stefan also referenced one of two posts from the Oracle blog on complex view merging and its restrictions: part 1, part 2.

The related problem that led me to re-discover and complete this note is at this URL (published a couple of days ago).

January 10, 2020

Collection limitation

Filed under: Execution plans,Hints,Oracle,Troubleshooting — Jonathan Lewis @ 5:43 pm GMT Jan 10,2020

The ODC SQL and PL/SQL forum came up with an example a couple of days ago that highlighted an annoying limitation in the optimizer’s handling of table functions. The requirement was for a piece of SQL that would generate “installments” information from a table of contract agreements and insert into another table any installments that were not yet recorded there.

The mechanism to turn a single row of contract data into a set of installments was an (optionally pipelined) table function that involved some business logic that (presumably) dealt with the timing and size of the installments. The final SQL to create the data that needed to be inserted was reported as follows (though it had clearly been somewhat modified):

select 
        y.*,
        trunc(sysdate,'mm'),
        user
from 
        table_a a
join 
        table(
                my_function(
                        a.loan_acct_nbr, 
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on 
        a.loan_acct_nbr = y.loan_acct_nbr
where 
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')       -- last month
and     a.loan_typ   = 'ABC'
and     not exists ( 
                select  1 
                from
                        final_load_table l
                where
                        l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd 
        )
;

In this query, table_a is the set of contracts, final_load_table is the set of installments, and my_function() is the pipelined function returning a table of installments derived from the start date, maturity date, and number of installments for a contract. The query needs a “not exists” subquery to eliminate any installments that are already known to the database. Once this query is operating efficiently it could be used either to drive a PL/SQL loop or (generally the better strategy) to do a simple “insert as select”.
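
For reference, the “insert as select” version would look something like the following sketch – it assumes the select list lines up with the column definitions of final_load_table:

insert into final_load_table
select
        y.*,
        trunc(sysdate,'mm'),
        user
from
        table_a a
join
        table(
                my_function(
                        a.loan_acct_nbr,
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on
        a.loan_acct_nbr = y.loan_acct_nbr
where
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')
and     a.loan_typ   = 'ABC'
and     not exists (
                select  1
                from    final_load_table l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;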

We were told that the function would return around 60 rows per contract; that the starting size of the final_load_table would be about 60M rows, and the size of the result set from the initial join would be about 60M or 70M rows (which suggests about 1M rows acquired from table_a).

The owner of this problem seemed to be saying that the query had returned no data after 4 or 5 hours – which suggests (a) the optimizer chose a bad plan and/or (b) the PL/SQL function is working very inefficiently and/or (c) the join had generated a vast amount of data but the effect of the subquery was to discard all of it.

Clearly we needed to see an execution plan (preferably from SQL Monitor) and be given some idea of how much of the 60M/70M rows predicted for the join would be discarded because it already existed.

The owner did send us an execution plan – which included a very obvious threat and suggested a couple of optimizer errors – but supplied it as a picture rather than as flat text.

You’ll notice, of course, that the tables and columns have changed from the original statement. More significantly, though, there are serious problems with the numbers – the estimated row count for the basic join is only 5,326 rather than 50 million which, at first sight, is probably why the optimizer has decided that a filter subquery (operation 1) doing an index-only probe (operation 5) is a good way of handling the subquery. Perhaps if the estimates had been a little more accurate (e.g. through the addition of a couple of column groups or, if necessary, by opt_estimate() or cardinality() hints) the subquery would have been unnested and turned into a hash anti-join.

I suggested a quick test of a suitable cardinality() hint – but ran up a little model to check that I’d got the hint right – and found that I had but it wasn’t going to help. So I decided to write up the model (and a possible solution for the owner of the problem) in this blog note.

Here’s the code to create the set of objects I’m going to work with. The naming follows the naming in the original statement of the problem suggested by the owner:


rem
rem     Script:         table_function_plan.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table table_a (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        start_dt        date,
        maturity_dt     date    ,
        number_of_terms number,
        constraint ta_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

execute dbms_random.seed(0)

insert /*+ append */
into    table_a
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        sysdate-(365-mod(rownum,365)),
        sysdate+(1500+mod(rownum,365)),
        60
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table final_load_table_l(
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200),
        constraint lt_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

insert /*+ append */ into final_load_table_l
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        lpad(rownum,10),
        lpad('x',200,'x')
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'table_a',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'final_load_table_l',
                method_opt  => 'for all columns size 1'
        );
end;
/

create type my_row_type as object (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200)
);
/

create type my_table_type as table of my_row_type;
/

create  or replace function my_function (
        i_loan_acct_nbr         in      number,
        i_yr_mnth_cd            in      varchar2,
        i_start_dt              in      date,
        i_maturity_dt           in      date,
        i_number_of_terms       in      number
)       return  my_table_type pipelined
as
begin
        for i in 1..i_number_of_terms loop
                pipe row (
                        my_row_type(
                                i_loan_acct_nbr,
                                to_char(i_start_dt+32*i,'yyyymm'),
                                i,
                                lpad('x',200,'x')
                        )
                );
        end loop;
        return;
end;
/

I was planning to create some large tables – hence the option to generate 16M rows from my generator CTEs – but I discovered the critical problem almost as soon as I had some data and code in place, so I didn’t need to go large.

I’ve had to create an object type and table type in order to create a pipelined function that returns the table type by piping rows of the object type. The data I’ve created, and the way the function generates data, probably don’t bear much resemblance to the real system of course, but I don’t think the model needs to be very close to make the crucial point.

Here’s the text of the select statement the OP wants to run, with the execution plan I got from my data set after running the query and pulling the plan from memory:

alter session set statistics_level = all;

select 
        /*+ find this 1 */
        y.* 
from 
        table_a a, 
        table(my_function(
                a.loan_acct_nbr,
                a.yr_mnth_cd,
                a.start_dt,
                a.maturity_dt,
                a.number_of_terms
        )) y
where 
        a.yr_mnth_cd = '202001'
and     not exists (
                select  
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |  5059K(100)|  14580 |00:00:00.15 |   16330 |
|*  1 |  FILTER                             |             |      1 |        |            |  14580 |00:00:00.15 |   16330 |
|   2 |   NESTED LOOPS                      |             |      1 |   6283K| 10421   (8)|  14580 |00:00:00.11 |     335 |
|*  3 |    TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     297 |
|   4 |    COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    14   (8)|  14580 |00:00:00.10 |      38 |
|*  5 |   INDEX UNIQUE SCAN                 | LT_PK       |  14580 |      1 |     1   (0)|      0 |00:00:00.02 |   15995 |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NULL)
   3 - filter("A"."YR_MNTH_CD"='202001')
   5 - access("L"."LOAN_ACCT_NBR"=:B1 AND "L"."YR_MNTH_CD"=:B2)


I’ve put in a hint to tell the optimizer to unnest the subquery – and it didn’t. Oracle does not ignore hints (unless they’re illegal, or out of context, or the optimizer never got to them, or you’ve found a bug) so why did Oracle appear to ignore this hint? There’s a really nice feature in 19.3 execution plans – you can request a hint report for a plan, and here’s the hint report for this query (ignore the bits about “find this” being an error):


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1), E - Syntax error (2))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  find
         E -  this

   5 -  SEL$3
         U -  unnest / Invalid correlated predicates
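
(In case it’s useful: the report comes from a dbms_xplan format option available in 19c – a sketch, assuming the query was the most recent statement in the session:)

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last hint_report'));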


I put in an /*+ unnest */ hint to unnest the subquery, and I’ve been told that the predicates are not valid. The only thing about them that could be invalid is that they come from a pipelined function that has returned an object type. The pipelined function does not behave exactly like a table. But this gives me a clue about forcing the unnest to happen – hide the pipelined function inside a non-mergeable view.


select
        /*+ find this 2 */
        v.*
from    (
        select  /*+ no_merge */
                y.* 
        from 
                table_a a, 
                table(my_function(
                        a.loan_acct_nbr,
                        a.yr_mnth_cd,
                        a.start_dt,
                        a.maturity_dt,
                        a.number_of_terms
                )) y
        where 
                a.yr_mnth_cd = '202001'
        )       v
where   not exists (
                select
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = v.loan_acct_nbr
                and     l.yr_mnth_cd    = v.yr_mnth_cd
        )
/


------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        | 10628 (100)|  14580 |00:00:00.12 |     387 |       |       |          |
|*  1 |  HASH JOIN RIGHT ANTI                |             |      1 |   6283K| 10628  (10)|  14580 |00:00:00.12 |     387 |  1878K|  1878K| 2156K (0)|
|   2 |   INDEX FAST FULL SCAN               | LT_PK       |      1 |  10000 |     6  (17)|  10000 |00:00:00.01 |      91 |       |       |          |
|   3 |   VIEW                               |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.11 |     296 |       |       |          |
|   4 |    NESTED LOOPS                      |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.10 |     296 |       |       |          |
|*  5 |     TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     296 |       |       |          |
|   6 |     COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    13   (0)|  14580 |00:00:00.10 |       0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."LOAN_ACCT_NBR"="V"."LOAN_ACCT_NBR" AND "L"."YR_MNTH_CD"="V"."YR_MNTH_CD")
   5 - filter("A"."YR_MNTH_CD"='202001')

I’ve wrapped the basic join between table_a and the table function in an inline view called (unimaginatively) v, and I’ve added a /*+ no_merge */ hint to that inline view. So the main query becomes a select from a non-mergeable view with a “not exists” subquery applied to a couple of view columns. And Oracle thinks that that’s okay – and my unnest hint encouraged the optimizer to use a hash anti-join.

So here’s an option for the owner of the problem – but with a few outstanding questions: will a rewrite of their query in this form do the same, will the performance of the select be good enough, and will the subsequent “insert as select” keep the same driving plan?

There’s one significant detail to worry about: the build table in this hash (anti-)join is going to be constructed from 50M (load_acct_bfr, yr_mnth_cd) pairs – which means the workarea requirement is likely to be about 1.2GB for an optimal (i.e. in-memory) hash join; otherwise the join may spill to disc and do a lot of I/O – probably as a one-pass hash join.
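
If the query does get run you can check what actually happened to the workarea – a sketch, with &m_sql_id standing in for the real SQL_ID:

select
        operation_type, operation_id,
        last_memory_used, last_execution,
        optimal_executions, onepass_executions, multipasses_executions
from
        v$sql_workarea
where
        sql_id = '&m_sql_id'
;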

(Did you notice, by the way, that the word storage appeared at operation 3 in the original plan? That suggests a nice big Exadata box; however, there’s no storage predicate in the Predicate Information section for that operation, and you would have thought that lease_type = ‘US_SSP’ would be pushed to storage, so maybe this is a ZFS Pillar backing a less exotic server.)

Conclusion

Some (if not all) types of correlated subqueries behave badly if the correlation predicates involve table functions. But you may be able to work around the issue by hiding part of the query, including the table function, inside a non-mergeable inline view before applying the subquery to the view.

Footnote

When I realised that the table function was causing a problem unnesting I remembered that I had written about a similar problem a few years ago – after searching for a little while I discovered a draft note that I had started in September 2015 but had not quite finished; so I’ll be finishing it off and publishing it some time in the next few days. (Update: now published)

 

 

January 3, 2020

push_having_to_gby() – 2

Filed under: 18c,Execution plans,Hints,Oracle — Jonathan Lewis @ 11:31 am GMT Jan 3,2020

The problem with finding something new and fiddling with it and checking to see how you can best use it to advantage is that you sometimes manage to “break” it very quickly. In yesterday’s blog note I introduced the /*+ push_having_to_gby(@qbname) */ hint and explained why it was a useful little enhancement. I also showed a funny little glitch with a missing predicate in the execution plan.

Today I thought I’d do something a little more complex with the example I produced yesterday, and I’ve ended up with a little note that’s not actually about the hint, it’s about something that appeared in my initial testing of the hint, and then broke when I pushed it a little further. Here’s a script to create data for the new test:

rem
rem     Script:         push_having_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

insert into t1 values (2, lpad(2,10,'0'), lpad('x',50,'x'));
commit;

alter table t1 modify id not null;
create index t1_i1 on t1(id) nologging;

create table t2 as select * from t1;
create index t2_i1 on t2(id) nologging;

I’ve created two tables here, one a clone of the other, with one id value out of 1 million having two rows. As we saw yesterday it’s quite simple to write some SQL that uses an index full scan on the t1_i1 index to check for duplicate id values without doing a massive sort or hash aggregation:


set serveroutput off
alter session set statistics_level = all;

select
        /*+
                qb_name(driver)
                index(@driver t1@driver)
        */
        id 
from
        t1
where   id is not null
group by 
        id 
having  
        count(1) > 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |      1 |        |      1 |00:00:00.87 |    2229 |   2228 |
|   1 |  SORT GROUP BY NOSORT|       |      1 |  50000 |      1 |00:00:00.87 |    2229 |   2228 |
|   2 |   INDEX FULL SCAN    | T1_I1 |      1 |   1000K|   1000K|00:00:00.40 |    2229 |   2228 |
-------------------------------------------------------------------------------------------------

As we saw yesterday this plan simply walks the index in order keeping track of a “running count” and doesn’t allocate a large PGA to sort a million rows of data, but there’s no asterisk by any operation telling us that there’s a predicate being checked, and no Predicate Information section to report the “count(1) > 1” predicate that we know exists (and is used, since the query produces the right answer).

Having ascertained that there is one duplicated id in the table, let’s join to the (clone) t2 table to list the rows for that id – and lets use the initial query as an inline view:

select
        /*+ 
                qb_name(main)
        */
        t2.v1
from    (
        select
                /*+
                        qb_name(driver)
                        index(@driver t1@driver)
                        no_use_hash_aggregation(@driver)
                */
                id 
        from
                t1
        where   id is not null
        group by 
                id 
        having  
                count(1) > 1
        )                       v1,
        t2
where
        t2.id = v1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |      2 |00:00:00.76 |    2234 |     87 |       |       |          |
|   1 |  NESTED LOOPS                |       |      1 |  50000 |      2 |00:00:00.76 |    2234 |     87 |       |       |          |
|   2 |   NESTED LOOPS               |       |      1 |        |      2 |00:00:00.75 |    2232 |     28 |       |       |          |
|   3 |    VIEW                      |       |      1 |  50000 |      1 |00:00:00.75 |    2228 |      0 |       |       |          |
|*  4 |     SORT GROUP BY            |       |      1 |  50000 |      1 |00:00:00.75 |    2228 |      0 |    53M|  2539K|   47M (0)|
|   5 |      INDEX FULL SCAN         | T1_I1 |      1 |   1000K|   1000K|00:00:00.26 |    2228 |      0 |       |       |          |
|*  6 |    INDEX RANGE SCAN          | T2_I1 |      1 |        |      2 |00:00:00.01 |       4 |     28 |       |       |          |
|   7 |   TABLE ACCESS BY INDEX ROWID| T2    |      2 |      1 |      2 |00:00:00.01 |       2 |     59 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(COUNT(*)>1)
   6 - access("T2"."ID"="V1"."ID")

As you can see from this plan, I didn’t get the “sort group by nosort” that I wanted – even though the inline view was not merged. In fact, you’ll notice the /*+ no_use_hash_aggregation() */ hint I had to include to get a sort group by rather than a hash group by. The logic behind resolving this query block changed significantly when it went into a more complex query.

Having tried adding several other hints (blocking nlj_prefetch, nlj_batching, batched index access, setting cardinality to 1, first_rows(1) optimisation) I finally came down to using a materialized CTE (common table expression / “with” subquery):

with v1 as (
        select
                /*+
                        qb_name(driver)
                        index(@driver t1@driver)
                        materialize
                */
                id 
        from
                t1
        where
                id is not null
        group by 
                id 
        having  
                count(1) > 1
)
select
        /*+ 
                qb_name(main)
        */
        t2.v1
from    
        v1,
        t2
where
        t2.id = v1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |      2 |00:00:00.86 |    2236 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |      2 |00:00:00.86 |    2236 |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D66F8_E3B235A |      1 |        |      0 |00:00:00.86 |    2229 |
|   3 |    SORT GROUP BY NOSORT                  |                            |      1 |  50000 |      1 |00:00:00.86 |    2228 |
|   4 |     INDEX FULL SCAN                      | T1_I1                      |      1 |   1000K|   1000K|00:00:00.39 |    2228 |
|   5 |   NESTED LOOPS                           |                            |      1 |  50000 |      2 |00:00:00.01 |       6 |
|   6 |    NESTED LOOPS                          |                            |      1 |        |      2 |00:00:00.01 |       4 |
|   7 |     VIEW                                 |                            |      1 |  50000 |      1 |00:00:00.01 |       0 |
|   8 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D66F8_E3B235A |      1 |  50000 |      1 |00:00:00.01 |       0 |
|*  9 |     INDEX RANGE SCAN                     | T2_I1                      |      1 |        |      2 |00:00:00.01 |       4 |
|  10 |    TABLE ACCESS BY INDEX ROWID           | T2                         |      2 |      1 |      2 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - access("T2"."ID"="V1"."ID")

You’ll notice that the hinting is back to the bare minimum – with only the addition of the /*+ materialize */ hint in the CTE. You’ll also notice that the “count(1) > 1” predicate is still missing. But critically we do have the index full scan leading into a sort group by nosort and no huge memory allocation.

The price we have to pay is that we do direct path writes to the temporary tablespace to materialize the CTE and db file scattered reads to read the data back. But since this example is aimed at a large data set returning a small result set this may be a highly appropriate trade off.
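
If you want to see that price in your own tests, here’s a quick sanity check – just a sketch, and it assumes you have the privileges to query the dynamic performance views – capturing the session’s direct temp I/O statistics before and after running the query:

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'physical writes direct temporary tablespace',
                'physical reads direct temporary tablespace'
        )
;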

It’s possible that a detailed examination of the 10053 trace file would give us a clue about why Oracle can find the sort group by nosort when the query block is a materialized CTE but not when it’s an inline view – but I’m happy to leave that investigation to someone else and just leave this here as a warning that sometimes (even in 19c) there’s a difference between a non-merged view path and a materialized subquery path.

 

July 1, 2019

opt_estimate 4

Filed under: CBO,Execution plans,Hints,Oracle,Statistics — Jonathan Lewis @ 1:18 pm BST Jul 1,2019

In the previous article in this series on the opt_estimate() hint I mentioned the “query_block” option for the hint. If you can identify a specific query block that becomes an “outline_leaf” in an execution plan (perhaps because you’ve deliberately given a query block name to an inline subquery and applied the no_merge() hint to it) then you can use the opt_estimate() hint to tell the optimizer how many rows will be produced by that query block (each time it starts). The syntax of the hint is very simple:


opt_estimate(@{query block name}  query_block  rows={number of rows})

As with other options for the hint, you can use scale_rows=, min=, max= as alternatives (the last seems to be used in the code generated by Oracle for materialized view refreshes) but the simple “rows=N” is likely to be the most popular. In effect it does the same as the “non-specific” version of the cardinality() hint – which I’ve suggested from time to time as a way of telling the optimizer the size of a data set in a materialized CTE (“with” subquery), e.g.


set serveroutput off

with demo as (
        select  /*+
                        qb_name(mat_cte)
                        materialize
                        cardinality(@mat_cte 11)
--                      opt_estimate(@mat_cte query_block rows=11)
                */
                distinct trunc(created)    date_list
        from    all_objects
)
select  * from demo
;

select * from table(dbms_xplan.display_cursor);
    

Regardless of whether you use the opt_estimate() or cardinality() hint above, the materialized temporary table will be reported with 11 rows. (Note that in this case, where the hint appears inside the query block it applies to, the “@mat_cte” isn’t necessary.)
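
Just as a sketch of the alternative placement – if you move the hint out to the main query block then the “@mat_cte” reference becomes necessary:

with demo as (
        select  /*+
                        qb_name(mat_cte)
                        materialize
                */
                distinct trunc(created)    date_list
        from    all_objects
)
select  /*+ opt_estimate(@mat_cte query_block rows=11) */
        *
from    demo
;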

In the previous article I generated some data with a script called opt_est_gby.sql to show you the effects of the group_by and having options of the opt_estimate() hint, and pointed out that there were cases where you might also want to include the query_block option as well. Here’s a final example query showing the effect, using the scale_rows feature, after creating a table t2 as a copy of t1 – but with pctfree 75 to make a tablescan more expensive – and creating an index on t2(id):


create table t2 pctfree 75 as select * from t1;
create index t2_i1 on t2(id);

select
        t2.n1, t1ct
from
        t2,
        (
        select  /*+
                        qb_name(main)
                        opt_estimate(@main group_by scale_rows=4)
                        opt_estimate(@main having scale_rows=0.4)
                        opt_estimate(@main query_block scale_rows=0.5)
                */
                mod(n1,10), count(*) t1ct
        from    t1
        group by
                mod(n1,10)
        having
                count(*) > 100
        ) v1
where
        t2.id = v1.t1ct
;

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     8 |   168 |    27   (8)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |     8 |   168 |    27   (8)| 00:00:01 |
|   2 |   NESTED LOOPS               |       |     8 |   168 |    27   (8)| 00:00:01 |
|   3 |    VIEW                      |       |     8 |   104 |    10  (10)| 00:00:01 |
|*  4 |     FILTER                   |       |       |       |            |          |
|   5 |      HASH GROUP BY           |       |     8 |    32 |    10  (10)| 00:00:01 |
|   6 |       TABLE ACCESS FULL      | T1    |  3000 | 12000 |     9   (0)| 00:00:01 |
|*  7 |    INDEX RANGE SCAN          | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(COUNT(*)>100)
   7 - access("T2"."ID"="V1"."T1CT")


I’ve inlined the last query (with the two opt_estimate() hints) that I used in the previous article, and added a third opt_estimate() hint to that inline view. In this case I didn’t have to add a no_merge() hint because the numbers worked in my favour, but to be safe in a production environment that’s a hint I should have included.

You may recall that the hash group by on its own resulted in a prediction of 200 rows, and with the having clause the prediction dropped to 10 rows (standard 5%). With my three opt_estimate() hints in place I should see the effects of the following arithmetic:


group by      200       * 4   = 800
having        5% of 800 * 0.4 =  16
query block   16        * 0.5 =   8

As you can see, the cardinality prediction for the VIEW operation is, indeed, 8 – so the combination of hints has worked. It’s just a shame that we can’t see the three individual steps in the arithmetic as we walk the plan.

A Warning

As always I can only repeat – hinting is not easy; and “not easy” usually translates to “not stable / not safe” (and, thanks to a Freudian slip while typing, “not sage”). You probably don’t know how to do it properly, except in the very simplest cases, and we don’t really know how Oracle is interpreting the hints (particularly the undocumented ones). Here’s an example of how puzzling even the opt_estimate(query_block) hint can be – as usual starting with some data:

rem
rem     Script:         opt_estimate_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2017
rem

create table t1
as
select * from all_objects;

create table t2
as
select * from all_objects;

As you can see, I’ve been a bit lazy with this example (which I wrote a couple of years ago) and it uses all_objects as a convenient source of data. Unfortunately this means you won’t necessarily be able to reproduce exactly the results I’m about to show you, which I did on a small instance of 12.2.0.1. I’m going to examine four versions of a simple query which

  • restricts the rows from t1,
  • finds the unique set of object_types in that subset of t1
  • then joins to t2 by object_type

select
        /*+ 
                qb_name(main)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                merge(@inline)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                opt_estimate(@inline query_block rows=14)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                merge(@inline)
                opt_estimate(@inline query_block rows=14)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;

The first version is my unhinted baseline (where, in my case, Oracle doesn’t use complex view merging), the second forces complex view merging of the inline aggregate view, then queries 3 and 4 repeat queries 1 and 2 but tell the optimizer that the number of distinct object_type values is 14 (roughly half the actual number in my case). But there is an oddity in the last query – I’ve told the optimizer how many rows it should estimate for the inline view, but I’ve also told it to get rid of the inline view and merge it into the outer query block; so what effect is that going to have? My hope would be that the hint would have to be ignored because it applies to a query block that no longer exists in the final plan, which makes it irrelevant and unusable. Here are the four execution plans:


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      | 61776 |  4464K|   338   (7)| 00:00:01 |
|*  1 |  HASH JOIN           |      | 61776 |  4464K|   338   (7)| 00:00:01 |
|   2 |   VIEW               |      |    27 |   351 |   173   (9)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |    27 |   486 |   173   (9)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T1   | 59458 |  1045K|   164   (4)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T2   | 61776 |  3680K|   163   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."OBJECT_TYPE"="V1"."OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


--------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           | 61776 |  5308K|       |  1492   (2)| 00:00:01 |
|   1 |  VIEW                  | VM_NWVW_1 | 61776 |  5308K|       |  1492   (2)| 00:00:01 |
|   2 |   HASH UNIQUE          |           | 61776 |  5489K|  6112K|  1492   (2)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|           | 61776 |  5489K|       |   330   (5)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T1        | 59458 |  1045K|       |   164   (4)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2        | 61776 |  4403K|       |   163   (4)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."OBJECT_TYPE"="OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      | 32032 |  2314K|   338   (7)| 00:00:01 |
|*  1 |  HASH JOIN           |      | 32032 |  2314K|   338   (7)| 00:00:01 |
|   2 |   VIEW               |      |    14 |   182 |   173   (9)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |    14 |   252 |   173   (9)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T1   | 59458 |  1045K|   164   (4)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T2   | 61776 |  3680K|   163   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."OBJECT_TYPE"="V1"."OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


--------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |    14 |  1232 |       |  1492   (2)| 00:00:01 |
|   1 |  VIEW                  | VM_NWVW_1 |    14 |  1232 |       |  1492   (2)| 00:00:01 |
|   2 |   HASH UNIQUE          |           |    14 |  1274 |  6112K|  1492   (2)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|           | 61776 |  5489K|       |   330   (5)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T1        | 59458 |  1045K|       |   164   (4)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2        | 61776 |  4403K|       |   163   (4)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."OBJECT_TYPE"="OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The first plan tells us that most of the rows in t1 have created > 1st March 2017 and there are (estimated) 27 distinct values for object_type; and there are 61,776 rows in t2 (which is basically the same as t1), and none of them are eliminated by the join on object_type from the inline view.

The second plan (with the forced complex view merging) shows Oracle changing the view with “distinct” into a (right) semi-join between t2 and t1 with the internal view name of VM_NWVW_1 – and the cardinality is correct.

The third plan shows that my hint telling the optimizer to assume the original inline view produces 14 rows has been accepted and, not surprisingly, when we claim that we have roughly half the number of object_type values the final estimate of rows in the join is roughly halved.

So what happens in the fourth plan when our hint applies to a view that no longer exists? I think the optimizer should have discarded the hint as irrelevant the moment it merged the view. Unfortunately it seems to have carried the hint up into the merged view and used it to produce a wildly inaccurate estimate for the final cardinality. If this had been a three-table join this is the sort of error that could make a sensible hash join into a third table become an unbelievably stupid nested loop join. If you had thought you were doing something incredibly clever with (just) the one opt_estimate() hint, the day might come when a small change in the statistics resulted in the optimizer using a view merge strategy you’d never seen before and producing a catastrophic execution plan in (say) an overnight batch that then ran “forever”.

Hinting is hard: you really have to be extremely thorough in your hints and make sure you cover all the options that might appear. And then you might still run into something that looks (as this does) like a bug.

Footnote

Here’s a closing thought: even if you manage to tell the optimizer exactly how many rows will come out of a query block to be joined to the next table in the query, you may still get a very bad plan unless you can also tell the optimizer how many distinct values of the join column(s) there are in that data set. Which means you may also have to learn all about the (even more undocumented) column_stats() hint.
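
I’m not going to cover column_stats() here, but as a pointer for anyone who wants to investigate: the shape below is the one Oracle itself generates inside SQL profiles (the owner, table and figures are invented for illustration – treat this strictly as an undocumented, unsupported sketch):

column_stats("SCOTT"."T1", "N1", scale, length=3 distinct=200 nulls=0 min=0 max=199)
table_stats("SCOTT"."T1", scale, blocks=100 rows=3000)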

 

June 28, 2019

opt_estimate 3

Filed under: CBO,Execution plans,Hints,Oracle,Statistics — Jonathan Lewis @ 1:12 pm BST Jun 28,2019

This is just a quick note to throw out a couple of the lesser-known options for the opt_estimate() hint – and they may be the variants that turn out to be most useful, since they address a problem where the optimizer can produce consistently bad cardinality estimates. The first is the “group by” option – a hint that I once would have called a “strategic” hint but which more properly ought to be called a “query block” hint. Here’s the simplest possible example (tested under 12.2, 18.3 and 19.2):


rem
rem     Script:         opt_est_gby.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2019
rem 

create table t1
as
select
        rownum                  id,
        mod(rownum,200)         n1,
        lpad(rownum,10,'0')     v1,
        rpad('x',100)           padding
from
        dual
connect by
        level <= 3000
;

set autotrace on explain

prompt  =============================
prompt  Baseline cardinality estimate
prompt  (correct cardinality is 10)
prompt  Estimate will be 200
prompt  =============================

select  /*+
                qb_name(main)
        */
        mod(n1,10), count(*) 
from    t1 
group by 
        mod(n1,10)
;

I’ve generated a table of 3,000 rows with a column n1 holding 200 distinct values (15 rows each). The query then aggregates on mod(n1,10) so it has to return 10 rows, but the optimizer doesn’t have a mechanism for inferring this and produces the following plan – the Rows value from the HASH GROUP BY at operation 1 is the only thing we’re really interested in here:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   200 |   800 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |   200 |   800 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------
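
Before looking at workarounds it’s worth checking where that 200 comes from – n1 holds exactly 200 distinct values, which you can confirm from the data dictionary (this assumes the stats are in place; from 12c onwards the CTAS gathers them automatically):

select
        column_name, num_distinct
from
        user_tab_cols
where
        table_name  = 'T1'
and     column_name = 'N1'
;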

It looks as if the optimizer’s default position is to use num_distinct from the underlying column as the estimate for the aggregate. We can work around this in the usual two ways with an opt_estimate() hint. First, let’s tell the optimizer that it’s going to over-estimate the cardinality by a factor of 10:


select  /*+
                qb_name(main)
                opt_estimate(@main group_by, scale_rows = 0.1)
        */
        mod(n1,10), count(*) 
from    t1 
group by 
        mod(n1,10)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    20 |    80 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    20 |    80 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------

The hint uses group_by as the critical option parameter, and then I’ve used the standard scale_rows=nnn to set a scaling factor that should be used to adjust the result of the default calculation. At 10% (0.1) this gives us an estimate of 20 rows.

Alternatively, we could simply tell the optimizer how many rows we want it to believe will be generated for the aggregate – let’s just tell it that the result will be 10 rows.

select  /*+
                qb_name(main)
                opt_estimate(@main group_by, rows = 10)
        */
        mod(n1,10), count(*) 
from    t1 
group by 
        mod(n1,10)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    10 |    40 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    10 |    40 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------

We use the same group_by as the critical parameter, with rows=nnn.

Next steps

After an aggregation there’s often a “having” clause, so you might consider using the group_by option to fix up the cardinality of the having clause if you know what the normal effect of the having clause should be. For example: “having count(*) > NNN” will use the optimizer’s standard 5% “guess” and “having count(*) = NNN” will use the standard 1% guess. However, having seen the group_by option I took a guess that there might be a having option to the opt_estimate() hint as well, so I tried it. With autotrace enabled, here are three queries: first the unhinted baseline (which uses the standard 5% on my having clause), then a couple of others with hints to tweak the cardinality:

select  /*+
                qb_name(main)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

select  /*+
                qb_name(main)
                opt_estimate(@main having scale_rows=0.4)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

select  /*+
                qb_name(main)
                opt_estimate(@main group_by scale_rows=2)
                opt_estimate(@main having scale_rows=0.3)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

The first query gives us the baseline cardinality of 10 (5% of 200). The second query scales the having cardinality down by a factor of 0.4 (which means an estimate of 4). The final query first doubles the group by cardinality (to 400), then scales the having cardinality (which would have become 20) down by a factor of 0.3 with the nett effect of producing a cardinality of 6. Here are the plans.

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |    10 |    40 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |   --  10
|   2 |   HASH GROUP BY     |      |    10 |    40 |    10  (10)| 00:00:01 |   -- 200
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     4 |    16 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |    --   4
|   2 |   HASH GROUP BY     |      |     4 |    16 |    10  (10)| 00:00:01 |    -- 200
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     6 |    24 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |    --   6
|   2 |   HASH GROUP BY     |      |     6 |    24 |    10  (10)| 00:00:01 |    -- 400
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

It’s a little sad that the FILTER operation shows no estimate while the HASH GROUP BY operation shows the estimate after the application of the having clause. It would be nice to see the plan reporting the figures which I’ve added at the end of line for operations 1 and 2.

You may wonder why one would want to increase the estimate for the group by then reduce it for the having. While I’m not going to go to the trouble of creating a worked example it shouldn’t be too hard to appreciate the idea that the optimizer might use complex view merging to postpone a group by until after a join – so increasing the estimate for a group by might be necessary to ensure that that particular transformation doesn’t happen, while following this up with a reduction to the having might then ensure that the next join is a nested loop rather than a hash join. Of course, if you don’t need to be this subtle you might simply take advantage of yet another option to the opt_estimate() hint, the query_block option – but that will (probably) appear in the next article in this series.

 

June 25, 2019

opt_estimate 2

Filed under: CBO,Execution plans,Hints,Oracle,Statistics — Jonathan Lewis @ 8:22 pm BST Jun 25,2019

This is a note that was supposed to be a follow-up to an initial example of using the opt_estimate() hint to manipulate the optimizer’s statistical understanding of how much data it would access and (implicitly) how much difference that would make to the resource usage. Instead, two years later, here’s part two – on using opt_estimate() with nested loop joins. As usual I’ll start with a little data set:


rem
rem     Script:         opt_est_nlj.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2017
rem

create table t1
as
select 
        trunc((rownum-1)/15)    n1,
        trunc((rownum-1)/15)    n2,
        rpad(rownum,180)        v1
from    dual
connect by
        level <= 3000 --> hint to avoid wordpress format issue
;

create table t2
pctfree 75
as
select 
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    dual
connect by
        level <= 3000 --> hint to avoid wordpress format issue
;

create index t1_i1 on t1(n1);
create index t2_i1 on t2(n1);
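
Assuming the statistics are in place (from 12c onwards the CTAS gathers table stats automatically, and index statistics are computed as the indexes are created), you can see the clustering difference described in the next paragraph by comparing the clustering_factor of the two indexes:

select
        index_name, num_rows, clustering_factor
from
        user_indexes
where
        index_name in ('T1_I1', 'T2_I1')
;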

There are 3,000 rows in each table, with 200 distinct values for each of columns n1 and n2. There is an important difference between the tables, though, as the rows for a given value are well clustered in t1 and widely scattered in t2. I’m going to execute a join query between the two tables, ultimately forcing a very bad access path so that I can show some opt_estimate() hints making a difference to cost and cardinality calculations. Here’s my starting query, with execution plan, unhinted (apart from the query block name hint):

select
        /*+ qb_name(main) */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |    44   (3)| 00:00:01 |
|*  1 |  HASH JOIN                           |       |   225 | 83700 |    44   (3)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS FULL                  | T2    |  3000 |   541K|    42   (3)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N1"="T1"."N2")
   3 - access("T1"."N1"=15)

You’ll notice the tablescan and hash join with t2 as the probe (2nd) table and a total cost of 44, which is largely due to the tablescan cost of t2 (which I had deliberately defined with pctfree 75 to make the tablescan a little expensive). Let’s hint the query to do a nested loop from t1 to t2 to see why the hash join is preferred over the nested loop:


alter session set "_nlj_batching_enabled"=0;

select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |   242   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |   225 | 83700 |   242   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |  2775 |    16   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |    15 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve done two slightly odd things here – I’ve set a hidden parameter to disable nlj batching and I’ve used a hint to block nlj prefetching. This doesn’t change the arithmetic the optimizer uses, but it does mean the presentation of the nested loop goes back to the original pre-9i form which makes it a little easier to see costs and cardinalities adding and multiplying their way through the plan. I do not do this in production systems.

As you can see, the total cost is 242 with this plan and most of the cost is due to the indexed access into t2. The optimizer has correctly estimated that each probe of t2 will acquire 15 rows and that those 15 rows will be scattered across 15 blocks, so the join cardinality comes to 15 * 15 = 225 and the cost comes to: 2 (t1 cost) + (15 (t1 rows) * 16 (t2 unit cost)) = 242.

So let’s tell the optimizer that its estimated cardinality for the index range scan is wrong.


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |   225 | 83700 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |   225 | 83700 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |  2775 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve used the hint opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06).

The form is: (@qb_name   nlj_index_scan,   target_table_alias   (list of possible driving tables),   target_index,   numeric_adjustment).

The numeric_adjustment could be rows=nnn or, as I have here, scale_rows=nnn; the target_index has to be specified by name rather than list of columns, and the list of possible driving tables should be a comma-separated list of fully-qualified table aliases. There’s a similar nlj_index_filter option which I can’t demonstrate in this post because it probably needs an index of at least two-columns before it can be used.
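
Purely as a syntax sketch – untested, inferred by analogy with the nlj_index_scan form, and assuming a hypothetical two-column index t2_i2 on t2(n1, n2) – the filter variant would presumably look something like this:

opt_estimate(@main nlj_index_filter, t2@main (t1), t2_i2, scale_rows=0.1)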

The things to note in this plan are: the index range scan at operation 5 now has a cardinality (Rows) estimate of 1 (that’s 0.06 * the original 15). This hasn’t changed the cost of the range scan (because that cost was already one before we applied the opt_estimate() hint) but, because the cost of the table access depends on the index selectivity, the cost of the table access is down to 2 (from 16). On the other hand the table cardinality hasn’t dropped, so now it’s not consistent with the number of rowids predicted by the index range scan. The total cost of the query has dropped to 32, though, which is: 2 (t1 cost) + (15 (t1 rows) * 2 (t2 unit cost)).

Let’s try to adjust the prediction that the optimizer makes about the number of rows we fetch from the table. Rather than going all the way to being consistent with the index range scan I’ll dictate a scaling factor that will make it easy to see the effect – let’s tell the optimizer that we will get one-fifth of the originally expected rows (i.e. 3).


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    47 | 17484 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    47 | 17484 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     3 |   555 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

By adding the hint opt_estimate(@main table, t2@main, scale_rows=0.20) we’ve told the optimizer that it should scale the estimated row count down by a factor of 5 from whatever it calculates. Bear in mind that in a more complex query the optimizer might decide not to follow the path we expected, and that factor of 0.2 will still be applied whenever t2 is accessed. Notice in this plan that the join cardinality at operation 1 has also dropped from 225 to 47 – if the optimizer is told that its cardinality (or selectivity) calculation is wrong for the table, the numbers involved in the selectivity will carry on through the plan, producing a different “adjusted NDV” for the join cardinality calculation.

Notice, though, that the total cost of the query has not changed. The cost was dictated by the optimizer’s estimate of the number of table blocks to be visited after the index range scan. The estimated number of table blocks hasn’t changed, it’s just the number of rows we will find there that we’re now hacking.

Just for completion, let’s make one final change (again, something that might be necessary in a more complex query) and fix the join cardinality:


select
        /*+
                qb_name(main)
                leading(t1 t2)
                use_nl(t2)
                index(t2)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
                opt_estimate(@main join(t2 t1)   ,                      scale_rows=0.5)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     2 |   370 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

I’ve used the hint opt_estimate(@main join(t2 t1), scale_rows=0.5) to tell the optimizer to halve its estimate of the join cardinality between t1 and t2 (whatever order they appear in). With the previous hints in place the estimate had dropped to 47 (which must have been 46 and a large bit), with this final hint it has now dropped to 23. Interestingly the cardinality estimate for the table access to t2 has dropped at the same time (almost as if the optimizer has “rationalised” the join cardinality by adjusting the selectivity of the second table in the join – that’s something I may play around with in the future, but it may require reading a 10053 trace, which I tend to avoid doing).

Side note: if you have access to MoS you’ll find that Doc ID 2402821.1, “How To Use Optimizer Hints To Specify Cardinality For Join Operation”, seems to suggest that the cardinality() hint is something to use for single table cardinalities, and implies that the opt_estimate(join) option is for two-table joins. In fact both hints can be used to set the cardinality of multi-table joins (where “multi” can be greater than 2).
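
As a sketch of the equivalence – the multi-alias form of the cardinality() hint is just as undocumented as opt_estimate(), so treat this with the same suspicion – the final join fix-up above could presumably have been written as an absolute figure, with the hint placed inside the query block:

cardinality(t2 t1, 23)

which should have roughly the same effect as opt_estimate(@main join(t2 t1) rows=23).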

Finally, then, let’s eliminate the hints that force the join order and join method and see what happens to our query plan if all we include is the opt_estimate() hints (plus the qb_name() and no_nlj_prefetch hints – and remember we’ve disabled “nlj batching”).

select
        /*+
                qb_name(main)
                no_nlj_prefetch(t2)
                opt_estimate(@main nlj_index_scan, t2@main (t1), t2_i1, scale_rows=0.06)
                opt_estimate(@main table         , t2@main     ,        scale_rows=0.20)
                opt_estimate(@main join(t2 t1)   ,                      scale_rows=0.5)
        */
        t1.v1, t2.v1
from    t1, t2
where
        t1.n1 = 15
and     t2.n1 = t1.n2
;

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                        |       |    23 |  8556 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    15 |  2805 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     2 |   370 |     2   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN                  | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"=15)
   5 - access("T2"."N1"="T1"."N2")

Note
-----
   - this is an adaptive plan

With a little engineering on the optimizer estimates we’ve managed to con Oracle into using a different path from the default choice. Do notice, though, the closing Note section (which didn’t appear in any of the other examples): I’ve left Oracle with the option of checking the actual stats as the query runs, so if I run the query twice Oracle might spot that the arithmetic is all wrong and throw in some SQL Plan Directives – which are just another load of opt_estimate() hints.

In fact, in this example, the plan we wanted became desirable as soon as we applied the nlj_index_scan fix-up, as this made the estimated cost of the index probe into t2 sufficiently low (even though it left an inconsistent cardinality figure for the table rows) that Oracle would have switched from the default hash join to the nested loop on that basis alone.

Closing Comment

As I pointed out in the previous article, this is just scratching the surface of how the opt_estimate() hint works, and even with very simple queries it can be hard to tell whether any behaviour we’ve seen is actually doing what we think it’s doing. In a third article I’ll be looking at something prompted by the most recent email I’ve had about opt_estimate() – how it might (or might not) behave in the presence of inline views and transformations like merging or pushing predicates. I’ll try not to take 2 years to publish it.

 
