Oracle Scratchpad

March 8, 2021

Join Elimination redux

Filed under: Bugs,CBO,Join Elimination,Oracle,Transformations — Jonathan Lewis @ 12:58 pm GMT Mar 8,2021

This note is a follow-up to a post from a few years back (originally dating further back to 2012) where I described an inconsistency that appeared when join elimination and deferrable constraints collided. The bug resurfaced recently in a new guise in a question on the Oracle Developer forum, with a wonderful variation on the symptoms that ultimately gave a good clue to the underlying issue. The post included a complete working example of the anomaly, but I’ll demonstrate it using a variation of my 2012/2017 code. We start with a pair of tables with referential integrity defined between them:

rem
rem     Script:         join_eliminate_bug_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2021
rem
rem     Last tested 
rem             19.8 (LiveSQL) 
rem

create table parent (
        id      number(4),
        name    varchar2(10),
        constraint par_pk primary key (id)
        deferrable initially immediate
)
;

create table child(
        id_p    number(4)       
                constraint chi_fk_par
                references parent,
        id      number(4),
        name    varchar2(10),
        constraint chi_pk primary key (id_p, id) 
)
;

insert into parent values (1,'Smith');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

commit;

begin
        dbms_stats.gather_table_stats(user,'child');
        dbms_stats.gather_table_stats(user,'parent');
end;
/

You’ll notice that I’ve created the primary key constraint on parent as “deferrable initially immediate”. So let’s write some code that defers the constraint, inserts some duplicate data, and executes a join between the two tables:

set serveroutput off
set constraint par_pk deferred;

insert into parent (id,name) values (1,'Smith');

select
        /*+ initially immediate  PK */
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);

Since I’ve now got two rows with id = 1 in parent the query ought to return duplicates for every row in child where id_p = 1, but it doesn’t. Here’s the output from the query and the execution plan:

     ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally

2 rows selected.


PLAN_TABLE_OUTPUT
--------------------------------------------------
SQL_ID  gy6h8td4tmdpg, child number 0
-------------------------------------
select  /*+ initially immediate  PK */  chi.* from  child chi,  parent
par where  par.id = chi.id_p

Plan hash value: 2406669797

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| CHILD |     2 |    24 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

The optimizer has applied “join elimination” to take parent out of the transformed query, so no duplicates. Arguably this is the wrong result.
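In effect the optimizer has transformed the query into something like the following – a sketch of the equivalent text, not the exact internal form. (The “is not null” predicate is the one join elimination normally has to preserve for a nullable foreign key column; here it’s redundant because id_p is part of child’s primary key.)

select  chi.*
from    child chi
where   chi.id_p is not null
;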

Let’s roll back the insert and change the experiment – this time we’ll make the constraint on the parent primary key still deferrable, but initially deferred, then repeat the insert and query:

rollback;
alter table child drop constraint chi_fk_par;
alter table parent drop constraint par_pk;

alter table parent add constraint par_pk primary key (id) deferrable initially deferred;
alter table child add constraint chi_fk_par foreign key(id_p) references parent;

insert into parent (id,name) values (1,'Smith');

select
        /*+ initially deferred  PK */
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);

In this case we don’t need to “set constraint par_pk deferred” – it’s implicitly deferred by definition and will be checked only when we commit the transaction. Would you expect this to make any difference to the result? This is what we get:

      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          1 Simon
         1          2 Sally
         1          2 Sally

4 rows selected.


PLAN_TABLE_OUTPUT
--------------------------------------------------
SQL_ID  8gvn3mzr8uv0h, child number 0
-------------------------------------
select  /*+ initially deferred  PK */  chi.* from  child chi,  parent
par where  par.id = chi.id_p

Plan hash value: 1687613841

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |       |       |     2 (100)|          |
|   1 |  NESTED LOOPS      |        |     2 |    30 |     2   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| CHILD  |     2 |    24 |     2   (0)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN | PAR_PK |     1 |     3 |     0   (0)|          |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("PAR"."ID"="CHI"."ID_P")

When the parent primary key is initially deferred then join elimination doesn’t take place – so we get two copies of each child row in the output. (This is still true even if we add the “rely” option to the parent primary key constraint).
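For reference, the “rely” variant mentioned above is just a one-line change to the constraint state (a sketch using the standard syntax):

alter table parent modify constraint par_pk rely;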

Debug Analysis

As I said at the top of the article, this does give us a clue about the source of the bug. A check of the dictionary table cdef$ (constraint definitions) shows the following notes for column cdef$.defer:

  defer         number,                     /* 0x01 constraint is deferrable */
                                              /* 0x02 constraint is deferred */
                                /* 0x04 constraint has been system validated */
                                 /* 0x08 constraint name is system generated */
etc...

With my examples the “initially immediate” constraint reported defer = 5, while the “initially deferred” constraint reported the value 7. It looks as if the optimizer code that handles join elimination looks only at the static definition of the constraint (bit 0x02) and doesn’t consider the possibility that if bit 0x01 is set it should also check the session state to see if the constraint has been temporarily deferred.
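If you want to see those flag values for yourself, here’s a minimal sketch of a suitable query – assuming access to the SYS-owned dictionary tables, and using the column names as they appear in dcore.bsq:

select
        c.name,
        d.defer,
        bitand(d.defer, 1)      bit_deferrable,
        bitand(d.defer, 2)      bit_deferred
from
        sys.con$        c,
        sys.cdef$       d
where
        d.con# = c.con#
and     c.name in ('PAR_PK','CHI_FK_PAR')
;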

Conclusion

If you are going to implement deferrable constraints be very careful about tracking exactly how you use them, and be aware that if you execute arbitrary queries in mid-transaction you may find that the results are not exactly what you expect. In fact, though it’s not demonstrated here, different forms of SQL that should express the same requirement may not give the same results.
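As a sketch of what a “different form” of the same requirement might look like (I make no claim here about what results this version produces – that’s the point of the warning):

select  chi.*
from    child chi
where   exists (
                select  null
                from    parent par
                where   par.id = chi.id_p
        )
;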

February 16, 2021

Adaptive error

Filed under: CBO,dbms_xplan,Oracle,Statistics — Jonathan Lewis @ 5:41 pm GMT Feb 16,2021

There’s a thread on the Oracle Database Forum at present where someone has supplied a script to create some data that’s guaranteed to reproduce wrong results (provided your system stats and optimizer parameters are at their default values). They’ve even supplied a link to the script on LiveSQL – which is running 19.8 – to demonstrate the problem.

I’ve tested on 12.2.0.1 and 19.3.0.0 and the problem occurs in both versions – though with my setup the initial plan that returned the wrong results didn’t re-optimize to a plan with the correct results in 12.2.0.1.

I’ve included a script at the end of the note to create the data set but I’ll describe some of the objects as we go along – starting with a query that gives the correct result, apparently because it’s been hinted to do so:

execute dbms_stats.delete_system_stats

set linesize 255
set pagesize  60
set trimspool on

alter session set statistics_level='all';
set serveroutput off

select 
        /*+ use_hash(dwf) */ 
        count(*) count_hash 
from 
        test_dwf_sapfi  dwf
where
         exists (
                select  1 
                from    test_sapfi_coicar_at5dat11      coi
                where   coi.datumzprac = 20200414
                and     to_char(coi.datuct,'yyyymmdd') = dwf.datumucetnipom_code
        );

select * from table(dbms_xplan.display_cursor(format=>'cost outline allstats last partition hint_report adaptive'));


test_dwf_sapfi is a table with a single numeric column datumucetnipom_code; the table is list partitioned by that column with 61 partitions, each defined to hold a single value. The number is designed to look like a date in the format YYYYMMDD.

test_sapfi_coicar_at5dat11 is a table with two columns (datuct, datumzprac). The first column is a date column with data covering a range of 60 dates; the second column is a numeric column and the table is list partitioned on that column. All the data in the table is in one partition, and the column holds the same value for every row (again it’s a number that looks like a date).

There are 15,197 rows in each table, and the test_dwf_sapfi data has been created as a copy (with a suitable to_number(to_char()) formatting change) of the test_sapfi_coicar_at5dat11 data.

Here’s the execution plan from 19c:

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |      1 |        |   328 (100)|       |       |      1 |00:00:00.02 |     155 |     69 |       |       |          |
|   1 |  SORT AGGREGATE              |                            |      1 |      1 |            |       |       |      1 |00:00:00.02 |     155 |     69 |       |       |          |
|*  2 |   HASH JOIN RIGHT SEMI       |                            |      1 |    253 |   328   (1)|       |       |  15197 |00:00:00.02 |     155 |     69 |  2352K|  2352K| 2110K (0)|
|   3 |    PART JOIN FILTER CREATE   | :BF0000                    |      1 |    152 |    13   (0)|       |       |  15197 |00:00:00.01 |      25 |      0 |       |       |          |
|   4 |     PARTITION LIST SINGLE    |                            |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |      0 |       |       |          |
|   5 |      TABLE ACCESS FULL       | TEST_SAPFI_COICAR_AT5DAT11 |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |      0 |       |       |          |
|   6 |    PARTITION LIST JOIN-FILTER|                            |      1 |  15197 |   314   (1)|:BF0000|:BF0000|  15197 |00:00:00.01 |     130 |     69 |       |       |          |
|   7 |     TABLE ACCESS FULL        | TEST_DWF_SAPFI             |     60 |  15197 |   314   (1)|:BF0000|:BF0000|  15197 |00:00:00.01 |     130 |     69 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "DWF"@"SEL$1")
      FULL(@"SEL$5DA710D3" "COI"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "DWF"@"SEL$1" "COI"@"SEL$2")
      USE_HASH(@"SEL$5DA710D3" "COI"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$5DA710D3" "COI"@"SEL$2")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (N - Unresolved (1))
---------------------------------------------------------------------------
   7 -  SEL$5DA710D3 / DWF@SEL$1
         U -  use_hash(dwf)



You’ll notice there’s no “adaptive” information in the report, and there’s no “Note” section saying it’s an adaptive plan. You might also note that the plan looks as if it’s doing a hash join into “dwf” but the “Hint Report” tells us that the hint has not been used and the “Outline Information” tells us that the plan has actually arrived as the result of the combination /*+ use_hash(coi) swap_join_inputs(coi) */. In fact this is the default plan (on my system) that would have appeared in the complete absence of hints.
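If you wanted to force this path explicitly you could, in principle, lift the relevant hints straight from the Outline Data above – a sketch, reusing the system-generated query block name exactly as reported:

select
        /*+
                unnest(@sel$2)
                leading(@sel$5da710d3 dwf@sel$1 coi@sel$2)
                use_hash(@sel$5da710d3 coi@sel$2)
                swap_join_inputs(@sel$5da710d3 coi@sel$2)
        */
        count(*) count_hash
from
        test_dwf_sapfi  dwf
where
        exists (
                select  1
                from    test_sapfi_coicar_at5dat11      coi
                where   coi.datumzprac = 20200414
                and     to_char(coi.datuct,'yyyymmdd') = dwf.datumucetnipom_code
        );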

The result of the count(*) should be 15,197 – and you can see that this plan has produced the right answer when you check the A-Rows value for operation 2 (the hash join right semi that generates the rowsource for the sort aggregate).

The adaptive anomaly

So now we try again, but with a hint to generate a nested loop join – and it gives us the wrong result (8) and an oddity in the plan. I’ve reported the body of the plan twice: the first version includes the adaptive information, the second is the tidier plan we get by omitting the ‘adaptive’ format option:

select 
        count(*) count_nl 
from 
        test_dwf_sapfi  dwf
where 
        exists (
                select
                        /*+
                                use_nl (coi)
                        */
                        1
                from    test_sapfi_coicar_at5dat11      coi
                where   coi.datumzprac = 20200414
                and     to_char(coi.datuct,'yyyymmdd') = dwf.datumucetnipom_code
        )
;

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                    | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT             |                            |      1 |        |   329 (100)|       |       |      1 |00:00:00.01 |     154 |       |       |          |
|     1 |  SORT AGGREGATE              |                            |      1 |      1 |            |       |       |      1 |00:00:00.01 |     154 |       |       |          |
|- *  2 |   HASH JOIN                  |                            |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.01 |     154 |  3667K|  1779K|          |
|     3 |    PART JOIN FILTER CREATE   | :BF0000                    |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.01 |     154 |       |       |          |
|     4 |     NESTED LOOPS             |                            |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.01 |     154 |       |       |          |
|-    5 |      STATISTICS COLLECTOR    |                            |      1 |        |            |       |       |     60 |00:00:00.01 |      25 |       |       |          |
|     6 |       SORT UNIQUE            |                            |      1 |    152 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |  4096 |  4096 | 4096  (0)|
|     7 |        PARTITION LIST SINGLE |                            |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|     8 |         TABLE ACCESS FULL    | TEST_SAPFI_COICAR_AT5DAT11 |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|     9 |      PARTITION LIST ITERATOR |                            |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.01 |     129 |       |       |          |
|  * 10 |       TABLE ACCESS FULL      | TEST_DWF_SAPFI             |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.01 |     129 |       |       |          |
|-   11 |    PARTITION LIST JOIN-FILTER|                            |      0 |  15197 |   314   (1)|:BF0000|:BF0000|      0 |00:00:00.01 |       0 |       |       |          |
|-   12 |     TABLE ACCESS FULL        | TEST_DWF_SAPFI             |      0 |  15197 |   314   (1)|:BF0000|:BF0000|      0 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            |      1 |        |   329 (100)|       |       |      1 |00:00:00.01 |     154 |       |       |          |
|   1 |  SORT AGGREGATE            |                            |      1 |      1 |            |       |       |      1 |00:00:00.01 |     154 |       |       |          |
|   2 |   PART JOIN FILTER CREATE  | :BF0000                    |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.01 |     154 |       |       |          |
|   3 |    NESTED LOOPS            |                            |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.01 |     154 |       |       |          |
|   4 |     SORT UNIQUE            |                            |      1 |    152 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |  4096 |  4096 | 4096  (0)|
|   5 |      PARTITION LIST SINGLE |                            |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   6 |       TABLE ACCESS FULL    | TEST_SAPFI_COICAR_AT5DAT11 |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   7 |     PARTITION LIST ITERATOR|                            |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.01 |     129 |       |       |          |
|*  8 |      TABLE ACCESS FULL     | TEST_DWF_SAPFI             |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.01 |     129 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$5DA710D3" "DWF"@"SEL$1")
      USE_NL(@"SEL$5DA710D3" "DWF"@"SEL$1")
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "COI"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "COI"@"SEL$2" "DWF"@"SEL$1")
      SEMI_TO_INNER(@"SEL$5DA710D3" "COI"@"SEL$2")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))
  10 - filter("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   8 -  SEL$5DA710D3 / COI@SEL$2
         U -  use_nl (coi)

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)

Points to note here:

  • The most important item to note is that at operation 3 (of the tidy plan) we can see that the nested loop reports A-Rows as 8 – the wrong result.
  • Then there’s the oddity that operation 2 is a “part join filter create”, which shouldn’t be there for a nested loop. That’s a hash join feature that allows the Pstart/Pstop columns to report partition pruning by Bloom filter (“:BFnnnn”), but we’re running a nested loop join which can pass in the partition key, so we see KEY/KEY as the Pstart/Pstop.
  • The third thing we can pick up is that the 8 rows in our nested loop rowsource are echoed in the A-Rows for the 60 executions of the partition table scans of test_dwf_sapfi at operations 7 and 8 in the reduced plan – it’s probably not a complete coincidence that the nested loop join is passing the partition keys in partition key order (sort unique at operation 4) and there are 8 rows in the last populated partition of test_dwf_sapfi.
  • Finally we note from the Hint Report that the hint, as supplied, was not used, and the outline shows us that the path was actually “leading(coi dwf) use_nl(dwf)”.

The really fascinating thing about this execution plan is that it contains a hint that was not used – but the plan changed from the default plan to a slightly more expensive plan.

If at first you don’t succeed

There’s just one more surprise to reveal – we had an adaptive plan, which tends to mean the optimizer plays towards a nested loop join but hedges its bets to be able to swing to a hash join in mid-plan. This suggests that the real-time stats collector thought there wasn’t much data and a nested loop was good – but what happens when I run exactly the same query again? In my 12c system the answer was that nothing changed, but in my 19c system a new plan appeared:

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |      1 |        |   331 (100)|       |       |      1 |00:00:00.01 |     154 |       |       |          |
|   1 |  SORT AGGREGATE              |                            |      1 |      1 |            |       |       |      1 |00:00:00.01 |     154 |       |       |          |
|*  2 |   HASH JOIN                  |                            |      1 |    120K|   331   (2)|       |       |  15197 |00:00:00.01 |     154 |  2171K|  2171K| 1636K (0)|
|   3 |    PART JOIN FILTER CREATE   | :BF0000                    |      1 |  15197 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |       |       |          |
|   4 |     SORT UNIQUE              |                            |      1 |  15197 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |  4096 |  4096 | 4096  (0)|
|   5 |      PARTITION LIST SINGLE   |                            |      1 |  15197 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   6 |       TABLE ACCESS FULL      | TEST_SAPFI_COICAR_AT5DAT11 |      1 |  15197 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   7 |    PARTITION LIST JOIN-FILTER|                            |      1 |  15197 |   314   (1)|:BF0000|:BF0000|  15197 |00:00:00.01 |     129 |       |       |          |
|   8 |     TABLE ACCESS FULL        | TEST_DWF_SAPFI             |     60 |  15197 |   314   (1)|:BF0000|:BF0000|  15197 |00:00:00.01 |     129 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "COI"@"SEL$2")
      FULL(@"SEL$5DA710D3" "DWF"@"SEL$1")
      LEADING(@"SEL$5DA710D3" "COI"@"SEL$2" "DWF"@"SEL$1")
      USE_HASH(@"SEL$5DA710D3" "DWF"@"SEL$1")
      SEMI_TO_INNER(@"SEL$5DA710D3" "COI"@"SEL$2")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   6 -  SEL$5DA710D3 / COI@SEL$2
         U -  use_nl (coi)

Note
-----
   - statistics feedback used for this statement

This is the output with the ‘adaptive’ format in place – but the plan isn’t adaptive: the optimizer has used statistics feedback (formerly cardinality feedback) to work out a better plan. The hint is still unused, of course, but when we check the plan we can see that:

  • it has got the right answer – the hash join at operation 2 reports 15,197 rows
  • the “partition join” Bloom filter created at operation 3 has been used for the Pstart/Pstop at operations 7 and 8
  • even though the hint has not been used the plan is (again) not the same as the default plan: we’ve got a hash join with Bloom filter, while the default plan had a hash join right semi after a sort unique of the test_sapfi_coicar_at5dat11 data, with an overall lower cost.
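If you want independent confirmation that statistics feedback was the trigger for the new child cursor, there’s a flag you can check in v$sql_shared_cursor – a sketch, substituting the sql_id that dbms_xplan reported for your run:

select  child_number, use_feedback_stats
from    v$sql_shared_cursor
where   sql_id = '&m_sql_id'
;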

What Happened ?

Clearly there is a bug. It’s a slightly sensitive bug, and all I had to do to eliminate it was to gather stats on the underlying tables. (You’ll find in the table creation script at the end of this note that there are basically no object stats on the “big” partitioned table, which is presumably why the adaptive stuff came into play and allowed the bug to surface, and why 19c statistics feedback produced a new plan on the second execution.)
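In case you want to try it, the workaround is the single call that appears (commented out) in the script at the end of the note:

execute dbms_stats.gather_table_stats(user,'test_dwf_sapfi',granularity=>'global')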

It may be rather difficult for an outsider to pin down what’s going wrong and bypass the bug. One of the first ideas that appeared on the forum was that the Bloom filter pruning was breaking something – but when I added the hint /*+ opt_param('_bloom_pruning_enabled','false') */ to the query all I got was basically the same nested loop plan without the Bloom filter creation, and still ended up with the wrong result.

Finally, here’s a plan I got when I hinted the query correctly to force the nested loop join with test_dwf_sapfi as the inner (second) table in the join (in other words I hinted the plan that had been giving me the wrong results):

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |      1 |        |   405 (100)|       |       |      1 |00:00:00.01 |     154 |       |       |          |
|   1 |  SORT AGGREGATE           |                            |      1 |      1 |            |       |       |      1 |00:00:00.01 |     154 |       |       |          |
|   2 |   NESTED LOOPS            |                            |      1 |  38491 |   405   (1)|       |       |  15197 |00:00:00.01 |     154 |       |       |          |
|   3 |    SORT UNIQUE            |                            |      1 |    152 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |  4096 |  4096 | 4096  (0)|
|   4 |     PARTITION LIST SINGLE |                            |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   5 |      TABLE ACCESS FULL    | TEST_SAPFI_COICAR_AT5DAT11 |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   6 |    PARTITION LIST ITERATOR|                            |     60 |    253 |     5   (0)|   KEY |   KEY |  15197 |00:00:00.01 |     129 |       |       |          |
|*  7 |     TABLE ACCESS FULL     | TEST_DWF_SAPFI             |     60 |    253 |     5   (0)|   KEY |   KEY |  15197 |00:00:00.01 |     129 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - filter("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2
---------------------------------------------------------------------------
   1 -  SEL$5DA710D3
           -  leading(@sel$5da710d3 coi@sel$2 dwf@sel$1)

   7 -  SEL$5DA710D3 / DWF@SEL$1
           -  use_nl(@sel$5da710d3 dwf@sel$1)

Compare this with the plan I got by using the wrong hint, resulting in the adaptive plan, but with Bloom filter pruning disabled:

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                       | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                            |      1 |        |   329 (100)|       |       |      1 |00:00:00.05 |     154 |       |       |          |
|   1 |  SORT AGGREGATE           |                            |      1 |      1 |            |       |       |      1 |00:00:00.05 |     154 |       |       |          |
|   2 |   NESTED LOOPS            |                            |      1 |  38491 |   329   (1)|       |       |      8 |00:00:00.05 |     154 |       |       |          |
|   3 |    SORT UNIQUE            |                            |      1 |    152 |    13   (0)|       |       |     60 |00:00:00.01 |      25 |  4096 |  4096 | 4096  (0)|
|   4 |     PARTITION LIST SINGLE |                            |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   5 |      TABLE ACCESS FULL    | TEST_SAPFI_COICAR_AT5DAT11 |      1 |    152 |    13   (0)|     2 |     2 |  15197 |00:00:00.01 |      25 |       |       |          |
|   6 |    PARTITION LIST ITERATOR|                            |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.05 |     129 |       |       |          |
|*  7 |     TABLE ACCESS FULL     | TEST_DWF_SAPFI             |     60 |    253 |   314   (1)|   KEY |   KEY |      8 |00:00:00.05 |     129 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - filter("DWF"."DATUMUCETNIPOM_CODE"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("COI"."DATUCT"),'yyyymmdd')))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (1))
---------------------------------------------------------------------------
   0 -  STATEMENT
           -  opt_param('_bloom_pruning_enabled','false')

   5 -  SEL$5DA710D3 / COI@SEL$2
         U -  use_nl (coi)

It’s the same plan (with the same plan hash value, though I haven’t shown that) – it has the same predicates and does the same amount of work. But when the optimizer gets to this plan through the adaptive pathway the run-time engine produces the wrong results (note A-Rows = 8 at operation 2), while if the plan is forced by a correct set of hints the run-time engine produces the right results.

As you might guess, another way to bypass the problem was to disable adaptive plans – but when I did that the only way to get the nested loop path was through correct hinting anyway.
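For completeness, here’s a minimal sketch of that bypass – this assumes the parameter name that applies from 12.2 onwards (in 12.1 the adaptive features were controlled by a single combined parameter):

alter session set optimizer_adaptive_plans = false;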

Test it yourself

Here’s a script to create the test data:

rem
rem     Script:         bloom_bug_02.sql
rem     Author:         Michal Telensky / Jonathan Lewis
rem     Dated:          Feb 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     See also:
rem     https://community.oracle.com/tech/developers/discussion/4480469/reproducible-testcase-for-wrong-results
rem     https://livesql.oracle.com/apex/livesql/s/jzc2uyw6ecf2z2ul35nyrxelv
rem

drop table test_dwf_sapfi;
drop table test_sapfi_coicar_at5dat11;
purge recyclebin;

--
-- Don't do this unless it's a private system
-- Many sites seem to have the defaults anyway
--

execute dbms_stats.delete_system_stats

create table test_sapfi_coicar_at5dat11(
        datuct date,
        datumzprac number(8,0)
 ) 
row store compress advanced 
partition by list (datumzprac) (
        partition p20000101 values (20000101)
)
;

alter table test_sapfi_coicar_at5dat11 add partition p20200414 values (20200414);

insert /*+ append */ into test_sapfi_coicar_at5dat11
select date'2019-11-20' datuct, 20200414 datumzprac from dual connect by level <   2 union all
select date'2019-12-20' datuct, 20200414 datumzprac from dual connect by level <   2 union all
select date'2019-12-29' datuct, 20200414 datumzprac from dual connect by level <   4 union all
select date'2020-01-01' datuct, 20200414 datumzprac from dual connect by level <  55 union all
select date'2020-01-08' datuct, 20200414 datumzprac from dual connect by level <   3 union all
select date'2020-01-13' datuct, 20200414 datumzprac from dual connect by level <   8 union all
select date'2020-01-14' datuct, 20200414 datumzprac from dual connect by level <  117 union all
select date'2020-01-15' datuct, 20200414 datumzprac from dual connect by level <  65 union all
select date'2020-01-30' datuct, 20200414 datumzprac from dual connect by level <   2 union all
select date'2020-01-31' datuct, 20200414 datumzprac from dual connect by level <  12 union all
select date'2020-02-01' datuct, 20200414 datumzprac from dual connect by level <  20 union all
select date'2020-02-05' datuct, 20200414 datumzprac from dual connect by level <   4 union all
select date'2020-02-10' datuct, 20200414 datumzprac from dual connect by level <   5 union all
select date'2020-02-12' datuct, 20200414 datumzprac from dual connect by level <   2 union all
select date'2020-02-17' datuct, 20200414 datumzprac from dual connect by level <   2 union all
select date'2020-02-21' datuct, 20200414 datumzprac from dual connect by level <   16 union all
select date'2020-02-29' datuct, 20200414 datumzprac from dual connect by level <   37 union all
select date'2020-03-01' datuct, 20200414 datumzprac from dual connect by level < 1851 union all
select date'2020-03-02' datuct, 20200414 datumzprac from dual connect by level <  227 union all
select date'2020-03-03' datuct, 20200414 datumzprac from dual connect by level <   75 union all
select date'2020-03-04' datuct, 20200414 datumzprac from dual connect by level <   19 union all
select date'2020-03-05' datuct, 20200414 datumzprac from dual connect by level <  107 union all
select date'2020-03-06' datuct, 20200414 datumzprac from dual connect by level <  163 union all
select date'2020-03-07' datuct, 20200414 datumzprac from dual connect by level <   72 union all
select date'2020-03-08' datuct, 20200414 datumzprac from dual connect by level <   78 union all
select date'2020-03-09' datuct, 20200414 datumzprac from dual connect by level <  187 union all
select date'2020-03-10' datuct, 20200414 datumzprac from dual connect by level <  124 union all
select date'2020-03-11' datuct, 20200414 datumzprac from dual connect by level <   92 union all
select date'2020-03-12' datuct, 20200414 datumzprac from dual connect by level <  137 union all
select date'2020-03-13' datuct, 20200414 datumzprac from dual connect by level <  397 union all
select date'2020-03-14' datuct, 20200414 datumzprac from dual connect by level <   52 union all
select date'2020-03-15' datuct, 20200414 datumzprac from dual connect by level <   16 union all
select date'2020-03-16' datuct, 20200414 datumzprac from dual connect by level <  622 union all
select date'2020-03-17' datuct, 20200414 datumzprac from dual connect by level <  215 union all
select date'2020-03-18' datuct, 20200414 datumzprac from dual connect by level <  299 union all
select date'2020-03-19' datuct, 20200414 datumzprac from dual connect by level <  265 union all
select date'2020-03-20' datuct, 20200414 datumzprac from dual connect by level <  627 union all
select date'2020-03-21' datuct, 20200414 datumzprac from dual connect by level <   52 union all
select date'2020-03-22' datuct, 20200414 datumzprac from dual connect by level <   60 union all
select date'2020-03-23' datuct, 20200414 datumzprac from dual connect by level <  168 union all
select date'2020-03-24' datuct, 20200414 datumzprac from dual connect by level <  255 union all
select date'2020-03-25' datuct, 20200414 datumzprac from dual connect by level <  185 union all
select date'2020-03-26' datuct, 20200414 datumzprac from dual connect by level <  240 union all
select date'2020-03-27' datuct, 20200414 datumzprac from dual connect by level <  663 union all
select date'2020-03-28' datuct, 20200414 datumzprac from dual connect by level <   88 union all
select date'2020-03-29' datuct, 20200414 datumzprac from dual connect by level <  771 union all
select date'2020-03-30' datuct, 20200414 datumzprac from dual connect by level <  328 union all
select date'2020-03-31' datuct, 20200414 datumzprac from dual connect by level < 1675 union all
select date'2020-04-01' datuct, 20200414 datumzprac from dual connect by level <  641 union all
select date'2020-04-02' datuct, 20200414 datumzprac from dual connect by level <  251 union all
select date'2020-04-03' datuct, 20200414 datumzprac from dual connect by level <   84 union all
select date'2020-04-06' datuct, 20200414 datumzprac from dual connect by level <  325 union all
select date'2020-04-07' datuct, 20200414 datumzprac from dual connect by level <  366 union all
select date'2020-04-08' datuct, 20200414 datumzprac from dual connect by level <  459 union all
select date'2020-04-09' datuct, 20200414 datumzprac from dual connect by level < 2470 union all
select date'2020-04-10' datuct, 20200414 datumzprac from dual connect by level <   16 union all
select date'2020-04-11' datuct, 20200414 datumzprac from dual connect by level <   16 union all
select date'2020-04-12' datuct, 20200414 datumzprac from dual connect by level <   24 union all
select date'2020-04-13' datuct, 20200414 datumzprac from dual connect by level <  130 union all
select date'2020-04-14' datuct, 20200414 datumzprac from dual connect by level <    9  -- > change this value and the final (wrong) result changes in synch
/

commit
/

--
-- There are no indexes, so this method_opt collects fewer stats than expected
-- No column stats on the partition(s), only partition row and block stats
-- It does get basic column stats at the table level.
--

declare
        schema_name varchar2(128);
begin
        select sys_context('userenv', 'current_schema') into schema_name from dual;

        dbms_stats.gather_table_stats(
                ownname          => schema_name,
                tabname          => 'test_sapfi_coicar_at5dat11',
                partname         => 'p20200414',
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto'
        );
end;
/

create table test_dwf_sapfi (
        datumucetnipom_code number(8,0) not null enable
) 
row store compress advanced 
partition by list (datumucetnipom_code) (
        partition p20000101 values (20000101) 
)
/

begin
        for i in (
                select  distinct to_char(datuct, 'yyyymmdd') datumucetnipom_code 
                from    test_sapfi_coicar_at5dat11 
                order by 
                        1
        ) loop
                execute immediate 
                        'alter table test_dwf_sapfi add partition p' || 
                                i.datumucetnipom_code || 
                                ' values (' || i.datumucetnipom_code || ')'
                ;
        end loop;
end;
/


insert  /*+ append */ into test_dwf_sapfi 
select  to_number(to_char(datuct, 'yyyymmdd')) 
from    test_sapfi_coicar_at5dat11 
where   datumzprac = 20200414
;

commit;

--
--  The problems (seem to) go away if you collect stats
--

-- execute dbms_stats.gather_table_stats(user,'test_dwf_sapfi',granularity=>'global')


set serveroutput off
set linesize 255
set pagesize 60
set trimspool on

alter session set statistics_level='all';

prompt  ===================================
prompt  plan with incorrect use_hash() hint
prompt  ===================================

select 
        /*  use_hash(dwf) */ 
        count(*) count_hash 
from 
        test_dwf_sapfi  dwf
where
         exists (
                select  1 
                from    test_sapfi_coicar_at5dat11      coi
                where   coi.datumzprac = 20200414
                and     to_char(coi.datuct,'yyyymmdd') = dwf.datumucetnipom_code
        );

select * from table(dbms_xplan.display_cursor(format=>'cost outline allstats last partition hint_report adaptive'));

set serveroutput on
spool off

January 26, 2021

Index Hints

Filed under: CBO,dbms_xplan,Execution plans,Hints,Ignoring Hints,Indexing,Oracle — Jonathan Lewis @ 4:28 pm GMT Jan 26,2021

At the end of the previous post on index hints I mentioned that I had been prompted to complete a draft from a few years back because I’d been sent an email by Kaley Crum showing the optimizer ignoring an index_rs_asc() hint in a very simple query. Here, with some cosmetic changes, is the example he sent me.

rem
rem     Script: index_rs_kaley.sql
rem     Dated:  Dec 2020
rem     Author: Kaley Crum
rem
rem     Last tested
rem             19.3.0.0
rem

create table range_scan_me(
        one,
        letter 
)
compress
nologging
as
with rowgen_cte as (
        select  null
        from    dual
        connect by level <=  11315
)
select
        1 one,
        case 
                when rownum <=  64e5     then 'A'
                when rownum  =  64e5 + 1 then 'B'
                when rownum <= 128e5     then 'C' 
        end     letter
from 
        rowgen_cte a
cross join 
        rowgen_cte b 
where 
        rownum <= 128e5
;

create index one_letter_idx on range_scan_me(one, letter) nologging;

The table has 12.8 million rows. Of the two columns the first always holds the value 1, the second has one row holding the value ‘B’, and 6.4M rows each holding ‘A’ and ‘C’. On my laptop it took about 20 seconds to create the table and 26 seconds to create the index; using a total of roughly 376 MB (29,000 blocks for the index, 18,500 blocks for the (compressed) table).

Since this is running on 19.3, Oracle will have created basic statistics on the table and index as it created them. Significantly, though, the statistics created during data loading do not include histograms, so the optimizer will not know that ‘B’ is a special case; all it knows is that there are three possible values for letter.
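You can confirm this easily from the data dictionary – a sketch using the standard view (the histogram column will report NONE for both columns):

select  column_name, num_distinct, histogram
from    user_tab_cols
where   table_name = 'RANGE_SCAN_ME'
order by
        column_id
;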

Time now to query the data:

set serveroutput off
alter session set statistics_level=all;

select 
        /*+ index_rs_asc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last'));

I’ve told the optimizer to use an index range scan, using the “description” method to specify the index I want it to use. The hint is definitely valid, and the index can definitely be used in this way to get the correct result. But here’s the execution plan:

------------------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.01 |       8 |      4 |
|*  1 |  INDEX SKIP SCAN | ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |      4 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_asc(t1 (one, letter))

The plan gives us two surprises: first, it ignores (and reports that it is ignoring) a perfectly valid hint; secondly, it claims to be using an index skip scan even though the common understanding of a skip scan is that it will be used when “the first column of the index doesn’t appear in the where clause”.

We can infer that the plan is truthful because it has taken only 8 buffer visits to get the result – that’s probably a probe down to the (1,’B’) index entry, then another probe to see if the last index leaf block has any entries in it where column one is greater than 1.

But there are a couple of little oddities about this “ignoring the index” line. First, if we hadn’t hinted the query at all it would have done a tablescan, so the “index” bit of the hint is being obeyed even if the “rs” bit isn’t. Then there’s this:

select 
        /*+ index_rs_desc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

-------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |      1 |        |      1 |00:00:00.01 |       8 |
|*  1 |  INDEX SKIP SCAN DESCENDING| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_desc(t1 (one, letter))

If we change the index_rs_asc() to index_rs_desc(), the optimizer still ignores the “range scan” bit of the hint, but honours the “descending” bit – we get an index skip scan descending.

Of course this example is a very extreme case – nevertheless it is a valid example of the optimizer behaving in a way that doesn’t seem very user-friendly. If we add ‘outline’ to the format options for the call to dbms_xplan.display_cursor() we’ll find that the index_ss_asc() and index_ss_desc() hints have been substituted for our attempted index_rs_asc() and index_rs_desc().

So, if we really are confident that an index range scan would work a lot better than an index skip scan, what could we do? We could try telling it to use an index (possibly even an index range scan ascending), but not to do an index skip scan. Let’s test that and include the Outline Information in the execution plan:

select 
        /*+ index(t1) no_index_ss(t1) */
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
;


select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last outline'));


---------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.78 |   14290 |
|*  1 |  INDEX RANGE SCAN| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.78 |   14290 |
---------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)
           -  no_index_ss(t1)

It worked – we can see the index range scan, and we can see in the Buffers column of the plan why an index range scan was a bad idea – it’s taken 14,290 buffer visits to get the right result. If you check the index size I mentioned further up the page, and think about how I defined the data, you’ll realise that Oracle has started an index range scan at the leaf block holding (1,’B’) – which is half way along the index – and then walked every leaf block from there to the end of the index in an attempt to find any index entries with column one greater than 1.

The other thing to notice here is that the hint in the Outline Information is given as:

INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))

This was the hint that appeared in the outline whether I used the index() hint or the index_rs_asc() hint in the query. Similarly, when I tried index_desc() or index_rs_desc() as the hint the outline reported index_desc() in both cases.

If I try adding just this hint to the query the plan goes back to a skip scan. It’s another case where the hints in the Outline Information (hence, possibly, an SQL Plan Baseline) don’t reproduce the plan that the outline claims to be describing.

Summary

Does Oracle ignore hints?

It looks as if the answer is still no, except it seems to think that a skip scan is just a special case of a range scan (and, from the previous article, a range scan is just a special case of a skip scan). So if you want to ensure that Oracle uses your preferred index strategy you may have to think about including various “no_index” hints to block the indexes you don’t want Oracle to use, and then no_index_ss() and no_index_ffs() to make sure it doesn’t use the wrong method for the index you do want to use. Even then you may find you don’t have quite enough options to block every index option that you’d like to block.
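To make that concrete, here’s a sketch of the kind of belt-and-braces hinting the summary implies, reusing the demo objects from above (not a guarantee – as noted, there may still be index options you can’t block):

select
        /*+
                index(t1 one_letter_idx)
                no_index_ss(t1)
                no_index_ffs(t1)
        */
        letter, one
from
        range_scan_me t1
where   one >= 1
and     letter = 'B'
;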

January 25, 2021

Index Hints

Filed under: CBO,dbms_xplan,Hints,Ignoring Hints,Index skip scan,Indexing,Oracle — Jonathan Lewis @ 4:59 pm GMT Jan 25,2021

I’ve lost count of the number of times I’ve reminded people that hinting (correctly) is hard. Even the humble /*+ index() */ hint and its close relatives are open to misunderstanding and accidental misuse, leading to complaints that “Oracle is ignoring my hint”.

Strange though it may seem, I’m still not 100% certain of what some of the basic index hints are supposed to do, and even the “hint report” in the most recent versions of dbms_xplan.display_xxx() hasn’t told me everything I’d like to know. So if you think you know all about hints and indexing this blog note is for you.

I’ll start with a brief, and approximate, timeline for the basic index hints – starting from 8.0

Version   Hints introduced
8.0       index
8.1       index_asc, index_desc, index_ffs, no_index
9.0       index_ss, index_ss_asc, index_ss_desc
10.1      no_index_ffs, no_index_ss
11.1      index_rs_asc, index_rs_desc

Saving these for later: change_dupkey_error_index, domain_index_filter, domain_index_no_sort, domain_index_sort, ignore_row_on_dupkey_index, index_combine, index_join, index_stats, local_indexes, num_index_keys, parallel_index, use_invisible_indexes, use_nl_with_index, xmlindex_rewrite, xmlindex_rewrite_in_select, xmlindex_sel_idx_tbl

For completeness I’ve included the more exotic index-related hints in the list (without a version), and I’ve even highlighted the rarely seen use_nl_with_index() hint to remind myself to raise a rhetorical question about it at the end of this piece.

In this list you’ll notice that the only hint originally available directed the optimizer to access a table by index, but in 8.1 that changed so that we could

  1. tell the optimizer about indexes it should not use
  2. specify whether the index access should use the index in ascending or descending order
  3. use an index fast full scan.

In 9i Oracle then introduced the index skip scan, with the option to specify whether the skip scan should be in ascending or descending order. The index_ss hint seems to be no more than a synonym for the index_ss_asc hint (or should that be the other way round); as far as I can tell the index_ss() hint will not produce a descending skip scan.

You’ll note that there’s no hint to block an index skip scan, until the hint no_index_ss() appears in 10g along with the no_index_ffs() hint to block the index fast full scan. Since 10g Oracle has got better at introducing both the “positive” and “negative” versions of a hint whenever it introduces any hints for new optimizer mechanisms.

Finally we get to 11g and if you search MOS you may still be able to find the bug note (4323868.8) that introduced the index_rs_asc() and index_rs_desc() hints for index range scan ascending and descending.

From MOS Doc 4323868.8: “This fix adds new hints to enforce that an index is selected only if a start/stop keys (predicates) are used: INDEX_RS_ASC INDEX_RS_DESC”

This was necessary because by this time the index() hint allowed the optimizer to decide for itself how to use an index and it was quite difficult to force it to use the strategy you really wanted.

It’s still a source of puzzlement to me that an explicit index() hint will sometimes be turned into an index_rs_asc() hint in the Outline Information (from a call to dbms_xplan.display_xxx()) that the optimizer wants to use to reproduce the plan, while there are other times that an explicit index_rs_asc() hint will be turned into a basic index() hint (which might not reproduce the original plan)!

The Warm-up

Here’s a little surprise that could only reveal itself in the 19c hint report – unless you were willing to read your way carefully through a 10053 (CBO) trace file in earlier versions of Oracle. It comes from a little investigation of the index_ffs() hint that I’ve kept repeating over the last 20 years.

rem
rem     Script:         c_indffs.sql
rem     Dated:          March 2001
rem     Author:         Jonathan Lewis
rem

create table t1
nologging
as
select 
        rownum                  id,
        rpad(mod(rownum,50),10) small_vc,
        rpad('x',50)            padding
from
        all_objects
where
        rownum <= 3000
;

alter table t1 modify id not null;

create index t_i1 on t1(id);
create index t_i2 on t1(small_vc,id);

set autotrace traceonly explain

select 
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index_ffs(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

select 
        /*+ index_ffs(t1) no_index(t1) */
        count(small_vc)
from    t1
where
        id > 2750
;

set autotrace off

I’ve created a table with two indexes, and then enabled autotrace to get the execution plans for 4 queries that vary only in their hinting. Here’s the plan (on 19.3, with my settings for system stats) for the first query:

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

It’s an index fast full scan on the t_i2 (two-column) index. If I add an index() hint to this query will that allow Oracle to continue using the index fast full scan, or will it force Oracle into some other path? Here’s the plan for the query hinted with index(t1):

---------------------------------------------------------------------------------------------
| Id  | Operation                            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |      |     1 |    15 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |      |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1   |   250 |  3750 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_I1 |   250 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">2750)

The optimizer has chosen an index range scan on the (single-column) t_i1 index. Since this path costs more than the index fast full scan it would appear that the index() hint does not allow the optimizer to consider an index fast full scan. So we might decide that an index_ffs() hint is appropriate to secure the plan we want – and here’s the plan we get with that hint:

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

As expected we get the index fast full scan we wanted. But we might want to add belts and braces – let’s include a no_index() hint to make sure that the optimizer doesn’t consider any other strategy for using an index. Since we’ve seen that the index() hint isn’t associated with the index fast full scan path it seems reasonable to assume that the no_index() hint is also not associated with the index fast full scan path. Here’s the plan we get from the final variant of my query with index_ffs(t1) no_index(t1):

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (2))
---------------------------------------------------------------------------
   2 -  SEL$1 / T1@SEL$1
         U -  index_ffs(t1) / hint conflicts with another in sibling query block
         U -  no_index(t1) / hint conflicts with another in sibling query block

The query has produced the execution plan we wanted – but only by accident. The hint report (which, by default, is the version that reports only the erroneous or unused hints) tells us that both hints have been ignored because they each conflict with some other hint in a “sibling” query block. In this case they’re conflicting with each other.

So the plan we got was our original unhinted plan – which made it look as if we’d done exactly the right thing to ensure that the plan was completely reproducible. Such (previously invisible) errors can easily lead to complaints about the optimizer ignoring hints.
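
If you want belt and braces that don’t conflict with each other, one option might be to name the indexes explicitly so that each hint targets a different object. This is just a sketch of the idea, not something I’ve tested in exactly this form:

select
        /*+ index_ffs(t1 t_i2) no_index(t1 t_i1) */
        count(small_vc)
from    t1
where
        id > 2750
;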

The Main Event

The previous section was about an annoying little inconsistency in the way in which the “negative” version of a hint may not correspond exactly to the “positive” version. There’s a more worrying issue to address when you try to be more precise in your use of basic index hints.

We’ve seen that an index() hint could mean almost anything other than an index fast full scan, while a no_index() hint (probably) blocks all possible uses of an index, but would you expect an index_rs_asc() hint to produce a skip scan, or an index_ss_asc() hint to produce a range scan? Here’s another old script of mine to create some data and test some hints:

rem
rem     Script:         skip_scan_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2009
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum  id
        from    all_objects 
        where   rownum <= 3000  -- > hint to avoid wordpress format issue
)
select
        mod(rownum,300)                                 addr_id300,
        mod(rownum,200)                                 addr_id200,
        mod(rownum,100)                                 addr_id100,
        mod(rownum,50)                                  addr_id050,
        trunc(sysdate) + trunc(mod(rownum,2501)/3)      effective_date,
        lpad(rownum,10,'0')                             small_vc,
        rpad('x',050)                                   padding
--      rpad('x',100)                                   padding
from
        generator       v1,
        generator       v2
where
        rownum <= 250000   -- > hint to avoid wordpress format issue
;

create index t1_i1 on t1(effective_date);
create index t1_i300 on t1(addr_id300, effective_date);
create index t1_i200 on t1(addr_id200, effective_date);
create index t1_i100 on t1(addr_id100, effective_date);
create index t1_i050 on t1(addr_id050, effective_date);

I’ve created a table with rather more indexes than I’ll be using. The significant indexes are t1_i1(effective_date) and t1_i050(addr_id050, effective_date). The former will be available for range scans, the latter for skip scans, when I test queries with predicates only on effective_date.

Choice of execution path can be affected by the system stats, so I need to point out that I’ve set mine with the following code:

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',500);
exception
        when others then null;
end;
/

And I’ll start with a couple of “baseline” queries and execution plans:

explain plan for
select 
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

alter index t1_i1 invisible;

explain plan for
select 
        /*+ index(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

You’ll notice that between the two queries I’ve made the t1_i1 index invisible, and it will stay that way for a couple more tests. Here are the first two execution plans:

Unhinted
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1500 | 28500 |   428   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |  1500 | 28500 |   428   (9)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00',
              'syyyy-mm-dd hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hinted with index(t1)
-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)

Unhinted, I’ve managed to rig the data and system stats so that the first path is a full tablescan; then, when I add the generic index(t1) hint, Oracle recognises and uses the hint in the best possible way, picking the lowest cost index skip scan.

A variation I won’t show here – if I change the hint to index_rs_asc(t1) the optimizer recognizes there is no (currently visible) index that could be used for an index range scan and does a full tablescan, reporting the hint as unused. It won’t try to substitute a skip scan for a range scan.
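
For reference, here’s what that variant looked like (a sketch assembled from the queries above; I haven’t reproduced its output):

explain plan for
select
        /*+ index_rs_asc(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));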

What happens if I now try the index_ss(t1) hint without specifying an index? Firstly with the t1_i1 index still invisible, then after making t1_i1 visible again:

explain plan for
select 
        /*+ index_ss(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

Here are the two execution plans, first when t1_i1(effective_date) is still invisible:

-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1)

As you might expect the optimizer has picked the t1_i050 index for a skip scan. (There are 3 other candidates for the skip scan, but since they have more distinct values in their leading columns they all turn out to have a higher cost than t1_i050.)

So let’s make the t1_i1 index visible and see what the plan looks like:

----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |  1500 | 28500 |   521   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1500 | 28500 |   521   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |  1500 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_ss(t1)

The optimizer picks an index range scan using the t1_i1 index, and reports the hint as unused! For years I told myself that an index skip scan was derived as a small collection of range scans, so an index range scan was technically a “degenerate” skip scan, i.e. one where the “small collection” consisted of exactly one element. Oracle 19c finally told me I was wrong – the optimizer is ignoring the hint.

The fact that it’s a sloppy hint and you could have been more precise is irrelevant – if the optimizer won’t do a skip scan when you specify a range scan (but watch out for the next “index hints” instalment – see footnote) it shouldn’t do a range scan when you specify a skip scan (but that’s just a personal opinion).

We should check, of course, that a precisely targeted skip scan hint works before complaining too loudly – would index_ss(t1 t1_i050), or index_ss(t1 t1_i300), work when there’s a competing index that could produce a lower cost range scan? The answer is yes.

explain plan for
select 
        /*+ index_ss(t1 t1_i050) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1 t1_i050)

If you specify a suitable index in the index_ss() hint then the optimizer will use it and won’t switch to the index range scan. You can, of course, specify the index by description rather than name, so the hint /*+ index_ss(t1 (addr_id050, effective_date)) */ or even a partial description like /*+ index_ss(t1 (addr_id050)) */ would have been equally valid and obeyed.
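
In other words the following sketch (only the hint differs from the previous query; I haven’t reproduced the output) should also produce the skip scan:

explain plan for
select
        /*+ index_ss(t1 (addr_id050, effective_date)) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));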

How much do you know?

I’ll finish off with a rhetorical question, which I’ll introduce with this description taken from section 9.2.1.6 of the 19c SQL Tuning Guide:

The related USE_NL_WITH_INDEX(table index) hint instructs the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table. The index is optional. If no index is specified, then the nested loops join uses an index with at least one join predicate as the index key.

An intuitive response to this hint would be to assume that most people expect nested loops to use index unique scans or range scans into the second table. So what would your initial expectation be about the validity of use_nl_with_index() if the only way the index could be used was with an index skip scan, or a full scan, or a fast full scan? What if there were two join predicates and there were a path which could do a nested loop if it used two indexes to do an index join (index_join()) or an index bitmap conversion (index_combine())? Come to that, how confident are you that the hint will work if the index specified is a bitmap index?
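
If you want to start experimenting, the shape of the hint is simple enough. Here’s a sketch with hypothetical table and index names (t2_ix is assumed to be an index on t2(id); I’m making no claims about the plan you’d get):

select
        /*+
                leading(t1)
                use_nl_with_index(t2 t2_ix)
        */
        t1.small_vc, t2.small_vc
from
        t1, t2
where
        t2.id = t1.id
;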

Summary

It’s important to be as accurate and thorough as possible when using hints. Even when a hint is documented you may find that you can ask “what if” questions about it and discover that the only way to get answers is to do several experiments.

If you’re going to put hints into production code, take at least a little time to say to yourself:

“I know what I want and expect this hint to do; are there any similar actions that it might also be allowed to trigger, and how could I check if I need to allow for them or block them?”

Footnote: This journey of rediscovery was prompted by an email from Kaley Crum who supplied me with an example of Oracle using an index skip scan when it had been hinted to do an index range scan.

January 20, 2021

CBO Example

Filed under: CBO,Execution plans,Oracle,Statistics — Jonathan Lewis @ 10:01 am GMT Jan 20,2021

A little case study based on an example just in on the Oracle-L list server. This was supplied with a complete, working, test case that was small enough to understand and explain very quickly.

The user created a table, and used calls to dbms_stats to fake some statistics into place. Here, with a little cosmetic editing, is the code they supplied.

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

drop table t1 purge;

create table t1 (id number(20), v varchar2(20 char));
create unique index pk_id on t1(id);
alter table t1 add (constraint pk_id primary key (id) using index pk_id enable validate);
exec dbms_stats.gather_table_stats(user, 't1');
 
declare
        srec               dbms_stats.statrec;
        numvals            dbms_stats.numarray;
        charvals           dbms_stats.chararray;
begin
  
        dbms_stats.set_table_stats(
                ownname => user, tabname => 't1', numrows => 45262481, numblks => 1938304, avgrlen => 206
        );

        numvals := dbms_stats.numarray (1, 45262481);
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, numvals);
        dbms_stats.set_column_stats (
                ownname => user, tabname => 't1', colname => 'id', 
                distcnt => 45262481, density => 1/45262481,
                nullcnt => 0, srec => srec, avgclen => 6
        );

        charvals := dbms_stats.chararray ('', '');
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, charvals);
        dbms_stats.set_column_stats(
                ownname => user, tabname => 't1', colname => 'v', 
                distcnt => 0,  density => 0, 
                nullcnt => 45262481, srec => srec, avgclen => 0
        );
        dbms_stats.set_index_stats( 
                ownname => user, indname =>'pk_id', numrows => 45607914, numlblks => 101513,
                numdist => 45607914, avglblk => 1, avgdblk => 1, clstfct => 33678879, indlevel => 2
        );
end;
/
 
variable n1 nvarchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/
 

select 
        /*+ gather_plan_statistics */ 
        * 
from    ( 
        select  a.id col0,a.id col1
        from    t1 a
        where   a.v = :n1 
        and     a.id > 1
        order by 
                a.id 
        ) 
where 
        rownum <= :n2 
;
 
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds '));

From Oracle’s perspective the table has 45M rows, with a unique sequential key starting at 1 in the id column. The query looks like a pagination query, asking for 50 rows, ordered by id. But the in-line view asks for rows where id > 1 (which, initially, means all of them) and applies a filter on the v column.

Of course we know that v is always null, so in theory the predicate a.v = :n1 is always going to return false (or null, but not true) – so the query will never return any data. However, if you read the code carefully you’ll notice that the bind variable n1 has been declared as an nvarchar2() not a varchar2().

Here’s the execution plan I got on an instance running 19.3 – and it’s very similar to the plan supplied by the OP:

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |  3747 (100)|      0 |00:00:00.01 |
|*  1 |  COUNT STOPKEY                |       |      1 |        |            |      0 |00:00:00.01 |
|   2 |   VIEW                        |       |      1 |     50 |  3747   (1)|      0 |00:00:00.01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    452K|  3747   (1)|      0 |00:00:00.01 |
|*  4 |     INDEX RANGE SCAN          | PK_ID |      0 |   5000 |    14   (0)|      0 |00:00:00.01 |
----------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   2 - :2 (NUMBER): 50

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=:N2)
   3 - filter(SYS_OP_C2C("A"."V")=:N1)
   4 - access("A"."ID">1)

The question we were asked was this: “Why does the optimizer estimate that it will return 5,000 entries from the index range scan at operation 4?”

The answer is the result of combining two observations.

First: In the Predicate Information you can see that Oracle has applied a character-set conversion to the original predicate “a.v = :n1” to produce filter(SYS_OP_C2C(“A”.”V”)=:N1). The selectivity of “function of something = bind value” is one of those cases where Oracle uses one of its guesses, in this case 1%. Note that the E-rows estimate for operation 3 (table access) is 452K, which is 1% of the 45M rows in the table.
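
The arithmetic is easy to check: 45,262,481 * 1/100 = 452,625, which dbms_xplan rounds to the 452K you see in the E-Rows column for operation 3.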

In real life if you had optimizer_dynamic_sampling set at level 3, or had added the hint /*+ dynamic_sampling(3) */ to the query, Oracle would sample some rows to avoid the need for guessing at this point.
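
For example, here’s a sketch of the hinted variant (I haven’t reproduced its plan here):

select
        /*+ gather_plan_statistics dynamic_sampling(3) */
        *
from    (
        select  a.id col0, a.id col1
        from    t1 a
        where   a.v = :n1
        and     a.id > 1
        order by
                a.id
        )
where
        rownum <= :n2
;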

Secondly: the optimizer has peeked the bind variable for the rownum predicate, so it is optimizing for 50 rows (basically doing the arithmetic of first_rows(50) optimisation). The optimizer “knows” that the filter predicate at the table will eliminate all but 1% of the rows acquired, and it “knows” that it has to do enough work to find 50 rows in total – so it can calculate that (statistically speaking) it has to walk through 5,000 (= 50 * 100) index entries to visit enough rows in the table to end up with 50 rows.

Next Steps (left as exercise)

Once you’ve got the answer to the question “Why is this number 5,000?”, you might go back and point out that the estimate for the table access was roughly 90 times larger than the estimate for the number of rowids selected from the index (452K vs. 5,000) and wonder how that could be possible. (Answer: that’s just one of the little defects in the code for first_rows(n).)

You might also wonder what would have happened in this model if the bind variable n1 had been declared as a varchar2() rather than an nvarchar2() – and that might have taken you on to ask yet another question about what the optimizer was playing at.

Once you’ve modelled something that is a little puzzle there’s always scope for pushing the model a little further and learning a little bit more before you file the model away for testing on the next version of Oracle.

January 14, 2021

Between

Filed under: CBO,Conditional SQL,Execution plans,Oracle — Jonathan Lewis @ 11:07 am GMT Jan 14,2021

Reading Richard Foote’s latest blog note about automatic indexing and “non-equality” predicates I was struck by a whimsical thought about how the optimizer handles “between” predicates. (And at the same time I had to worry about the whimsical way that WordPress treats “greater than” and “less than” symbols.)

It’s probably common knowledge that if your SQL has lines like this:

columnA between {constant1} and {constant2}

the optimizer will transform them into lines like these:

    columnA >= {constant1}
and columnA <= {constant2}

The question that crossed my mind – and it was about one of those little details that you might never look at until someone points it out – was this: “does the optimizer get clever about which constant to use first?”

The answer is yes (in the versions I tested). Here’s a little demonstration:

rem
rem     Script:         between.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2021
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
select
        rownum  rn,
        ao.*
from
        all_objects ao
where
        rownum <= 50000
;

set autotrace traceonly explain

select  object_name
from    t1
where
        rn between 45 and 55
;


select  object_name
from    t1
where
        rn between 49945 and 49955
;


select  object_name
from    t1
where
        rn between 24945 and 24955
;

select  object_name
from    t1
where
        rn between 25045 and 25055
;

set autotrace off

All I’ve done is create a table with 50,000 rows and a column that is basically a unique sequence number between 1 and 50,000. Then I’ve checked the execution plans for a simple query for 11 rows based on the sequence value – but for different ranges of values.

Two of the ranges are close to the low and high values for the sequence; two of the ranges are close to, but either side of, the mid-point value (25,000) of the sequence. The big question is: “does the execution plan change with choice of range?”. The answer is Yes, and No.

No … because the only possible execution path is a full tablescan

Yes … because when you examine the plan properly you’ll notice a change in the Predicate Information. Here are the first two execution plans produced by the calls to dbms_xplan.display():

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"<=55 AND "RN">=45)

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=49945 AND "RN"<=49955)

Notice how the order of the filter predicates has changed as we move from one end of the range to the other. The optimizer has decided to do the test that is more likely to fail first and the test that is more likely to succeed second, which means there won’t be many rows where it has to run both tests, and that makes a small difference to the CPU usage.
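
The rough arithmetic (a sketch, based on the known 1 to 50,000 range of rn) shows why:

        "rn between 45 and 55" is near the low end of the range, so:
                selectivity(rn <= 55)  ~    55/50000  ~ 0.0011   -- fails for ~99.9% of rows
                selectivity(rn >= 45)  ~ 49956/50000  ~ 0.9991   -- succeeds for ~99.9% of rows

Testing “rn <= 55” first means the second test has to run for only about 0.1% of the rows; near the high end of the range the argument reverses, which is why “rn >= 49945” goes first in the second plan.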

Picking out just the filter predicate line from the output for this script (host grep filter between.lst) you can see the same pattern appear when the values supplied are very close to the mid-point (25,000).

SQL> host grep filter between.lst
   1 - filter("RN"<=55 AND "RN">=45)
   1 - filter("RN">=49945 AND "RN"<=49955)
   1 - filter("RN"<=24955 AND "RN">=24945)
   1 - filter("RN">=25045 AND "RN"<=25055)

My code has used literal values to demonstrate an effect. It’s worth checking whether we would still see the same effect if we were using bind variables (and bind variable peeking were enabled). So here’s a little more of the script:

set serveroutput off

variable b1 number
variable b2 number

exec :b1 := 45
exec :b2 := 55

select
        /* low_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

exec :b1 := 49945
exec :b2 := 49955

select
        /* high_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));
set serveroutput on

Since autotrace simply calls “explain plan” and doesn’t know anything about bind variables (treating them as unpeekable character strings) I’ve used code that executes the statements and pulls the plans from memory. Here are the results (with some of the script’s output deleted):

EXPLAINED SQL STATEMENT:
------------------------
select  /* low_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN"<=:B2 AND "RN">=:B1))


EXPLAINED SQL STATEMENT:
------------------------
select  /* high_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN">=:B1 AND "RN"<=:B2))

As you can see, when we query the low range the first comparison is made against :b2; when we query the high range the first comparison is made against :b1.

It is actually worth knowing that this can happen. How many times have you heard the question: “the plan’s the same, why is the performance different?” Maybe the body of the plan looks the same and has the same plan_hash_value, but today the first person to execute the query supplied bind values that made the optimizer choose to apply the filters in the opposite order to usual. This probably won’t make much difference to CPU usage in most cases, but there are bound to be a few cases where it matters.
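
If you’re chasing a case like that, one quick check is to pull the predicate section of each child cursor from memory for the sql_id you’re investigating; a sketch (substitute the sql_id you’re interested in):

select * from table(dbms_xplan.display_cursor('&m_sql_id', null, 'basic +predicate'));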

You’ll notice, by the way, that the plan with bind variables includes a FILTER operation that doesn’t appear in the plans with literal values. This is an example of “conditional SQL” – if you check the predicate information for operation 1 you’ll see that it’s checking that :b2 is greater than :b1, if this test doesn’t evaluate to true then operation 1 will not make a call to operation 2, i.e. the tablescan is in the plan but won’t happen at run-time.
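
A quick way to see that conditional FILTER in action (a sketch; I haven’t shown the run here) is to reverse the bind values so that the test at operation 1 must fail:

exec :b1 := 55
exec :b2 := 45

select
        /* reverse_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

rem     no rows selected -- filter(:B2>=:B1) evaluates to false,
rem     so the tablescan at operation 2 never starts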

(I believe that there may be some RDBMS which will treat (e.g.) “X between 20 and 10” as being identical to “X between 10 and 20” – Oracle doesn’t.)

Left as an exercise

The test data was created as a completely evenly spaced (by value) and evenly distributed (by count) set of values. How would things change if the data were sufficiently skewed that the optimizer would default to creating a histogram when gathering stats?
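
If you don’t want to redesign the data you could simply force a histogram on the existing column and repeat the tests; a sketch (untested here):

begin
        dbms_stats.gather_table_stats(
                user, 't1',
                method_opt => 'for columns rn size 254'
        );
end;
/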

Left as another exercise**

There are lots of little bits of arithmetic that go into the CPU_COST component of an execution plan – including a tiny factor to allow for the number of columns that Oracle has to “step over” (by counting bytes) as it projects the columns needed by the query. So if you had a second “between” predicate on another column in the table, could you manage to get all 24 possible orders for the 4 transformed predicates by adjusting the ranges of the between clauses and/or moving the two columns to different positions in the row?

** For those in lockdown who need something to do to fill the time.

December 23, 2020

19c tweak

Filed under: CBO,Hints,Oracle,Upgrades — Jonathan Lewis @ 2:15 pm GMT Dec 23,2020

There are times when an upgrade makes a big difference to performance because an enhancement to the optimizer code path changes the costing of a plan that was always possible, but badly costed. I noticed an example of this while testing the code in the email I mentioned in last month’s posting on the “Incremental Sort” that Postgres 13 can do. Here’s a model to create some data and demonstrate the principle – the code is a modified (reduced) version of the code published by Phil Florent describing the feature.

rem
rem     Script:         fetch_first_postgres.sql
rem     author:         Phil Florent
rem     Dated:          6th Nov 2020
rem
rem     Last tested
rem             19.3.0.0        Uses index descending unhinted at low cost
rem             18.3.0.0        Used index desc efficiently if hinted, but high cost
rem             12.2.0.1        Used index desc efficiently if hinted, but high cost
rem

create table lancers(dtl timestamp, idg integer not null, perf integer);

insert into lancers(dtl, idg, perf)
with serie(i) as (
        select 25e4 from dual
        UNION ALL
        select i - 1 from serie where i > 1
)
select
        current_timestamp - (i / 1440),
        trunc(dbms_random.value * 1e5 + 1),
        case
                when dbms_random.value <= 0.001 then 50000 + trunc(dbms_random.value * 50000 + 1) 
                else trunc(dbms_random.value * 50000 + 1) 
        end
from serie
/

execute dbms_stats.gather_table_stats(user,'lancers',method_opt=>'for all columns size 1')

create index perf_i1 on lancers(perf, dtl);
alter table lancers modify (perf not null, dtl not null);

This is the basic statement I want to execute – but in some versions of Oracle it will have to be hinted to produce the execution plan I want to see.

select  
        idg, perf 
from  
        lancers 
order by
        perf desc  
fetch first 5 rows only
/

If you check the order by clause and the definition of the index perf_i1 you’ll see that Oracle could (in principle) walk the index in descending order, stopping after just 5 rows, to produce the result.
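
On the versions that needed help the hint (which you’ll see commented out in the 19c output below) uses the index-description form; here’s a sketch:

select
        /*+ index_desc(lancers (perf, dtl)) */
        idg, perf
from
        lancers
order by
        perf desc
fetch first 5 rows only
/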

Here are the execution plans from 19.3.0.0, 18.3.0.0, and 12.2.0.1, pulled from memory and showing the rowsource execution statistics (courtesy of the gather_plan_statistics hint) – starting from the newest first:

19.3.0.0: (unhinted)
--------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |     8 (100)|      5 |00:00:00.01 |       9 |
|*  1 |  VIEW                         |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|     8   (0)|      5 |00:00:00.01 |       9 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |      5 |     3   (0)|      5 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

You can see an index_desc() hint in the output, but it has been commented out. The key feature to note is that the optimizer has found the path I was hoping to see, and it’s a low-cost path, although there is one oddity in the plan – the E-rows (cardinality estimate) for the table access doesn’t allow for the stopkey and, since there are no predicates in the query, reports the 250K rows that exist in the table.

For 18.3.0.0 I had to include the hint, and you’ll see why:

18.3.0.0 (hinted with index_desc)
---------------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)



Again we see the plan is possible, but the optimizer’s cardinality estimate for the hinted index scan is 250K rows – the full size of the index, and it has allowed for that in the cost of the query. So the cost of this plan is high and in the absence of the hint the optimizer would have used a full tablescan with sort.

Finally we get down to 12.2.0.1 – and I’ve shown the hinted and unhinted plans.

12.2.0.1 (hinted index_desc)
-----------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)


12.2.0.1 Unhinted
------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 1374242431

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      1 |        |  1102 (100)|      5 |00:00:00.24 |     822 |       |       |          |
|*  1 |  VIEW                    |         |      1 |      5 |  1102  (10)|      5 |00:00:00.24 |     822 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|         |      1 |    250K|  1102  (10)|      5 |00:00:00.24 |     822 |  2048 |  2048 | 2048  (0)|
|   3 |    TABLE ACCESS FULL     | LANCERS |      1 |    250K|   132  (16)|    250K|00:00:00.13 |     822 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

As you can see, 12.2.0.1 and 18.3.0.0 behave exactly the same way when hinted – the path is acceptable, but the cost is high. Consequently when I remove the hint the optimizer switches to using a full tablescan with sort because its cost is lower (thanks, in part, to the pattern in the data) than the indexed access path.

Summary

Two thoughts to take away from this note.

  • First, there were two possible execution plans for the same query and the optimizer in versions below 19c was picking the one that was clearly a bad idea. The presence of alternatives, though, means that the patterns in the data, the index definition and statistics (especially the clustering_factor), the number of rows to fetch, and various other optimizer settings may leave you in the unlucky position that the optimizer’s arithmetic is on the boundary between the two plans and it switches randomly between them from day to day.
  • Secondly, when you upgrade to 19c the optimizer seems to be more likely to pick the indexed access path for a query like this – and that will probably be a good thing, but in a few cases it might turn out to be a bad thing.

November 9, 2020

I wish

Filed under: CBO,Execution plans,Oracle,sorting,Wishlist — Jonathan Lewis @ 12:01 pm GMT Nov 9,2020

Here’s a lovely little mechanism new to Postgres 13 that can minimise sorting costs: the “incremental sort”. It would be nice to see it in Oracle as well, since it could make an enormous difference to “fetch first N” queries.

The concept is simple – if a rowsource moving up a plan is known to be in “partially sorted” order when it reaches a sort operation, the optimizer can choose whether to sort the entire rowsource in one go or to sort it in batches as it arrives.

For example if you have a query which has “order by cola, colb” as its final clause and the plan has used an index range scan on an index on just (cola) then the “sort order by” operation can accept the rows for the first value of cola, sort them by colb and pass them on, then accept the rows for the second value of cola, sort them by colb and pass them on, and so on.

Better still, if you have an “order by table1.colA, table2.colB” and the optimizer used an indexed access path on table1.colA and a nested loop into table2, then the optimizer will still recognize that the generated data is already partially sorted, and sort each batch of rows for a given table1.colA to order it by table2.colB. (Oracle has a mechanism for dealing with sorted hash clusters that is roughly similar.)

Obviously the benefit is that you avoid doing a very large sort that might spill to disc; slightly less obvious is that you might avoid sorting the whole data set if you have a “fetch first N” query. Here’s an Oracle model setting up a demonstration of the principle:

rem
rem     Script:         fetch_first_postgres_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem

create table t1 as
with generator(id) as (
        select 1 from dual
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id,10)             small_vc,
        rpad('x',50,'x')        padding
from
        generator
/

alter table t1 modify n1 not null;

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(n1);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

drop index t1_i1;
create index t1_i1 on t1(n1, n2);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

I’ve created a table with 100,000 rows where the value of n1 is repeated 10 times and well-clustered, while for each value of n1, the n2 column has 10 distinct values (not necessarily in order thanks to the mod(,13)). Using this data set I’ve executed the same query three times – selecting the rows for 6 consecutive values of n1, ordering by n1, n2.

The first test will have to do a tablescan, the second can use the index I’ve created on (n1) but will have to do a sort (after visiting the table), the third can walk the index I’ve created on (n1,n2) and complete without sorting.

Here, in order, are the three execution plans:

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    70 |  4830 |   145   (9)| 00:00:01 |
|   1 |  SORT ORDER BY     |      |    70 |  4830 |   145   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T1   |    70 |  4830 |   144   (8)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N1"<=30 AND "N1">=25)


----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                       |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    70 |  4830 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1">=25 AND "N1"<=30)


-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    70 |  4830 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    70 |  4830 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1">=25 AND "N1"<=30)

Having created the Oracle model I asked Phil Florent (who had prompted this note by emailing me a question about why Postgres 13 was executing a “Fetch First” query so much faster than Oracle 19; the basic answer is at this URL) if he would run it under Postgres and send me the execution plans.

The code required two changes – the first to handle the change in dialect, the second to supply a change in scale because my model produced such a small output that Postgres didn’t bother to use the new feature. Here’s the modified SQL to generate the original test data:

create table t1 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)       small_vc,
        rpad('x',50,'x')        padding
from
        generator;

And the three plans (so that you can compare the content and style of output with Oracle plans) that went with this small data set – first the no-index plan, then the plan for the (n1) index, then the plan with (n1,n2) indexed:

                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
Sort  (cost=2835.85..2836.00 rows=62 width=74) (actual time=19.534..19.600 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Seq Scan on t1  (cost=0.00..2834.00 rows=62 width=74) (actual time=0.067..19.417 rows=60 loops=1)
         Filter: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))
         Rows Removed by Filter: 99940

Planning Time: 0.351 ms
Execution Time: 19.703 ms


                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Sort  (cost=11.50..11.66 rows=62 width=74) (actual time=0.224..0.289 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Index Scan using t1_i1 on t1  (cost=0.42..9.66 rows=62 width=74) (actual time=0.024..0.113 rows=60 loops=1)
         Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.665 ms
Execution Time: 0.391 ms



                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Index Scan using t1_i1 on t1  (cost=0.42..114.66 rows=62 width=74) (actual time=0.022..0.155 rows=60 loops=1)
   Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.690 ms
Execution Time: 0.259 ms
 

As you can see, there’s very little difference between Oracle’s plans and Postgres’ plans in this example. (Though it’s rather nice to see what extra details appear in the Postgres plans, which were generated with the equivalent of Oracle’s “statistics_level = all” setting.)

The middle plan shows us that Postgres didn’t use the “incremental sort” – but it’s useful to see it anyway so that we can compare the structure of the plan when we increase the volume of data – which the following script is for:

create table t2 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 10000000
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)             small_vc,
        rpad('x',50,'x')        padding
from
        generator;
 
create index t2_i1 on t2(n1);

explain analyze
select  *
from    t2
where   n1 between 25000 and 30000
order by
        n1, n2
;

analyze t2;

Now we have 10M rows, still with 10 rows per value of n1, and our query requests 5,001 values of n1, so 50,010 rows in total. With that much data the optimizer decided to use the incremental sort rather than sorting the whole result set in one go:

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Incremental Sort  (cost=0.49..4532.57 rows=52677 width=74) (actual time=0.160..164.125 rows=50010 loops=1)
   Sort Key: n1, n2
   Presorted Key: n1
   Full-sort Groups: 1251  Sort Method: quicksort  Average Memory: 30kB  Peak Memory: 30kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.055..61.663 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.197 ms
Execution Time: 204.490 ms

Note, particularly, the “Presorted Key” line leading to the “Sort Key” line. We can also check the plan without the incremental sort with a set command to disable the feature:

set enable_incremental_sort = off;

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Sort  (cost=8812.64..8944.33 rows=52677 width=74) (actual time=181.487..227.735 rows=50010 loops=1)
   Sort Key: n1, n2
   Sort Method: external merge  Disk: 4128kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.070..74.975 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.239 ms
Execution Time: 266.325 ms

Comparing the times for these two queries there isn’t really much difference – but you will notice that the old approach has had to dump 4MB to disc while the incremental sort gets the job done in 30kB of memory, which may be an important difference in other circumstances.

More important than a tiny time difference in this trivial example is the fact that Phil Florent’s original question was:

“Why is this two-table join with ‘fetch first 5’ taking 28 seconds to complete in Oracle when it takes less than one millisecond to complete in Postgres?”

The answer was:

  • Oracle is doing a hash join that produces a 25M row result set, sorting it to do an analytic row_number() call to get the first 5 rows.
  • Postgres is doing a nested loop join, fetching the first row(s) from the first table in the right order by index range scan, then fetching the matching rows from the second table and doing an incremental sort on those rows; then fetching the next row(s) and repeating the process – stopping after it’s got through a total of 5 rows instead of generating and sorting 25M rows.
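
Working backwards from the plan that follows, the original query was presumably of roughly this shape (table and column names are my inventions, chosen to match the obfuscated plan):

select  table1.col1, table2.col2
from    table1
join    table2
on      table2.id = table1.id
order by
        table1.col1 desc, table2.col2 desc
fetch first 5 rows only;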

Here’s the plan (with obfuscated table names):

                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Limit  (cost=284.36..287.21 rows=5 width=10)
   ->  Incremental Sort  (cost=284.36..14261803.18 rows=25000000 width=10)
         Sort Key: table1.col1 DESC, table2.col2 DESC
         Presorted Key: table1.col1
         ->  Nested Loop  (cost=1.00..13294209.39 rows=25000000 width=10)
               ->  Index Scan using table1_col1_idx on table1  (cost=0.56..1300593.47 rows=25000000 width=8)
               ->  Index Scan using table2.pk_col on table2 (cost=0.43..0.48 rows=1 width=6)
                     Index Cond: (id = table1.id)

You’ll notice in this example that the incremental sort can take advantage of the optimizer’s knowledge of the index definitions whether the sort is ascending or descending.

You’ll also notice that Postgres has the same problem as Oracle when it comes to coping with Fetch First (or, in Oracle’s case, rownum <= N and optimizer_mode = first_rows_N). Even when it “knows” that a query is going to stop fetching data very early the plan still reports 25M rows as the expected volume of data.
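
For reference, the classic Oracle pattern referred to here pushes the order by into an inline view before applying the rownum predicate – a sketch with hypothetical names:

select  *
from    (
        select  col1, col2
        from    some_table
        order by
                col1 desc, col2 desc
        )
where
        rownum <= 5
;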

Summary

Postgres 13 has a wonderful mechanism for optimising sorts that can make a huge difference to “first rows” queries and even basic “order by” clauses involving join query result sets.

Footnote

Browsing the Internet for documentation and comment on the incremental sort I found the following. Given my level of ignorance about Postgres I can’t comment on the completeness or correctness of the information, but it looked good to me, and I found its comments about this feature very informative.

October 10, 2020

Interval Oddity

Filed under: CBO,Execution plans,Oracle,Partitioning — Jonathan Lewis @ 2:51 pm BST Oct 10,2020

Interval partitioning is a popular strategy for partitioning date-based data. It’s an enhanced variant of range partitioning that allows you to define a starting partition and an interval that should be used to derive the high values for all subsequent partitions – and Oracle doesn’t even have to create intervening partitions if you insert data that goes far beyond the current partition: it automatically creates exactly the right partition (with the correct high_value and correctly inferred lower boundary) for the incoming data and behaves as if the intervening partitions will become available when they’re needed at some later point in time. So there’s no need for DBAs to work out actual partition high_values, no need to ensure that all the partitions you need out into the future have been pre-created, and no more batch processes crashing with Oracle error ORA-14400: inserted partition key does not map to any partition.

But there’s a surprising difference between traditional range partitioning and the newer interval partitioning that will increase CPU usage in some cases and may (though I haven’t yet investigated this in sufficient detail to create an example) lead to variations in execution plans.

To demonstrate the difference I’m going to create two tables with the same structure and content then run the same query against them and show you the resulting execution plans. Here’s the code to create and populate the tables:

rem
rem     Script:         interval_or_range.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem


create table t_interval(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
interval (numtoyminterval(1,'MONTH'))
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy'))
        )
;

create table t_range(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy')),
                partition start_p2 values less than (to_date('01-Feb-2020','dd-mon-yyyy')),
                partition start_p3 values less than (to_date('01-Mar-2020','dd-mon-yyyy')),
                partition start_p4 values less than (to_date('01-Apr-2020','dd-mon-yyyy')),
                partition start_p5 values less than (to_date('01-May-2020','dd-mon-yyyy')),
                partition start_p6 values less than (to_date('01-Jun-2020','dd-mon-yyyy')),
                partition start_p7 values less than (to_date('01-Jul-2020','dd-mon-yyyy')),
                partition start_p8 values less than (to_date('01-Aug-2020','dd-mon-yyyy')),
                partition start_p9 values less than (to_date('01-Sep-2020','dd-mon-yyyy'))
        )
;

insert into t_range select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue  
;

insert into t_interval select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_RANGE',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_INTERVAL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve used composite partitioned tables in this example but the same anomaly appears with simple partitioning – which you can test by commenting out the “subpartition by ..” lines. The two tables have partitions defined to hold a month’s data. I’ve inserted a few rows into most of the partitions, and ensured that I haven’t accidentally attempted to insert data that falls outside the legal range of the table with the predefined partitions.

To show that the tables are nominally identical here’s the list of partitions with their high values and number of rows:

column table_name       format a15
column partition_name   format a15
column high_value       format a80

break on table_name skip 1

select
        table_name, partition_name, num_rows, high_value
from
        user_tab_partitions
where
        table_name in ('T_INTERVAL','T_RANGE')
order by
        table_name, partition_name
;


TABLE_NAME      PARTITION_NAME    NUM_ROWS HIGH_VALUE
--------------- --------------- ---------- --------------------------------------------------------------------------------
T_INTERVAL      START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10722              30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10723              29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10724              31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10725              30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10726              31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10727              30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10728              31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10729              28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

T_RANGE         START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P2                30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P3                29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P4                31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P5                30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P6                31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P7                30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P8                31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P9                28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


18 rows selected.

With this setup we can now run a simple query against the two tables using a where clause that is an exact match for the partition definition for the July data:

set serveroutput off

select 
        count(*) 
from 
        t_range 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

select 
        count(*) 
from 
        t_interval 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

I’ve left in two versions of the date predicates – the “ANSI” style, and a strictly formatted “to_date()” style that is still the approach most commonly used in Oracle systems. The effect on the execution plans is the same for both variations of the date declaration, and here are the two execution plans – reported from an instance of 19.3.0.0:

---------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |         |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|         |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|   3 |    TABLE ACCESS FULL    | T_RANGE |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
---------------------------------------------------------------------------------------------------


------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |            |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|            |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|*  3 |    TABLE ACCESS FULL    | T_INTERVAL |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(("ORDER_DATE">=TO_DATE(' 2020-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DATE"<TO_DATE(' 2020-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

It’s quite possible that you won’t notice the difference between these plans, even though I’ve put them directly one below the other; and it’s very likely that most people wouldn’t think about the difference if they didn’t have the two plans available at the same time to compare.

If you can’t spot the difference it’s because I’ve deliberately arranged them in a way that might fool you into not looking carefully enough.

If you’ve spotted the difference it’s probably because you’ve examined the Predicate Information section. There isn’t one for the t_range example but there is one for the t_interval example – and it was a little sneaky of me to make it easy for you to assume that the one set of predicates I showed was common to the two plans.

Since the plans have been pulled from memory (v$sql_plan) it seems likely that they are truthful and the query based on the interval partitioning is actually checking every row in the partition against two date values. In my example that’s not going to make much difference to performance, but in a data warehouse with millions of rows per partition there’s scope for a noticeable increase in CPU between the two queries – especially since the block reads are likely to be direct path, bypassing much of the usual (CPU-intensive) buffer cache activity.

Interestingly when I checked the 10053 trace files for the two queries the CPU cost for the pair was identical. Normally you expect to see at least a little CPU cost (even if it’s only tens of units out of millions) for checking a predicate. This raises the question – does the optimizer allow a cost for the t_range table for an event that isn’t going to happen, or is it reporting an event that isn’t going to happen for the t_interval table? [Edit: easy enough to check by doing the test on a simple heap table that clones the data from that one partition, and checking the calculated cost with and without the predicate]
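
That check might be a script like this – a sketch, where I’ve picked the name of the July partition from the listing above and would compare the Cost column of the two resulting plans:

create table t_heap as
select  *
from    t_interval partition (sys_p10728)
;

execute dbms_stats.gather_table_stats(null,'t_heap')

explain plan for
select  count(*)
from    t_heap
;

select * from table(dbms_xplan.display);

explain plan for
select  count(*)
from    t_heap
where   order_date >= date '2020-07-01'
and     order_date <  date '2020-08-01'
;

select * from table(dbms_xplan.display);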

There’s a further ramification to this anomaly, relating to the question on the Oracle-L list server that prompted the investigation. What happens on Exadata where the predicates can be offloaded to storage?

The original question wasn’t actually about the appearance (or not) of the predicates, it was about an unexpected cardinality estimate for a query involving two equi-partitioned tables, so this note has gone completely off-topic from the question; but another strange detail about the predicates showed up when I suggested the creation of a column group on the join columns. A (redundant) predicate that had been offloaded to storage stopped being offloaded; here, from the original posting with the original tables, are the two different Predicate Information sections that appeared – the first without the column group, the second after the column group had been created and its stats collected:

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - storage("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Predicate Information (identified by operation id):
---------------------------------------------------
   4 -
access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

Notice how the storage() predicate that appears at operation 8 in the first set has disappeared from the second even though operation 6 manages to retain its storage() predicate throughout.

I’ve modelled a further example of odd behaviour using two pairs of tables – one pair using range/hash partitioning the other interval/hash partitioning. I won’t reproduce it here but the problem of redundant predicates appearing and then disappearing gets even stranger.

I haven’t yet produced an example where the unexpected predicate behaviour has affected the costs or cardinalities – but I’ve only spent a couple of hours playing around with well-formed examples: it’s possible that in badly formed examples (e.g. with statistical inconsistencies) the side effect could mean that two notionally identical queries produce different numbers and different plans because they end up with different predicates in the final transformed query.

Footnote

Following an email from David Kurtz, it occurred to me that I should have made it clear that the disappearance of predicates on the partition key is expected behaviour when the predicates are clearly synchronised with the partition boundaries. The behaviour for the interval partitioning is the oddity, the behaviour for the “normal” range partitioning is the standard.

October 9, 2020

Inline Hint

Filed under: 18c,CBO,Execution plans,Hints,Oracle,subqueries,Subquery Factoring,Tuning — Jonathan Lewis @ 12:46 pm BST Oct 9,2020

If you’ve ever used subquery factoring (“with” subqueries, or common table expressions (CTEs) as they are often called) then you’re probably aware of the (undocumented) hints /*+ materialize */, which forces Oracle to create a local temporary table to hold the result of the subquery for subsequent use, and /*+ inline */, which forces the optimizer to copy the text of the subquery into the body of the query before starting the optimisation phase.

There’s a small, but important, enhancement to these hints that appeared in Oracle 18. Like so many other hints in Oracle they can now have a query block name as a “parameter”, so you can use them at the top level of your query. Here’s some code to demonstrate:

rem
rem     Script:         inline_hint.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             18.3.0.0
rem             12.2.0.1  -- hints don't have any effect
rem

create table t1
as
select  *
from    all_objects
where   rownum <= 10000  -- > comment to avoid wordpress format issue
/

create index t1_i1 on t1(object_id);

create table t2
as
select  *
from    t1
/

create index t2_i1 on t2(object_id);

spool inline_hint.lst


explain plan for
with v1 as (
        select 
                /*+ 
                        qb_name(cte) 
                */ 
                object_id, object_type, object_name 
                from t2 gtt1
                where object_id is not null
)
select
        /*+
                qb_name(main)
                inline(@cte)
        */
        t1.object_id,
        t1.object_name
from
        t1
where
        exists (
                select
                        null
                from
                        v1      v1a
                where
                        v1a.object_id = t1.object_id
                and     v1a.object_type = 'TABLE'
        )
and     exists (
                select
                        null
                from
                        v1      v1b
                where
                        v1b.object_id = t1.object_id
                and     v1b.object_name like 'WRI%'
        )
and
        t1.object_id between 100 and 200
/

select * from table(dbms_xplan.display(format=>'alias'));

explain plan for
with v1 as (
        select 
                /*+ 
                        qb_name(cte) 
                */ 
                object_id, object_type, object_name 
                from t2 gtt1
                where object_id is not null
)
select
        /*+
                qb_name(main)
                materialize(@cte)
        */
        t1.object_id,
        t1.object_name
from
        t1
where
        exists (
                select
                        null
                from
                        v1      v1a
                where
                        v1a.object_id = t1.object_id
                and     v1a.object_type = 'TABLE'
        )
and
        t1.object_id between 100 and 200
/

select * from table(dbms_xplan.display(format=>'alias'));

The first of these two queries uses the factored subquery twice so, by default, it will create a “cursor duration memory” temporary table to hold the results of the subquery and then use that temporary table twice in the execution plan.

Conversely the second query uses the factored subquery just once, so the optimizer’s default action will be to copy the text into the body of the main query and optimize the whole thing as a single query block.

To reverse the default behaviour in versions of Oracle up to 12.2.0.1 (though later patch sets may include the 18c enhancements) you could add the /*+ inline */ or /*+ materialize */ hints respectively to the factored subqueries; but in my demonstration you can see that I’ve given the factored subquery a query block name and added the relevant hint to the main query block, passing in the query block name of the factored subquery – hence /*+ inline(@cte) */ and /*+ materialize(@cte) */.
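
For comparison, the pre-18c placement embeds the hint in the factored subquery itself – a sketch, using the t2 table from the script above:

with v1 as (
        select
                /*+ materialize */
                object_id, object_type, object_name
        from    t2
        where   object_id is not null
)
select  v1a.object_id, v1a.object_name
from    v1 v1a
where   v1a.object_type = 'TABLE'
;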

Here – from 19.3 – are the resulting execution plans (with some cosmetic editing) – first the plan with the inline() hint:

------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |       |     1 |    63 |     9  (12)| 00:00:01 |
|   1 |  NESTED LOOPS SEMI                     |       |     1 |    63 |     9  (12)| 00:00:01 |
|   2 |   NESTED LOOPS                         |       |     1 |    50 |     7  (15)| 00:00:01 |
|   3 |    SORT UNIQUE                         |       |     1 |    25 |     4   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |    25 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                  | T2_I1 |    48 |       |     2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID BATCHED | T1    |     1 |    25 |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN                   | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
|*  8 |   TABLE ACCESS BY INDEX ROWID BATCHED  | T2    |     1 |    13 |     2   (0)| 00:00:01 |
|*  9 |    INDEX RANGE SCAN                    | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$06B48120
   4 - SEL$06B48120 / GTT1@CTE
   5 - SEL$06B48120 / GTT1@CTE
   6 - SEL$06B48120 / T1@MAIN
   7 - SEL$06B48120 / T1@MAIN
   8 - SEL$06B48120 / GTT1@CTE
   9 - SEL$06B48120 / GTT1@CTE

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_NAME" LIKE 'WRI%')
   5 - access("OBJECT_ID">=100 AND "OBJECT_ID"<=200)
   7 - access("OBJECT_ID"="T1"."OBJECT_ID")
       filter("T1"."OBJECT_ID"<=200 AND "T1"."OBJECT_ID">=100)
   8 - filter("OBJECT_TYPE"='TABLE')
   9 - access("OBJECT_ID"="T1"."OBJECT_ID")
       filter("OBJECT_ID"<=200 AND "OBJECT_ID">=100)

As you can see Oracle has copied the subquery text into the main body of the query and then optimized it to produce a three-table join. One of the subqueries has been unnested into an aggregate view (operations 3, 4, 5), the other has been transformed into a semi-join.

In passing you’ll also notice that the optimizer has used transitive closure to add the range predicate on t1 to both occurrences of the t2 table.

And here’s the plan for the query with the single use of the subquery and materialize() hint:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |    48 |  2448 |    39   (8)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |       |       |            |          |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6611_F53A566 |       |       |            |          |
|   3 |    TABLE ACCESS FULL                     | T2                         | 10000 |   322K|    27   (8)| 00:00:01 |
|*  4 |   HASH JOIN SEMI                         |                            |    48 |  2448 |    13  (16)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED   | T1                         |    48 |  1200 |     4   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                     | T1_I1                      |    48 |       |     2   (0)| 00:00:01 |
|*  7 |    VIEW                                  |                            | 10000 |   253K|     8  (13)| 00:00:01 |
|   8 |     TABLE ACCESS FULL                    | SYS_TEMP_0FD9D6611_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$A3F38ADC
   2 - CTE
   3 - CTE          / GTT1@CTE
   5 - SEL$A3F38ADC / T1@MAIN
   6 - SEL$A3F38ADC / T1@MAIN
   7 - SEL$AA28F105 / V1A@SEL$1
   8 - SEL$AA28F105 / T1@SEL$AA28F105

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V1A"."OBJECT_ID"="T1"."OBJECT_ID")
   6 - access("T1"."OBJECT_ID">=100 AND "T1"."OBJECT_ID"<=200)
   7 - filter("V1A"."OBJECT_TYPE"='TABLE' AND "V1A"."OBJECT_ID">=100 AND "V1A"."OBJECT_ID"<=200)

In this plan the optimizer has created an in-memory temporary table and then used it in the existence subquery – which it has then transformed into a semi-join, so we have a query block with the name SEL$A3F38ADC; but we also see that the query block CTE still exists, labelling the operations that Oracle used to populate the temporary table.

It is an interesting (and irritating) detail that when we look at object aliases we see (operation 8) that Oracle has given the temporary table the alias of t1 – which is just a little confusing since I actually have a table called t1!

Next Steps

Being able to nominate a query block for the inline() and materialize() hints may be of great help in some cases (there’s a recent example on the Oracle Developer Forum (may need a MOS login) where it might make a huge difference to the performance of a particular query without requiring a rewrite of the SQL).

But there are a couple of details to investigate. First, I had a query block name built into my factored subquery – what happens if the author of the SQL didn’t include a query block name?

Before I’d added the inline() hint and query block names in the first example above this is what the plan looked like:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |    48 |  6240 |    48  (11)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |       |       |            |          |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6612_F53A566 |       |       |            |          |
|   3 |    TABLE ACCESS FULL                     | T2                         | 10000 |   322K|    27   (8)| 00:00:01 |
|*  4 |   HASH JOIN SEMI                         |                            |    48 |  6240 |    21  (15)| 00:00:01 |
|*  5 |    HASH JOIN SEMI                        |                            |    48 |  4992 |    13  (16)| 00:00:01 |
|   6 |     TABLE ACCESS BY INDEX ROWID BATCHED  | T1                         |    48 |  1200 |     4   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                    | T1_I1                      |    48 |       |     2   (0)| 00:00:01 |
|*  8 |     VIEW                                 |                            | 10000 |   771K|     8  (13)| 00:00:01 |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6612_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
|* 10 |    VIEW                                  |                            | 10000 |   253K|     8  (13)| 00:00:01 |
|  11 |     TABLE ACCESS FULL                    | SYS_TEMP_0FD9D6612_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$A317D234
   2 - SEL$1
   3 - SEL$1        / GTT1@SEL$1
   6 - SEL$A317D234 / T1@SEL$2
   7 - SEL$A317D234 / T1@SEL$2
   8 - SEL$D67CB2D2 / V1B@SEL$4
   9 - SEL$D67CB2D2 / T1@SEL$D67CB2D2
  10 - SEL$D67CB2D3 / V1A@SEL$3
  11 - SEL$D67CB2D3 / T1@SEL$D67CB2D3

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V1A"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - access("V1B"."OBJECT_ID"="T1"."OBJECT_ID")
   7 - access("T1"."OBJECT_ID">=100 AND "T1"."OBJECT_ID"<=200)
   8 - filter("V1B"."OBJECT_NAME" LIKE 'WRI%' AND "V1B"."OBJECT_ID">=100 AND "V1B"."OBJECT_ID"<=200)
  10 - filter("V1A"."OBJECT_TYPE"='TABLE' AND "V1A"."OBJECT_ID">=100 AND "V1A"."OBJECT_ID"<=200)

As you can see, the factored subquery (operations 2 and 3) has the query block name of sel$1 and the main query (operations 6 and 7, where the real t1 is used) has the query block name sel$2. So without giving the subquery a name I could have used the hint /*+ inline(@sel$1) */ in the main query block.
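
In other words, a sketch of the unnamed-query-block variant would be:

with v1 as (
        select  object_id, object_type, object_name
        from    t2
        where   object_id is not null
)
select
        /*+ inline(@sel$1) */
        t1.object_id, t1.object_name
from    t1
where
        exists (
                select null from v1 v1a
                where v1a.object_id = t1.object_id
                and   v1a.object_type = 'TABLE'
        )
and     exists (
                select null from v1 v1b
                where v1b.object_id = t1.object_id
                and   v1b.object_name like 'WRI%'
        )
and     t1.object_id between 100 and 200
/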

This takes us on to the second point that needs investigation. If you’ve looked at the example on the Oracle Developer Forum you will have seen that there’s an SQL statement that references a stored view and the factored subquery of interest is defined in the view. This means we might be able to edit the query that calls the view to include a hint referencing the query block inside the view – but then what do we do if we can’t edit the main query itself?

To be investigated (1) – would the inline() hint with nominated query block work if the factored subquery was inside a stored view that we were using in our query?

To be investigated (2) – if (1) works, could we achieve the same result by using an SQL Patch to attach the hint to the main query text without editing the main query?
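
As a sketch of the second idea: from 12.2 onwards dbms_sqldiag.create_sql_patch can attach hint text to a statement without editing it. The statement text, patch name and query block name below are all hypothetical:

declare
        v_patch varchar2(128);
begin
        v_patch := dbms_sqldiag.create_sql_patch(
                sql_text  => q'{select col1 from view_with_cte where col2 = 99}',       -- hypothetical query text
                hint_text => 'inline(@cte)',                                            -- assumes we know the query block name
                name      => 'cte_inline_patch'
        );
        dbms_output.put_line('Created patch: ' || v_patch);
end;
/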

Update (Oct 2020)

It turns out that I discovered this enhancement a few months ago while doing some experimentation with recursive subquery factoring.

Update Nov 2020

A blog note from Nenad Noveljic warns of a surprising ORA-07445 if you get too trigger-happy with the inline() and materialize() hints.

October 6, 2020

Index FFS Cost 2

Filed under: Bugs,CBO,Indexing,Oracle,Statistics — Jonathan Lewis @ 12:58 pm BST Oct 6,2020

Here’s a little puzzle, highlighting a “bug that’s not a bug” that was “fixed but not fixed” some time in the 10.2 timeline. (If you want specifics about exactly when the fix became available and which patches include it, they’re in MOS – Bug 5099019: DBMS_STATS DOESN’T COUNT LEAF_BLOCKS CORRECTLY.)

Running 19.3.0.0, with the system statistics as shown:


begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',500);
end;
/

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                cascade          => true,
                estimate_percent => 100,
                method_opt       => 'for all columns size 1'
        );
end;
/

select
        index_name, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
;

alter system flush buffer_cache;

set autotrace traceonly 

select
        count(small_vc)
from    t1
where   small_vc is not null
;

set autotrace off

For my test run there are no uncommitted transactions, the gathering of table stats with cascade to indexes means all the blocks are clean so there’s no undo activity needed to produce an answer for the final query, and all that the query is going to do is run an index fast full scan to count the number of rows in the table because there’s an index on just (small_vc).

The system stats tell us that Oracle is going to cost the index fast full scan by taking the leaf block count, dividing by 16 (the MBRC), multiplying by 2 (mreadtim / sreadtim), and adding a bit for CPU usage. So here are the results from running the query and generating the plan with autotrace:


PL/SQL procedure successfully completed.

INDEX_NAME           LEAF_BLOCKS
-------------------- -----------
T1_V1                        153

1 row selected.

System altered.

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 274579903

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |    22   (5)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_V1 |  9999 |   107K|    22   (5)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("SMALL_VC" IS NOT NULL)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1530  consistent gets
       1522  physical reads
          0  redo size
        558  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Points to notice:

  • The cost is (as predicted): ceiling(153 leaf_blocks / 16 MBRC) * 2 = 20, plus a bit of CPU – hence the 22 reported.
  • The number of physical blocks read is 1522, not the 153 leaf blocks reported in index stats
  • No recursive SQL, and I did say that there were no undo / read consistency issues to worry about

There is clearly an inconsistency between the size of the index and the (100% gathered) leaf_block count that the optimizer is using, and this is a point I made many years ago in “Cost Based Oracle – Fundamentals”. To gather the leaf block count Oracle has looked at the number of leaf blocks that, after cleanout, hold any index entries – and here’s how I prepared this demo to confuse the issue:


rem
rem     Script:         index_ffs_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2012
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        rownum                  id,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid wordpress format issue
;

create index t1_v1 on t1(small_vc) pctfree 80;

delete
        from t1
where
        id between 5000 and 95000
;

commit;

When I gathered stats on the index I had just deleted 90% of the data from the table – so 90% of the leaf blocks in the index were empty but still in the index structure, and only 10% of the leaf blocks held any current index entries. This is why Oracle reported leaf_blocks = 153 while the index fast full scan had to read through nearly 1530 blocks.

This is one of those contradictory problems – for an index fast full scan you need to know the size of the segment that will be scanned because that’s the number of blocks you will have to examine; but in most cases the number of populated index leaf blocks is the number you need to know about when you’re trying to cost an index range scan. Of course in most cases the nature of Oracle’s implementation of B-tree indexes will mean that the two counts will be within one or two percent of each other. But there are a few extreme cases where you could get an index into a state where the segment size is large and the data set is small and you don’t want the optimizer to think that an index fast full scan will be low-cost.

Oracle produced a mechanism for getting the segment block count captured as the leaf_blocks statistic late in 10.2, but it’s not enabled by default, and it’s not something you can tweak into a query with the opt_param() hint. Fix control 5099019 has the description: “set leaf blocks to the number of blocks in the index extent map”, but it’s not a run-time fix, it’s a fix that has to be in place when you gather the index stats – thus:

alter session set "_fix_control"='5099019:1';

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                cascade          => true,
                estimate_percent => 100,
                method_opt       => 'for all columns size 1'
        );
end;
/

select
        index_name, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
;

alter system flush buffer_cache;

set autotrace traceonly 

select
        count(small_vc)
from    t1
where   small_vc is not null
;

set autotrace off

===================================================

Session altered.

PL/SQL procedure successfully completed.

INDEX_NAME           LEAF_BLOCKS
-------------------- -----------
T1_V1                       1555

1 row selected.

System altered.

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 274579903

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |   201   (3)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_V1 |  9999 |   107K|   201   (3)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("SMALL_VC" IS NOT NULL)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1530  consistent gets
       1522  physical reads
          0  redo size
        558  bytes sent via SQL*Net to client
        439  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Session altered.

As you can see Oracle has now set leaf_blocks = 1555. This is actually the number of blocks below the highwater mark in the segment. A further check with the dbms_space and dbms_space_usage packages showed that the number of blocks in the index structure was actually 1,523, with a further 32 blocks that we could infer were space management blocks. Of the 1,523 blocks in the index structure 157 were reported as “FULL” while 1,364 were reported as FS2, which possibly ought to mean “available for re-use” (though still in the structure), although this didn’t quite seem to be the case a few years ago.
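
For reference, the check I mean is a call to dbms_space.space_usage – a sketch for an ASSM tablespace, using the index from the script above (set serveroutput on to see the figures):

declare
        m_unf   number;  m_unfb  number;
        m_fs1   number;  m_fs1b  number;
        m_fs2   number;  m_fs2b  number;
        m_fs3   number;  m_fs3b  number;
        m_fs4   number;  m_fs4b  number;
        m_full  number;  m_fullb number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'T1_V1',
                segment_type            => 'INDEX',
                unformatted_blocks      => m_unf,   unformatted_bytes => m_unfb,
                fs1_blocks              => m_fs1,   fs1_bytes         => m_fs1b,
                fs2_blocks              => m_fs2,   fs2_bytes         => m_fs2b,
                fs3_blocks              => m_fs3,   fs3_bytes         => m_fs3b,
                fs4_blocks              => m_fs4,   fs4_bytes         => m_fs4b,
                full_blocks             => m_full,  full_bytes        => m_fullb
        );
        dbms_output.put_line('FULL blocks: ' || m_full || '   FS2 blocks: ' || m_fs2);
end;
/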

Although Oracle has supplied a fix to a problem I highlighted in CBO-F, I can understand why it’s not enabled by default, and I don’t think I’d want to take advantage of it in a production system given the way it’s a session setting at stats gathering time. On the few occasions it’s likely to matter I’d probably add hints to the SQL to stop the optimizer from using the index incorrectly, or do something at a stats-gathering moment to call dbms_stats.set_index_stats().
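
That set_index_stats() call might be a sketch like this – the leaf-block value is the segment-level figure reported above:

begin
        dbms_stats.set_index_stats(
                ownname         => user,
                indname         => 'T1_V1',
                numlblks        => 1555        -- blocks below the segment high water mark
        );
end;
/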

 

October 1, 2020

Index FFS Cost

Filed under: Bugs,CBO,Execution plans,Indexing,Oracle — Jonathan Lewis @ 11:46 am BST Oct 1,2020

There are a number of unexpected issues with the optimizer’s treatment of the index fast full scan, the access path where Oracle ignores the structure of the B-tree and uses multiblock reads to do a brute-force segment scan as if the index were a “skinny table” with a few blocks of irrelevant garbage (i.e. the branch blocks) that could be ignored.

A recent comment made a few days ago on a blog about the optimizer’s “index-join” access path reminded me that I had a few notes to finish and publish that might help some people address performance issues relating to a couple of little-known fixes for optimizer oddities in this area. Today’s “fix” is one that appeared in 10g (or maybe very late in 9i) but never became the default optimizer behaviour (possibly because it’s one of those “not a bug” types of bug with no “always correct” strategy).

The problem revolves around the optimizer_index_cost_adj parameter (which is another reason why the fix might have been kept out of the standard code base). For index-only execution plans the parameter will apply to the index range scan, full scan, or skip scan, but not to the index fast full scan. Here’s a model to demonstrate this:


rem
rem     Script:         oica_iffs.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2012
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id 
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        lpad(trunc(dbms_random.value(0,10000)),10)      v1,
        rpad('x',20)                                    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid wordpress format issue
;

create index t1_i1 on t1(v1);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                cascade          => true,
                method_opt       => 'for all columns size 1'
        );
end;
/

select blocks from user_tables where table_name = 'T1';
select leaf_blocks from user_indexes where index_name = 'T1_I1';

I’ve created a fairly narrow table with 1M rows, and created an index on that table on the v1 column. In my test there were 5,219 blocks in the table and 3,068 blocks in the index.

I’m now going to run a query to count the rows in the table where v1 is not null and report its execution plan. The first run will be unhinted, the second run will include a hint that forces an index full scan, and the third run will include a hint to force an index fast full scan.


prompt  ==============
prompt  Unhinted query
prompt  ==============

select 
        count(*) 
from
        t1
where
        v1 is not null
;

prompt  ==========
prompt  Index hint
prompt  ==========

select 
        /*+ index(t1) */
        count(*) 
from
        t1
where
        v1 is not null
;

prompt  ==============
prompt  index_ffs hint
prompt  ==============

select 
        /*+ index_ffs(t1) */
        count(*) 
from
        t1
where
        v1 is not null
;

Here are the three execution plans. As you might have predicted the default plan (for my Oracle 19.3) is the index fast full scan:


==============
Unhinted query
==============
-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |   450  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_I1 |  1000K|    10M|   450  (15)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1" IS NOT NULL)

==========
Index hint
==========
--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |    11 |  3159   (3)| 00:00:01 |
|   1 |  SORT AGGREGATE  |       |     1 |    11 |            |          |
|*  2 |   INDEX FULL SCAN| T1_I1 |  1000K|    10M|  3159   (3)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1" IS NOT NULL)

==============
index_ffs hint
==============
-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |   450  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_I1 |  1000K|    10M|   450  (15)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1" IS NOT NULL)

Since the size of the index is less than the size of the table, the cost of the index fast full scan will be lower than the cost of a tablescan; and since an index fast full scan does multiblock reads while the index full scan is assumed to use single block reads, the cost of the index fast full scan will be lower than the cost of the index full scan.

Now we set the optimizer_index_cost_adj to 1 (as happens in a number of “legacy” commercial products) and repeat the experiment. Here are the three plans (reduced to just the body of the plan):

==================
OICA = 1, Unhinted
==================
--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |    11 |    32   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE  |       |     1 |    11 |            |          |
|*  2 |   INDEX FULL SCAN| T1_I1 |  1000K|    10M|    32   (4)| 00:00:01 |
--------------------------------------------------------------------------

====================
OICA = 1, index hint
====================
--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |    11 |    32   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE  |       |     1 |    11 |            |          |
|*  2 |   INDEX FULL SCAN| T1_I1 |  1000K|    10M|    32   (4)| 00:00:01 |
--------------------------------------------------------------------------

========================
OICA = 1, index_ffs hint
========================
-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |   450  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_I1 |  1000K|    10M|   450  (15)| 00:00:01 |
-------------------------------------------------------------------------------

The default plan is now the index full scan, not the index fast full scan; and when we look at the two hinted plans we can see why. The full scan cost has been reduced to 1% of the original cost (as dictated by the setting for optimizer_index_cost_adj); the fast full scan cost has not been adjusted.

So is this a good thing or a bad thing? Different people may answer that question differently. There are clearly going to be cases where changing the setting for optimizer_index_cost_adj (which you shouldn’t have been doing anyway for the last 20 years or so) will result in some plans switching from fast full scans to full scans, with unfortunate consequences;

On the other hand when you do a fast full scan it may do direct path reads, while the full scan will do cached reads, and the cache may be holding a lot of the index leaf blocks already, so some people might prefer the current behaviour;

On the other, other hand an Exadata system can use storage indexes when doing direct path segment scans (even when the segment is an index) so an Exadata fast full scan might be far more desirable than a partially cached full scan.

So no generic right answer, only specific answers for particular sets of circumstances.

Solutions

This issue was addressed some time in the 10g timeline – my notes say the earliest successful tests I did of the fix were on 10.2.0.4, and that it wasn’t available in 9.2.0.8. There are two possibilities – a fix control, and an event.

  • Fix control: 4483286 – described as “Discount FFS cost using optimizer_index_cost_adj”
  • Event: 38085 at level one. I don’t think this event has ever appeared in the oraus.msg file, but it is referenced by bug 4483286

Of the two, setting the fix control is probably the safer option since the event is (apparently) non-existent and there’s always the possibility that the number will be hi-jacked in a future release to do something completely different. It’s also worth noting that the fix control can be applied to an individual statement through the /*+ opt_param() */ hint (there’s an example of the syntax towards the end of another article of mine).
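
For convenience, the statement-level version would be a sketch like this (combining the fix control with the cost adjustment so that the discount has something to work on):

select
        /*+
                opt_param('_fix_control' '4483286:1')
                opt_param('optimizer_index_cost_adj' 1)
        */
        count(*)
from    t1
where   v1 is not null
;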

A little demonstration:


-- alter session set events '38085 trace name context forever, level 1';
alter session set "_fix_control"='4483286:1';
alter session set optimizer_index_cost_adj = 1;

set autotrace traceonly explain

prompt  ==============
prompt  Unhinted query
prompt  ==============

select 
        count(*) 
from
        t1
where
        v1 is not null
;

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_I1 |  1000K|    10M|     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------

With the event or _fix_control set the cost of the index fast full scan drops to 1%, and the path reappears without being hinted.

 

September 9, 2020

Bloom Upgrade

Filed under: 18c,CBO,Joins,Oracle,Tuning — Jonathan Lewis @ 5:18 pm BST Sep 9,2020

It’s a common pattern of Oracle features that they start with various restrictions or limitations that disappear over time. This note is about an enhancement to Bloom filter processing that appeared in the 18.1 optimizer and, for some people, may be a good enough reason for upgrading to a newer version of Oracle. This enhancement came to my attention by way of the Oracle Developer forum in a thread with the title “Bloom filters and view using UNION ALL”, asking how to get a Bloom filter pushed inside a UNION ALL view. The original requirement wasn’t a trivial one so I’ll demonstrate the problem with a very simple example – first the data set:

rem
rem     Script:         bloom_pushdown.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2020
rem
rem     Last tested:
rem             19.3.0.0
rem

create table t1 as select * from all_objects where rownum <= 50000;
create table t2 as select t1.* from t1, (select rownum n1 from dual connect by level <= 4);
create table t3 as select t1.* from t1, (select rownum n1 from dual connect by level <= 4); -- > comment to avoid wordpress format issue

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns object_type size 254'
        );
end;
/

I’ve been a bit lazy here, copying data from the view all_objects. I’ve gathered stats on t1 so that I can generate a histogram on the object_type column because I’m going to query for a rare object_type and I want the optimizer to get a reasonable estimate of the row count. I’m going to hint a parallel query to join t1 to t2 (aliased, trivially, as v1 for reasons that will become apparent soon):

select
        /*+ 
                parallel(2) 
        */
        t1.object_name, v1.object_name
from
        t1,
        t2 v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

In my case the optimizer chooses to do a hash join between these two tables, and creates a Bloom filter to try and minimise the data passing through the data flow operation. The result set in my 12.2.0.1 database is only 16 rows, so it would be nice if the parallel scan could eliminate most of the 200,000 rows in t2 early – here’s the execution plan pulled from memory after running the query with rowsource execution stats enabled:


----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |      1 |        |   371 (100)|     16 |00:00:00.06 |      20 |      0 |       |       |          |
|   1 |  PX COORDINATOR        |          |      1 |        |            |     16 |00:00:00.06 |      20 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)  | :TQ10000 |      0 |     16 |   371   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN           |          |      2 |     16 |   371   (5)|     16 |00:00:00.05 |    6278 |   3988 |  1250K|  1250K|     2/0/0|
|   4 |     JOIN FILTER CREATE | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|*  5 |      TABLE ACCESS FULL | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|   6 |     JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.04 |    4244 |   3988 |       |       |          |
|   7 |      PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.04 |    4244 |   3988 |       |       |          |
|*  8 |       TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|     16 |00:00:00.03 |    4244 |   3988 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
   8 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"V1"."OBJECT_ID"))

We see that Oracle has generated a Bloom filter at operation 4 from the data returned from t1 at operation 5, and then used that Bloom filter at operation 6 to eliminate most of the data from t2 before passing the remaining few rows up to the hash join.
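
As a side note: if you want a run-time measure of how effective a Bloom filter is, the dynamic performance view v$sql_join_filter reports rows probed and filtered for active join filters. A minimal sketch (to be run from a separate session while the query is still executing; treat the column list as an assumption to check against your version):

select
        qc_session_id,
        sql_plan_hash_value,
        filtered,
        probed,
        probed - filtered       rows_passed
from
        v$sql_join_filter
;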

Let’s make the query more interesting – what if you want to use a UNION ALL of t2 and t3 in the query (for example one might be “current data” while the other is “historic data”)? Here’s the query and plan from 12.2.0.1:

select
        /*+ 
                parallel(2) 
        */
        t1.object_name, v1.object_name
from
        t1,
        (select * from t2 union all select * from t3) v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |      1 |        |   667 (100)|     32 |00:00:00.37 |      40 |      0 |       |       |          |
|   1 |  PX COORDINATOR         |          |      1 |        |            |     32 |00:00:00.37 |      40 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |      0 |     32 |   667   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN            |          |      1 |     32 |   667   (5)|     32 |00:00:00.34 |    5125 |   3860 |  1250K|  1250K|     2/0/0|
|*  4 |     TABLE ACCESS FULL   | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|   5 |     VIEW                |          |      2 |    400K|   584   (4)|    400K|00:00:00.52 |    8488 |   7976 |       |       |          |
|   6 |      UNION-ALL          |          |      2 |        |            |    400K|00:00:00.24 |    8488 |   7976 |       |       |          |
|   7 |       PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|    200K|00:00:00.11 |    4244 |   3988 |       |       |          |
|*  8 |        TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|    200K|00:00:00.07 |    4244 |   3988 |       |       |          |
|   9 |       PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|    200K|00:00:00.11 |    4244 |   3988 |       |       |          |
|* 10 |        TABLE ACCESS FULL| T3       |     32 |    200K|   292   (4)|    200K|00:00:00.03 |    4244 |   3988 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   4 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
   8 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
  10 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue

No Bloom filter – so all 400,000 rows feed up the plan and through the hash join. This won’t matter too much for my sub-second tiny data set but on a pair of 50GB tables, with the potential to offload the Bloom filter to storage in Exadata and, perhaps, eliminate 99% of the data at the cell servers, this could make a huge difference to performance.

Since Bloom filters are all about hashing data (in Oracle the standard Bloom filter is the bitmap summarising the build table in a hash join) let’s try pushing the optimizer into a hash distribution for the parallel join to see if that has any effect:


select
        /*+ 
                parallel(2) 
                gather_plan_statistics
                leading(@sel$1 t1@sel$1 v1@sel$1)
                use_hash(@sel$1 v1@sel$1)
                pq_distribute(@sel$1 v1@sel$1 hash hash)
        */
        t1.object_name, v1.object_name
from
        t1,
        (select * from t2 union all select * from t3) v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |      1 |        |   667 (100)|     32 |00:00:00.43 |      60 |      0 |       |       |          |
|   1 |  PX COORDINATOR             |          |      1 |        |            |     32 |00:00:00.43 |      60 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002 |      0 |     32 |   667   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN BUFFERED       |          |      1 |     32 |   667   (5)|     32 |00:00:00.38 |    4000 |   3752 |  2290K|  2082K|     2/0/0|
|   4 |     JOIN FILTER CREATE      | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX RECEIVE             |          |      2 |      4 |    75   (4)|      8 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       PX SEND HYBRID HASH   | :TQ10000 |      0 |      4 |    75   (4)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   7 |        STATISTICS COLLECTOR |          |      2 |        |            |      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|   8 |         PX BLOCK ITERATOR   |          |      2 |      4 |    75   (4)|      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|*  9 |          TABLE ACCESS FULL  | T1       |     26 |      4 |    75   (4)|      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|  10 |     PX RECEIVE              |          |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  11 |      PX SEND HYBRID HASH    | :TQ10001 |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  12 |       JOIN FILTER USE       | :BF0000  |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  13 |        VIEW                 |          |      2 |    400K|   584   (4)|    400K|00:00:00.68 |    8488 |   7976 |       |       |          |
|  14 |         UNION-ALL           |          |      2 |        |            |    400K|00:00:00.59 |    8488 |   7976 |       |       |          |
|  15 |          PX BLOCK ITERATOR  |          |      2 |    200K|   292   (4)|    200K|00:00:00.18 |    4244 |   3988 |       |       |          |
|* 16 |           TABLE ACCESS FULL | T2       |     32 |    200K|   292   (4)|    200K|00:00:00.06 |    4244 |   3988 |       |       |          |
|  17 |          PX BLOCK ITERATOR  |          |      2 |    200K|   292   (4)|    200K|00:00:00.12 |    4244 |   3988 |       |       |          |
|* 18 |           TABLE ACCESS FULL | T3       |     32 |    200K|   292   (4)|    200K|00:00:00.08 |    4244 |   3988 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   9 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue 
       filter("T1"."OBJECT_TYPE"='SCHEDULE') 
  16 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue
  18 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue

We’ve managed to introduce a Bloom filter (which is visible as :BF0000 in the plan, even though there’s no reference to sys_op_bloom_filter() in the predicate information) but there’s a problem: we’re still passing 400,000 rows up the plan and the Bloom filter is only being applied at (or just after) the VIEW operator, discarding all but 66 rows before doing the hash join. It’s an improvement but not ideal; we’d like to see the Bloom filter applied to each of the two tables separately to eliminate rows as early as possible.

This can’t be done in 12.2, and you’d have to rewrite the query, changing a “join with union” into a “union of joins”, and that’s not really a desirable strategy.
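
For reference, here’s a sketch of the sort of rewrite I mean – a logically equivalent “union of joins” for this example (untested, so treat it as an illustration rather than a recommendation):

select
        /*+ parallel(2) */
        t1.object_name, t2.object_name
from
        t1, t2
where
        t1.object_type = 'SCHEDULE'
and     t2.object_id = t1.object_id
union all
select
        /*+ parallel(2) */
        t1.object_name, t3.object_name
from
        t1, t3
where
        t1.object_type = 'SCHEDULE'
and     t3.object_id = t1.object_id
/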

Next Steps

Searching MOS, though, you will be able to find the following note:

Doc ID 18849313.8 – ENH : bloom filters/pruning are pushed through union-all view

There’s an enhancement request to do what we want in 18.1, and the enhancement has got into the software. Here’s the (unhinted) plan from 19.3 (the plan stays the same when optimizer_features_enable is set back to 18.1.0, but drops back to the 12.2 plan when OFE is set to 12.2.0.1):

------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |   666 (100)|     32 |00:00:00.11 |      10 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |            |     32 |00:00:00.11 |      10 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)    | :TQ10000 |      0 |     32 |   666   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN             |          |      2 |     32 |   666   (5)|     32 |00:00:00.05 |   10020 |   7958 |  1250K|  1250K|     2/0/0|
|   4 |     JOIN FILTER CREATE   | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    1998 |      0 |       |       |          |
|*  5 |      TABLE ACCESS FULL   | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    1998 |      0 |       |       |          |
|   6 |     VIEW                 |          |      2 |    400K|   583   (4)|     32 |00:00:00.04 |    8022 |   7958 |       |       |          |
|   7 |      UNION-ALL           |          |      1 |        |            |     12 |00:00:00.02 |    4011 |   3979 |       |       |          |
|   8 |       JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|   9 |        PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|* 10 |         TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|  11 |       JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|  12 |        PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|* 13 |         TABLE ACCESS FULL| T3       |     32 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
  10 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T2"."OBJECT_ID")) 
  13 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T3"."OBJECT_ID"))

As you can see, we create a Bloom filter at operation 4, and use it twice at operations 8 and 11 – with the sys_op_bloom_filter() functions clearly visible in the predicate information showing us that the Bloom filter is applied to the object_id column in both cases.

If you want to disable this enhancement for some reason there are two hidden parameters available, both defaulting to true (and either could be set to false for a single query using the opt_param() hint):

  • _bloom_filter_setops_enabled = true
  • _bloom_pruning_setops_enabled = true

The first is for Bloom filters in the situation shown; I assume the second deals with Bloom filters for partition pruning.
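
As a sketch, then, a statement-level way of reverting the test query to the pre-18c behaviour might look like this (assuming it’s the first parameter you need):

select
        /*+
                parallel(2)
                opt_param('_bloom_filter_setops_enabled' 'false')
        */
        t1.object_name, v1.object_name
from
        t1,
        (select * from t2 union all select * from t3) v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/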

Summary

In versions prior to 18.1 the optimizer is unable to push Bloom filters down to the individual tables in a UNION ALL view, but this limitation was removed in the 18.1 code set.

 

July 10, 2020

Recursive WITH upgrade

Filed under: ANSI Standard,CBO,Execution plans,Oracle,Subquery Factoring,Upgrades — Jonathan Lewis @ 4:19 pm BST Jul 10,2020

There’s a notable change in the way the optimizer does cost and cardinality calculations for recursive subquery factoring that may make some of your execution plans change – with a massive impact on performance – as you upgrade to any version of Oracle from 12.2.0.1 onwards. The problem appeared in a question on the Oracle Developer Community forum a little while ago, with a demonstration script to model the issue.

I’ve copied the script – with a little editing – and reproduced the change in execution plan described by the OP. Here’s my copy of the script, with the insert statements that generate the data (all 1,580 of them) removed.

rem
rem     Script:         recursive_with_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem
rem     Notes:
rem     https://community.oracle.com/thread/4338248
rem
rem     The upgrade to 12.2.0.1 made this query much slower (on 15,000 rows)
rem     Setting OFE to 12.1.0.2 is a first possible fix for the issue.
rem     The scale is too small to see much difference in this case
rem

drop table test_folder purge;

create table test_folder(
        fldr_key                number(16,0)            not null        enable,                 
        fldr_id                 varchar2(255 byte)      not null        enable,                 
        fldr_desc_tx            varchar2(255 byte),                     
        par_fldr_key            number(16,0),                   
        seus_key                number(16,0)            not null        enable,                 
        fldr_private_flg        varchar2(1 byte)        not null        enable,                 
        last_updt_dt            date                    not null        enable,                 
        last_upby_seus_key      number(16,0)            not null        enable,                 
        lock_seq_nbr            number(9,0) default 0   not null        enable,                 
        content_guid            raw(16),                
        constraint test_folder_pk primary key (fldr_key)                
)       
;              

-- list of insert statements

alter table test_folder add constraint test_folder_fk  
        foreign key (par_fldr_key) references test_folder(fldr_key)
;  
  
create or replace force editionable view test_folder_vw (fldr_key) as   
with rec_path(fldr_key)  as (
        select  tf.fldr_key  
        from    test_folder tf where tf.par_fldr_key is null  
        union all  
        select  tf.fldr_key  
        from    test_folder tf, rec_path  
        where   rec_path.fldr_key = tf.par_fldr_key
)  
select fldr_key  
from rec_path   
; 

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'TEST_FOLDER',
                method_opt  => 'for all columns size 1'
        );
end;
/


select * from test_folder_vw where fldr_key = -41;  

I’ve run the test 3 times: first in 12.2.0.1 with no tweaking; then in 12.2.0.1 with the hint /*+ optimizer_features_enable(‘12.1.0.2’) */; and finally in a genuine 12.1.0.2 environment. In all three cases I enabled rowsource execution stats (‘alter session set statistics_level = all’) and pulled the plans from memory – the sketch below shows the mechanism, and the results follow.
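
A minimal sketch of that mechanism (‘allstats last’ is a convenient format option for reporting the rowsource execution stats of the most recent execution):

alter session set statistics_level = all;

select * from test_folder_vw where fldr_key = -41;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));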

First, the base result from 12.1.0.2

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.03 |     604 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.03 |     604 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.03 |     604 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      68 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.02 |     536 |  1696K|  1696K| 1488K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|*  6 |     TABLE ACCESS FULL                     | TEST_FOLDER |      8 |   1419 |  11352 |00:00:00.01 |     536 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

Two points to note, in particular: first, that the hash join has the recursive with pump as its first (build table) child and the table access full of test_folder as its second child (probe table); secondly, that there is no value given for E-Rows for the recursive with pump.

Now the 12.2.0.1 plan:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |   2524K|      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |   2524K|   1419 |00:00:00.01 |      23 |  1743K|  1743K| 1632K (0)|
|   5 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  6 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
|   7 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

There are three changes to notice in this plan – which (for the OP) was much slower than the 12.1.0.2 plan. First, the order of the hash join has changed: the recursive with pump is now the second child (probe table) in the join (and again shows no value for E-Rows); secondly, Oracle has introduced an extra operation – the buffer sort (reuse), populated by the table access full – as the build table; thirdly (presumably the point of the buffer sort (reuse) operation) the number of buffer visits has dropped from a few hundred to a couple of dozen.

Finally let’s check what happens if we set the OFE (optimizer_features_enable) to 12.1.0.2 while running 12.2.0.1

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.01 |      23 |  1797K|  1797K| 1573K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|   6 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  7 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   7 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

In these conditions the recursive with pump has gone back to being the build table (first child); but it’s worth noting that the 12.2 buffer sort (reuse) is still in place – saving us a few hundred buffer gets (and, for a bigger table, possibly a number of disc reads). Downgrading the optimizer_features_enable has given us the plan we needed, but this time we’ve got an example showing that hacking the parameter isn’t a guarantee that we will get exactly the path we used to get in the older version.

The story so far.

It seems that we can address the performance problem that the OP had by setting the optimizer_features_enable to the older version – possibly through a hint embedded in the SQL, perhaps through an SQL Baseline or SQL Patch. Maybe we’ll have to have a logon trigger that sets the parameter for particular users or, worst case scenario, maybe we’ll have to set the parameter at the system level. Given how undesirable the last option could be it would be nice to know exactly what is causing the change in plan.

As a basic clue – if the order of tables in a hash join reverses itself this usually means that the byte (not row) estimates have changed. The optimizer will use the table with the lower byte count as the build table in a hash join. So the recursive with pump – whose row and byte estimates don’t appear – must have produced larger numbers in 12.2.0.1.

A change in the 12.2 plan that I haven’t yet mentioned is the E-rows for the hash join; it’s gone up from 640 (12.1.0.2) to 2.5 million! So let’s repeat the tests with the CBO (10053) trace file enabled and see if we can find a number like 2524K appearing as a join estimate in the trace file. Having created the two trace files (in 12.2.0.1, one with the OFE set backwards) I executed the following grep commands against the trace files:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc
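
For reference, a sketch of how each trace file was generated (the identifier value is arbitrary, and I’ve shown the traditional event syntax):

alter session set tracefile_identifier = 'base';
alter session set events '10053 trace name context forever';

select * from test_folder_vw where fldr_key = -41;

alter session set events '10053 trace name context off';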

I’d set the tracefile_identifier to ‘ofe’ and ‘base’ respectively for the 12.1.0.2 and 12.2.0.1 tests, and here are the results:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
1166:Join Card - Rounded: 640 Computed: 639.941176

grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc
1195:Join Card - Rounded: 640 Computed: 639.941176
1391:Join Card - Rounded: 2544 Computed: 2543.865546
1576:Join Card - Rounded: 10112 Computed: 10111.865546
1737:Join Card - Rounded: 40193 Computed: 40193.075630
1898:Join Card - Rounded: 159759 Computed: 159758.731092
2059:Join Card - Rounded: 635008 Computed: 635008.462185
2220:Join Card - Rounded: 2524023 Computed: 2524023.394958
2269:Join Card - Rounded: 2524023 Computed: 2524023.394958

That’s an interesting clue. Something seems to be calculating a larger and larger value in the 12.2.0.1 trace, starting from the hash join cardinality that had appeared in 12.1.0.2, growing by a factor of nearly 4 each time, and ending with the hash join cardinality we saw in the 12.2.0.1 plan.

Taking a closer look at the content of the 12.2.0.1 trace file it turned out that every stage in that escalation was Oracle recalculating the cost and cardinality of joining test_folder (the table) and rec_path (the “with” subquery) using the figures from the previous join calculation as the new base figures for rec_path. In effect the optimizer was calculating the cost of following the recursive subquery down to its 7th level of recursion.

Side note: in agreement with my comment about the smaller (in bytes) rowsource being used as the build table, the initial join order started as (test_folder, rec_path) in the first two iterations, but switched to (rec_path, test_folder) from the 3rd iteration onwards.

So we’ve identified the mechanics that cause the change in plan; the question now is: why 7 iterations to the final cost? (Briefly, I did a quick check to see how many circles of hell there were in Dante’s Inferno – but it’s 9, or 10 depending how you count.) A quick check of v$parameter (and the x$ tables for the hidden parameters) revealed the following:

Name                                     Value
------------------------------------ ---------
_recursive_with_branch_iterations            7
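
For reference, a sketch of the sort of query (run as SYS) that produces that listing – the x$ column names shown are the commonly quoted ones, but the structures are undocumented and vary with version:

select
        x.ksppinm       name,
        y.ksppstvl      value
from
        x$ksppi         x,
        x$ksppcv        y
where
        y.indx = x.indx
and     x.ksppinm like '%recursive_with%'
;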

Setting this parameter to 1 in the session, or adding the hint /*+ opt_param(‘_recursive_with_branch_iterations’ 1) */ to the query, resulted in the 12.1.0.2 plan appearing in 12.2.0.1 – and this is a much less intrusive way of getting the plan we’re interested in than setting the entire OFE back to 12.1.0.2. One might even set the parameter in the spfile (after receiving approval from Oracle Corp., of course) given how precisely targeted it is – and note that it doesn’t switch off that nice little buffering trick.
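
In SQL terms the two less intrusive options look like this sketch:

alter session set "_recursive_with_branch_iterations" = 1;

-- or, for the single statement:
select
        /*+ opt_param('_recursive_with_branch_iterations' 1) */
        *
from    test_folder_vw
where   fldr_key = -41
;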

Summary

From 12.2 onwards the optimizer does recursive recosting of recursive “with” subqueries. This means the cost and cardinality estimates of a plan may change and the impact may cause a significant change in performance – it certainly did for the OP.

The change seems to be driven by the hidden parameter _recursive_with_branch_iterations, which was introduced in 12.2.0.0 with a default value of 7. Setting this parameter to 1 reverts the optimizer to the pre-12.2 behaviour. If you run into a problem of recursive “with” subqueries changing plans and causing performance problems on an upgrade from pre-12.2 to a newer version of Oracle then it would be worth investigating this parameter as the least intrusive way of reverting back to the original plans.

Update (July 2020)

I’ve just been prompted to check MOS for any references to the hidden parameter – and discovered a note that was published in September 2018, updated in Feb 2019.  It’s amazing how easy it can be to find an answer on MOS when you already know what the answer is ;) Oracle 12.2.0.1 CBO calculating high cost/CPU for queries with recursive sub-query (Doc ID 2443466.1)

This gives two workarounds to the problem of a change in cost in 12.2 – set the optimizer_features_enable to 12.1.0.2, or set the hidden parameter to 1. It references two bugs (one a duplicate of the other, both apparently unpublished):

  • Bug 23515289 : PERFORMANCE REGRESSION OBSERVED WITH RECURSIVE WITH SERIAL PLAN
  • Bug 24566985 : UPG: QUERY PERFORMANCE ON ALL_TSTZ_TABLES 160 TIMES SLOWER THAN 11.2.0.4

and the Permanent Fix for the problem is to install the patch for Bug 24566985 on 12.2.0.1.

 

June 29, 2020

Most Recent – 2

Filed under: CBO,Execution plans,Oracle,Tuning — Jonathan Lewis @ 1:02 pm BST Jun 29,2020

A question arrived in my email a few days ago with the following observations on a statement that was supposed to query the data dictionary for some information about a specified composite partitioned table. The query was wrapped in a little PL/SQL, similar to the following:

declare
        v_src_part      varchar2(30) := null;
        v_tab           varchar2(30)  := 'PT_COMPOSITE_1';
begin

        select
                /*+ qb_name(main) */
                uts1.subpartition_name
        into    v_src_part
        from
                user_tab_subpartitions uts1
        where
                uts1.table_name = v_tab
        and     uts1.last_analyzed is not null
        and     uts1.num_rows = (
                        select
                                /*+ qb_name(max_subq) */
                                max (uts2.num_rows)
                        from
                                user_tab_subpartitions uts2
                        where
                                uts2.table_name = /* v_tab */ uts1.table_name
                )
        and     rownum = 1
        ;
end;
/

The requirement is simple: identify the subpartitions of a specific table that have the largest number of rows of any subpartition of the table – but report only the first match.

You’ll notice that the where clause of the subquery has a commented “v_tab” in it. This is the PL/SQL variable used in the outer query block to identify the target table, and it shouldn’t really make any difference if I use the PL/SQL variable in the subquery rather than using a correlating column. However, the question that came with this block of code was as follows:

All the partitions and subpartitions had their stats when running the test. On a first run using the correlated subquery the block reported oracle error ORA-01403: no data found. Changing the code to use the PL/SQL variable the block reported a specific subpartition as expected. A few hours later (after changing the code back to use the correlated subquery) the block reported the same subpartition. Have you ever seen anything like this? The Oracle version is 12.1.0.2.

Rule 1, of course, is to be a little sceptical when someone says “Honest, Guv, the stats are all okay”. But I’m going to assume that the statistics on this table really were complete and that there was no “data-related” reason for this query to behave in such a surprising way.

The email is an invitation to consider two points.

  1. This looks like a bug: the two versions of the query are logically equivalent, they should return the same results if the underlying data had not changed. (In fact, I think the only “legal” way that the query could return ORA-01403 is if there were no stats on any subpartitions of the table in question – any ordinary usage of the dbms_stats package other than delete_table_stats() would have ensured that the query had to find something.) So, the first run of the correlated subquery produced no data while the modified query did get a result. That suggests a problem with some transformation in the 12.1.0.2 code to handle correlated aggregate subqueries.
  2. How could the second execution of the version with the correlated subquery produce a result a few hours later. Here are a couple of possibilities:
    • Someone had gathered dictionary stats (i.e. on the tables used by the query, not on the subpartitioned table) in the “few hours” gap so the optimizer picked a different execution plan which bypassed the bug.
    • (minor variation on previous) Someone had gathered dictionary stats when the first execution plan was already in memory but the “auto_invalidate” option for cursor invalidation meant that the query didn’t get re-optimised for a few hours.
    • Nothing changed, but the query had been flushed from the library cache and did need re-optimisation a few hours later. Since the version is 12.1.0.2 this means statistics feedback or automatic SQL directives could have had an impact – which means there may be dynamic sampling during optimisation – and a different set of random samples could have resulted in a different execution plan.
    • Other …

The interesting bit

There is a generic feature about this question that is more interesting than the “what went wrong, how could I get different results”, and it’s in the choice you can make between using a correlation column and repeating a pl/sql variable (or literal value).

The switch to using a pl/sql variable turns the subquery into a single-row, “standalone”, subquery – one that could be run without any reference to the outer query – and this imposes a dramatic change on what the optimizer can do. Sometimes that change will make a huge difference to the optimisation time and the run time.

As a correlated subquery the notional “first strategy” for the optimizer is:

“for each row in the outer query execute the inner query as a filter subquery, passing in the correlation value”

If you take the “standalone” approach the optimizer will be looking for a plan that says (in effect):

“run the subquery once to generate a constant that you will need to execute the rest of the query”

Running the subquery once rather than once per row is likely to be a good idea – on the other hand Oracle can do “scalar subquery caching” so if the value of the correlation column is always the same the correlated subquery will actually run only once anyway.

More importantly, when the optimizer sees a correlated subquery it will consider unnesting it and then transforming it in various other ways; and it might take the optimizer a long time to work out what it can and can’t do, and the plan it finally does produce may be much slower than what it could have done if it had not unnested the subquery.

Some test results

So I ran 3 variations of the PL/SQL block on Oracle 19.3.0.0 with the CBO trace (10053) enabled and picked out a few highlights. The three tests in order were:

  1. Use the pl/sql variable so the subquery could run as a standalone query
  2. Use the correlating column to make the subquery a correlated subquery
  3. Use the correlating column, but add the hint /*+ no_unnest */ to the subquery.
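
For reference, here’s a sketch of how the case 3 subquery would look (the hint placement is an assumption, but the obvious one; the rest of the block is unchanged):

        and     uts1.num_rows = (
                        select
                                /*+ qb_name(max_subq) no_unnest */
                                max (uts2.num_rows)
                        from
                                user_tab_subpartitions uts2
                        where
                                uts2.table_name = uts1.table_name
                )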

The results were as follows – first the timing, then a critical measure that explains the timing:

  • Case 1 – standalone subquery – total time 0.82 seconds
  • Case 2 – correlated subquery – total time 5.76 seconds
  • Case 3 – correlated subquery with no_unnest hint – total time 0.84 seconds

Where did all that extra time go? A lot of it went in optimisation. How many “Join Orders” were examined for each query?

  • Case 1 – standalone subquery – 90 join orders
  • Case 2 – correlated subquery – 863 join orders
  • Case 3 – correlated subquery with no_unnest hint – 90 join orders

If you’re wondering what the 773 extra join orders were about, here’s a clue. I extracted all the lines from the case 2 trace file that started with “SU:” – those are the lines tagged for “Subquery Unnest” – using a call to grep -n "^SU:" {tracefile name} and this is the result:


  2945:SU: Unnesting query blocks in query block SEL$071BB01A (#1) that are valid to unnest.
  2947:SU: Considering subquery unnest on query block SEL$071BB01A (#1).
  2948:SU:   Checking validity of unnesting subquery SEL$4F5F2F29 (#2)
  2949:SU:   Passed validity checks, but requires costing.
  2950:SU: Using search type: exhaustive
  2951:SU: Starting iteration 1, state space = (2) : (1)
  2952:SU:   Unnesting subquery query block SEL$4F5F2F29 (#2)Subquery removal for query block SEL$4F5F2F29 (#2)
  3089:SU: Costing transformed query.
 66112:SU: Considering interleaved complex view merging
 66113:SU:   Unnesting subquery query block SEL$4F5F2F29 (#2)Subquery removal for query block SEL$4F5F2F29 (#2)
 66366:SU: Costing transformed query.
129372:SU: Finished interleaved complex view merging
129373:SU: Considering interleaved distinct placement
129374:SU: Finished interleaved distinct placement
129375:SU: Considering interleaved join pred push down
129376:SU:   Unnesting subquery query block SEL$4F5F2F29 (#2)Subquery removal for query block SEL$4F5F2F29 (#2)
251638:SU: Rejected interleaved query.
251640:SU: Finished interleaved join pred push down
251641:SU: Considering interleaved OR Expansion
251642:SU:   Unnesting subquery query block SEL$4F5F2F29 (#2)Subquery removal for query block SEL$4F5F2F29 (#2)
251651:SU: Finished interleaved OR Expansion
251653:SU: Updated best state, Cost = 19.085153
251654:SU: Starting iteration 2, state space = (2) : (0)
251665:SU: Costing transformed query.
310395:SU: Not update best state, Cost = 20.083998
310396:SU: Will unnest subquery SEL$4F5F2F29 (#2)

The optimizer checks the validity of unnesting (generated) query block SEL$4F5F2F29 at line 2948 of the trace and decides, 308,000 lines later after an exhaustive examination of the possibilities, that it will unnest the subquery. Since this is a recent version of Oracle we take one simple extra step by checking for “TIMER” information, again using a “grep -n” call –

251639:TIMER:  SU: Interleaved JPPD SEL$B73B51DC cpu: 1.263 sec elapsed: 1.263 sec
251652:TIMER: SU: iteration (#1) SEL$B73B51DC cpu: 2.607 sec elapsed: 2.607 sec
310577:TIMER: CBQT SU and CVM SEL$071BB01A cpu: 3.323 sec elapsed: 3.323 sec
433371:TIMER: Cost-Based Join Predicate Push-Down SEL$12B6FE6C cpu: 1.307 sec elapsed: 1.306 sec
433477:TIMER: Cost-Based Transformations (Overall) SEL$12B6FE6C cpu: 4.731 sec elapsed: 4.731 sec
496189:TIMER: SQL Optimization (Overall) SEL$12B6FE6C cpu: 5.306 sec elapsed: 5.306 sec

Of course most of the time spent in this particular example was a result of optimising (and writing the optimizer trace), but for my tiny example (table definition below) the final figures I’ll show are the buffer gets and CPU time reported by a basic 10046 trace file after optimisation, with all the relevant data cached:

  • Case 1 – standalone subquery – 89 buffer gets / 0.00 seconds
  • Case 2 – correlated subquery – 130 buffer gets / 0.53 seconds
  • Case 3 – correlated subquery with no_unnest hint – 121 buffer gets / 0.08 CPU seconds

The sub-centisecond time is a little suspect, of course, but the others seem fairly trustworthy.

Conclusion

The title of this piece is “Most Recent” because the commonest requirement for a query of this shape is “find the most recent row matching the following predicates”, even though in this case the interpretation is “find me the row matching the largest value”.

The “standard” pattern for writing a “most recent” query is to use a correlated subquery – but it’s worth remembering that you may reduce optimisation time and run time by “copying down the constant” rather than using the correlation mechanism.

(There are alternative strategies to the subquery approach, of course, and the analytic max() – introduced in Oracle 8i – is gaining traction as one of the popular alternatives.)
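
For example, here’s a sketch of the analytic approach for this requirement (with the table name as a literal, and untested against the original data):

select  subpartition_name
from    (
        select
                uts.subpartition_name,
                uts.num_rows,
                max(uts.num_rows) over ()       max_rows
        from
                user_tab_subpartitions uts
        where
                uts.table_name = 'PT_COMPOSITE_1'
        and     uts.last_analyzed is not null
        )
where   num_rows = max_rows
and     rownum = 1
;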

Footnote 1

If you want to re-run my test on different platforms and versions of Oracle, here’s the code to generate the table. (Don’t be surprised if you don’t get completely consistent results – much of the optimization will depend on the size of all the relevant tables (tab$, tabcompart$, etc.) in the data dictionary, rather than on the actual definition of this partitioned table.)


rem
rem     Script:         most_recent_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2020
rem
rem     Last tested 
rem             19.3.0.0
rem

create table pt_composite_1 (
        id,
        grp,
        small_vc,
        padding
)
nologging
partition by range(id) 
subpartition by hash (grp)
subpartitions 4
(
        partition p2 values less than (400),
        partition p3 values less than (800),
        partition p4 values less than (1600),
        partition p5 values less than (3200)
)
as
select
        rownum                          id,
        trunc(rownum/50)                grp,
        to_char(trunc(rownum/20))       small_vc,
        rpad('x',100)                   padding
from
        all_objects
where 
        rownum <= 3000 -- > comment to avoid wordpress format issue
;

execute dbms_stats.gather_table_stats(user,'pt_composite_1',granularity=>'ALL')


Footnote 2

For reference, here are the outputs I got from executing egrep -n -e”^SU:” -e”TIMER” against the other two CBO trace files.

First for the “standalone” form – note how line 3130 tells us that “there is no correlation”.


806:SU: Considering subquery unnesting in query block MISC$1 (#0)
2947:SU: Unnesting query blocks in query block SEL$071BB01A (#1) that are valid to unnest.
2952:SU: Considering subquery unnest on query block SEL$071BB01A (#1).
2953:SU:   Checking validity of unnesting subquery SEL$4F5F2F29 (#2)
2954:SU:     SU bypassed: No correlation to immediate outer subquery.
2955:SU:     SU bypassed: Failed basic validity checks.
2956:SU:   Validity checks failed.
3130:SU:     SU bypassed: No correlation to immediate outer subquery.

Then for the correlated subquery with /*+ no_unnest */ hint; and line 3122 tells us that SU was bypassed because of a hint/parameter:


809:SU: Considering subquery unnesting in query block MISC$1 (#0)
2945:SU: Unnesting query blocks in query block SEL$071BB01A (#1) that are valid to unnest.
2947:SU: Considering subquery unnest on query block SEL$071BB01A (#1).
2948:SU:   Checking validity of unnesting subquery SEL$4F5F2F29 (#2)
2949:SU:     SU bypassed: Not enabled by hint/parameter.
2950:SU:     SU bypassed: Failed basic validity checks.
2951:SU:   Validity checks failed.
3122:SU:     SU bypassed: Not enabled by hint/parameter.

Neither file showed any “TIMER” information since that appears, by default, only for steps that take longer than one second. (If you want to adjust the granularity, see Franck Pachot’s note on parse time that describes bug/fix_control 16923858.)
