Oracle Scratchpad

July 9, 2020

Execution Plans

Filed under: Execution plans,extended stats,Histograms,Oracle,Performance,Problem Solving,Statistics,Troubleshooting — Jonathan Lewis @ 4:54 pm BST Jul 9,2020

This is an example from the Oracle Developer Community of using the output of SQL Monitor to detect a problem with object statistics that resulted in an extremely poor choice of execution plan.

A short time after posting the original statement of the problem the OP identified where he thought the problem was and the general principle of why he thought he had a problem – so I didn’t have to read the entire execution plan to work out a strategy that would be (at least) a step in the right direction of solving the performance problem.

This note, then, is just a summary of the five minutes that I spent confirming the OP’s hypothesis and explaining how to work around the problem he had identified. It does, however, give a little lead-in to the comments I made to the OP in order to give a more rounded picture of what his execution plan was telling us.

So here’s the top half of the plan (which is the first subquery of a “concatenation”) with the first few predicates:

===============================================================================================================================================================================================================================  
| Id |                 Operation                  |            Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity |             Activity Detail             |  
|    |                                            |                            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |               (# samples)               |  
===============================================================================================================================================================================================================================  
|  0 | SELECT STATEMENT                           |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  1 |   CONCATENATION                            |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |          |                                         |  
|  2 |    FILTER                                  |                            |         |      |     12191 |     +4 |     1 |     864K |      |       |         |       |     0.03 | Cpu (4)                                 |  
|  3 |     FILTER                                 |                            |         |      |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  4 |      NESTED LOOPS                          |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.02 | Cpu (3)                                 |  
|  5 |       NESTED LOOPS                         |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.07 | Cpu (8)                                 |  
|  6 |        NESTED LOOPS                        |                            |     241 | 251K |     12232 |     +4 |     1 |      26M |      |       |         |       |     0.05 | Cpu (6)                                 |  
|  7 |         NESTED LOOPS                       |                            |    5407 | 233K |     12242 |     +4 |     1 |      86M |      |       |         |       |          |                                         |  
|  8 |          MERGE JOIN CARTESIAN              |                            |       1 |   35 |     12242 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
|  9 |           TABLE ACCESS BY INDEX ROWID      | REF1                       |       1 |    3 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 10 |            INDEX RANGE SCAN                | REF1_PK                    |       1 |    2 |     12242 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 11 |           BUFFER SORT                      |                            |      84 |   32 |     12242 |     +4 |     1 |     1000 |      |       |         |  104K |          |                                         |  
| 12 |            TABLE ACCESS BY INDEX ROWID     | STAGE                      |      84 |   32 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 13 |             INDEX RANGE SCAN               | STAGE_IDX1                 |      84 |    4 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 14 |          PARTITION RANGE ITERATOR          |                            |    8292 | 232K |     12232 |     +4 |  1000 |      86M |      |       |         |       |          |                                         |  
| 15 |           TABLE ACCESS STORAGE FULL        | TAB1                       |    8292 | 232K |     12245 |     +1 |  1000 |      86M | 103M | 521GB |   1.96% |    7M |    51.81 | gc buffer busy acquire (1)              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: cache buffers chains (1)         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (1196)                              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (2) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | reliable message (5)                    |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (2827)  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell smart table scan (1977)            |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | read by other session (304)             |  
| 16 |         PARTITION RANGE ITERATOR           |                            |       1 |   12 |     12191 |     +4 |   86M |      26M |      |       |         |       |     0.42 | Cpu (51)                                |  
| 17 |          TABLE ACCESS BY LOCAL INDEX ROWID | TAB2                       |       1 |   12 |     12191 |     +4 |   86M |      26M |   4M |  28GB |         |       |    32.14 | gc cr grant 2-way (20)                  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc cr request (2)                       |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc remaster (6)                         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (319)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (4) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: gc element (2)                   |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (3563)  |  
| 18 |           INDEX RANGE SCAN                 | TAB2_IX1                   |     166 |    3 |     12210 |     +2 |   86M |      26M |   1M |  11GB |         |       |    15.17 | Cpu (292)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1557)  |  
| 19 |        INDEX UNIQUE SCAN                   | MTD_PK                     |       1 |    1 |     12242 |     +4 |   26M |      26M |  292 |   2MB |         |       |     0.17 | Cpu (20)                                |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1)     |  
| 20 |       TABLE ACCESS BY INDEX ROWID          | REF2                       |       1 |    2 |     12191 |     +4 |   26M |      26M |    7 | 57344 |         |       |     0.11 | Cpu (13)                                |  
| 21 |      TABLE ACCESS BY INDEX ROWID           | CONTROLTAB                 |       1 |    1 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 22 |       INDEX UNIQUE SCAN                    | CONTROLTAB_PK              |       1 |      |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 23 |     MINUS                                  |                            |         |      |       102 |     +4 |    25 |        3 |      |       |         |       |          |                                         |  
| 24 |      TABLE ACCESS BY INDEX ROWID           | CUST_ORG_PK                |       1 |    3 |       942 |     +4 |    25 |       10 |      |       |         |       |          |                                         |  
| 25 |       INDEX UNIQUE SCAN                    | MC_PK                      |       1 |    2 |       942 |     +4 |    25 |       25 |      |       |         |       |          |                                         |  
| 26 |      SORT UNIQUE NOSORT                    |                            |       1 |    4 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 27 |       TABLE ACCESS BY INDEX ROWID          | REF1                       |       1 |    3 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 28 |        INDEX RANGE SCAN                    | REF1_PK                    |       1 |    2 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  


Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter( EXISTS ( (SELECT /*+ INDEX_RS_ASC ("CUST_ORG_PK" "MC_PK") */ "CUST_ID" FROM "USER1"."CUST_ORG_PK"  "CUST_ORG_PK" 
               WHERE "CUST_ID"=:B1 AND "CUST_ORG_PK"."INDICATR"='Y') MINUS (SELECT /*+ INDEX_RS_ASC ("REF1" "REF1_PK") */ 
                TO_NUMBER("VAL") FROM "USER1"."REF1" "REF1" WHERE "PUSER"='ZZZ' AND "EDATE">=TRUNC(SYSDATE@!) AND TO_NUMBER("VAL")=:B2  
                AND "SDATE"<=TRUNC(SYSDATE@!))))  
   3 - filter( EXISTS (SELECT /*+ INDEX_RS_ASC ("CONTROLTAB" "CONTROLTAB_PK") */ 0 FROM  "USER2"."CONTROLTAB" "CONTROLTAB" WHERE
              "CONTROLTAB"."CNTRLID"=9999 AND  NVL("CONTROLTAB"."STATUS",'F')='S'))  
   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  
  13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')  


The first thing to note is that the “Time (active)” shown at the top line is about 12,000 seconds – so it’s a long running query. Your next observation – before you look at the shape of the plan – might be to note that operations 15, 17 and 18 between them record thousands of seconds of time, mostly I/O time but including 1,200 seconds of CPU time. This might draw your eye to the part of the plan that tells you what you are doing at these operations and why you are doing it.

Looking at the detail – operation 15 is a full tablescan that is the child of a partition range iterator (operation 14), and that iterator is the second child of a nested loop join (operation 7). Why is the optimizer so out of balance that it thinks a table scan of (possibly) multiple partitions of a partitioned table is a good candidate for the second child of a nested loop?! The answer comes from the first child – the Merge Join Cartesian at operation 8 has been given a cardinality estimate of just one row. When the cardinality estimate is one for the first table in a join then it doesn’t matter whether Oracle uses a nested loop join or a hash join: whatever happens next is only supposed to happen once.

Unfortunately when we compare “Rows (Estim)” and “Rows (Actual)” for the operation we see that the Merge Join Cartesian produced 1,000 rows – so the partition tablescan was going to happen 1,000 times (which we can confirm from the “Execs” column of operation 14). As a first step, then, we need to ensure that the optimizer gets a better estimate of the number of rows produced by the Cartesian join. So let’s look at its child operations.

  • Operation 9 (Table Access of REF1) is predicted to return one row – with “Rows (actual)” = 1.
  • Operation 11 (Buffer Sort of data from STAGE) is predicted to return 84 rows – with “Rows (actual)” = 1,000

Since the number of rows produced by a Cartesian join should be the product of the number of rows from its two inputs, this tells us that the optimizer’s estimate of the number of rows from REF1 has been rounded up to 1 from a very small fraction (less than 1/84). If we can correct this then we may get Oracle to change the awful nested loop to an acceptable hash join. Even if we can’t fix this mis-estimate we may be able to do something that improves the estimate for STAGE to something sufficiently large that it will trigger the switch to a hash join. So let’s look at the predicates for these two tables.

REF1 predicates

   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  

The index range scan is based on an access predicate (with no filter predicate), so it looks like there’s a nearly perfect three-column index on REF1, but the optimizer can’t use the number of distinct keys in the index to get a good estimate of cardinality because one of the predicates is range-based. So the arithmetic will look at the three predicates separately and multiply up their selectivities. (It’s possible, of course, that this might be the first three columns of a 4, or more, column index.)
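
To see how that multiplication can push the estimate below one row, here’s a sketch of the arithmetic with invented figures – the only number borrowed from reality is the 14 distinct values for name that the OP reports further down:

-- Illustrative arithmetic only - invented statistics for REF1:
--   selectivity(puser = 'XXX')           = 1/num_distinct(puser) = 1/50  = 0.02
--   selectivity(name  = 'CODE')          = 1/num_distinct(name)  = 1/14 ~= 0.0714
--   selectivity(sdate <= trunc(sysdate)) = assumed range guess          = 0.25
--
--   combined selectivity: 0.02 * 0.0714 * 0.25 ~= 0.000357
--   with (say) 2,000 rows in REF1: 2,000 * 0.000357 ~= 0.7 rows, rounded up to 1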

It’s a reasonable guess that the number of distinct combinations of (puser, name) will be much smaller than num_distinct(puser) * num_distinct(name) – so one strategy that might help increase the table’s cardinality estimate is to create extended statistics on the column group (puser, name).

Another reasonable guess is that the number of distinct values for the two columns is (relatively) small, with some skew to the distribution (name = ‘CODE’ looks particularly susceptible to being a commonly occurring value) – so perhaps we need a histogram on one or both of the columns (which would then require a histogram to be created on the column group as well if we wanted the optimizer to use the column group). We’d also have to make sure that the queried values didn’t fall outside the known low/high values for the columns if we wanted the column group to be used.
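
A minimal sketch of the call – assuming the owning schema is USER1, as the predicate section suggests; from 11g onwards a single gather call can create the column group and give it a histogram by naming the extension in the method_opt:

begin
        dbms_stats.gather_table_stats(
                ownname    => 'USER1',          -- assumption based on the predicate section
                tabname    => 'REF1',
                method_opt => 'for columns (puser, name) size 254'
        );
end;
/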

STAGE Predicates

13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')

This is the access(-only) predicate for the index stage_idx1, and there are no filter predicates when we reach the table. So stage_idx1 might be a two-column index on the table that we are using completely, or it may be an index with more columns that we are using only incompletely. We can see that the cardinality estimate is out by a factor of 12 (84 predicted, 1,000 actual) so if this is the complete index (which would allow Oracle to use the distinct_keys value to estimate cardinality) there must be an uneven data distribution in the values; but if this is just the first two columns of a longer index then we might benefit from extended stats (viz: another column group) on this pair of columns.

Again, even if we create a column group, or take automatic advantage of the distinct_keys figure, the predicate STAT=’I’ (is that state, status?) looks a little dangerous – status columns tend to have a small number of distinct values with a significant skew to the distribution of values – so we may want to create a histogram on the STAT column, which would then require us to create a histogram on the column group if we also wanted the column group stats to have an effect.
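
Again as a sketch, with the same assumption about the owning schema – one call gathering a histogram on stat and on the column group (name, stat):

begin
        dbms_stats.gather_table_stats(
                ownname    => 'USER1',          -- assumption, as before
                tabname    => 'STAGE',
                method_opt => 'for columns stat size 254 for columns (name, stat) size 254'
        );
end;
/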

What Happened Next?

I made the suggestions about column groups and histograms to the OP – without the level of detail that I’ve given in the explanations above – and got the following response:

You are spot on.

There does exists frequency histogram on column NAME(having 14 distinct value) and STAT(having 7 distinct values) of table STAGE. Also there already exists a frequency histogram on column PUSER and height balanced histogram on column NAME of table REF1. But still they were not helping the existing query.

By creating a column group on both on ref1 (puser, name) and stage(name, stat) with histogram for both i.e. ‘FOR COLUMNS SIZE 254’. The merge Cartesian removed automatically from the path by optimizer and its doing a hash join now with TAB1 finishing the query in quick time.
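
For reference, one way to confirm that the column groups now exist is to query the data dictionary – a sketch, adjust the owner to suit:

select  table_name, extension_name, extension
from    dba_stat_extensions
where   owner = 'USER1'
and     table_name in ('REF1', 'STAGE')
order by
        table_name
;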

Summary

When the cardinality (rows) estimate for an operation drops to one (which might mean much less than one and rounded up) then all hell can break loose and the choice of parent operation – and its cardinality estimate – might be insanely stupid, leading to a catastrophically poor execution plan.

Very low cardinality estimates are often the result of multiplying individual column selectivities to produce an estimated selectivity that is unrealistic (much too small) when compared with the actual number of distinct combinations that exist in the table. In such cases creating a column group (possibly with a histogram) may be all you need to do to get a better cardinality estimate and a vastly improved execution plan.

June 5, 2020

Analytic cost error

Filed under: CBO,Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 1:06 pm BST Jun 5,2020

Here’s a surprising costing error that was raised on the Oracle Developer Forum a few days ago. There’s a glitch in the cost attributed to sorting when an analytic over() clause – with corresponding “window sort” operation – makes a “sort order by” operation redundant. Here’s a script to generate the data set I’ll use for a demonstration with a template for a few queries I’ll be running against the data.


rem
rem     Script:         window_sort_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                    id,
        cast(lpad(rownum,30,'0') as varchar2(30)) vc30,
        cast(lpad(rownum,65,'0') as varchar2(65)) vc65,
        lpad('x',100,'x')                         padding
from
        generator
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

select
--      row_number() over (order by vc65) rn1,
--      row_number() over (order by vc30) rn2,
--      vc30,
--      vc65,
        id
from
        t1
-- order by
--      vc65
--      vc30
/


I’m (optionally) using the row_number() analytic function over the entire data set, and for each row_number() that I include in the select list Oracle will have to sort the data; I’ve also got an (optional) order by on the two columns that appear in the row_number() functions, and that may introduce some sorting as well. Here, for example, are a few of the queries I might run:


prompt  ===========================================
prompt  Select vc30, order by vc30
prompt  ===========================================

explain plan for
select 
        vc30,
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ==========================================
prompt  Select row_number over vc30 - no ordering
prompt  ==========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ===========================================
prompt  Select row_number over vc30 - order by vc65
prompt  ===========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

In the first query we select and sort vc30 so (approximately) we sort 10,000 rows x 30(-ish) bytes for 300K of sort space. In the second query we generate the row_number() based on sorting vc30 – the size of the output is much smaller (it’s only 10,000 numbers between 1 and 10,000) but to generate those numbers we do have to select and sort vc30, so the workload (predicted and actual) will probably be similar to that of the first query. In the final query we have to select and sort vc30 to generate the row_number() but we also have to select (without reporting) and sort vc65 in order to report the results in the right order – so we should expect the workload to be roughly 3 times the size (approximately 10,000 * (30 + 65) bytes). Here, from 12.2.0.1, are the execution plans (with a little cosmetic tidying):

===========================================
Select vc30, order by vc30
===========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  SORT ORDER BY     |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

==========================================
Select row_number over vc30 - no ordering
==========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

===========================================
Select row_number over vc30 - order by vc65
===========================================
-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  SORT ORDER BY      |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

As expected, the execution plans are roughly consistent with the estimates I gave for volume of data – the query with “order by vc30” and the query with over(order by vc30) agree with each other, and the extra workload of ordering by vc65 while selecting row_number() over (order by vc30) shows up as predicted.
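
A quick sanity check of the Bytes column against those predictions (rough arithmetic, allowing a few bytes per row for the id and row overhead):

-- 341K ~= 10,000 rows *  35 bytes  (vc30 + id + overhead)
-- 986K ~= 10,000 rows * 101 bytes  (vc30 + vc65 + id + overhead) -- roughly 3 times, as predicted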

So let’s see what the plan looks like when we select row_number(over vc30) and then order by vc30. If the optimizer is smart it will recognise that it’s possible to adopt a strategy that allows it to take advantage of the sorting from the over() clause to avoid a separate sort order by:


explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|    42   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|    42   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|    42   (5)|
----------------------------------------------------------------

The plan shows us that Oracle has used the 10gR2 “order by elimination” feature to bypass the need for a “sort order by” operation because it knows the data will be arriving in the right order from the “Window Sort” operation. Unfortunately it also shows us that Oracle has lost the cost of doing the Window Sort!

Let’s try another experiment – let’s generate two different row_number() columns, with and without ordering:


prompt  =====================================
prompt  Select Both row_numbers - no ordering
prompt  =====================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ======================================
prompt  Select Both row_numbers order by vc30
prompt  ======================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ============================================
prompt  Select Both row_numbers order by vc65
prompt  ============================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

You’ll notice that I’ve included a request for the projection information in the plans for these examples so that you can see what columns are passed up from each operation to its parent. Again, though, we’ll start by focusing on just the costs:


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC65"[VARCHAR2,65], "VC30"[VARCHAR2,30],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC30")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC65")[22]
   2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22],
       "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The first query – without an “order by” clause – reports a cost of 463; add an “order by” clause and the cost drops to 253 (and the “order by” clause doesn’t appear as a sort order by operation in the plan). The cost differential between the ordered and “unordered” plans, by the way, is 210 (and from there down to the base tablescan is another 211) – and here’s another way to see that number (+/- 1) appearing:


explain plan for
select
        vc65,
        id
from
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   986K|       |   253   (3)|
|   1 |  SORT ORDER BY     |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
------------------------------------------------------------------------

The cost of the query with no “order by” clause is basically the cost of a table scan plus two sorts of (vc65, vc30, plus a few bits). When you add in an “order by” clause the optimizer discards the “order by” clause and then subtracts one of the sort costs as well.
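
Putting the numbers from the plans together (all costs subject to +/- 1 rounding):

-- tablescan of t1:                                        42
-- one sort of (vc65, vc30, id, plus a few bits):   210 - 211
--
-- two row_numbers, no "order by":    42 + 211 + 210 = 463
-- two row_numbers, with "order by":  463 - 210      = 253   -- one window sort cost has gone missing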

CBO trace file

Every time I say something about 10053 (CBO) trace files I feel compelled to remind everyone that I rarely look at them, and then it’s usually because I think I know there’s a bug and where I’ll find it in the trace. That’s exactly the case here.

I’m expecting to see two differences between the trace file for the “no order” query and the trace file for a query with an “order by” clause added. One trace file will have an “OBYE” (eliminate order by) comment which won’t be in the other trace; one trace file will have a “cost for SORT” calculation which won’t be in the other.

So here are the relevant bits – first from the query without the order by clause:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 463.384707  card: 10000.000000  bytes: 1010000.000000
***********************

And from one of the plans with an order by:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE performed.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 252.512458  card: 10000.000000  bytes: 1010000.000000
***********************

As you can see, the first (no order) trace file has two sort calculations under “WiF sort” (Window Function?), while the second (order by) trace file reports “OBYE performed” and then loses one of its WiF sorts.

Note: If I had ordered by ID I would have seen two calculations of cost under the heading of WiF sort and a third calculation below that with the heading ORDER BY sort. Unfortunately when I ran the test to check this I also found that the OBYE report said: “OBYE performed” when it wasn’t relevant and there wasn’t an “order by” available for elimination.

Interestingly I tested to see if I could change the optimizer’s behaviour by adding the no_eliminate_oby(@sel$1) hint to the “order by” queries, but the GENERAL PLANS section didn’t change even though the trace file reported “OBYE: OBYE bypassed: hinted”, and the “Hint Report” from the Oracle 19c execution plan acknowledged the hint as legal and used.
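
For anyone who wants to repeat the experiment, the hinted test would look something like this sketch (the hint_report format option for dbms_xplan is available from 19c):

explain plan for
select
        /*+ no_eliminate_oby(@sel$1) */
        row_number() over (order by vc30) rn2,
        id
from
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes hint_report'));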

Summary

If you have an “order by” clause in a query block that includes analytic functions and the optimizer decides that it can eliminate the “order by” and rely on the side effect of an analytic over() clause you may find that the cost of the query block is reduced by the cost of one of the Window Sort operations. (Technically this might lead to cases where the optimizer then made some poor choices in overall shape of the execution plan – though such cases might be very rare given that this costing error doesn’t affect the cardinality estimates.)

Lagniappe

In the last set of tests I added in the projection information as a simple example of a case where it can help you understand a little more of what the plan is supposed to achieve. If you examine the last two query plans carefully (select both row_number() values and order by vc30 / vc65 respectively) you’ll see that Operation 2 of the first plan reports:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

while operation 2 of the second plan reports:

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

It’s not until you look at these two lines that the plans show any differences – operations 1 and 2 simply say “Window Sort” without giving any clue about which window sort is for which over() clause. The projection information, though, tells you which way around the over() clauses operated – in the first query the over(order by vc65) is applied to the result of the tablescan first, while in the second query it’s the over(order by vc30) that is applied first.

Lagniappe 2

There’s another little oddity you might spot when you look at the projection information and think about the WiF sort costs from the unordered query. The columns passed from operation 3 to operation 2 are:

 3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The columns passed from operation 2 to operation 1 are one of:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

Operation 2 sorts the data from operation 3, and operation 1 sorts the data from operation 2 – but the columns arriving from operation 2 have an extra row_number()[22] added to them. So when you look in the trace file at the two WiF sort calculations why do they both show:

Blocks to Sort: 150 Row size: 122 Total Rows: 10000

Shouldn’t one of them show a Row size that’s (at least) 22 longer than the other?

May 5, 2020

Execution Plans

Filed under: Execution plans,Oracle,Performance,Troubleshooting,Tuning — Jonathan Lewis @ 12:36 pm BST May 5,2020

Table Of Contents

1.0 Introduction
2.0 Overview
3.0 The Main Course
4.0 Simplify
5.0 Filling the Gaps
6.0 Looking at the numbers
7.0 Predicate Information
8.0 Resolution
9.0 Summary
Footnote

1.0 Introduction

1.1 In a comment to a recent post on reading a non-trivial execution plan someone asked me to repeat the exercise using a plan I had published a few days previously in a post about tweaking the hints in an outline. The query in question involved a number of subqueries and transformations of different types, which means it’s going to take a little work explaining the details, and it’s probably going to be a fairly long read.

1.2 Here’s the query that produced the plan we’re going to examine. I’ve done some cosmetic alteration to make it a little easier to read (though it’s still not perfect according to my standards). I’ve also made one very important addition to the query to make it easier to follow my walkthrough of the execution plan; the original text didn’t specify any query block names (/*+ qb_name() */ hints) even though it starts off with 9 separate query blocks, so I’ve walked through the text very carefully adding in the query block names that Oracle would have used (sel$NN) for each query block. In this case I got lucky because there were no views or other recursive problems involved, so all I had to do was find each occurrence of the word “select” in literal text order and increment the NN in sel$NN for each one.


SELECT  /*+ QB_NAME(SEL$1) */
        COUNT(applicant_id)
FROM    (
        SELECT  /*+ QB_NAME(SEL$2) */
                applicant_id,
                academic_year,
                applicant_gender,
                medium_of_study,
                education_type,
                college_id,
                course_id,
                medium_id,
                hostel_required,
                preference_order,
                status_flag,
                attribute7,  -- Added on 7-mar-20
                college_status_flag,
                percentage,
                caste_category,
                alloted_category,
                NULL allotment_type
        FROM    (
                SELECT   /*+ QB_NAME(SEL$3) */
                        adt.applicant_id,
                        lmt_gender.lov_code applicant_gender,
                        adt.medium_of_study,
                        act.college_id,
                        lmt_education_type.lov_code education_type,
                        act.course_id,
                        act.medium_id,
                        act.hostel_required,
                        act.preference_order,
                        act.status_flag,
                        act.attribute7, -- Added on 7-mar-20
                        adt.college_status_flag,
                        adt.academic_year,
                        adt.percentage,
                        adt.applicant_dob,
                        adt.legacy_appln_date,
                        adt.caste_category,
                        act.attribute1 alloted_category,
                        DECODE (lmt_pass.lov_code,  'ATTFIRST', 1,  'COMPARTL', 2,  3) order_of_pass,
                        DECODE (late_entry_flag,  'N', 1,  'Y', 2,  3)      order_of_entry,
                        DECODE (lmt_appearance.lov_code,  'REGULAR', 1,  'PRIVATE', 2,  3) order_of_appearance,
                        DECODE (adt.is_ttd_employ_ward,  'Y', 1,  'N', 2,  3) order_of_ttd_emp,
                        DECODE (adt.is_balbhavan_studnt,  'Y', 1,  'N', 2,  3) order_of_schooling,
                        act.attribute3 course_qe_priority,
                        adt.is_local_canditature_valid,
                        adt.is_ttd_emp_ward_info_valid,
                        adt.is_sv_bm_student_info_valid,
                        adt.is_social_ctgry_info_valid,
                        DECODE(adt.college_status_flag,'B',1,'O',2,'N',3) order_of_status
                FROM 
                        xxadm.xxadm_applicant_details_tbl    adt,
                        xxadm.xxadm_applicant_coursprefs_tbl act,
                        xxadm.xxadm_college_master_tbl       cmt,
                        xxadm.xxadm_course_master_tbl        crmt,
                        xxadm.xxadm_medium_master_tbl        mmt,
                        xxadm.xxadm_lov_master_tbl           lmt_gender,
                        xxadm.xxadm_lov_master_tbl           lmt_pass,
                        xxadm.xxadm_lov_master_tbl           lmt_appearance,
                        xxadm.xxadm_lov_master_tbl           lmt_religion,
                        xxadm.xxadm_lov_master_tbl           lmt_education_type
                WHERE
                        adt.applicant_id = act.applicant_id
                AND     act.college_id = cmt.college_id
                AND     act.course_id = crmt.course_id
                AND     act.medium_id = mmt.medium_id
                AND     adt.applicant_gender = lmt_gender.lov_id
                AND     adt.pass_type = lmt_pass.lov_id
                AND     adt.appearance_type = lmt_appearance.lov_id
                AND     adt.religion = lmt_religion.lov_id
                AND     cmt.education_type = lmt_education_type.lov_id
                AND     adt.status = 'Active'
                AND     1 = (CASE 
                                WHEN act.hostel_required = 'Y'
                                        THEN (CASE
                                                     WHEN    adt.distance_in_kms >20
                                                     AND     lmt_religion.lov_code = 'HINDU'
                                                     AND     adt.caste_category NOT IN (
                                                                     SELECT  /*+ QB_NAME(SEL$4) */
                                                                             category_id
                                                                     FROM    xxadm.xxadm_category_master_tbl
                                                                     WHERE   category_code IN ('BACKWRDC', 'BACKWRDE')
                                                             )
                                                             THEN 1
                                                             ELSE 2 
                                              END
                                             )
                                        ELSE 1 
                               END
                              )
                AND     1 =  (CASE 
                                WHEN act.hostel_required  = 'Y'
                                        THEN    (CASE 
                                                        WHEN    (    lmt_education_type.lov_code = 'COEDUCOL' 
                                                                  AND lmt_gender.lov_code = 'FEMALE'
                                                                )
                                                                THEN 2
                                                                ELSE 1 
                                                 END
                                                )
                                        ELSE 1 
                               END
                              )
                AND     adt.course_applied_for = 'DEG' 
                AND     (adt.college_status_flag IS NULL OR adt.college_status_flag IN ('N','T','C','B','O')) 
                AND     act.preference_order <= NVL( -- > comment to avoid WordPress format issue
                                (SELECT  /*+ QB_NAME(SEL$5) */ 
                                         preference_order 
                                 FROM    xxadm.xxadm_applicant_coursprefs_tbl act1 
                                 WHERE   act1.applicant_id = adt.applicant_id 
                                 AND     status_flag IN('B','T','C','O') 
                                 ), act.preference_order 
                        )
                AND     act.preference_order >=  NVL(
                                (SELECT /*+ QB_NAME(SEL$6) */
                                        preference_order
                                FROM    xxadm.xxadm_applicant_coursprefs_tbl act2 
                                WHERE   act2.applicant_id = adt.applicant_id
                                AND     status_flag  = 'C'
                                ), act.preference_order
                        )
                AND     act.preference_order NOT IN (
                                SELECT  /*+ QB_NAME(SEL$7) */
                                        act3.preference_order 
                                FROM    xxadm.xxadm_applicant_coursprefs_tbl act3
                                WHERE   act3.applicant_id = adt.applicant_id 
                                AND     act3.status_flag  = 'O'
                        ) 
                AND     act.preference_order NOT IN (
                                SELECT  /*+ QB_NAME(SEL$8) */
                                        act1.preference_order 
                                FROM    xxadm.xxadm_applicant_coursprefs_tbl act1 
                                WHERE   act1.applicant_id = adt.applicant_id 
                                AND     act1.status_flag IN ('C','B')
                                AND     act1.attribute1 IN (
                                                SELECT  /*+ QB_NAME(SEL$9) */
                                                        category_id 
                                                FROM    xxadm.xxadm_category_master_tbl 
                                                WHERE   category_code IN ('OPENMERT')
                                        ) 
                                AND     NVL(act1.attribute7,'N') = 'N'
                        ) 
                AND     cmt.college_id = :p_college_id
                AND     crmt.course_id = :p_course_id
                AND     mmt.medium_id  = :p_medium_id
                AND     act.hostel_required = :p_hostel_required
                ORDER BY
                        order_of_pass,
                        course_qe_priority,
                        percentage DESC,
                        applicant_dob,
                        legacy_appln_date
                ) 
        WHERE
                 ROWNUM <=  :p_seats
        ) 
WHERE 
        applicant_id = :p_applicant_id
;

Figure 1-1

1.3 This query first came to light in a thread on the Oracle Developer forums with an extract from a tkprof file showing that it had executed 842,615 times. That number should be ringing alarms and flashing warning lights, but if we assume that there really is no way of doing some sort of batch processing to get through the data we need to do a little bit of arithmetic to see how much of a threat this query is and how much it matters.

1.4 For every extra 0.01 seconds it takes to execute this query the total run-time goes up by 8,426 seconds, which is 2 hours and 20 minutes. If the average execution time is a mere 0.06 seconds you’ll be at it all night long – and it will be a long, long night.
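
1.5 The arithmetic, for reference (the 0.06 seconds is the hypothetical average from the previous paragraph):

-- 842,615 executions * 0.01 seconds =  8,426 seconds ~= 2 hours 20 minutes (extra)
-- 842,615 executions * 0.06 seconds ~= 50,557 seconds ~= 14 hours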

2.0 Overview

2.1 Before we look at the execution plan let’s take a moment to pick out a few points from the query. You may want to re-open this post in a separate window so that you can switch easily between the SQL and my comments.

2.2 We start off with a simple select from an inline view – and if we replace the inline view with the simple “object name” V_THING we get the following query:


select  count(applicant_id)
from    V_THING
where   applicant_id = :p_applicant_id
;

Figure 2-1

2.3 This should prompt two questions:

  • First, how far into the view V_THING will the optimizer be able to push that predicate? Possibly the entire content of the view will have to be constructed before the predicate can apply; possibly the nature of the view is such that the optimizer could do a simple filter pushdown to apply the predicate very early. That still leaves (or leads on to) the question of whether the optimizer might then be able to generate further uses of the predicate through transitive closure.
  • Secondly, if the view V_THING is a multi-table view will we be able to work out which table applicant_id comes from by the time it becomes visible in the view? It’s possible that changing the table from which applicant_id comes will change the execution plan.

2.4 Digging down one layer we see that our V_THING is also a simple select from an inline view – let’s call it V_OTHER – so if we again forget about the complexity of the inner view we’re looking at a query that goes:


select  /*+ QB_NAME(SEL$1) */
        count(applicant_id)
from    (
        select  /*+ QB_NAME(SEL$2) */
                applicant_id, 
                {15 more columns}
                null    allotment_type
        from
                V_OTHER
        where
                rownum <=  :p_seats
        )       V_THING
where 
        applicant_id = :p_applicant_id
;

Figure 2-2

2.5 A couple of details hit the eye when you look at this: why are we selecting 17 columns from a complex view, then counting only one of them and discarding the rest? Let’s hope the optimizer is smart enough to discard the excess columns at the earliest possible moment (which might allow it to do some index-only accesses instead of visiting tables for columns we don’t really need).

2.6 Stranger still, one of those columns is a deliberately generated NULL! This hints at the possibility that the client code is doing something like “count how many rows query X will give me, then run query X” – giving us the pattern “select count(*) from (inline query X); execute query X”. Maybe this whole query is a waste of time, but if it can’t be avoided maybe it should be edited down to the smallest query that will get the correct count.

2.7 Another thought about this layer of the query: the predicate “rownum <= :bind_variable” may be pushing the optimizer into first_rows(n) optimization, and this might be enough to make it choose a bad execution plan. I’d have to check, and check for specific versions, but off the top of my head I think that when comparing rownum with a bind variable the optimizer will optimize for first_rows(10) unless there’s some other reason for choosing anything else.

2.8 I’m also a little curious about a requirement that seems to say – “pick at most N rows, then tell me how many you’ve picked”. What’s it actually trying to do and why?

2.9 Let’s dig one layer deeper before we get into the complex stuff. Here’s a version of the code that expands V_OTHER in an extremely stripped down form:


SELECT  /*+ QB_NAME(SEL$1) */
        COUNT(applicant_id)
FROM    (
        SELECT  /*+ QB_NAME(SEL$2) */
                applicant_id, 
                {15 more columns}
                NULL allotment_type
        FROM    (
                SELECT   /*+ QB_NAME(SEL$3) */
                        {lots of columns}
                FROM 
                        {lots of tables}
                WHERE
                        {lots of predicates}
                ORDER BY
                        order_of_pass,
                        course_qe_priority,
                        percentage DESC,
                        applicant_dob,
                        legacy_appln_date
                )  
        WHERE
                ROWNUM <=  :p_seats
        ) 
WHERE 
        applicant_id = :p_applicant_id
;

Figure 2-3

2.10 At this point we can start to see reasons for the layering of inline views – we need to select data in the right order before we apply the rownum predicate; as for the excess columns in the select list – even if we selected only the applicant_id in the outer layers the optimizer would still have to acquire the five columns in the order by clause.

2.11 But this emphasises the oddity of the query. If we’re only counting applicant_id to see whether we got :p_seats or fewer rows for a specific applicant_id why does the order matter – surely the order will only matter when we “repeat” the query to get the actual rows (if that’s what we do). As it is, to count a small number of rows this query might have to fetch and sort a large number, then discard most of them. (Some statistics from other posts by the OP indicated that the underlying query might fetch anything between a few hundred and a couple of thousand rows. This particular run showed the query finding 171 rows to sort and then restricting the rowsource to the first two sorted rows.)

3.0 The Main Course

3.1 To make it a little easier to discuss the detail of the execution plan I’ve laid it out in a small number of sections corresponding to the final (outline_leaf) query blocks the optimizer generated. To do this I applied two sets of information – the Query Block / Object Alias information (which follows the body of the plan) and any appearances of the VIEW operation in the plan.


--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                                |      1 |        |   574 (100)|      1 |00:00:00.02 |    3822 |       |       |         |
|   1 |  SORT AGGREGATE                             |                                |      1 |      1 |            |      1 |00:00:00.02 |    3822 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
select count(applicant_id) - above
select where rownum less than - below
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   VIEW                                      |                                |      1 |      1 |   574   (2)|      0 |00:00:00.02 |    3822 |       |       |         |
|*  3 |    COUNT STOPKEY                            |                                |      1 |        |            |      2 |00:00:00.02 |    3822 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Start of "real" main query, query block SEL$7E0D484F
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   4 |     VIEW                                    |                                |      1 |      1 |   574   (2)|      2 |00:00:00.02 |    3822 |       |       |         |
|*  5 |      SORT ORDER BY STOPKEY                  |                                |      1 |      1 |   574   (2)|      2 |00:00:00.02 |    3822 |  2048 |  2048 | 2048  (0)|
|*  6 |       FILTER                                |                                |      1 |        |            |    171 |00:00:00.02 |    3822 |       |       |         |
|   7 |        NESTED LOOPS                         |                                |      1 |      1 |   568   (2)|    182 |00:00:00.02 |    3128 |       |       |         |
|   8 |         NESTED LOOPS                        |                                |      1 |      1 |   568   (2)|    182 |00:00:00.02 |    2946 |       |       |         |
|   9 |          NESTED LOOPS                       |                                |      1 |      1 |   567   (2)|    182 |00:00:00.02 |    2942 |       |       |         |
|  10 |           NESTED LOOPS                      |                                |      1 |      1 |   566   (2)|    182 |00:00:00.02 |    2938 |       |       |         |
|  11 |            NESTED LOOPS ANTI                |                                |      1 |      1 |   565   (2)|    182 |00:00:00.02 |    2752 |       |       |         |
|  12 |             NESTED LOOPS ANTI               |                                |      1 |      1 |   562   (2)|    182 |00:00:00.02 |    2388 |       |       |         |
|* 13 |              HASH JOIN                      |                                |      1 |      5 |   557   (2)|    182 |00:00:00.02 |    2022 |  1599K|  1599K| 1503K (0)|
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
join index transformation query block SEL$082F290F
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  14 |               VIEW                          | index$_join$_008               |      1 |    127 |     2   (0)|    127 |00:00:00.01 |       8 |       |       |         |
|* 15 |                HASH JOIN                    |                                |      1 |        |            |    127 |00:00:00.01 |       8 |  1368K|  1368K| 1522K (0)|
|  16 |                 INDEX FAST FULL SCAN        | XXADM_LOVS_CODE_UK             |      1 |    127 |     1   (0)|    127 |00:00:00.01 |       4 |       |       |         |
|  17 |                 INDEX FAST FULL SCAN        | XXADM_LOVS_PK                  |      1 |    127 |     1   (0)|    127 |00:00:00.01 |       4 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Continuation of "real" main query, query block SEL$7E0D484F
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 18 |               HASH JOIN                     |                                |      1 |    478 |   555   (2)|    182 |00:00:00.01 |    2014 |  1245K|  1245K| 1277K (0)|
|  19 |                NESTED LOOPS                 |                                |      1 |    478 |   243   (2)|    209 |00:00:00.01 |     883 |       |       |         |
|  20 |                 NESTED LOOPS                |                                |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       4 |       |       |         |
|  21 |                  TABLE ACCESS BY INDEX ROWID| XXADM_COLLEGE_MASTER_TBL       |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |       |       |         |
|* 22 |                   INDEX UNIQUE SCAN         | XXADM_COLLEGES_PK              |      1 |      1 |     0   (0)|      1 |00:00:00.01 |       1 |       |       |         |
|  23 |                  TABLE ACCESS BY INDEX ROWID| XXADM_LOV_MASTER_TBL           |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |       |       |         |
|* 24 |                   INDEX UNIQUE SCAN         | XXADM_LOVS_PK                  |      1 |      1 |     0   (0)|      1 |00:00:00.01 |       1 |       |       |         |
|* 25 |                 TABLE ACCESS FULL           | XXADM_APPLICANT_COURSPREFS_TBL |      1 |    478 |   241   (2)|    209 |00:00:00.01 |     879 |       |       |         |
|* 26 |                TABLE ACCESS FULL            | XXADM_APPLICANT_DETAILS_TBL    |      1 |   6685 |   311   (2)|  10488 |00:00:00.01 |    1131 |       |       |         |
|* 27 |              TABLE ACCESS BY INDEX ROWID    | XXADM_APPLICANT_COURSPREFS_TBL |    182 |   8881 |     1   (0)|      0 |00:00:00.01 |     366 |       |       |         |
|* 28 |               INDEX UNIQUE SCAN             | XXADM_APPLCNT_PREF_ORDER_UK    |    182 |      1 |     0   (0)|    182 |00:00:00.01 |     184 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unnested subquery SEL$A75BE177 (from sel$8, sel$9)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  29 |             VIEW PUSHED PREDICATE           | VW_SQ_1                        |    182 |      1 |     3   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|  30 |              NESTED LOOPS                   |                                |    182 |      1 |     3   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|* 31 |               TABLE ACCESS BY INDEX ROWID   | XXADM_APPLICANT_COURSPREFS_TBL |    182 |      1 |     2   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|* 32 |                INDEX UNIQUE SCAN            | XXADM_APPLCNT_PREF_ORDER_UK    |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     184 |       |       |         |
|* 33 |               TABLE ACCESS BY INDEX ROWID   | XXADM_CATEGORY_MASTER_TBL      |      0 |      1 |     1   (0)|      0 |00:00:00.01 |       0 |       |       |         |
|* 34 |                INDEX UNIQUE SCAN            | XXADM_CATEGORY_PK              |      0 |      1 |     0   (0)|      0 |00:00:00.01 |       0 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Start of "real" main query, query block SEL$7E0D484F
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  35 |            TABLE ACCESS BY INDEX ROWID      | XXADM_LOV_MASTER_TBL           |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     186 |       |       |         |
|* 36 |             INDEX UNIQUE SCAN               | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |       |       |         |
|* 37 |           INDEX UNIQUE SCAN                 | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |       |       |         |
|* 38 |          INDEX UNIQUE SCAN                  | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |       |       |         |
|  39 |         TABLE ACCESS BY INDEX ROWID         | XXADM_LOV_MASTER_TBL           |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     182 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Filter subquery, query block SEL$5
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 40 |        TABLE ACCESS BY INDEX ROWID BATCHED  | XXADM_APPLICANT_COURSPREFS_TBL |    182 |      1 |     3   (0)|     29 |00:00:00.01 |     507 |       |       |         |
|* 41 |         INDEX RANGE SCAN                    | XXADM_APPLCNT_PREFS_UK         |    182 |      5 |     2   (0)|   1450 |00:00:00.01 |     191 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Filter subquery, query block SEL$6
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  42 |        TABLE ACCESS BY INDEX ROWID BATCHED  | XXADM_APPLICANT_COURSPREFS_TBL |    171 |      1 |     2   (0)|      0 |00:00:00.01 |     173 |       |       |         |
|* 43 |         INDEX RANGE SCAN                    | XXADM_APPLCNT_APPLICANT_STATUS |    171 |      1 |     1   (0)|      0 |00:00:00.01 |     173 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Filter subquery SEL$F665FE1B (from sel$4 with transform for index join)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 44 |        VIEW                                 | index$_join$_014               |      6 |      1 |     0   (0)|      0 |00:00:00.01 |      14 |       |       |         |
|* 45 |         HASH JOIN                           |                                |      6 |        |            |      0 |00:00:00.01 |      14 |  1519K|  1519K|  666K (0)|
|* 46 |          INDEX RANGE SCAN                   | XXADM_CATEGORY_PK              |      6 |      1 |     0   (0)|      6 |00:00:00.01 |       6 |       |       |         |
|  47 |          INLIST ITERATOR                    |                                |      6 |        |            |     12 |00:00:00.01 |       8 |       |       |         |
|* 48 |           INDEX UNIQUE SCAN                 | XXADM_CATEGORY_CODE_UK         |     12 |      1 |     0   (0)|     12 |00:00:00.01 |       8 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$2        / from$_subquery$_001@SEL$1
   3 - SEL$2
   4 - SEL$7E0D484F / from$_subquery$_002@SEL$2
   5 - SEL$7E0D484F
  14 - SEL$082F290F / LMT_GENDER@SEL$3
  15 - SEL$082F290F
  16 - SEL$082F290F / indexjoin$_alias$_001@SEL$082F290F
  17 - SEL$082F290F / indexjoin$_alias$_002@SEL$082F290F
  21 - SEL$7E0D484F / CMT@SEL$3
  22 - SEL$7E0D484F / CMT@SEL$3
  23 - SEL$7E0D484F / LMT_EDUCATION_TYPE@SEL$3
  24 - SEL$7E0D484F / LMT_EDUCATION_TYPE@SEL$3
  25 - SEL$7E0D484F / ACT@SEL$3
  26 - SEL$7E0D484F / ADT@SEL$3
  27 - SEL$7E0D484F / ACT3@SEL$7
  28 - SEL$7E0D484F / ACT3@SEL$7
  29 - SEL$A75BE177 / VW_SQ_1@SEL$67DC521B
  30 - SEL$A75BE177
  31 - SEL$A75BE177 / ACT1@SEL$8
  32 - SEL$A75BE177 / ACT1@SEL$8
  33 - SEL$A75BE177 / XXADM_CATEGORY_MASTER_TBL@SEL$9
  34 - SEL$A75BE177 / XXADM_CATEGORY_MASTER_TBL@SEL$9
  35 - SEL$7E0D484F / LMT_PASS@SEL$3
  36 - SEL$7E0D484F / LMT_PASS@SEL$3
  37 - SEL$7E0D484F / LMT_APPEARANCE@SEL$3
  38 - SEL$7E0D484F / LMT_RELIGION@SEL$3
  39 - SEL$7E0D484F / LMT_RELIGION@SEL$3
  40 - SEL$5        / ACT1@SEL$5
  41 - SEL$5        / ACT1@SEL$5
  42 - SEL$6        / ACT2@SEL$6
  43 - SEL$6        / ACT2@SEL$6
  44 - SEL$F665FE1B / XXADM_CATEGORY_MASTER_TBL@SEL$4
  45 - SEL$F665FE1B
  46 - SEL$F665FE1B / indexjoin$_alias$_001@SEL$F665FE1B
  48 - SEL$F665FE1B / indexjoin$_alias$_002@SEL$F665FE1B

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      OPT_PARAM('_optimizer_use_feedback' 'false')
      OPT_PARAM('_optimizer_dsdir_usage_control' 0)
      OPT_PARAM('_optimizer_adaptive_plans' 'false')
      OPT_PARAM('_optimizer_gather_feedback' 'false')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$F665FE1B")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SEL$6")
      OUTLINE_LEAF(@"SEL$A75BE177")
      PUSH_PRED(@"SEL$7E0D484F" "VW_SQ_1"@"SEL$67DC521B" 16 15)
      OUTLINE_LEAF(@"SEL$082F290F")
      OUTLINE_LEAF(@"SEL$7E0D484F")
      UNNEST(@"SEL$9D10C90A")
      UNNEST(@"SEL$7")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE(@"SEL$180402DE")
      OUTLINE(@"SEL$7E0D484F")
      UNNEST(@"SEL$9D10C90A")
      UNNEST(@"SEL$7")
      OUTLINE(@"SEL$67DC521B")
      OUTLINE(@"SEL$9D10C90A")
      UNNEST(@"SEL$9")
      OUTLINE(@"SEL$7")
      OUTLINE(@"SEL$C04829E0")
      ELIMINATE_JOIN(@"SEL$3" "CRMT"@"SEL$3")
      ELIMINATE_JOIN(@"SEL$3" "MMT"@"SEL$3")
      OUTLINE(@"SEL$8")
      OUTLINE(@"SEL$9")
      OUTLINE(@"SEL$3")
      NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")
      NO_ACCESS(@"SEL$2" "from$_subquery$_002"@"SEL$2")
      INDEX_RS_ASC(@"SEL$7E0D484F" "CMT"@"SEL$3" ("XXADM_COLLEGE_MASTER_TBL"."COLLEGE_ID"))
      INDEX_RS_ASC(@"SEL$7E0D484F" "LMT_EDUCATION_TYPE"@"SEL$3" ("XXADM_LOV_MASTER_TBL"."LOV_ID"))
      FULL(@"SEL$7E0D484F" "ACT"@"SEL$3")
      FULL(@"SEL$7E0D484F" "ADT"@"SEL$3")
      INDEX_JOIN(@"SEL$7E0D484F" "LMT_GENDER"@"SEL$3" ("XXADM_LOV_MASTER_TBL"."LOV_CODE") ("XXADM_LOV_MASTER_TBL"."LOV_ID"))
      INDEX_RS_ASC(@"SEL$7E0D484F" "ACT3"@"SEL$7" ("XXADM_APPLICANT_COURSPREFS_TBL"."APPLICANT_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."PREFERENCE_ORDER"))
      NO_ACCESS(@"SEL$7E0D484F" "VW_SQ_1"@"SEL$67DC521B")
      INDEX_RS_ASC(@"SEL$7E0D484F" "LMT_PASS"@"SEL$3" ("XXADM_LOV_MASTER_TBL"."LOV_ID"))
      INDEX_RS_ASC(@"SEL$7E0D484F" "LMT_APPEARANCE"@"SEL$3" ("XXADM_LOV_MASTER_TBL"."LOV_ID"))
      INDEX(@"SEL$7E0D484F" "LMT_RELIGION"@"SEL$3" ("XXADM_LOV_MASTER_TBL"."LOV_ID"))
      LEADING(@"SEL$7E0D484F" "CMT"@"SEL$3" "LMT_EDUCATION_TYPE"@"SEL$3" "ACT"@"SEL$3" "ADT"@"SEL$3" "LMT_GENDER"@"SEL$3" "ACT3"@"SEL$7" "VW_SQ_1"@"SEL$67DC521B"
              "LMT_PASS"@"SEL$3" "LMT_APPEARANCE"@"SEL$3" "LMT_RELIGION"@"SEL$3")
      USE_NL(@"SEL$7E0D484F" "LMT_EDUCATION_TYPE"@"SEL$3")
      USE_NL(@"SEL$7E0D484F" "ACT"@"SEL$3")
      USE_HASH(@"SEL$7E0D484F" "ADT"@"SEL$3")
      USE_HASH(@"SEL$7E0D484F" "LMT_GENDER"@"SEL$3")
      USE_NL(@"SEL$7E0D484F" "ACT3"@"SEL$7")
      USE_NL(@"SEL$7E0D484F" "VW_SQ_1"@"SEL$67DC521B")
      USE_NL(@"SEL$7E0D484F" "LMT_PASS"@"SEL$3")
      USE_NL(@"SEL$7E0D484F" "LMT_APPEARANCE"@"SEL$3")
      USE_NL(@"SEL$7E0D484F" "LMT_RELIGION"@"SEL$3")
      NLJ_BATCHING(@"SEL$7E0D484F" "LMT_RELIGION"@"SEL$3")
      SWAP_JOIN_INPUTS(@"SEL$7E0D484F" "LMT_GENDER"@"SEL$3")
      PQ_FILTER(@"SEL$7E0D484F" SERIAL)
      INDEX_RS_ASC(@"SEL$A75BE177" "ACT1"@"SEL$8" ("XXADM_APPLICANT_COURSPREFS_TBL"."APPLICANT_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."PREFERENCE_ORDER"))
      INDEX_RS_ASC(@"SEL$A75BE177" "XXADM_CATEGORY_MASTER_TBL"@"SEL$9" ("XXADM_CATEGORY_MASTER_TBL"."CATEGORY_ID"))
      LEADING(@"SEL$A75BE177" "ACT1"@"SEL$8" "XXADM_CATEGORY_MASTER_TBL"@"SEL$9")
      USE_NL(@"SEL$A75BE177" "XXADM_CATEGORY_MASTER_TBL"@"SEL$9")
      INDEX_RS_ASC(@"SEL$6" "ACT2"@"SEL$6" ("XXADM_APPLICANT_COURSPREFS_TBL"."APPLICANT_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."STATUS_FLAG"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$6" "ACT2"@"SEL$6")
      INDEX_RS_ASC(@"SEL$5" "ACT1"@"SEL$5" ("XXADM_APPLICANT_COURSPREFS_TBL"."APPLICANT_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."COLLEGE_ID"
              "XXADM_APPLICANT_COURSPREFS_TBL"."COURSE_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."MEDIUM_ID" "XXADM_APPLICANT_COURSPREFS_TBL"."HOSTEL_REQUIRED"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$5" "ACT1"@"SEL$5")
      INDEX_JOIN(@"SEL$4" "XXADM_CATEGORY_MASTER_TBL"@"SEL$4" ("XXADM_CATEGORY_MASTER_TBL"."CATEGORY_ID") ("XXADM_CATEGORY_MASTER_TBL"."CATEGORY_CODE"))
      END_OUTLINE_DATA
  */

Figure 3-1

3.2 There’s no rigid rule I can give you about an approach for looking for query blocks and transformations, but it’s worth checking to see which of your original query blocks still exist in the final execution plan and which have disappeared thanks to some transformation.
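
For reference, both sets of information can be reported by dbms_xplan; here’s a sketch of the sort of call that produces them (assuming the statement was run with rowsource statistics enabled, e.g. via the gather_plan_statistics hint):

select  *
from    table(dbms_xplan.display_cursor(
                null,                            -- null sql_id: last statement run in this session
                null,                            -- child cursor number
                'ALLSTATS LAST +ALIAS +OUTLINE'  -- add Query Block / Object Alias and Outline Data
        ));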

3.3 If we look down the Query Block Name list above we can see that sel$1, sel$2, sel$5 and sel$6 have “survived” the machinations of the optimizer. We’ve already noted that sel$1 and sel$2 are simply “select from {inline view}” as far as the optimizer is concerned; sel$5 and sel$6 are simple subqueries that appeared as filter subqueries in the original query text and have kept that status by the end of the optimizer’s transformation stage.

3.4 Tracking down the other query blocks that we named we can see the following:

  • sel$3 – most of its tables appear in a new query block called SEL$7E0D484F but one of them appears in a query block called SEL$082F290F; a closer look at SEL$082F290F shows us that it ranges from operations 14 to 17 and holds a “single table” transformation where the optimizer has chosen to use an index join of two indexes on the xxadm_lov_master_tbl rather than doing a tablescan. The index join is represented as a VIEW of a hash join, hence the separate query block. Another little detail we note – the xxadm_lov_master_tbl appears five times in the query, so we need to know which occurrence this is: fortunately the Object Alias information tells us that at operation 14 it’s the LMT_GENDER alias.
  • sel$4 is the scalar subquery inside a CASE expression, involving table xxadm_category_master_tbl. We can find the table name (which hasn’t been given an alias) and the query block name in the Object Alias information at operation 44 in a query block called SEL$F665FE1B. There are two points of interest about this query block – it has come into existence because it’s another example where the optimizer has used an index join to avoid a full tablescan; and it has been used in a filter subquery (the parent of operation 44 is the FILTER at operation 6).
  • sel$7 appeared in the original text as a NOT IN subquery against xxadm_applicant_coursprefs_tbl with an alias of act3. The Query Block / Object Alias information tells us that ACT3@SEL$7 appears at operations 27 and 28 – and when we track up the plan from operation 27 we see that it is the second child of operation 12, which is a nested loop anti. The optimizer has unnested the subquery and turned it into an anti join as one of the steps that produced query block SEL$7E0D484F.
  • sel$8 appeared in the original text as a NOT IN subquery against xxadm_applicant_coursprefs_tbl aliased as act1 (OUCH – that’s the second time the alias act1 has appeared in this query!). But the subquery had its own subquery, named sel$9, against xxadm_category_master_tbl which didn’t have an alias (more OUCH!). When we search for ACT1@SEL$8 and XXADM_CATEGORY_MASTER_TBL@SEL$9 in the Query Block / Object Alias information we find that they both appear in a query block called SEL$A75BE177 which ranges from operations 29 to 34, and a check of the plan shows that operation 29 is a view pushed predicate of a view called VW_SQ_1 – a name that identifies the view as an internally generated non-mergeable view that was created as the result of unnesting a subquery. The view contains a join between xxadm_applicant_coursprefs_tbl and xxadm_category_master_tbl, so we can say that the optimizer has unnested our sel$9 to create a NOT IN subquery that is a join subquery, then it has unnested again to produce an inline non-mergeable view, and it has then allowed “join predicate pushdown (JPPD)” so that the non-mergeable view can be the second table of a nested loop. To confirm the last comment we track up the plan to discover that operation 29 is the second child of operation 11 which is, indeed, a nested loop and (since the subquery was a NOT IN subquery) a nested loop anti.
  • sel$9 – see sel$8.

3.5 As you can see, when you’re using the execution plan output to identify what’s happened to the individual query blocks from your original query you’re likely to jump around from the query to the plan body, to the Query Block / Object Alias information in a fairly arbitrary way.

3.6 I’ll close this chapter of the analysis with a quick look at the Outline Data – in particular two of the hint types that appear: OUTLINE() and OUTLINE_LEAF() – which I’ve extracted and sorted for ease of reading:

      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$7")
      OUTLINE(@"SEL$8")
      OUTLINE(@"SEL$9")

      OUTLINE(@"SEL$180402DE")
      OUTLINE(@"SEL$67DC521B")
      OUTLINE(@"SEL$7E0D484F")
      OUTLINE(@"SEL$9D10C90A")
      OUTLINE(@"SEL$C04829E0")

      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SEL$6")
      OUTLINE_LEAF(@"SEL$082F290F")
      OUTLINE_LEAF(@"SEL$7E0D484F")
      OUTLINE_LEAF(@"SEL$A75BE177")
      OUTLINE_LEAF(@"SEL$F665FE1B")

Figure 3-2

3.7 In this context an OUTLINE() is a query block that existed at some point in the optimization sequence that got us to the final execution plan but did not appear as a query block in the final plan. In the previous paragraphs we described how the original query blocks sel$3, sel$4, sel$7, sel$8 and sel$9 disappeared through transformation so (apart from sel$4, which is a bit of an anomaly that I’ll pick up in a moment) they appear in an OUTLINE() hint. I described how sel$9 would have been unnested into sel$8 to create a join which would still have been a filter subquery until that too was unnested – that join subquery would have been one of the five OUTLINE() query blocks above with the 8-character hexadecimal names.

3.8 An OUTLINE_LEAF() is a “final” query block – one that is present in the final execution plan. If you ignore sel$4, you’ll see that the other 8 query blocks correspond to the 8 query block names that appear in the Query Block Name / Object Alias information. The appearance of sel$4 as an OUTLINE_LEAF() looks like an anomaly to me; I can’t think of a good reason why it should be in the list.


 

4.0 Simplify

4.1 We’re just about ready to do a full read-through of the execution plan. We’ve taken the two outer layers off the query/plan because they represent such simple in-line views, and we’ve discussed the disappearance of some of our initial query blocks and identified and explained all the different query blocks that have appeared in the final plan. So with the extra bits of information in hand let’s take a couple more steps in simplifying the execution plan.

4.2 First I’ll replace each of the three VIEW operations and their descendants with a single line that says “this is a rowsource”. I’ll distinguish between the two variants of the operation (VIEW and VIEW PUSHED PREDICATE) by calling them BULK ROWSOURCE and PRECISION ROWSOURCE respectively: it’s not a perfect description but broadly speaking we expect a VIEW to be called once by its parent to produce a “large” data set and we expect a VIEW PUSHED PREDICATE to be called many times by its parent to produce (each time) a “small” data set using extremely efficient methods.

4.3 Then I’ll remind you that a multichild FILTER operation calls the first child once to supply a rowsource then, for each row returned, calls the other child operations in turn to determine whether or not to keep the row from the first child. This means we can examine just the first child in isolation to see how the optimizer wants to get the driving bulk of the data (and we can examine the other children later, bearing in mind how often they might need to be called and checking how efficient each call is likely to be).
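
For anyone who wants to see the shape in isolation, here’s a minimal sketch (tables t1, t2 and t3 are hypothetical) of a query that produces this kind of multichild FILTER – the no_unnest hints stop the optimizer turning the subqueries into joins, so each subquery becomes a FILTER child called (subject to caching) for rows from the first child:

select  t1.*
from    t1
where   exists (select /*+ no_unnest */ null from t2 where t2.id = t1.id)
and     exists (select /*+ no_unnest */ null from t3 where t3.id = t1.id)
;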

4.4 Finally I’ll note that query block SEL$7E0D484F (the “real main query” as I labelled it in the plan above) starts: “VIEW -> SORT ORDER BY STOPKEY -> FILTER” – after we’ve filtered our data we simply sort it with the intention of keeping only the “top few” rows. That part of the plan is so simple we’ll ignore those lines of the plan and focus on just the first child of the FILTER – leaving the core plan looking like this:


-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   7 |        NESTED LOOPS                         |                                |      1 |      1 |   568   (2)|    182 |00:00:00.02 |    3128 |
|   8 |         NESTED LOOPS                        |                                |      1 |      1 |   568   (2)|    182 |00:00:00.02 |    2946 |
|   9 |          NESTED LOOPS                       |                                |      1 |      1 |   567   (2)|    182 |00:00:00.02 |    2942 |
|  10 |           NESTED LOOPS                      |                                |      1 |      1 |   566   (2)|    182 |00:00:00.02 |    2938 |
|  11 |            NESTED LOOPS ANTI                |                                |      1 |      1 |   565   (2)|    182 |00:00:00.02 |    2752 |
|  12 |             NESTED LOOPS ANTI               |                                |      1 |      1 |   562   (2)|    182 |00:00:00.02 |    2388 |
|* 13 |              HASH JOIN                      |                                |      1 |      5 |   557   (2)|    182 |00:00:00.02 |    2022 |
|  14 |               BULK ROWSOURCE                | index$_join$_008               |      1 |    127 |     2   (0)|    127 |00:00:00.01 |       8 |
|* 18 |               HASH JOIN                     |                                |      1 |    478 |   555   (2)|    182 |00:00:00.01 |    2014 |
|  19 |                NESTED LOOPS                 |                                |      1 |    478 |   243   (2)|    209 |00:00:00.01 |     883 |
|  20 |                 NESTED LOOPS                |                                |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       4 |
|  21 |                  TABLE ACCESS BY INDEX ROWID| XXADM_COLLEGE_MASTER_TBL       |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |
|* 22 |                   INDEX UNIQUE SCAN         | XXADM_COLLEGES_PK              |      1 |      1 |     0   (0)|      1 |00:00:00.01 |       1 |
|  23 |                  TABLE ACCESS BY INDEX ROWID| XXADM_LOV_MASTER_TBL           |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |
|* 24 |                   INDEX UNIQUE SCAN         | XXADM_LOVS_PK                  |      1 |      1 |     0   (0)|      1 |00:00:00.01 |       1 |
|* 25 |                 TABLE ACCESS FULL           | XXADM_APPLICANT_COURSPREFS_TBL |      1 |    478 |   241   (2)|    209 |00:00:00.01 |     879 |
|* 26 |                TABLE ACCESS FULL            | XXADM_APPLICANT_DETAILS_TBL    |      1 |   6685 |   311   (2)|  10488 |00:00:00.01 |    1131 |
|* 27 |              TABLE ACCESS BY INDEX ROWID    | XXADM_APPLICANT_COURSPREFS_TBL |    182 |   8881 |     1   (0)|      0 |00:00:00.01 |     366 |
|* 28 |               INDEX UNIQUE SCAN             | XXADM_APPLCNT_PREF_ORDER_UK    |    182 |      1 |     0   (0)|    182 |00:00:00.01 |     184 |
|  29 |             PRECISION ROWSOURCE             | VW_SQ_1                        |    182 |      1 |     3   (0)|      0 |00:00:00.01 |     364 |
|  35 |            TABLE ACCESS BY INDEX ROWID      | XXADM_LOV_MASTER_TBL           |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     186 |
|* 36 |             INDEX UNIQUE SCAN               | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |
|* 37 |           INDEX UNIQUE SCAN                 | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |
|* 38 |          INDEX UNIQUE SCAN                  | XXADM_LOVS_PK                  |    182 |      1 |     0   (0)|    182 |00:00:00.01 |       4 |
|  39 |         TABLE ACCESS BY INDEX ROWID         | XXADM_LOV_MASTER_TBL           |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     182 |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Figure 4-1

4.5 We need to examine a plan of only 25 lines with no complicated bits (because we’ve hidden any bits that looked complicated and will get back to them later). The thing now looks like a single query block which means we can think “First Child First”, so:

  • operation 7 calls operation 8 which calls operation 9 which calls operation 10 which calls operation 11 which calls operation 12 which calls operation 13 which calls operation 14 which is the first operation to produce a rowsource (though we don’t care how at present).
  • Operation 13 is a hash join, so the rowsource from operation 14 becomes its “build” table, and we call operation 18 to supply the “probe” table.
  • Operation 18 calls operation 19 which calls operation 20 which calls operation 21 which is a table access by rowid that has to call operation 22 to get rowids. So operation 22 supplies the second rowsource (in our collapsed plan). It’s an INDEX UNIQUE SCAN of the index that appears (judging from its name) to be the primary key index of a table, so operation 22 will produce at most one rowid that is passed up to operation 21 that will use that rowid to get the one row from the table. (Operation 21 supplies the 3rd rowsource).
  • Operation 21 passes a row up to operation 20 which calls operation 23 which calls operation 24 (4th rowsource) to do another unique scan of a unique index to get a rowid to pass up to operation 23 to find (and test) a row from the table (5th rowsource) which it passes up to operation 20 to do the join and pass the result up (6th rowsource) to operation 19.
  • Operation 19 calls its second child (operation 25) for each row it receives – but because of the primary key/unique scans the optimizer knows that the first child will return at most one row and sees no problem with using a full tablescan as the second child of the nested loop. So the tablescan of XXADM_APPLICANT_COURSPREFS_TBL is the 7th rowsource. Any rows surviving the join are passed up to operation 18 (making operation 19 the 8th rowsource).
  • Operation 18 uses the incoming rowsource to build its in-memory hash table, and calls operation 26 to supply its second (probe table) rowsource. Operation 26 is the 9th rowsource, executing a full tablescan of XXADM_APPLICANT_DETAILS_TBL and passing the results up to operation 18, which performs the join and passes the results up to its parent, making it the 10th rowsource.
  • Operation 18 was the second child of the hash join at operation 13, which now uses the incoming data as the probe table to generate the 11th rowsource and pass the results up to operation 12.
  • Operation 12 is a nested loop anti-join and operation 13 has just supplied it with its first child rowsource, so operation 12 will now call its second child once for each row in the first child. Its second child is operation 27 (table access by rowid) which calls its first child (operation 28, index unique scan) which fetches rowids from the index and passes them up to operation 27 which fetches table rows and passes them up to operation 12. So operation 28 supplies the 12th rowsource, operation 27 the 13th. Since operation 12 is an ANTI join a row from the first child survives if operation 27 doesn’t find a row to return. Operation 12 passes any survivors (14th rowsource) up to operation 11.
  • Operation 11 is another ANTI-join nested loop so for each row from operation 12 it will call its second child, passing in values from its first child to drive an efficient access path and forwarding any rows from the first child where the second child returns no rows. Its second child is operation 29 – our “precision rowsource” – and we can postpone worrying about the exact details of how that works. Operation 29 produces the 15th rowsource in our reduced plan, which it passes up to operation 11.
  • Operation 11 is the first child of the nested loop at operation 10 – and from this point onwards we have 4 nested loop joins and we can simply list through the order of operation. Operation 11 produces the 16th rowsource, then Operation 10 calls its second child (operation 35) which calls operation 36 which passes rowids (17th rowsource) up to operation 35 which passes rows (18th rowsource) up to operation 10.
  • Operation 10 passes its data (19th rowsource) up to operation 9 which calls operation 37 as its second child. Operation 37 (20th rowsource) passes index entries up to operation 9 which performs the join and passes the result (21st rowsource) up to operation 8.
  • Operation 8 calls operation 38 as its second child. Operation 38 (22nd rowsource) passes index entries up to operation 8 which performs the join and passes the result (23rd rowsource) up to operation 7.
  • Operation 7 calls operation 39 as its second child. Operation 39 (24th rowsource) passes index entries up to operation 7 which performs the join and that’s the final (25th) rowsource as far as our reduced execution plan is concerned.

4.6 Here’s the reduced plan, cut back to minimum width, with the order of rowsource generation included:


-----------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           |  Order |
-----------------------------------------------------------------------------------------------
|   7 |        NESTED LOOPS                         |                                |     25 |
|   8 |         NESTED LOOPS                        |                                |     23 |
|   9 |          NESTED LOOPS                       |                                |     21 |
|  10 |           NESTED LOOPS                      |                                |     19 |
|  11 |            NESTED LOOPS ANTI                |                                |     16 |
|  12 |             NESTED LOOPS ANTI               |                                |     14 |
|* 13 |              HASH JOIN                      |                                |     11 |
|  14 |               BULK ROWSOURCE                | index$_join$_008               |      1 |
|* 18 |               HASH JOIN                     |                                |     10 |
|  19 |                NESTED LOOPS                 |                                |      8 |
|  20 |                 NESTED LOOPS                |                                |      6 |
|  21 |                  TABLE ACCESS BY INDEX ROWID| XXADM_COLLEGE_MASTER_TBL       |      3 |
|* 22 |                   INDEX UNIQUE SCAN         | XXADM_COLLEGES_PK              |      2 |
|  23 |                  TABLE ACCESS BY INDEX ROWID| XXADM_LOV_MASTER_TBL           |      5 |
|* 24 |                   INDEX UNIQUE SCAN         | XXADM_LOVS_PK                  |      4 |
|* 25 |                 TABLE ACCESS FULL           | XXADM_APPLICANT_COURSPREFS_TBL |      7 |
|* 26 |                TABLE ACCESS FULL            | XXADM_APPLICANT_DETAILS_TBL    |      9 |
|* 27 |              TABLE ACCESS BY INDEX ROWID    | XXADM_APPLICANT_COURSPREFS_TBL |     13 |
|* 28 |               INDEX UNIQUE SCAN             | XXADM_APPLCNT_PREF_ORDER_UK    |     12 |
|  29 |             PRECISION ROWSOURCE             | VW_SQ_1                        |     15 |
|  35 |            TABLE ACCESS BY INDEX ROWID      | XXADM_LOV_MASTER_TBL           |     18 |
|* 36 |             INDEX UNIQUE SCAN               | XXADM_LOVS_PK                  |     17 |
|* 37 |           INDEX UNIQUE SCAN                 | XXADM_LOVS_PK                  |     20 |
|* 38 |          INDEX UNIQUE SCAN                  | XXADM_LOVS_PK                  |     22 |
|  39 |         TABLE ACCESS BY INDEX ROWID         | XXADM_LOV_MASTER_TBL           |     24 |
-----------------------------------------------------------------------------------------------

Figure 4-2

4.7 Once we’ve got this image sorted out we still have a few details to fill in before we’ve got the full picture of the execution plan.

  • What does Oracle do to generate the “bulk rowsource” at operation 14?
  • What does Oracle do on every call to the “precision rowsource” at operation 29?
  • We know that the reduced plan above is the first child of a FILTER operation, and if we refer back to the previous “real main query” we know that there are three further children to the FILTER that might have to execute once for each row produced by the first child. So that’s another 3 (small) query blocks we need to examine in detail.
  • We need to bring in the predicates to see how the optimizer has used them
  • We need to look at the Starts and A-Rows to compare what happened with the optimizer’s expectation
  • We need to look at disk reads and buffer gets to see how much excess work we did to acquire the data


 

5.0 Filling the Gaps

5.1 After getting the overall shape of the query’s execution we can go back and examine the bits we have so far postponed. There are three pieces to consider:

  • the “bulk rowsource” at operation 14 that was the first child of a hash join.
  • the “precision rowsource” at operation 29 that was the second child of a nested loop anti-join
  • the filter subqueries that were the 2nd, 3rd and 4th children of the explicit FILTER at operation 6

5.2 We start with the “bulk rowsource” that was a VIEW with a highly suggestive name of index$_join$_008. This shows Oracle finding a way of selecting data from a table without visiting the table, using a couple of indexes as if they were skinny tables that could be scanned and joined. In effect Oracle has transformed “select key1, key2 from tableX” into something like:


select  ix1.key1, ix2.key2
from
        (select key1, rowid r1 from tableX) ix1,
        (select key2, rowid r2 from tableX) ix2
where
        ix1.r1 = ix2.r2
;

5.3 This strategy can only be used when Oracle knows that every relevant row will appear in the two indexes – which basically means you have to be careful about NULLs. In the simplest case you might have to have a NOT NULL constraint on at least one column in each of the target indexes; or a predicate in each inline view that ensures that Oracle can use just the index without losing some of the rowids that it needs. After acquiring key values and rowids from each index in turn, Oracle then joins the two sets of data using a hash join. Technically there is no limit to the number of indexes that Oracle can join in this fashion, the choice of strategy depends largely on how big the table is compared to the sum of the sizes of the indexes that could be used instead; practically (as in our main query) it’s rare to see more than two indexes used for this “index join” mechanism.


join index transformation query block SEL$082F290F, with parent
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 13 |              HASH JOIN                      |                                |      1 |      5 |   557   (2)|    182 |00:00:00.02 |    2022 |  1599K|  1599K| 1503K (0)|
|  14 |               VIEW                          | index$_join$_008               |      1 |    127 |     2   (0)|    127 |00:00:00.01 |       8 |       |       |         |
|* 15 |                HASH JOIN                    |                                |      1 |        |            |    127 |00:00:00.01 |       8 |  1368K|  1368K| 1522K (0)|
|  16 |                 INDEX FAST FULL SCAN        | XXADM_LOVS_CODE_UK             |      1 |    127 |     1   (0)|    127 |00:00:00.01 |       4 |       |       |         |
|  17 |                 INDEX FAST FULL SCAN        | XXADM_LOVS_PK                  |      1 |    127 |     1   (0)|    127 |00:00:00.01 |       4 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
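
The Outline Data in Figure 3-1 records this choice as an INDEX_JOIN() hint, so if you wanted to experiment with the mechanism in isolation you could (as a sketch) hint it directly – the is-not-null predicate is there so that Oracle can see that every relevant row must appear in both indexes:

select  /*+ index_join(lmt) */
        lov_code, lov_id
from    xxadm_lov_master_tbl lmt
where   lov_code is not null
;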

5.4 Moving on to the “precision rowsource” that appears in the original plan as a VIEW PUSHED PREDICATE. This means that Oracle has optimised a non-mergeable view allowing for an input value from its parent. If you take operations 30 to 34 in complete isolation it’s just a simple nested loop join and you might wonder why the view is non-mergeable. But when you look back at the parent you discover that it’s an ANTI-join, so Oracle has to say (for each driving row) “join these two tables and see if you get any rows as a result” – it doesn’t have a generic mechanism for doing two separate but consecutive (anti-)join operations at this point.


Unnested subquery SEL$A75BE177 (from sel$8, sel$9) with parent and its first child
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  11 |            NESTED LOOPS ANTI                |                                |      1 |      1 |   565   (2)|    182 |00:00:00.02 |    2752 |       |       |         |
|  12 |             Driving Rowsource               |                                |      1 |      1 |   562   (2)|    182 |00:00:00.02 |    2388 |       |       |         |
|  29 |             VIEW PUSHED PREDICATE           | VW_SQ_1                        |    182 |      1 |     3   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|  30 |              NESTED LOOPS                   |                                |    182 |      1 |     3   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|* 31 |               TABLE ACCESS BY INDEX ROWID   | XXADM_APPLICANT_COURSPREFS_TBL |    182 |      1 |     2   (0)|      0 |00:00:00.01 |     364 |       |       |         |
|* 32 |                INDEX UNIQUE SCAN            | XXADM_APPLCNT_PREF_ORDER_UK    |    182 |      1 |     1   (0)|    182 |00:00:00.01 |     184 |       |       |         |
|* 33 |               TABLE ACCESS BY INDEX ROWID   | XXADM_CATEGORY_MASTER_TBL      |      0 |      1 |     1   (0)|      0 |00:00:00.01 |       0 |       |       |         |
|* 34 |                INDEX UNIQUE SCAN            | XXADM_CATEGORY_PK              |      0 |      1 |     0   (0)|      0 |00:00:00.01 |       0 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
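
The general shape is easy to reproduce as a sketch (v, t1, t2 and t3 are hypothetical): no_merge keeps the view as a separate query block and push_pred asks for join predicate pushdown, so the view’s little plan can be driven by the join value on every call:

select  /*+ no_merge(v) push_pred(v) */
        t1.*
from    t1,
        (select t2.id, t3.name from t2, t3 where t3.t2_id = t2.id) v
where   v.id = t1.id
;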

5.5 Finally we have the three filter subqueries, which I’ve shown with their parent FILTER and a one-liner for the driving rowsource. For each row in operation 7 we call operations 40, 42 and 44 in turn although the parent filter may decide after calling operation 40 that it doesn’t need to call the other two and can simply move on to the next row from operation 7. Similarly the filter might call operations 40 and 42 and not need to call operation 44. It’s also possible that, thanks to scalar subquery caching, Oracle can say “I’ve already called operation 40 for this value, so I know the result and don’t need to call it again”. When we look at the Starts and A-Rows columns for the three operations we will get some idea of how the “notional” execution sequence turned into an actual workload.


Filter subqueries SEL$5, SEL$6 and SEL$F665FE1B, with their parent and its first child
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name                           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  6 |       FILTER                                |                                |      1 |        |            |    171 |00:00:00.02 |    3822 |       |       |         |
|   7 |        "Simplify" Plan                      |                                |      1 |      1 |   568   (2)|    182 |00:00:00.02 |    3128 |       |       |         |
|* 40 |        TABLE ACCESS BY INDEX ROWID BATCHED  | XXADM_APPLICANT_COURSPREFS_TBL |    182 |      1 |     3   (0)|     29 |00:00:00.01 |     507 |       |       |         |
|* 41 |         INDEX RANGE SCAN                    | XXADM_APPLCNT_PREFS_UK         |    182 |      5 |     2   (0)|   1450 |00:00:00.01 |     191 |       |       |         |
|  42 |        TABLE ACCESS BY INDEX ROWID BATCHED  | XXADM_APPLICANT_COURSPREFS_TBL |    171 |      1 |     2   (0)|      0 |00:00:00.01 |     173 |       |       |         |
|* 43 |         INDEX RANGE SCAN                    | XXADM_APPLCNT_APPLICANT_STATUS |    171 |      1 |     1   (0)|      0 |00:00:00.01 |     173 |       |       |         |
|* 44 |        VIEW                                 | index$_join$_014               |      6 |      1 |     0   (0)|      0 |00:00:00.01 |      14 |       |       |         |
|* 45 |         HASH JOIN                           |                                |      6 |        |            |      0 |00:00:00.01 |      14 |  1519K|  1519K|  666K (0)|
|* 46 |          INDEX RANGE SCAN                   | XXADM_CATEGORY_PK              |      6 |      1 |     0   (0)|      6 |00:00:00.01 |       6 |       |       |         |
|  47 |          INLIST ITERATOR                    |                                |      6 |        |            |     12 |00:00:00.01 |       8 |       |       |         |
|* 48 |           INDEX UNIQUE SCAN                 | XXADM_CATEGORY_CODE_UK         |     12 |      1 |     0   (0)|     12 |00:00:00.01 |       8 |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

5.6 The operations for the first two subqueries (40,41) and (42,43) shouldn’t need any explanation. The third set of operations (44 to 48) is a little more complex. In fact it’s similar to the mechanism that appeared in our first “bulk rowsource” – we have Oracle turning a table access into an “index (only) join”, collecting key values and rowids from two different indexes and doing a hash join on the rowids. The plan is just a little more subtle, though – instead of getting their data from “index fast full scans”, one index is accessed by a range scan (possibly using a predicate value passed in by the filter operation) and the other is using an INLIST ITERATOR with unique scan. In the second case we must be handling a predicate of the form: “key_column in {list of values}”. (Taking a first look at the Starts column we can see that the INLIST ITERATOR runs 6 times calling the INDEX UNIQUE SCAN a total of 12 times so it’s fairly obvious that there are exactly two elements in the list in this case.)
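
Stripped of the surrounding index join, the predicate shape behind operations 47 and 48 would look something like this sketch (the literal values are hypothetical – in the real query the list comes from the CASE expression):

select  category_id
from    xxadm_category_master_tbl
where   category_code in ('XYZ1', 'XYZ2')
;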


 

6.0 Looking at the Numbers

6.1 Rather than walking through the entire plan again putting the pieces together I’m going to carry straight on to the next stage of analysis and assume that the pieces will fall into place as we talk about some of the critical numbers.

6.2 It’s probably best to open a copy of the note in a separate window so that you can examine the plan and read my comments at the same time. Starting at the top we apply “first child first”, noting in passing that the examples of the VIEW operations we have in this plan don’t have a significant impact on the order of operation as we work down the plan – they simply remind us that there are “non-mergeable” parts of the plan. Eventually we get to operation 13 which is a hash join and operations 14 through 17 give us the build (first) table – a quick check shows 127 rows for both estimated (E-Rows) and actual (A-Rows); then we see the probe (second) table is itself a hash join returning 182 rows (estimate 478, so in the right ballpark) and the hash join at operation 13 produces a result set of 182 rows.

6.3 At this point a quick check back UP the plan tells us that the 182 rows survive all the way up to operation 6, where a FILTER eliminates just a few of them; then the result set drops to just 2 rows at operation 5. Then a quick check of operation 5 (with a cross reference to the query) reminds us that we have an inline view that does an “order by” followed by a “rownum <= :bind” predicate – so the sort order by stopkey at operation 5 is sorting all the data but only passing on the first two rows: so there’s no way we could have modified the join order to eliminate the redundant rows sooner.

6.4 So we see that we get the right volume of data at about the right moment in the plan, and probably can’t do much to avoid the volume of data access – but let’s check how we got the 182 rows at operation 18. Using “first child first” – we see a nested loop joining two “single row by unique index” rowsources, then a nested loop to a full tablescan of XXADM_APPLICANT_COURSPREFS_TBL. The knee-jerk reaction to a “full tablescan as the second table in a nested loop” is that it must be bad – but in this case we know that it will happen no more than once, so we don’t need to panic immediately. Applying a little more thought (and arithmetic): we note that the tablescan returns 209 rows (estimated 478) using 879 buffer gets; that’s not an extreme number of buffer gets per row (especially if the 478 is a reasonably accurate average for the operation). We’ll postpone worrying about the tablescan for the moment but take note that it might be worth revisiting.

6.5 The second child to the hash join at operation 18 is another full tablescan (of XXADM_APPLICANT_DETAILS_TBL) which requires 1,131 buffer gets. Again, though, this might not be a bad move, since the alternative would be an actual 209 index probes (or an estimated average 478 index probes). The workload is, again, in the right ballpark but, again, something we might come back to. In fact in both tablescans it might be more important to worry about the work done for each row by the row predicates rather than worrying about the fact that the operation is a tablescan; a predicate involving a CPU-intensive PL/SQL function might be the thing that makes 478 index probes to 478 rows a better option than a tablescan of (say) 100,000 rows.

6.6 From this point onwards (operation 13) we have 6 nested loop joins (though the first two are anti-joins) so it’s “call the second child for each row in the first child” all the way down, and we’ve seen that we don’t eliminate any data as we go. If we want to make the execution plan any faster by “local” tweaking we’ll just have to make sure each “second child” operation is as efficient as possible, which tends to mean looking for cases where we supply lots of rows (rowids) from an index range scan but find that we then discard the table rows after visiting the table. So …

6.6.1 Operation 12 – nested loop anti join – calls 28/27 (table access by unique index). We find an index entry on each call, but the table row doesn’t qualify – which is what we want for a “successful” anti-join. We could make this a little more efficient by adding a column to the (already unique) index and avoid visiting the table.

6.6.2 Operation 11 – nested loop anti join – calls operation 29 (view with pushed predicate) which operates a high-precision nested loop join at operation 30. The first child of operation 30 is a table access by index, but the table never returns a row (which is nice) so we never call the second child of operation 30. Again we could make this view access a little more efficient by adding extra columns to (already unique) indexes to avoid any need to visit tables.

6.6.3 Operations 10, 9, 8, 7 – nested loop joins – operating very efficiently, some not even visiting tables to acquire data. The order of operation at this point is: 11, 36, 35, 10, 37, 9, 38, 8, 39, 7. And then we get to the FILTER operation, which has 3 subqueries to operate in turn.

6.7 Operation 6 executes in turn the two subqueries we named sel$5 and sel$6 respectively, the first one 182 times, the second one 171 times. Since operation 6 produces 171 rows it seems likely that the initial 182 rows dropped to 171 rows as a consequence of the sel$5 subquery, resulting in the smaller number of calls to the sel$6 subquery. It’s worth noting here that operation 41, the index range scan, returned 1,450 rowids, but the subsequent table accesses returned a total of only 29 rows after an additional 316 buffer gets (507 – 191). There may be an opportunity here (yet again) for adding an extra column to the index so that we visit the table only 29 times rather than visiting it 1,450 times. In fact, though it’s not obvious in this SQL Monitor report, the indications from other executions of the same query suggested that this subquery was the single most resource-intensive part of the plan.
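
To illustrate the sort of change involved – assuming (purely as a hypothesis; the predicate section would confirm the real column) that the filter which discards 1,421 of the 1,450 rows is on a column called status_flag – a wider non-unique index would let the filtering happen in the index:

create index xxadm_applcnt_prefs_cover_ix
on xxadm_applicant_coursprefs_tbl(
        applicant_id, college_id, course_id,
        medium_id, hostel_required, status_flag
);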

6.8 The last subquery executed by Operation 6 is the one identified by query block sel$4. The sub-plan starts with a VIEW operation because the table (identified as originating in sel$4 in the query block / object alias information) is “accessed” by way of an index hash join. This subquery is executed only 6 times. Given that there are (at least) 171 rows for which this subquery could be called, this means one of two things. First, we can see from the query text that this subquery is part of a complex CASE expression – so maybe the simple conditions in the expression mean we rarely need to call the subquery; secondly, the run-time engine may have managed to take advantage of scalar subquery caching and the query doesn’t have many distinct inputs for this subquery – and when we check the predicate section we can see that the relevant predicate for the query against XXADM_CATEGORY_MASTER_TBL was “category_id = {correlation variable}”, which has the look of a table with a few rows and distinct ids.
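
A sketch of the critical shape (table and column names hypothetical): the scalar subquery runs only when the earlier branches of the CASE don’t decide the result, and scalar subquery caching means repeated category_id inputs don’t trigger fresh executions – consistent with just 6 starts against 171 candidate rows:

select  case
                when t1.status = 'DIRECT' then 'DIRECT'
                else (select c.category_code
                      from   categories c
                      where  c.category_id = t1.category_id)
        end     category_label
from    t1
;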

6.9 In summary, then, there may be a few “localised” tweaks that we can do to improve the performance of this plan – largely by adding columns to existing indexes and using them effectively. There are indications that one of the filter subqueries might be a particularly good target for this type of tweak; after which we might want to look at what could be done with the two tablescans, which are in that grey area where it’s not easy to decide whether an indexed access or a tablescan is the better option. We have to remember, though, that this query was originally reported as executing 842,000 times – so maybe we need to do much better than just a little tweaking.


 

7.0 Predicate Information

7.1 Why are we running a query 842,000 times in a batch? The right way to find an answer to that question is to ask the right person – if you can find them. A slightly more long-winded way is to find out what is driving the 842,000 executions – and you might be able to do that if you have the full tkprof output from the trace file. (Hint: statement X runs 842,000 times, and if statement Y executes 13 times and produces 842,000 rows maybe Y is driving X.) Sometimes, though, you don’t have the people, or the full data set, or the access you need, so you might take a look at the query and the predicates and start making some reasonable guesses.

7.2 Here’s the tail end of the query, conveniently capturing all the input bind variables:

                AND     cmt.college_id = :p_college_id
                AND     crmt.course_id = :p_course_id
                AND     mmt.medium_id  = :p_medium_id
                AND     act.hostel_required = :p_hostel_required
                ORDER BY
                        order_of_pass,
                        course_qe_priority,
                        percentage DESC,
                        applicant_dob,
                        legacy_appln_date
                ) 
        WHERE
                 ROWNUM <=  :p_seats
        ) 
WHERE 
        applicant_id = :p_applicant_id

7.3 There are two things we can note about these predicates: first, they don’t follow the pattern of “:Bnnn”, so they’re not from a statement statically embedded in PL/SQL; secondly, the names are intelligible and meaningful, so we might draw some tentative conclusions from them – in particular how many distinct values there might be and how this could lead to 842,000 executions of the query.

7.4 The variable name that stands out is :p_applicant_id. We seem to be looking at a query about applicants for courses at colleges – and the latter pair probably gives us a relatively small number of combinations. The variable :p_hostel_required is surely just going to be a “yes/no/maybe/null” option. The :p_medium_id is a bit of a puzzle but scanning through the query it looks like it’s the id for the “medium of study” so probably another variable with a small number of values. So where in the plan do these variables get used? Here’s the full list of predicates from the plan, followed by an extra few lines showing just the predicates that reference the bind variables:

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("APPLICANT_ID"=:P_APPLICANT_ID)
   3 - filter(ROWNUM<=:P_SEATS) -- > comment added to avoid wordpress format issue
   5 - filter(ROWNUM<=:P_SEATS) -- > comment added to avoid wordpress format issue
   6 - filter((    "ACT"."PREFERENCE_ORDER"<=NVL(,"ACT"."PREFERENCE_ORDER") -- > comment added to avoid wordpress format issue
               AND "ACT"."PREFERENCE_ORDER">=NVL(,"ACT"."PREFERENCE_ORDER") AND CASE "ACT"."HOSTEL_REQUIRED"
              WHEN 'Y' THEN CASE  WHEN ("ADT"."DISTANCE_IN_KMS">20 AND "LMT_RELIGION"."LOV_CODE"='HINDU' AND  IS NULL) THEN 1 ELSE 2 END  ELSE 1 END =1))
  13 - access("ADT"."APPLICANT_GENDER"="LMT_GENDER"."LOV_ID")
       filter(CASE "ACT"."HOSTEL_REQUIRED" WHEN 'Y' THEN CASE  WHEN ("LMT_EDUCATION_TYPE"."LOV_CODE"='COEDUCOL' AND "LMT_GENDER"."LOV_CODE"='FEMALE') THEN 2 ELSE 1 END
               ELSE 1 END =1)
  15 - access(ROWID=ROWID)
  18 - access("ADT"."APPLICANT_ID"="ACT"."APPLICANT_ID")
  22 - access("CMT"."COLLEGE_ID"=:P_COLLEGE_ID)
  24 - access("CMT"."EDUCATION_TYPE"="LMT_EDUCATION_TYPE"."LOV_ID")
  25 - filter(("ACT"."COURSE_ID"=:P_COURSE_ID AND "ACT"."COLLEGE_ID"=:P_COLLEGE_ID AND "ACT"."MEDIUM_ID"=:P_MEDIUM_ID AND "ACT"."HOSTEL_REQUIRED"=:P_HOSTEL_REQUIRED))
  26 - filter(("ADT"."STATUS"='Active' AND "ADT"."COURSE_APPLIED_FOR"='DEG' AND (INTERNAL_FUNCTION("ADT"."COLLEGE_STATUS_FLAG") OR "ADT"."COLLEGE_STATUS_FLAG" IS
              NULL)))
  27 - filter("ACT3"."STATUS_FLAG"='O')
  28 - access("ACT3"."APPLICANT_ID"="ADT"."APPLICANT_ID" AND "ACT"."PREFERENCE_ORDER"="ACT3"."PREFERENCE_ORDER")
  31 - filter((INTERNAL_FUNCTION("ACT1"."STATUS_FLAG") AND NVL("ACT1"."ATTRIBUTE7",'N')='N'))
  32 - access("ACT1"."APPLICANT_ID"="ADT"."APPLICANT_ID" AND "ACT1"."PREFERENCE_ORDER"="ACT"."PREFERENCE_ORDER")
  33 - filter("CATEGORY_CODE"='OPENMERT')
  34 - access("CATEGORY_ID"=TO_NUMBER("ACT1"."ATTRIBUTE1"))
  36 - access("ADT"."PASS_TYPE"="LMT_PASS"."LOV_ID")
  37 - access("ADT"."APPEARANCE_TYPE"="LMT_APPEARANCE"."LOV_ID")
  38 - access("ADT"."RELIGION"="LMT_RELIGION"."LOV_ID")
  40 - filter(("STATUS_FLAG"='B' OR "STATUS_FLAG"='C' OR "STATUS_FLAG"='O' OR "STATUS_FLAG"='T'))
  41 - access("ACT1"."APPLICANT_ID"=:B1)
  43 - access("ACT2"."APPLICANT_ID"=:B1 AND "STATUS_FLAG"='C')
  44 - filter(("CATEGORY_ID"=:B1 AND INTERNAL_FUNCTION("CATEGORY_CODE")))
  45 - access(ROWID=ROWID)
  46 - access("CATEGORY_ID"=:B1)
  48 - access(("CATEGORY_CODE"='BACKWRDC' OR "CATEGORY_CODE"='BACKWRDE'))


   2 - filter("APPLICANT_ID"=:P_APPLICANT_ID)
   3 - filter(ROWNUM<=:P_SEATS) -- > comment added to avoid wordpress format issue
   5 - filter(ROWNUM<=:P_SEATS) -- > comment added to avoid wordpress format issue
  22 - access("CMT"."COLLEGE_ID"=:P_COLLEGE_ID)
  25 - filter(("ACT"."COURSE_ID"=:P_COURSE_ID AND "ACT"."COLLEGE_ID"=:P_COLLEGE_ID AND "ACT"."MEDIUM_ID"=:P_MEDIUM_ID AND "ACT"."HOSTEL_REQUIRED"=:P_HOSTEL_REQUIRED))

Figure 7-1

7.5 Apart from the predicates in the final shortlist you probably noticed further bind variables at operations 41, 43, 44, and 46 – but these are all named :B1, which is Oracle flagging up the need to pass correlating values into filter subqueries.

7.6 Operation 25 (where we test almost all the predicates) is one of the first operations to drive the query, while operation 2 (where we test the :p_applicant_id) is close to the very last thing we do in the execution plan. So we generate a load of data for a college, course, and a couple of other predicates, sort it, then – at the last moment – decide that we only want a few (:p_seats) rows and count how many rows we’ve found for the specific applicant – and we do that a very large number of times. This takes me back to section 2 where I asked a couple of critical questions:

7.6.1 (2.3) First, how far into the view V_THING will the optimizer be able to push that predicate? Possibly the entire content of the view will have to be constructed before the predicate can apply.

7.6.2 (2.8) I’m also a little curious about a requirement that seems to say – “pick at most N rows, then tell me how many you’ve picked”. What’s it actually trying to do and why?

7.7 We now know the answer to the first question – that predicate isn’t going anywhere, and we recognise why not, of course: it’s a consequence of the “rownum” pseudo-column, which has to be evaluated for all the generated data before the rownum restriction can be applied: “select for the applicant then apply the rownum” is very different from “apply the rownum (column and predicate) then restrict to the applicant”. That brings us to the second question – why would you generate all the data, then order it, then restrict it to the first N, and then count how many times a specific applicant appeared? And there’s one “valid” answer to the last bit of the question – what if you’re not really trying to count how many times the applicant appeared, you’re only trying to find out whether the count is zero or non-zero.

7.8 The intent of the query is to answer the question: “does this applicant appear in the first N candidates?”. Once you’ve realised this the underlying performance problem with the query becomes clear. In the monitored example shown here the query found 171 applicants that matched the initial predicates – and at some point in the batch it’s going to do the same work all over again for each of the other 170 applicants that we’ve discarded. For each combination of the initial predicates (excluding applicant id and seat count) we run the query N times if there are N candidates that match that combination. It’s bad enough that this query took 0.02 seconds to run and would have run 171 times (for a total of 3.4 seconds), but another sample run took 0.05 seconds identifying 1,835 applicants (which means another 1,834 executions for a total of 91 seconds run time).


 

8.0 Resolution

8.1 There is a serious flaw in the design of this application. We are seeing a piece of code running once per applicant_id when (with some minor variations) it looks as if it should be running no more than once per set of distinct combinations of (course_id, college_id, medium_id, hostel_required). In fact, if the set of distinct combinations could be generated by a simple query, you could imagine the entire required result set as a join between two non-mergeable views, with a little row-by-row post processing – but that might be too ambitious a change to implement in the short term.

8.2 Realistically (as a low-risk strategy) it might be possible to keep a very large percentage of the existing code structure for whatever this task does but precede it with a PL/SQL loop that steps through each of the distinct combinations required, populating a table (perhaps an IOT) with {applicant_id, (combination columns), “rownum”}; and then replace our problem query by a simple primary key lookup to find the saved “rownum” for each applicant and combination, to check whether the stored “rownum” fell within the required seat count.
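As a sketch of the shape of that change – with invented table, view, and column names, and using a single analytic insert where a PL/SQL loop over the combinations would do equally well (note that row_number() and the original “sort then rownum” may differ on ties):

create table applicant_rank(
        college_id        number,
        course_id         number,
        medium_id         number,
        hostel_required   varchar2(1),
        applicant_id      number,
        seat_position     number,
        constraint apr_pk primary key(
                college_id, course_id, medium_id, hostel_required, applicant_id
        )
)
organization index
/

insert into applicant_rank
select
        college_id, course_id, medium_id, hostel_required, applicant_id,
        row_number() over(
                partition by college_id, course_id, medium_id, hostel_required
                order by     order_of_pass, course_qe_priority, percentage desc,
                             applicant_dob, legacy_appln_date
        )
from
        v_candidate_source      -- stand-in for the join in the original query
/

--
--      and the 842,000 executions reduce to a primary key probe each:
--
select
        count(*)
from
        applicant_rank
where
        college_id      = :p_college_id
and     course_id       = :p_course_id
and     medium_id       = :p_medium_id
and     hostel_required = :p_hostel_required
and     applicant_id    = :p_applicant_id
and     seat_position  <= :p_seats
/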


 

9.0 Summary

9.1 For a DBA working on-site, or a consultant on a short-term visit, the analysis shown in this post is probably not how things would have progressed. I could imagine the sequence of events being more like:

9.1.1 This “start of year / start of term” batch job takes too long

9.1.2 What is it trying to achieve (business overview) – sketch an outline of the process (technical overview)

9.1.3 Trace the job and discover most of the time went on this query

9.1.4 Investigate the logic of this query and why it is run for every applicant_id

9.1.5 Recognise the fundamental design flaw, then choose between three possible strategies:

9.1.5.1 make the query much faster

9.1.5.2 re-engineer this part of the batch completely

9.1.5.3 subvert this section of the batch to pre-build a “materialized result” and use a much simpler query to query it

9.2 Effectively, however, we’ve come in at 9.1.5.1 and run the consultation backwards. As we did so we raised early questions about the applicant_id predicate, predicate pushing, and the oddity of counting a limited list, and we finally came back to those points towards the end of the post with an educated guess about what the query was trying to achieve and how it should be reduced from “once per applicant” to “once per combination”.

9.3 Despite the post starting at the wrong place it’s quite possible that we would have reached 9.1.5.1 by following a sensible order of problem analysis, and would still have wanted to think about how the query might run more quickly – so this investigation wasn’t a total waste of time, and it’s allowed us to work through a real-world query and plan in a realistic way which we can sum up in the following stages:

9.3.1 Simplify: cross-referencing between the overall plan shape, the Query Block / Object Alias information, and the original query, we can take out sections of the plan (sub-plans) and analyse them separately. In particular we can identify and reduce to a minimum the core of the plan that generates the final result set, calling on the various sub-plans as it goes.

9.3.2 Follow the workload: in this case we didn’t get much help from the timing information, but buffer gets, disk reads and A-rows also supply clues to where most work is done. Critically we noted that the volume of data we picked up early on in the query was needed all the way through the query – until the last moment – and we didn’t waste resources carrying and processing unnecessary rows. Along the way, of course, we compare Oracle’s predictions of data volume with the actual data volume (A-Rows = Starts * E-Rows as a guideline). We also noted a couple of opportunities where modifying indexes might eliminate table visits, potentially reducing I/O and buffer gets.

9.3.3 Check the predicates: this goes hand in hand with following the workload – how and where are our predicates used? What predicates have been generated (or eliminated) by transitive closure; which predicates are (or could be) pushed further down the plan tree to eliminate data earlier; will multiple predicates result in bad optimizer estimates followed by bad choices for access paths?

9.4 It would be nice to think that there was a simple progression, a fixed sequence of steps that one could follow to interpret an execution plan quickly and accurately. Unfortunately (like the optimisation process itself) interpretation requires a measure of looping and recursion. It’s probably always best to start with simplifying – but how much you simplify and how you pick which subplan to simplify (or start analysing in detail) depends on being able to spot where the biggest workload appears; and before you get stuck too deeply into a sub-plan you might glance down at the use of predicates because a change in one predicate might make the optimizer completely re-engineer its choice of plan. And maybe, before anything else, you’ll see a single operation which you know should (for example) generate about 10 rows when the optimizer is predicting 25,000 rows (or vice versa) and you’ll want to check why there’s such a bad estimate at one point in the plan before you tackle any of the harder questions.

9.5 The bottom line with execution plans is simply this: the more you practice the faster you get at spotting the clues that are worth pursuing; and the faster you spot the clues the less time you waste unpicking every little detail, and the less time you spend on the peripheral pieces of the plan the easier it becomes to keep the big picture in your mind and see how the optimizer got to where it is, and how you might want to redirect it. So pick a couple of random queries each week that produce plans of about 20 lines and use them to exercise your interpretation skills; and increase the complexity of the queries every couple of weeks.

The End

 
 

 

Footnote

I think I’ve spent more than 20 hours writing a detailed description of something that would normally take me a few minutes to do [and some people wonder why I’ve not yet written another book on the optimizer]. In part the difference in time is because with practice the “intuitive” skill grows and the pattern of reading is more like –

  • How does it start (ignoring the “trivial” bits around the edges)
  • How does it end (ditto)
  • Where do we do most work reading and discarding data

What I’ve done in this note is talk about every single query block and every single line whereas in real life I might have scanned the plan, examined about 10 lines, and done a quick check on the corresponding predicates as the first step to deciding whether or not the plan was reasonably efficient.

April 23, 2020

date_to_date

Filed under: Execution plans,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 12:00 pm BST Apr 23,2020

Every now and again someone posts a piece of SQL on the Oracle Developer Forum that includes a predicate with an expression like to_date(date_column). This is a problem for several reasons – not the least being the type of performance problem that showed up in a post from a couple of years back that has just been resurrected.

Before I examine the performance detail, here’s a simple demo of the “wrong data” problem that can go unnoticed, cut-n-paste from a 12.2.0.1 session of SQL*Plus:


SQL> create table t1 (d1 date);

Table created.

SQL> insert into t1 values(sysdate);

1 row created.

SQL> select * from t1 where d1 = to_date(d1);

no rows selected

SQL> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

Session altered.

SQL> select * from t1 where d1 = to_date(d1);

D1
--------------------
22-apr-2020 15:12:36

1 row selected.


Note particularly how changing the nls_date_format can change the result of a query! (There’s another simple example on the referenced blog note.)

What’s going on? to_date(date_col) is equivalent to to_date(to_char(date_col)) using the nls_date_format to do the two conversions, and the most common default format is one that truncates the date column to date-only. So in many systems to_date(date_col) is nearly (though doing it an expensive way) the same as trunc(date_col).
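You can spell out the double conversion to see the truncation happening – assuming the common default ‘dd-mon-rr’ format, the two-step conversion lines up with trunc() while the column itself keeps its time component:

alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

select
        d1,
        to_date(to_char(d1,'dd-mon-rr'),'dd-mon-rr')    two_step,
        trunc(d1)                                       truncated
from
        t1
;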

But let’s go further and see how we wreck the benefit of an index, even if we’ve made sure that we still get the correct results. The following is a minimalist model of a common billing requirement: conversion between currencies:


rem
rem     Script:         date_to_date.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        'GBP'                           from_currency,
        'USD'                           to_currency,
        trunc(sysdate - 1000) + rownum  conversion_date,
        'Corporate'                     conversion_type,
        round(
                1.25 + dbms_random.value/4,
                6
        )                               conversion_rate,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
where
        rownum <= 1000 -- > comment to avoid WordPress format issue
;

create unique index t1_i1 on t1(from_currency, to_currency, conversion_date, conversion_type)
/
alter table t1 add constraint t1_pk primary key(
        from_currency, to_currency, conversion_date, conversion_type
)
/

create table driver(
        invoice_currency        varchar2(3),
        billing_currency        varchar2(3),
        client_type             varchar2(10),
        invoice_date            date
);

insert into driver values(
        'GBP', 'USD','Corporate',trunc(sysdate)
);

commit;

execute dbms_stats.gather_table_stats(null,'driver');

I’ve created table t1 to model the exchange rates between US dollars and UK pounds over a range of about three years ending “today”. There’s also a conversion_type column in the unique key to this table that allows us to have multiple reasons for exchanges, allowing multiple exchange rates on the same day. I really ought to have a check constraint on this table that says something like: check (conversion_date = trunc(conversion_date)).
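In concrete terms (constraint name invented) that would be:

alter table t1 add constraint t1_ck_dateonly
        check (conversion_date = trunc(conversion_date))
;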

I’ve also created a “driver” table that holds exactly the data we might need when extracting an exchange rate for a single invoice. So let’s run the SQL that gets the appropriate exchange rate for this one invoice:


set serveroutput off
alter session set statistics_level = all;

prompt  =====================================
prompt  First run with simple date comparison
prompt  =====================================

select
        /*+
                leading(driver t1)
                use_nl_with_index(t1)
        */
        driver.*,
        t1.conversion_rate
from
        driver, t1
where
        t1.from_currency = driver.invoice_currency
and     t1.to_currency = driver.billing_currency
and     t1.conversion_type = driver.client_type
and     t1.conversion_date = driver.invoice_date
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  =================================
prompt  Now run with to_date(date_column)
prompt  =================================

select
        /*+
                leading(driver t1)
                use_nl_with_index(t1)
        */
        driver.*,
        t1.conversion_rate
from
        driver, t1
where
        t1.from_currency = driver.invoice_currency
and     t1.to_currency = driver.billing_currency
and     t1.conversion_type = driver.client_type
and     to_date(t1.conversion_date) = to_date(driver.invoice_date)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

In the first case I’ve used the correct join predicate between these tables; in the second I’ve put in a redundant to_date() function call at both ends of the predicate. (If you think this is unrealistic – it’s an exact match for the production code I reported in the blog note I cited above).

Here are the two execution plans – with their rowsource execution stats:

----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |      1 |00:00:00.01 |      10 |      8 |
|   1 |  NESTED LOOPS                |        |      1 |      1 |      1 |00:00:00.01 |      10 |      8 |
|   2 |   NESTED LOOPS               |        |      1 |      1 |      1 |00:00:00.01 |       9 |      8 |
|   3 |    TABLE ACCESS FULL         | DRIVER |      1 |      1 |      1 |00:00:00.01 |       7 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | T1_I1  |      1 |      1 |      1 |00:00:00.01 |       2 |      8 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."FROM_CURRENCY"="DRIVER"."INVOICE_CURRENCY" AND
              "T1"."TO_CURRENCY"="DRIVER"."BILLING_CURRENCY" AND "T1"."CONVERSION_DATE"="DRIVER"."INVOICE_DATE"
              AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE")


----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |      1 |00:00:00.01 |      15 |      4 |
|   1 |  NESTED LOOPS                |        |      1 |      1 |      1 |00:00:00.01 |      15 |      4 |
|   2 |   NESTED LOOPS               |        |      1 |      1 |      1 |00:00:00.01 |      14 |      4 |
|   3 |    TABLE ACCESS FULL         | DRIVER |      1 |      1 |      1 |00:00:00.01 |       7 |      0 |
|*  4 |    INDEX RANGE SCAN          | T1_I1  |      1 |      1 |      1 |00:00:00.01 |       7 |      4 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."FROM_CURRENCY"="DRIVER"."INVOICE_CURRENCY" AND
              "T1"."TO_CURRENCY"="DRIVER"."BILLING_CURRENCY" AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE")
       filter((TO_DATE(INTERNAL_FUNCTION("T1"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("DRIVE
              R"."INVOICE_DATE")) AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE"))

Three things to take note of:

  1. The index unique scan at operation 4 has changed to an index range scan.
  2. The predicate information for operation 4 has changed from a pure access predicate to an access predicate plus a very messy filter predicate.
  3. The range scan that now appears at operation 4 gets 7 buffers (that’s one root block and 6 leaf blocks) to find the one rowid we need, and will have applied the messy filter predicate to all 1,000 index entries where the two currency codes were GBP/USD.

In the case of the older article the problem query was spending almost all of its time in a join like this, doing 59 buffer gets to find a single rowid for every invoice presented – possibly doing thousands of comparisons as it went.

 

April 22, 2020

Eureka!

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 1:08 pm BST Apr 22,2020

I woke up last night with a brilliant solution to a problem that’s been bugging me for more than a year. How does a call to report_sql_monitor() manage to produce output like this:

SQL Plan Monitoring Details (Plan Hash Value=4262489872)
======================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read  | Read  | Activity | Activity Detail |
|    |                                 |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes |   (%)    |   (# samples)   |
======================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |      |        13 |     +2 |     1 |        1 |       |       |          |                 |
|  1 |   SORT AGGREGATE                |       |       1 |      |        13 |     +2 |     1 |        1 |       |       |          |                 |
|  2 |    NESTED LOOPS                 |       |      5M |  14M |        13 |     +2 |     1 |       4M |       |       |          |                 |
|  3 |     NESTED LOOPS                |       |      5M |  14M |        13 |     +2 |     1 |       4M |       |       |          |                 |
|  4 |      TABLE ACCESS FULL          | T1    |      5M | 4686 |        13 |     +2 |     1 |       4M |  1050 | 260MB |          |                 |
|  5 |      INDEX RANGE SCAN           | T2_I1 |       1 |    2 |        14 |     +1 |    5M |       4M |  5494 |  83MB |          |                 |
|  6 |     TABLE ACCESS BY INDEX ROWID | T2    |       1 |    3 |        13 |     +2 |    5M |       4M | 17743 | 260MB |          |                 |
======================================================================================================================================================


If you’re wondering why this is a problematic output take a look at the A-Rows column for operation 4 (and 5), and the Execs column for operations 5 (and 6). Given the nature of a nested loop join the number of Execs of operation 5 should match the number of rows (A-rows) generated by operation 4, and the number of Execs of operation 6 should match the number of rows generated by operation 5.

But Oracle claims to have generated 4 million rows then executed 5 million times in both cases. How does that happen?

The very simple thought I had last night was “formatting” – if the actual value is 4,500,000 maybe the A-rows rounds down and the Execs rounds up, so I set up a little test to check the hypothesis. Since the version where I’d first seen the anomaly was 12.1.0.2 that’s the version I tested first.

rem
rem     Script:         report_sql_monitor_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem 

drop table t2 purge;
drop table t1 purge;

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',25,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 4500100 -- > comment to avoid WordPress format issue
/

create index t1_i1 on t1(id);

create table t2 as select * from t1;
create index t2_i1 on t2(id);

alter system flush shared_pool;
alter system flush buffer_cache;

select  /*+
                monitor
                qb_name(main)
                leading(@inline t1@inline t2@inline)
                use_nl_with_index(@inline t2@inline)
        */
        count(n2)
from    (
        select
                /*+
                        qb_name(inline)
                */
                t1.n1, t2.n1 n2
        from    t1, t2
        where   t1.id <= 4499999
--      where   t1.id <= 4500000
--      where   t1.id <= 4500001
        and     t2.id =  t1.n1
        )
;

prompt  =======================================
prompt  Now run report_sql_monitor.sql from SYS
prompt  =======================================

You’ll note that I’ve allowed three possible values for the number of rows I get from t1 to drive the index probe into t2 – I thought I’d have to do several experiments to make the oddity appear. In fact it appeared on the very first attempt: A-Rows reported 4M, and Execs reported a clearly incorrect 5M. (Repeating the test on 12.2.0.1 the anomaly disappeared – all the output reported 4M for 4,499,999, and 5M for 4,500,000.)

You’ll notice the “flush buffer_cache” command in this script; this came about because of some secondary testing. Because I wanted to check a few variations I commented out the “drop table” commands. The consequence of not dropping the tables was that the anomaly disappeared – in some way it was connected to reading blocks from disc and didn’t appear once the table and index were cached. In fact when I set the driving count to 4,498,000 in one of my tests I got a result that looked slightly more counter-intuitive than the original:

=====================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity | Activity Detail |
|    |                                 |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |   (# samples)   |
=====================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |      |        11 |     +2 |     1 |        1 |      |       |          |                 |
|  1 |   SORT AGGREGATE                |       |       1 |      |        11 |     +2 |     1 |        1 |      |       |          |                 |
|  2 |    NESTED LOOPS                 |       |      4M |  14M |        11 |     +2 |     1 |       4M |      |       |          |                 |
|  3 |     NESTED LOOPS                |       |      4M |  14M |        11 |     +2 |     1 |       4M |      |       |          |                 |
|  4 |      TABLE ACCESS FULL          | T1    |      4M | 4686 |        11 |     +2 |     1 |       4M | 1050 | 260MB |          |                 |
|  5 |      INDEX RANGE SCAN           | T2_I1 |       1 |    2 |        12 |     +1 |    4M |       4M |  345 |  83MB |          |                 |
|  6 |     TABLE ACCESS BY INDEX ROWID | T2    |       1 |    3 |        11 |     +2 |    5M |       4M | 1048 | 262MB |          |                 |
=====================================================================================================================================================


Notice how I’ve (“correctly”) acquired 4M index rowids at operation 5, but (apparently) managed to access the table 5M times!

As I noted above, the oddity seems to have disappeared in 12.2.0.1 so it’s not worth pursuing further. I think it might relate either to Oracle double counting the Exec when its first access is a physical read, or maybe it’s managing to increment the wrong counter when it visits a space management block to find the next batch of blocks to read. The detail is really not very important.

What might be important, though, is that sometimes the number of Execs you see is truthfully greater than ought to be possible because of the strange effects that can appear with read-consistency. For a very long time I believed that the oddity I’ve shown here was something to do with read-consistency and query restart, but I couldn’t come up with a hypothesis to explain how that could be happening. So I was very relieved last night to come up with a simple explanation (which actually turned out to be wrong, but did at least help me recognise how the anomaly could happen in perfectly harmless circumstances).

Footnote:

What’s the difference between 4 million and 5 million – sometimes it’s one (just one, not one million). When testing under 12.2.0.1 I only had to add one row to the driving rowsource (changing 4,499,999 to 4,500,000) to see the A-rows and Execs change from 4M to 5M. If you forget the implicit rounding errors in reports like this you can end up chasing a tiny anomaly because it looks like a huge anomaly.
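If you want to see the boundary for yourself – assuming the report is simply rounding as it scales to millions – the arithmetic is easy to check:

select
        round(4499999/1e6)      rounds_to_4,
        round(4500000/1e6)      rounds_to_5
from
        dual
;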

 

April 10, 2020

raw timestamp

Filed under: Histograms,Oracle,Troubleshooting — Jonathan Lewis @ 6:59 pm BST Apr 10,2020

Many years ago I wrote a note with some comments about converting the low_value and high_value columns of user_tab_columns to see what values they actually held. At the time the dbms_stats package held a few procedures to convert raw values and it was necessary to wrap these procedures in user-defined functions if you wanted a function to do the job.

In recent versions of Oracle (18c onwards) the package has finally acquired several functions to do a proper job; these are:

  • convert_raw_to_bin_double()
  • convert_raw_to_bin_float()
  • convert_raw_to_date()
  • convert_raw_to_number()
  • convert_raw_to_nvarchar()
  • convert_raw_to_rowid()
  • convert_raw_to_varchar()

You might note that these functions do not appear in the PL/SQL Packages and Types Reference, but they are documented in the script $ORACLE_HOME/rdbms/admin/dbmsstat.sql that creates the package:


-- Convert the internal representation of a minimum or maximum value
-- into a datatype-specific value. The minval and maxval fields
-- of the StatRec structure as filled in by get_column_stats or
-- prepare_column_values are appropriate values for input.

One thing you’ll notice about the list is that there’s no convert_raw_to_timestamp(), and a question came up recently on the oracle-l list server asking how to do this. This note answers that question and, in passing, demonstrates the typical use of the other functions.

As ever we start by creating some data:


rem
rem     Script:         raw_to_timestamp.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1(
        t0 timestamp(0),
        t3 timestamp(3),
        t6 timestamp(6),
        t9 timestamp(9),
        ts timestamp
);

insert into t1
with t_tab as (
        select to_timestamp('01-Apr-2020 21:15:10.123456789','dd-mon-yyyy hh24:mi:ss.ff9') ts
        from dual
)
select  t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts
from    t_tab
;

insert into t1
with t_tab as (
        select to_timestamp('01-Apr-2020 21:15:10.987654321','dd-mon-yyyy hh24:mi:ss.ff9') ts
        from dual
)
select  t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts
from    t_tab
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 2'
        );
end;
/

I’ve created a table to hold timestamps constrained to different levels of precision. The maximum precision allowed is 9 decimal places for the seconds; the default (column ts) is 6. I’ve then created two rows in the table using slightly different timestamps for the rows but giving all the columns in a single row the same value. Then I’ve gathered stats – including a histogram – on the table and all its columns.

I can now query user_tab_cols to pick up the low and high values:


select
        column_name, low_value, high_value
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
/

COLUMN_NAME          LOW_VALUE                  HIGH_VALUE
-------------------- -------------------------- --------------------------
T0                   7878040116100B             7878040116100C
T3                   7878040116100B0754D4C0     7878040116100B3AE3AF00
T6                   7878040116100B075BCDE8     7878040116100B3ADE6770
T9                   7878040116100B075BCD15     7878040116100B3ADE68B1
TS                   7878040116100B075BCDE8     7878040116100B3ADE6770


And here’s a dump of the columns so that you can see the actual values held in the table in their internal representation.


select
        dump(t0,16), 
        dump(t3,16), 
        dump(t6,16), 
        dump(t9,16), 
        dump(ts,16) 
from 
        t1;


DUMP(T0,16)
-------------------------------------------------------------------------------------
DUMP(T3,16)
-------------------------------------------------------------------------------------
DUMP(T6,16)
-------------------------------------------------------------------------------------
DUMP(T9,16)
-------------------------------------------------------------------------------------
DUMP(TS,16)
-------------------------------------------------------------------------------------
Typ=180 Len=7: 78,78,4,1,16,10,b
Typ=180 Len=11: 78,78,4,1,16,10,b,7,54,d4,c0
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,e8
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,15
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,e8

Typ=180 Len=7: 78,78,4,1,16,10,c
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,e3,af,0
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,67,70
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,68,b1
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,67,70

You’ll notice that, despite the attempt to limit the precision, all the columns other than the first use 11 bytes. The first column is different, with zero precision we use only 7 bytes and if you look closely (and know your internal storage formats) you’ll realise that those 7 bytes are using the standard format for stored dates. The last 4 bytes hold the fraction of the second stored as a number of nano-seconds as a simple 32-bit binary number. (Warning: it’s possible that you will see a difference in this set of bytes if your machine uses a different endianness from mine – I can’t test that for myself at present.)
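It’s easy to check that claim against the T6 (and TS) low value – the last four bytes were 07,5b,cd,e8, and converting that from hex gives exactly the 123,457,000 nanoseconds we’d expect from a value ending .123457:

select to_number('075BCDE8','XXXXXXXX') nanoseconds from dual;

NANOSECONDS
-----------
  123457000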

So let’s see what we get if we try to convert the raw values using the call to dbms_stats.convert_raw_to_date(). I’ll start by setting the nls_XXX formats to get the full content of a date column or a timestamp column reported from SQL*Plus.


alter session set nls_date_format = 'dd-Mon-yyyy hh24:mi:ss';
alter session set nls_timestamp_format = 'dd-Mon-yyyy hh24:mi:ss.ff9';

set linesize 165

column t0 format a32
column t3 format a32
column t6 format a32
column t9 format a32
column ts format a32

select  t0, t3, t6, t9, ts from t1;


select
        column_name, 
        dbms_stats.convert_raw_to_date(low_value)       date_low,
        dbms_stats.convert_raw_to_date(high_value)      date_high
from 
        user_tab_cols
where
        table_name = 'T1'
order by 
        column_name
/



T0                               T3                               T6                               T9                               TS
-------------------------------- -------------------------------- -------------------------------- -------------------------------- --------------------------------
01-Apr-2020 21:15:10.000000000   01-Apr-2020 21:15:10.123000000   01-Apr-2020 21:15:10.123457000   01-Apr-2020 21:15:10.123456789   01-Apr-2020 21:15:10.123457000
01-Apr-2020 21:15:11.000000000   01-Apr-2020 21:15:10.988000000   01-Apr-2020 21:15:10.987654000   01-Apr-2020 21:15:10.987654321   01-Apr-2020 21:15:10.987654000

2 rows selected.


COLUMN_NAME          DATE_LOW             DATE_HIGH
-------------------- -------------------- --------------------
T0                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:11
T3                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
T6                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
T9                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
TS                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10

5 rows selected.

Since I’ve inserted only two rows, with each row holding a single value with different precision, we expect those values to be reflected in the low_value and high_value for the columns – and we’re not disappointed (probably) – but, as you might have expected, Oracle has processed the timestamp data type into a date data type by simply ignoring the last 4 bytes and dealing with the first 7 bytes as if they were a date. We need to do a little more work to check that we can convert the low and high values into exactly the values that appear in the table itself.

So we can write a messy bit of SQL that extracts the first 7 bytes, converts them using the convert_raw_to_date() call, then extracts the last 4 bytes and converts them to a number using a simple to_number() call (with an nvl() thrown in to deal with the special case of there being no nanosecond component), divides by 1e9, converts the result to an interval in seconds using the numtodsinterval() function, and adds that to the date. To keep things clean I’ll only apply the mess to the low_value column. You’ll note that I’ve embedded comments in the SQL to explain each step:


select 
        column_name, 
        low_value,
--
--      convert bytes 8 onwards to numeric with an nvl()
--      to handle the case of zero precision when there 
--      are no bytes to show the number of nanoseconds
--
--      Note - bytes 8 onwards of the raw value turn into
--      characters 15 - 22 of the hex string, hence the substr() values.
--
        nvl(
                to_number(
                        hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                ),
                0
        ) nanoseconds,
--
--      repeat the conversion, and wrap with numtodsinterval()
--      to show the resulting interval in seconds.
--
        numtodsinterval(
                nvl(
                        to_number(
                                hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                        )/1e9,
                        0
                )+0,
                'SECOND'
        ) interval_val,
--
--      Converted the 1st 7 bytes (14 characters) to a date,
--      coerce the result to a timestamp, then convert the 
--      nanoseconds bytes to an interval and add
--
        to_timestamp(
                dbms_stats.convert_raw_to_date(hextoraw(substr(low_value,1,14)))
        ) + 
        numtodsinterval(
                nvl(
                        to_number(
                                hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                        )/1e9,
                        0
                ),
                'SECOND'
        ) timestamp_val
from 
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
/

COLUMN_NAME          LOW_VALUE                  NANOSECONDS INTERVAL_VAL                     TIMESTAMP_VAL
-------------------- -------------------------- ----------- -------------------------------- --------------------------------
T0                   7878040116100B                       0 +000000000 00:00:00.000000000    01-Apr-2020 21:15:10.000000000
T3                   7878040116100B0754D4C0       123000000 +000000000 00:00:00.123000000    01-Apr-2020 21:15:10.123000000
T6                   7878040116100B075BCDE8       123457000 +000000000 00:00:00.123457000    01-Apr-2020 21:15:10.123457000
T9                   7878040116100B075BCD15       123456789 +000000000 00:00:00.123456789    01-Apr-2020 21:15:10.123456789
TS                   7878040116100B075BCDE8       123457000 +000000000 00:00:00.123457000    01-Apr-2020 21:15:10.123457000

5 rows selected.

If you go back to the original output we got from selecting the actual values from the table you can see that the timestamp_val column we’ve generated correctly matches the column values for the lower of the two original values we inserted into the table.
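If you needed this regularly you could wrap the logic up as a function of your own – a hedged sketch (the function name is mine, not part of the dbms_stats package; a cast() would avoid the dependency of to_timestamp() on session NLS settings):

create or replace function my_raw_to_timestamp(p_raw in raw)
return timestamp
as
        m_hex   varchar2(64) := rawtohex(p_raw);
begin
        --
        --      First 7 bytes (14 hex digits) behave as a date;
        --      bytes 8 - 11, if present, hold the nanoseconds.
        --
        return
                to_timestamp(
                        dbms_stats.convert_raw_to_date(hextoraw(substr(m_hex,1,14)))
                ) +
                numtodsinterval(
                        nvl(to_number(substr(m_hex,15,8),'XXXXXXXX'),0) / 1e9,
                        'SECOND'
                );
end;
/

select
        column_name,
        my_raw_to_timestamp(low_value)          ts_low,
        my_raw_to_timestamp(high_value)         ts_high
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
/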

Q.E.D.

March 12, 2020

Sequence Costs

Filed under: Bugs,Oracle,Performance,trace files,Troubleshooting — Jonathan Lewis @ 8:18 pm GMT Mar 12,2020

You’re probably aware of the “identity” construct that appeared in 12.1 and uses Oracle’s sequence mechanism to model the Autonumber or Identity columns that other databases have. A posting from Clay Jackson on the Oracle-L list server suggests that something about their code path has introduced a surprising overhead in 19c … when you’re not using them.

The following code is a slightly modified version of a test case that Clay Jackson posted to demonstrate a strange difference in performance between 12.2 and 19.3

rem
rem     Script:         19c_sequence_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

drop table test_tab2;
drop table test_tab1;

drop sequence test_seq1;
drop sequence test_seq2;

create sequence test_seq1;
create sequence test_seq2;

create table test_tab1 (
   id number primary key,
   c1 varchar2(1000)
);

create table test_tab2 (
   id number,
   t1_id number,
   c2 varchar2(1000),
   constraint test_tab2_fk foreign key (t1_id) 
        references test_tab1 (id) on delete cascade
);

spool 19c_sequence_cost.lst

set feedback off
set pagesize 0
set serveroutput off

execute snap_my_stats.start_snap
alter session set events '10046 trace name context forever';

prompt  ==============================================================================
prompt  start_1000.sql cascades to start_1.sql
prompt  start_1.sql holds:
prompt
prompt  insert into test_tab1 values (test_seq1.nextval,'hello');
prompt  insert into test_tab2 values (test_seq2.nextval, test_seq1.currval, 'byebye');
prompt  commit;
prompt
prompt  ==============================================================================

start start_1000

set serveroutput on
set pagesize 60
alter session set events '10046 trace name context off';
execute snap_my_stats.end_snap

spool off

I’ve got a couple of tables with columns that I plan to generate from sequences, and I’ve got a referential integrity constraint between those tables. I’ll be using nextval from one sequence to populate the first table, then use currval from the same sequence for the foreign key and nextval from the other sequence as a unique identifier for the second table.

I’ve used my “repeater” code to run a short script from the SQL*Plus prompt 1,000 times (as suggested by Clay Jackson in his posting). I’ve also added lines to enable SQL_trace at the basic level, and taken a snapshot of the session activity stats.

Just as Clay Jackson had reported – 19.3.0.0 took significantly longer than 12.2.0.1 to complete the 1,000 calls to the script. (Though in absolute terms we’re only talking fractions of a second for a small single user test.) Examination of the stats – which prompted me to add the 10046 trace and repeat – made it very clear where the difference lay. Here are a few figures picked from the session activity stats:

Version    Statistic                        Value
========   ========================         ===== 
12.2.0.1   CPU used by this session            53  
19.3.0.0   CPU used by this session           114

12.2.0.1   recursive cpu usage                 18
19.3.0.0   recursive cpu usage                 44

12.2.0.1   recursive calls                  1,182
19.3.0.0   recursive calls                  7,215

12.2.0.1   execute count                    2,137                                     
19.3.0.0   execute count                    8,151

Clearly it’s worth finding out what was happening in those 6,000 extra recursive calls. And this is what the 19.3 trace file (after passing through tkprof … sort=execnt) showed:


select count(1)
from
 sys.idnseq$ where seqobj# = :1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   6000      0.07       0.09          0          0          0           0
Fetch     6000      0.03       0.04          0       6000          0        6000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    12001      0.10       0.14          0       6000          0        6000

If you can’t guess what the table idnseq$ is about, the following definition appears in $ORACLE_HOME/rdbms/admin/dcore.bsq:

create table idnseq$     /* stores table identity column to sequence mapping */
( obj#         number not null,                       /* table object number */
  intcol#      number not null,                    /* identity column number */
  seqobj#      number not null,                    /* sequence object number */
  startwith    number not null,                   /* sequence starting value */
  supplemental log group idnseq$_log_grp (obj#, intcol#, seqobj#) always
)
/

We’re not using identity columns in this test, but every time we run the script we do six checks against this data dictionary table to see (presumably) if we’re using a sequence that is internally associated with an identity column. The requirement doesn’t seem completely unreasonable – but it’s a bit surprising that we do every check twice. (Note – we have to check test_seq1 for both statements but test_seq2 only for the second statement, which is how we get 6,000 executions: 2 * (2 checks for test_seq1 + 1 check for test_seq2) * 1,000 executions of the script.)
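You can see the mapping being maintained if you create a genuine identity column – a quick sanity check (I believe the view, with its sequence_name column, has been around since 12.1):

create table t_identity(
        id      number generated always as identity,
        v1      varchar2(10)
);

select
        table_name, column_name, generation_type, sequence_name
from
        user_tab_identity_cols
;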

The doubling looks like a bug, and you have to wonder why a suitable piece of information isn’t kept in the dictionary cache anyway to allow Oracle to avoid executing the recursive statement. I should point out that the surprisingly large impact is visible because I’ve executed 2,000 separate statements from the client side; mechanisms like array inserts and pl/sql loops might only have to do this check once for each array or pl/sql call. (Tests of this hypothesis are left as an exercise for the interested reader – see the sketch below.)
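For anyone who wants to try it, a minimal sketch of the comparison – the same 2,000 inserts driven from a single PL/SQL block rather than 1,000 separate passes of the SQL*Plus script:

begin
        for i in 1..1000 loop
                insert into test_tab1 values(test_seq1.nextval, 'hello');
                insert into test_tab2 values(test_seq2.nextval, test_seq1.currval, 'byebye');
                commit;
        end loop;
end;
/

If the hypothesis is right, the snapshot of session stats should show the execute count and recursive calls dropping back towards the 12.2 figures.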

 

January 13, 2020

Collections

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 2:31 pm GMT Jan 13,2020

This is a note I drafted in September 2015 and only rediscovered a couple of days ago while searching for something I was sure I’d written about collections and/or table functions. The intention of collections and table functions is that they should behave like tables when you use them in a query – but there are cases where a real table and something cast to a table() aren’t treated the same way by the optimizer – and this 4-year-old note (which is still valid in 2020 for 19c) is one of those cases.

 

There was a question – with test case – on Oracle-L recently [ed: now more than 4 years ago] about the behaviour of a query that changed plans as you switched from using a global temporary table to a collection – why was Oracle doing something inefficient with the collection. The answer was: “Bad luck, it’s a limitation in the optimizer”.  (Sub-text: collections are a pain).

The test case was short and simple so I thought I’d post it – with an h/t to Patrick Jolliffe who presented the problem and Timur Akhmadeev and Stefan Koehler who explained the problems.

Here’s the script (with a little cosmetic editing) to create the necessary objects and data:

rem
rem     Script:         collections.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2015
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create or replace type number_table is table of number;
/

create table test_objects as select * from all_objects;
create /* unique */ index test_objects_idx on test_objects(object_id);

exec dbms_stats.gather_table_stats(null, 'test_objects');

create global temporary table gtt_test_objects (object_id number);
insert into gtt_test_objects values (1);


In this example I’ve created a type which is a simple table of number. In a more general case you might create a simple object type, and then a type that was a table of that object type, then you might create a function that returned a variable of that table type, or a function that was declared to return the table type “pipelined” and uses the “pipe row” instruction in the code to return one value of the simple object type at a time. Whichever variation you used you could then use the table() operator to tell Oracle to treat the content of the table type as if it were a relational table. (In recent versions of Oracle the table() operator is redundant).
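For reference, here’s a minimal sketch of the pipelined variation just described (all names invented):

create or replace type number_row as object (n number);
/

create or replace type number_row_table is table of number_row;
/

create or replace function pipe_numbers(p_count in number)
return number_row_table pipelined
as
begin
        for i in 1..p_count loop
                pipe row (number_row(i));
        end loop;
        return;
end;
/

-- treat the function's output as a relational table
select n from table(pipe_numbers(3));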

Here’s the first query, which uses the global temporary table in an “IN” subquery, followed by its execution plan – again with a little cosmetic editing and the addition of query block names across the board:


prompt  ==================================
prompt  Query using global temporary table
prompt  ==================================

select  
        /*+ qb_name(main) */ 
        null 
from    (
        select
                /*+ qb_name(inline) */
                distinct object_id 
        from    test_objects
        ) 
where   object_id in (
                select 
                        /*+
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        gtt_test_objects.object_id
                from
                        gtt_test_objects 
        )
;


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                  |      1 |        |      0 |00:00:00.01 |       5 |       |       |          |
|   1 |  VIEW                 | VM_NWVW_1        |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   2 |   SORT UNIQUE NOSORT  |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   3 |    NESTED LOOPS       |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   4 |     SORT UNIQUE       |                  |      1 |      1 |      1 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      TABLE ACCESS FULL| GTT_TEST_OBJECTS |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|*  6 |     INDEX RANGE SCAN  | TEST_OBJECTS_IDX |      1 |      1 |      0 |00:00:00.01 |       2 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"="GTT_TEST_OBJECTS"."OBJECT_ID")

As you can see I’ve set statistics_level to all, and used dbms_xplan.display_cursor() to pull the actual execution plan from memory. This plan tells us that the optimizer unnested the IN subquery to generate a unique set of values and used that unique set to drive a nested loop join into the test_objects table (with an index-only probe). Moreover, before this step, the optimizer used complex view merging and cost-based query transformation to postpone the “distinct” from the original query to do the join before distinct. The E-rows at operation 5 also tells us that the optimizer “knew” that there was only one row in the GTT – it took note of my cardinality() hint.

Now we replace the gtt_test_objects table with the collection – casting it to a table() and giving Oracle the same cardinality() hint – as follows:


select 
        /*+ 
                qb_name(main)
--              no_use_hash_aggregation(@sel$1)
        */ 
        null
from    (
        select  
                /*+ inline */
                distinct object_id 
        from    test_objects
        )
where   object_id in (
                select 
                        /*+ 
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        column_value object_id
                from
                        table(number_table(1)) gtt_test_objects
        )
;

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.08 |     132 |       |       |          |
|   1 |  MERGE JOIN SEMI                        |                  |      1 |      1 |      0 |00:00:00.08 |     132 |       |       |          |
|   2 |   SORT JOIN                             |                  |      1 |  56762 |      1 |00:00:00.08 |     132 |  1470K|   606K| 1306K (0)|
|   3 |    VIEW                                 |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |       |       |          |
|   4 |     HASH UNIQUE                         |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  4122K|  2749K| 3418K (0)|
|   5 |      INDEX FAST FULL SCAN               | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
|*  6 |   SORT UNIQUE                           |                  |      1 |      1 |      0 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"=VALUE(KOKBF$))
       filter("OBJECT_ID"=VALUE(KOKBF$))

The second plan is completely different. The optimizer has unnested the subquery to produce a join, but instead of using the unique set of values that it generated from the collection to drive a nested loop it has decided to do a merge semi-join, which has entailed an expensive fast full scan of the test_objects_idx index to acquire all the key values first.

I tried to make the optimizer use the collection to drive a nested loop, adding some carefully targeted hints to force the join order and dictate a nested loop join with pushed predicate – but the optimizer wouldn’t push the “obvious” join predicate and continued to do an index fast full scan and sort of the test_objects_idx. If you’re interested here are the hints and the resulting plan:

/*+
        qb_name(main)
        leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
        use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)
        push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
*/

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.03 |     132 |       |       |          |
|   1 |  NESTED LOOPS                           |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   2 |   SORT UNIQUE                           |                  |      1 |      1 |      1 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|*  4 |   VIEW                                  |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   5 |    SORT UNIQUE                          |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  2604K|   728K| 2314K (0)|
|   6 |     INDEX FAST FULL SCAN                | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_ID"=VALUE(KOKBF$))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 6 (U - Unused (1))
---------------------------------------------------------------------------
   0 -  SEL$102722C0
           -  qb_name(subq)

   1 -  SEL$8969F1C9
           -  leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
           -  qb_name(main)

   1 -  SEL$8969F1C9 / from$_subquery$_001@MAIN
         U -  push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
           -  use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)

   5 -  SEL$1
           -  inline

In the previous post we had a “NOT IN” subquery against a collection/pipelined table function that couldn’t even be unnested (even in 19c); in this example we have an IN subquery that does unnest but then can’t drive a nested loop efficiently because the optimizer won’t push the collection values into the distinct view, and won’t do complex view merging to avoid having to do that join predicate pushdown. Collections and table() functions just don’t play nicely with the optimizer!

In fact this plan also shows one of those “generic” approaches in the optimizer that allows a human operator to see a special case that could have been further optimized: if the optimizer had used a sort unique rather than a hash unique at operation 4 then the sort join at operation 2 would have been redundant – with an overall reduction in memory and CPU usage that I managed to get in a separate test by adding the hint /*+ no_use_hash_aggregation(@sel$1) */ to the query. (Since operation 6 is also a sort unique the merge join semi could, in principle, have become a merge join with no risk of producing duplicates – but the semi-join code path is probably a little more efficient, anyway, and a balance has to be struck between the risk of introducing complexity for a special case and the potential frequency and scale of the benefit it might produce.)
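
For reference, that separate test was just the second query above with the commented-out hint re-enabled – a sketch, with everything else unchanged:

select
        /*+
                qb_name(main)
                no_use_hash_aggregation(@sel$1)
        */
        null
...                     -- rest of the query exactly as before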

Conclusion

You can often see collections and table functions behaving very like tables when you use them in the from clause of queries – but there are some restrictions on the transformations that the optimizer can use when your query isn’t using “real” tables.

Footnote

There are many ways that you can play around with this starting model to investigate where the boundaries might be. For example, if I make the index on test_objects unique the plan changes to a simple nested loop driven by the unnested collection (there’s no longer a non-mergeable view in the way). If I eliminate the distinct from the original query the same thing happens (for the same reason). If I force the join order to start with the collection (using the leading() hint) but don’t hint a nested loop Oracle produces (at least in my case) a hash join with a Bloom filter that minimised the memory and CPU requirements.

I mentioned at the start that Timur Akhmadeev and Stefan Koehler supplied explanations for what was going on behind the scenes. Critically, Stefan also referenced a pair of posts from the Oracle blog on complex view merging and its restrictions: part 1, part 2.

The related problem that led me to re-discover and complete this note is at this URL (published a couple of days ago).

January 10, 2020

Collection limitation

Filed under: Execution plans,Hints,Oracle,Troubleshooting — Jonathan Lewis @ 5:43 pm GMT Jan 10,2020

The ODC SQL and PL/SQL forum came up with an example a couple of days ago that highlighted an annoying limitation in the optimizer’s handling of table functions. The requirement was for a piece of SQL that would generate “installments” information from a table of contract agreements and insert into another table any installments that were not yet recorded there.

The mechanism to turn a single row of contract data into a set of installments was an (optionally pipelined) table function that involved some business logic that (presumably) dealt with the timing and size of the installments. The final SQL to create the data that needed to be inserted was reported as follows (though it had clearly been somewhat modified):

select 
        y.*,
        trunc(sysdate,'mm'),
        user
from 
        table_a a
join 
        table(
                my_function(
                        a.loan_acct_nbr, 
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on 
        a.loan_acct_nbr = y.loan_acct_nbr
where 
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')       -- last month
and     a.loan_typ   = 'ABC'
and     not exists ( 
                select  1 
                from
                        final_load_table l
                where
                        l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd 
        )
;

In this query, table_a is the set of contracts, final_load_table is the set of installments, and my_function() is the pipelined function returning a table of installments derived from the start date, maturity date, and number of installments for a contract. The query needs a “not exists” subquery to eliminate any installments that are already known to the database. Once this query is operating efficiently it could be used either to drive a PL/SQL loop or (generally the better strategy) to do a simple “insert as select”.
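
For illustration, the “insert as select” version would simply wrap the query above – this is only a sketch, and it assumes (hypothetically) that the columns of final_load_table line up with the select list:

insert  /*+ append */
into    final_load_table
select
        y.*,
        trunc(sysdate,'mm'),
        user
from
        ...             -- remainder of the query exactly as above
;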

We were told that the function would return around 60 rows per contract; that the starting size of the final_load_table would be about 60M rows, and the size of the result set from the initial join would be about 60M or 70M rows (which suggests about 1M rows acquired from table_a).

The owner of this problem seemed to be saying that the query had returned no data after 4 or 5 hours – which suggests (a) the optimizer chose a bad plan, and/or (b) the PL/SQL function is working very inefficiently, and/or (c) the join had generated a vast amount of data but the effect of the subquery was to discard all of it.

Clearly we needed to see an execution plan (preferably from SQL Monitor) and to be given some idea of how many of the 60M/70M rows predicted for the join would be discarded because they already existed.

The owner did send us an execution plan – which included a very obvious threat and suggested a couple of optimizer errors – but supplied it as a picture rather than as flat text.

You’ll notice, of course, that the tables and columns have changed from the original statement. More significantly, though, there are serious problems with the numbers – the estimated row count for the basic join is only 5,326 rather than 50 Million which, at first sight, is probably why the optimizer has decided that a filter subquery (operation 1) to do an index-only probe (operation 5) is a good way of handling the subquery. Perhaps if the estimates had been a little more accurate (e.g. through the addition of a couple of column groups or, if necessary, by opt_estimate() or cardinality() hints) the subquery would have been unnested and turned into a hash anti-join.
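
As a sketch of those two options – the column group covers the two filter columns on table_a from the original query, and the cardinality figure (1M rows from table_a) is only my inference from the problem statement:

select
        dbms_stats.create_extended_stats(
                ownname   => user,
                tabname   => 'TABLE_A',
                extension => '(yr_mnth_cd, loan_typ)'
        )
from    dual;

execute dbms_stats.gather_table_stats(user, 'TABLE_A', method_opt => 'for all hidden columns size 1')

-- or, as a quick test, hint the driving table estimate directly:
-- select /*+ cardinality(a 1000000) */ ...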

I suggested a quick test of a suitable cardinality() hint – but ran up a little model to check that I’d got the hint right – and found that I had, but that it wasn’t going to help. So I decided to write up the model (and a possible solution for the owner of the problem) in this blog note.

Here’s the code to create the set of objects I’m going to work with. The naming follows that used in the owner’s original statement of the problem:


rem
rem     Script:         table_function_plan.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table table_a (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        start_dt        date,
        maturity_dt     date    ,
        number_of_terms number,
        constraint ta_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

execute dbms_random.seed(0)

insert /*+ append */
into    table_a
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        sysdate-(365-mod(rownum,365)),
        sysdate+(1500+mod(rownum,365)),
        60
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table final_load_table_l(
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200),
        constraint lt_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

insert /*+ append */ into final_load_table_l
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        lpad(rownum,10),
        lpad('x',200,'x')
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'table_a',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'final_load_table_l',
                method_opt  => 'for all columns size 1'
        );
end;
/

create type my_row_type as object (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200)
);
/

create type my_table_type as table of my_row_type;
/

create  or replace function my_function (
        i_loan_acct_nbr         in      number,
        i_yr_mnth_cd            in      varchar2,
        i_start_dt              in      date,
        i_maturity_dt           in      date,
        i_number_of_terms       in      number
)       return  my_table_type pipelined
as
begin
        for i in 1..i_number_of_terms loop
                pipe row (
                        my_row_type(
                                i_loan_acct_nbr,
                                to_char(i_start_dt+32*i,'yyyymm'),
                                i,
                                lpad('x',200,'x')
                        )
                );
        end loop;
        return;
end;
/

I was planning to create some large tables – hence the option to generate 16M rows from my generator CTEs – but I discovered the critical problem almost as soon as I had some data and code in place, so I didn’t need to go large.

I’ve had to create an object type and a table type in order to create a pipelined function that returns the table type by piping rows of the object type. The data I’ve created and the way the function generates data probably don’t bear much resemblance to the real system, of course, but I don’t think the model needs to be very close to make the crucial point.

Here’s the text of the select statement the OP wants to run, with the execution plan I got from my data set after running the query and pulling the plan from memory:

alter session set statistics_level = all;

select 
        /*+ find this 1 */
        y.* 
from 
        table_a a, 
        table(my_function(
                a.loan_acct_nbr,
                a.yr_mnth_cd,
                a.start_dt,
                a.maturity_dt,
                a.number_of_terms
        )) y
where 
        a.yr_mnth_cd = '202001'
and     not exists (
                select  
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |  5059K(100)|  14580 |00:00:00.15 |   16330 |
|*  1 |  FILTER                             |             |      1 |        |            |  14580 |00:00:00.15 |   16330 |
|   2 |   NESTED LOOPS                      |             |      1 |   6283K| 10421   (8)|  14580 |00:00:00.11 |     335 |
|*  3 |    TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     297 |
|   4 |    COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    14   (8)|  14580 |00:00:00.10 |      38 |
|*  5 |   INDEX UNIQUE SCAN                 | LT_PK       |  14580 |      1 |     1   (0)|      0 |00:00:00.02 |   15995 |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NULL)
   3 - filter("A"."YR_MNTH_CD"='202001')
   5 - access("L"."LOAN_ACCT_NBR"=:B1 AND "L"."YR_MNTH_CD"=:B2)


I’ve put in a hint to tell the optimizer to unnest the subquery – and it didn’t. Oracle does not ignore hints (unless they’re illegal, or out of context, or the optimizer never got to them, or you’ve found a bug) so why did Oracle appear to ignore this hint? There’s a really nice feature in 19.3 execution plans – you can request a hint report for a plan, and here’s the hint report for this query (ignore the bits about “find this” being an error):


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1), E - Syntax error (2))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  find
         E -  this

   5 -  SEL$3
         U -  unnest / Invalid correlated predicates


I put in an /*+ unnest */ hint to unnest the subquery, and I’ve been told that the predicates are not valid. The only thing about them that could be invalid is that they come from a pipelined function that has returned an object type. The pipelined function does not behave exactly like a table. But this gives me a clue about forcing the unnest to happen – hide the pipelined function inside a non-mergeable view.


select
        /*+ find this 2 */
        v.*
from    (
        select  /*+ no_merge */
                y.* 
        from 
                table_a a, 
                table(my_function(
                        a.loan_acct_nbr,
                        a.yr_mnth_cd,
                        a.start_dt,
                        a.maturity_dt,
                        a.number_of_terms
                )) y
        where 
                a.yr_mnth_cd = '202001'
        )       v
where   not exists (
                select
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = v.loan_acct_nbr
                and     l.yr_mnth_cd    = v.yr_mnth_cd
        )
/


------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        | 10628 (100)|  14580 |00:00:00.12 |     387 |       |       |          |
|*  1 |  HASH JOIN RIGHT ANTI                |             |      1 |   6283K| 10628  (10)|  14580 |00:00:00.12 |     387 |  1878K|  1878K| 2156K (0)|
|   2 |   INDEX FAST FULL SCAN               | LT_PK       |      1 |  10000 |     6  (17)|  10000 |00:00:00.01 |      91 |       |       |          |
|   3 |   VIEW                               |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.11 |     296 |       |       |          |
|   4 |    NESTED LOOPS                      |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.10 |     296 |       |       |          |
|*  5 |     TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     296 |       |       |          |
|   6 |     COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    13   (0)|  14580 |00:00:00.10 |       0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."LOAN_ACCT_NBR"="V"."LOAN_ACCT_NBR" AND "L"."YR_MNTH_CD"="V"."YR_MNTH_CD")
   5 - filter("A"."YR_MNTH_CD"='202001')

I’ve wrapped the basic join between table_a and the table function in an inline view called (unimaginatively) v, and I’ve added a /*+ no_merge */ hint to that inline view. So the main query becomes a select from a non-mergeable view with a “not exists” subquery applied to a couple of view columns. And Oracle thinks that that’s okay – and my unnest hint encouraged the optimizer to use a hash anti-join.

So here’s an option for the owner of the problem – but with a few outstanding questions: will a rewrite of their query in this form do the same, will the performance of the select be good enough, and will the subsequent “insert as select” keep the same driving plan?

There’s one significant detail to worry about: the build table in this hash (anti-)join is going to be constructed from 50M (loan_acct_nbr, yr_mnth_cd) pairs – which means the workarea requirement is likely to be about 1.2GB (call it roughly 24 bytes per pair once you allow for overheads) for an optimal (i.e. in-memory) hash join; otherwise the join may spill to disc and do a lot of I/O – probably as a one-pass hash join.

(Did you notice, by the way, that the word storage appeared at operation 3 in the original plan?  That suggests a nice big Exadata box; however, there’s no storage predicate in the Predicate Information section for that operation and you would have thought that lease_type = ‘US_SSP’ would be pushed to storage, so maybe this is a ZFS Pillar backing a less exotic server.)

Conclusion

Some (if not all) types of correlated subqueries behave badly if the correlation predicates involve table functions. But you may be able to work around the issue by hiding part of the query, including the table function, inside a non-mergeable inline view before applying the subquery to the view.

Footnote

When I realised that the table function was causing a problem unnesting I remembered that I had written about a similar problem a few years ago – after searching for a little while I discovered a draft note that I had started in September 2015 but had not quite finished; so I’ll be finishing it off and publishing it some time in the next few days. (Update: now published)


December 20, 2019

Purge Cursor

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 1:54 pm GMT Dec 20,2019

This is a note I first drafted about 5 years ago (the date stamp says March 2014) and rediscovered a few days ago when the question came up on a Twitter thread.

How do you purge a single SQL statement from the library cache without having to execute “alter system flush shared_pool”?

The answer is in the package dbms_shared_pool, specifically the purge() procedure. This package changes significantly in the upgrade from 11.2 (manual page here) to 12.1 (manual page here) so it’s best to check the reference manual for the version you’re using in case it changes again.  In 11.2 (and earlier) there’s just one option for the purge() procedure but in 12.1 the package gets 3 overloaded versions of the procedure – and one of the overloads gets an extra parameter (edition) by 19c.

Side note: In very early versions of Oracle the package wasn’t installed automatically, so you may have to execute $ORACLE_HOME/rdbms/admin/dbmspool.sql (possibly followed by prvtpool.plb) to install it. The facility to purge a cursor appeared in 11.1 and was then back-ported to 10.2.0.4. The manual pages for the procedure are, however, not up to date and don’t list all the possible flags that tell the procedure what type of object it is supposed to be purging.
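
For those very old versions the installation (as SYS) would have looked something like this:

@$ORACLE_HOME/rdbms/admin/dbmspool.sql
@$ORACLE_HOME/rdbms/admin/prvtpool.plb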

The only use I’ve made of the purge() procedure is to purge a cursor from memory, though you can purge other types of object if you want to. Technically you could flush the execution plan from memory without eliminating the cursor, though you would still have to re-optimize the statement so there may be no benefit (or very little benefit) in doing so.
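
For example – a hypothetical call (the package name is invented) to purge a PL/SQL package rather than a cursor, using the ‘P’ flag from the list further down this note:

execute dbms_shared_pool.purge('SCOTT.MY_PACKAGE', 'P')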

To demonstrate the mechanism I’m going to use three sessions – two to run a query with different optimizer environments, then a third to find and purge the cursors. Here’s the code for the first two sessions:


rem
rem     Script: purge_cursor.sql
rem     Dated:  March 2014
rem     Author: Jonathan Lewis
rem
rem     Last tested
rem             12.2.0.1
rem             11.2.0.4
rem             10.2.0.4  -- with variations to get  two child cursors
rem

/*    To be run by session 1    */

create table t1 as select * from all_objects where rownum <= 10000;
create index t1_i1 on t1(object_id) invisible;

alter session set optimizer_use_invisible_indexes = true;

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

/*    To be run by session 2    */

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

You’ll see that I’ve created an invisible index on the table then allowed one session to use invisible indexes while the other session isn’t allowed to. As a consequence session 1 will produce an execution plan that shows an index range scan for child cursor 0, and session 2 will produce an execution plan that shows a table scan for child cursor 1. I won’t show the output from these two sessions, but my calls to dbms_xplan reported the sql_id as ‘ab08hg3s62rpq’ and I’ve used that as the value for a substitution variable in the code to be run by session 3.

The final demo script is a little messy because it’s going to attempt to report all the execution plans for that sql_id three times, but it will also write and execute a script to call the purge() procedure twice – once to eliminate the plans but leave the cursors in place, then a second time to purge the cursors.

define m_sql_id = 'ab08hg3s62rpq'

spool purge_cursor

prompt  =============================
prompt  Before purge- 2 child cursors
prompt  =============================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C', 64) }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set feedback on
set verify on

spool purge_cursor append

prompt  =======================================
prompt  After heap 6 purge- 2 cursors, no plans
prompt  =======================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C') }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set verify on
set feedback on

spool purge_cursor append

prompt  =================================
prompt  After complete purge - no cursors
prompt  =================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

On the first pass the select from v$sqlarea produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C', 64)

On the second pass the select produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C')

The effect of the first call is to purge heap 6 (power(2,6) = 64) for any child cursors that exist for the sql_id. The effect of the second call is to purge the entire set of child cursors. The purge_cursor.lst file ends up with the following results:


=============================
Before purge- 2 child cursors
=============================

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3320414027

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    25 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=250)

SQL_ID  ab08hg3s62rpq, child number 1
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    27 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    25 |    27   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=250)


37 rows selected.

=======================================
After heap 6 purge- 2 cursors, no plans
=======================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

SQL_ID  ab08hg3s62rpq, child number 1

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)


16 rows selected.

=================================
After complete purge - no cursors
=================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID: ab08hg3s62rpq cannot be found


2 rows selected.

As you can see we start with two child cursors and two different execution plans, then the two child cursors “lose” their execution plans, and finally the cursors disappear completely. (And the parent disappears at the same time.)

Presenting the procedure with a little more formality – the formal declaration of the procedure that I’ve been using reads:


procedure purge(
        name    varchar2, 
        flag    char    DEFAULT 'P', 
        heaps   number  DEFAULT 1
)

To purge a cursor we set the flag to ‘C’ (or ‘c’) and the name to ‘{address},{hash_value}’ (make sure you don’t get extra spaces in that expression). If we want to purge just the execution plan we need to target heap 6, which means setting the heaps value to power(2,6). Different types of object use different (sub)heaps so if you want to purge multiple heaps you need to add together the appropriate power(2,N); the default value for heaps is 1, which equates to heap 0, which means the whole object.
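
So, re-using the address and hash value from the demo, purging just the plan heap looks like the first call below – and (hypothetically – I haven’t needed to do it) purging heaps 0 and 6 in a single call would mean supplying the sum of the two powers of 2:

execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C', 64)          -- heap 6 only: power(2,6)
execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C', 65)          -- heaps 0 and 6: power(2,0) + power(2,6)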

For other types of object there is a reference list (under the keep() procedure) that expands on the manuals to give us the following possible values of flag:


  --        Value        Kind of Object to {keep}
  --        -----        ------------------------
  --          P          package/procedure/function
  --          Q          sequence
  --          R          trigger
  --          T          type
  --          JS         java source
  --          JC         java class
  --          JR         java resource
  --          JD         java shared data
  --          C          cursor

You will note, of course, that I’ve used v$sqlarea rather than v$sql when I was searching for the address and hash value. When you purge a cursor you purge every child cursor – you can’t identify an individual child. Even if you query v$sql instead of v$sqlarea, and use a single child_address instead of the (parent) address, the purge() procedure will still purge every child for the parent.
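
If you want to check this for yourself, a variation on the script above (a sketch – targeting just child 0 through v$sql) will still remove both children:

select
        q'{ execute dbms_shared_pool.purge('}' ||
                child_address || ',' || hash_value ||
                q'{', 'C') }'
from
        v$sql
where
        sql_id = '&m_sql_id'
and     child_number = 0
;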

Warning

There is a comment in my original notes about a bug that was fixed by 11.2.0.4 – if any of the child cursors is currently executing then the purge() procedure will go into a loop waiting on “cursor: pin X”, timing out every 1/100 second. Unfortunately my notes didn’t make any explicit comment about what impact this had on any session trying to parse or execute any of the children for that parent cursor – but I suspect that you’d end up with a nasty race condition and an apparent hang.


December 18, 2019

Wait for Java

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 9:59 am GMT Dec 18,2019

This is a note courtesy of Jack van Zanen on the Oracle-L list server who asked a question about “wait for CPU” and then produced the answer a couple of days later. It’s a simple demonstration of how Java in the database can be very deceptive in terms of indicating CPU usage that isn’t really CPU usage.

Bottom line – when you call Java Oracle knows you’re about to start doing some work on the CPU, but once you’re inside the java engine Oracle has no way of knowing whether the java code is on the CPU or waiting. So if the java starts to wait (e.g. for some slow file I/O) Oracle will still be reporting your session as using CPU.

To demonstrate the principle, I’m going to create a little java procedure that simply goes to sleep – and see what I find in the active session history (ASH) after I’ve been sleeping in java for 10 seconds.

rem
rem     Script:         java_wait_for_cpu.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Based on an email from Jack van Zanen to Oracle-L
rem

set time on

create or replace procedure milli_sleep(i_milliseconds in number) 
as 
        language java
        name 'java.lang.Thread.sleep(int)';
/

set pagesize 60
set linesize 132
set trimspool on

column sample_time format a32
column event       format a32
column sql_text    format a60
column sql_id      new_value m_sql_id

set echo on
execute milli_sleep(1e4)

select 
        sample_time, sample_id, session_state, sql_id, event 
from 
        v$active_session_history
where 
        session_id = sys_context('userenv','sid')
and     sample_time > sysdate - 1/1440 
order by 
        sample_time
;

select  sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text 
from    v$sql 
where   sql_id = '&m_sql_id'
;

I’ve set timing on and set echo on so that you can see when my code starts and finishes and correlate it with the report from v$active_session_history for my session. Since I’ve reported the last minute you may find some other stuff reported before the call to milli_sleep() but you should find that you get a report of about 10 seconds “ON CPU” even though your session is really not consuming any CPU at all. I’ve included a report of the SQL that’s “running” while the session is “ON CPU”.

Here (with a little edit to remove the echoed query against v$active_session_history) are the results from a run on 12.2.0.1 (and the run on 19.3.0.0 was very similar):


Procedure created.

18:51:17 SQL> execute milli_sleep(1e4)

PL/SQL procedure successfully completed.

SAMPLE_TIME                       SAMPLE_ID SESSION SQL_ID        EVENT
-------------------------------- ---------- ------- ------------- --------------------------------
16-DEC-19 06.51.11.983 PM          15577837 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.12.984 PM          15577838 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.13.985 PM          15577839 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.14.985 PM          15577840 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.15.986 PM          15577841 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.16.996 PM          15577842 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.17.995 PM          15577843 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.18.999 PM          15577844 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.20.012 PM          15577845 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.21.018 PM          15577846 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.22.019 PM          15577847 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.23.019 PM          15577848 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.24.033 PM          15577849 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.25.039 PM          15577850 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.26.047 PM          15577851 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.27.058 PM          15577852 ON CPU  4jt6zf4nybawp

16 rows selected.

18:51:27 SQL>

SQL_ID          CPU_TIME    ELAPSED SQL_TEXT
------------- ---------- ---------- ------------------------------------------------------------
4jt6zf4nybawp       .004     10.029 BEGIN milli_sleep(1e4); END;


As you can see I had a statement executing for a few seconds before the call to milli_sleep(), and then we see milli_sleep() “on” the CPU for 10 consecutive samples; when the sleep ends, though, the query for actual usage shows us that the elapsed time was 10 seconds while the CPU usage was only 4 milliseconds.

December 16, 2019

IOT Bug

Filed under: Bugs,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 3:58 pm GMT Dec 16,2019

Here’s a worrying bug that showed up a couple of days ago on the Oracle-L mailing list. It’s a problem that I’ve tested against 12.2.0.1 and 19.3.0.0 – it may be present on earlier versions of Oracle. One of the nastiest things about it is that you might not notice it until you get an “out of space” error from the operating system. You won’t get any wrong results from it, but it may well be adding an undesirable performance overhead.

Basically it seems that (under some circumstances, at least) Oracle is setting the “block guess” component of the secondary index on Index Organized Tables (IOTs) to point to blocks in the overflow segment instead of blocks in the primary key segment. As a result, when you execute a query that accesses the IOT through the secondary index and has to do reads from disc to satisfy the query – your session goes through the following steps:

  • Identify index entry from secondary index – acquire “block guess”
  • Read indicated block and discover the object number on the block is wrong, and the block type is wrong
  • Write a (silent) ORA-01410 error and do a block dump into the trace file
  • Use the “logical rowid” from the secondary index (i.e. the stored primary key value) to access the primary key index by key value

So your query runs to completion and you get the right result because Oracle eventually gets there using the primary key component stored in the secondary index, but it always starts with the guess [see sidebar] and for every block you read into the cache because of the guess you get a dump to the trace file.

Here’s a little code to demonstrate. The problem with this code is that everything appears to work perfectly; you have to be able to find the trace file for your session to see what’s gone wrong. First we create some data – this code is largely copied from the original posting on Oracle-L, with a few minor changes:


rem
rem     Script:         iot_bug_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Notes
rem     The OP had tested on 19.5.0.0 to get the same effect, see:
rem     //www.freelists.org/post/oracle-l/IOT-cannot-get-valid-consensus-bug-or-unexplained-behavio
rem

drop table randomload purge;

create table randomload(
        roll number,
        name varchar2(40),
        mark1 number,
        mark2 number,
        mark3 number,
        mark4 number,
        mark5 number,
        mark6 number,
        primary key (roll)
) 
organization index 
including mark3 overflow
;

create index randomload_idx on randomload(mark6);

insert into randomload 
select 
        rownum, 
        dbms_random.string(0,40) name, 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,10000)) 
from 
        dual 
connect by 
        level < 1e5 -- > comment to avoid wordpress format issue
;

commit;

exec dbms_stats.gather_table_stats(null,'randomload', cascade=>true);

prompt  ==================================================
prompt  pct_direct_access should be 100 for randomload_idx
prompt  ==================================================

select 
        table_name, index_name, num_rows, pct_direct_access, iot_redundant_pkey_elim  
from 
        user_indexes
where
        table_name = 'RANDOMLOAD'
;

It should take just a few seconds to build the data set and you should check that the pct_direct_access is 100 for the index called randomload_idx.

The next step is to run a query that will do an index range scan on the secondary index.

column mark6 new_value m_6

select 
        mark6, count(*) 
from
        randomload 
group by 
        mark6
order by 
        count(*)
fetch first 5 rows only
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';
set serveroutput off

select avg(mark3) 
from 
        randomload 
where 
        mark6 = &m_6
;

select * from table(dbms_xplan.display_cursor);

alter session set events '10046 trace name context off';
set serveroutput on

I’ve started by selecting one of the least frequently occurring values of mark6 (a column I know to be in the overflow segment); then I’ve flushed the buffer cache so that any access I make to the data will have to start with disk reads (the original poster suggested restarting the database at this point, but that’s not necessary).

Then I’ve enabled sql_trace to show wait states (to capture details of which blocks were read and which object they belong to), and I’ve run a query for the average of mark3 (a column that is in the primary key (TOP) segment of the IOT) and pulled its execution plan from memory to check that the query did use a range scan of the secondary index. Here’s the plan:

----------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       |    11 (100)|          |
|   1 |  SORT AGGREGATE    |                   |     1 |     7 |            |          |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_77298 |    10 |    70 |    11   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN| RANDOMLOAD_IDX    |    10 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MARK6"=1316)
   3 - access("MARK6"=1316)

As you can see the plan shows what we are hoping to see – an index range scan of the secondary index that lets it follow up with a unique scan of the primary key segment. It’s just a little odd that the access predicate reported for operation 2 (unique scan of TOP) suggests that the access is on a column that isn’t in the primary key and isn’t even in the TOP section.

So the query works and gives the right answer. But what do we find in the trace directory? If you’re running 12c (possibly only 12.2), each time the error occurs the following pattern of information will be written to the alert log (it didn’t appear in 19.3):


ORCL(3):Hex dump of (file 22, block 16747) in trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_7888.trc
ORCL(3):
ORCL(3):Corrupt block relative dba: 0x0580416b (file 22, block 16747)
ORCL(3):Bad header found during multiblock buffer read (logical check)
ORCL(3):Data in bad block:
ORCL(3): type: 6 format: 2 rdba: 0x0580416b
ORCL(3): last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
ORCL(3): spare3: 0x0
ORCL(3): consistency value in tail: 0x484c0601
ORCL(3): check value in block header: 0x4408
ORCL(3): computed block checksum: 0x0
ORCL(3):

And the following pattern of information is written to the trace file [Update: a follow-up test on 11.2.0.4 suggests that the basic “wrong block address” error also happens in that version of Oracle, but doesn’t result in a dump to the trace file]:


kcbzibmlt:: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 makecr 0 line 15461 ds_blk (22, 16747) bh_blk (22, 16747)
kcbds 0x7ff1ca8c0b30: pdb 3, tsn 8, rdba 0x0580416b, afn 22, objd 135348, cls 1, tidflg 0x8 0x80 0x0
    dsflg 0x108000, dsflg2 0x0, lobid 0x0:0, cnt 0, addr 0x0, exf 0x10a60af0, dx 0x0, ctx 0
    whr: 'qeilwh03: qeilbk'
env [0x7ff1ca8e3e54]: (scn: 0x00000b860e364893   xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid:  0x0000.000.00000000  st-scn: 0x0000000000000000  hi-scn: 0x0000000000000000  ma-scn: 0x00000b860e364879  flg: 0x00000660)
BH (0xb1fd6278) file#: 22 rdba: 0x0580416b (22/16747) class: 1 ba: 0xb1c34000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 763,14
  dbwrid: 0 obj: 135348 objn: 135348 tsn: [3/8] afn: 22 hint: f
  hash: [0x9eff0528,0x77cff808] lru: [0xb1fd2578,0x9ff84658]
  ckptq: [NULL] fileq: [NULL]
  objq: [0xb6f654c0,0x9ff84680] objaq: [0xb6f654d0,0x9ff84690]
  use: [0x77b78128,0x77b78128] wait: [NULL]
  st: READING md: EXCL tch: 0
  flags: only_sequential_access
  Printing buffer operation history (latest change first):
  cnt: 5
  01. sid:10 L122:zgb:set:st          02. sid:10 L830:olq1:clr:WRT+CKT
  03. sid:10 L951:zgb:lnk:objq        04. sid:10 L372:zgb:set:MEXCL
  05. sid:10 L123:zgb:no:FEN          06. sid:10 L083:zgb:ent:fn
  07. sid:08 L192:kcbbic2:bic:FBD     08. sid:08 L191:kcbbic2:bic:FBW
  09. sid:08 L604:bic2:bis:REU        10. sid:08 L190:kcbbic2:bic:FAW
  11. sid:08 L602:bic1_int:bis:FWC    12. sid:08 L822:bic1_int:ent:rtn
  13. sid:08 L832:oswmqbg1:clr:WRT    14. sid:08 L930:kubc:sw:mq
  15. sid:08 L913:bxsv:sw:objq        16. sid:08 L608:bxsv:bis:FBW
Hex dump of (file 22, block 16747)

   ... etc.

Corrupt block relative dba: 0x0580416b (file 22, block 16747)
Bad header found during multiblock buffer read (logical check)
Data in bad block:
 type: 6 format: 2 rdba: 0x0580416b
 last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
 spare3: 0x0
 consistency value in tail: 0x484c0601
 check value in block header: 0x4408
 computed block checksum: 0x0
TRCMIR:kcf_reread     :start:  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
TRCMIR:kcf_reread     :done :  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf

The nasty bit, of course, is the bit I’ve removed and replaced with just “etc.”: it’s a complete block dump (raw and symbolic) which in my example was something like 500 lines and 35KB in size.

It’s not immediately obvious exactly what’s going on and why, but the 10046 trace helps a little. From another run of the test (on 19.3.0.0) I got the following combination of details – which is an extract showing the bit of the wait state trace leading into the start of the first block dump:

WAIT #140478118667016: nam='db file scattered read' ela= 108 file#=13 block#=256 blocks=32 obj#=77313 tim=103574529210
WAIT #140478118667016: nam='db file scattered read' ela= 2236 file#=13 block#=640 blocks=32 obj#=77313 tim=103574531549
WAIT #140478118667016: nam='db file scattered read' ela= 534 file#=13 block#=212 blocks=32 obj#=77312 tim=103574532257
kcbzibmlt: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 warm_up abort 0 makecr 0 line 16082 ds_blk (13, 212) bh_blk (13, 212)

Object 77313 is the secondary index, object 77312 is the primary key index (IOT_TOP). It may seem a little odd that Oracle is using db file scattered reads of 32 blocks to read the indexes but this is a side effect of flushing the buffer cache – Oracle may decide to prefetch many extra blocks of an object to “warm up” the cache just after instance startup or a flush of the buffer cache. The thing I want to check, though, is what’s wrong with the blocks that Oracle read from object 77312:


alter system dump datafile 13 block min 212 block max 243;

BH (0xc8f68e68) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xc8266000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa7fd6c38) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xa7c2a000
  set: 12 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa5f75780) file#: 13 rdba: 0x034000d5 (13/213) class: 0 ba: 0xa5384000
  set: 11 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xdafe9220) file#: 13 rdba: 0x034000d5 (13/213) class: 1 ba: 0xdadcc000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

I’ve reported the first few lines of the symbolic dump for the first few blocks of the resulting trace file. Look at the third line of each group of three BH lines: it’s reporting object 77311 (the overflow segment), not 77312 (the TOP segment). And every single block reported in the db file scattered read of 32 blocks for object 77312 reports itself, when dumped, as being part of object 77311. And that’s possibly the immediate cause of the ORA-01410.

We can take the investigation a little further by dumping a leaf block or two from the secondary index.


alter session set events 'immediate trace name treedump level 77313';

----- begin tree dump
branch: 0x3400104 54526212 (0: nrow: 542, level: 1)
   leaf: 0x340010d 54526221 (-1: row:278.278 avs:2479)
   leaf: 0x340075e 54527838 (0: row:132.132 avs:5372)
   leaf: 0x34005fb 54527483 (1: row:41.41 avs:7185)

alter system dump datafile 13 block 1886   -- leaf: 0x340075e

BH (0xd5f5d090) file#: 13 rdba: 0x0340075e (13/1886) class: 1 ba: 0xd5158000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77313 objn: 77313 tsn: [3/6] afn: 13 hint: f
...
row#6[5796] flag: K------, lock: 0, len=18
col 0; len 2; (2):  c1 1d
col 1; len 4; (4):  c3 07 41 5c
tl: 8 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 4]  03 40 05 75

I’ve done a treedump of the secondary index, picked a leaf block address from the treedump, and dumped that leaf block; from that leaf block I’ve extracted one index entry to show you the three components: the key value (c1 1d), the primary key for the row (c3 07 41 5c), and the block guess (03 40 05 75). Read the block guess as a 4 byte hex number and it translates to file 13, block 1397 – which should belong to the TOP segment. So the exciting question is – what object does block (13, 1397) think it belongs to?


alter system dump datafile 13 block 1397;

Block header dump:  0x03400575
 Object id on Block? Y
 seg/obj: 0x12dff  csc:  0x00000b860e308c46  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x3400501 ver: 0x01 opc: 0
     inc: 0  exflg: 0

Converting from Hex to Decimal: obj: 0x12dff = 77311, which is the overflow segment. The secondary index block guess is pointing at a block in the overflow segment.
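
If you don’t want to do the arithmetic by hand, dbms_utility will decode a data block address and to_number() handles the object id – a quick sanity check of both conversions:

select
        dbms_utility.data_block_address_file(to_number('03400575','xxxxxxxx'))  file#,
        dbms_utility.data_block_address_block(to_number('03400575','xxxxxxxx')) block#,
        to_number('12dff','xxxxx')                                              obj#
from    dual;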

There are two ways to handle this problem – you could simply rebuild the index (alter index rebuild) or, as the original poster did, use the “update block references” command to correct all the block guesses: “alter index randomload_idx update block references;”. Neither is desirable, but if you’re seeing a lot of large trace files following the pattern above then it may be necessary.
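
In SQL terms, then, the two workarounds for the model above are simply:

alter index randomload_idx rebuild;

alter index randomload_idx update block references;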

There was one particular inconsistency in the tests – which I ran many times – occasionally the pct_direct_access for the secondary index would be reported as zero (which, technically, should always happen given the error).  If it did, of course, Oracle wouldn’t follow the guess but would go straight to the step where it used the primary key “logical rowid” – thus bypassing the error and block dump.

tl;dr

In some circumstances the block guesses in the secondary indexes of IOTs may be pointing to the overflow segment instead of the primary key (TOP) segment. If this happens then queries will still run and give the right answers, but whenever they read a “guessed” block from disc they will report an ORA-01410 error and dump a block trace. This will affect performance and may cause space problems at the O/S level.

Sidebar

An entry in the secondary index of an Index Organized Table (IOT) consists of three parts, which initially we can think of in the form:

({key-value}, {logical rowid}, {block guess})

Since IOTs don’t have real rowids the “logical rowid” is actually the primary key of the row where the {key value} will be found. As a short cut for efficient execution Oracle includes the block address (4 bytes) where that primary key value was stored when the row was inserted. Because an IOT is an index, “rows” in the IOT can move as new data is inserted and leaf blocks split, so eventually any primary key may move to a different block – this is why we refer to the block address as a guess: a few days, hours, or minutes after you’ve inserted the row the block address may no longer be correct.

To help the runtime engine do the right thing Oracle collects a statistic called pct_direct_access for secondary indexes of IOTs. This is a measure of what percentage of the block guesses are still correct at the time that the statistics are gathered. If this value is high enough the run-time engine will choose to try using the block guesses while executing a query (falling back to using the logical rowid if it turns out that the guess is invalid), but if the value drops too low the optimizer will ignore the block guesses and only use the logical rowid.

Not relevant to this note – but a final point about secondary indexes and logical rowids – if the definition of the index includes some of the columns from the primary key, Oracle won’t store those columns twice (in more recent versions, that is): the code is clever enough to use the values stored in the {key value} component when it needs to use the {logical rowid} component.


Update (Jan 2020)

I passed this example on to Oracle, and there are now two bugs recorded for it:

  • Bug 30733525 – ALERT LOG ENTRIES RE BLOCK GUESSES IN THE SECONDARY INDEXES OF IOTS POINTING TO OVERFLOW SEGMENT INSTEAD OF INDEX SEGMENT
  • Bug 30733563 – WRONG GUESS DBA IN INDEX


December 6, 2019

Temp space

Filed under: Execution plans,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 12:18 pm GMT Dec 6,2019

A question about hunting down the source of the error “ORA-01652 unable to extend temp segment by NNN in tablespace XXX” shows up on the Oracle-L mailing list or the Oracle developer community forum from time to time. In most cases the tablespace referenced is the temporary tablespace, which means the session reporting the error was probably trying to allocate some space for sorting, or doing a hash join, or instantiating a GTT (global temporary table) or a CTE (common table expression / “with” subquery). The difficulty in cases like this is that the session reporting the error might be the victim of some other session’s greed – so looking at what the session was doing won’t necessarily point you to the real problem.

Of course you then run into the further problem of tracking down the source of the error. By the time you hear the complaint (even if it’s only seconds after the error appeared) the session that had been hogging the temporary tablespace may have finished what it was doing, leaving a huge amount of free space in the temporary tablespace and suggesting (to the less experienced and cynical user) that there’s something fundamentally wrong with the way Oracle has been accounting for space usage.

If you find yourself in this situation remember that (if you’re licensed to take advantage of it) the active session history may be able to help. One of the columns in v$active_session_history is called temp_space_allocated with the slightly misleading description: “Amount of TEMP memory (in bytes) consumed by this session at the time this sample was taken”. A simple query against v$active_session_history may be enough to identify the session and SQL statement that had been holding the temporary space when the error was raised, for example:


column pga_allocated        format 999,999,999,999
column temp_space_allocated format 999,999,999,999

break on session_id skip 1 on session_serial#

select
        session_id, session_serial#, 
        sample_id, 
        sql_id, 
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 5/1440 and sysdate
and     nvl(temp_space_allocated,0) != 0
order by
        session_id, sample_id
/

All I’ve done for this example is query v$active_session_history for the last 5 minutes, reporting a minimum of information from any rows that show temp space usage. As a minor variation on the theme you can obviously change the time range, and you might want to limit the output to rows reporting more than 1MB (say) of temp space usage.
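
For example, a variant that looks back 30 minutes and ignores anything below 1MB (window and threshold chosen arbitrarily) would just change the predicates:

where
        sample_time between sysdate - 30/1440 and sysdate
and     nvl(temp_space_allocated,0) >= 1048576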

You’ll notice that I’ve also reported the pga_allocated (Description: Amount of PGA memory (in bytes) consumed by this session at the time this sample was taken) in this query; this is just a little convenience – a query that’s taking a lot of temp space will probably start by acquiring a lot of memory so it’s nice to be able to see the two figures together.

There are plenty of limitations and flaws in the usefulness of this report and I’ll say something about that after showing an example of usage. Let’s start with a script to build some data before running a space-consuming query:


rem
rem     Script:         allocate_tempspace.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 as 
select * from all_objects
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

execute dbms_stats.gather_table_stats(null,'t1')

execute dbms_lock.sleep(20)

set pagesize  60
set linesize 255
set trimspool on
set serveroutput off
alter session set statistics_level = all;

with ttemp as (
        select /*+ materialize */ * from t1
)
select 
        /*+ no_partial_join(@sel$2 t1b) no_place_group_by(@sel$2) */ 
        t1a.object_type,
        max(t1a.object_name)
from
        ttemp t1a, ttemp t1b
where
        t1a.object_id = t1b.object_id
group by
        t1a.object_type
order by
        t1a.object_type
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

My working table t1 consists of 16 copies of the view all_objects – so close to 1 million rows in my case – and the query is hinted to avoid any of the clever transformations that the optimizer could use to reduce the workload so it’s going to do a massive hash join and aggregation to report a summary of a couple of dozen rows. Here’s the execution plan (in this case from 12.2.0.1, though the plan is the same for 19.3 with some variations in the numbers).


SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D665B_E2772D3 |      1 |        |      0 |00:00:01.51 |   28915 |      0 |   9217 |  2068K|  2068K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    989K|    989K|00:00:00.24 |   19551 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     29 |     29 |00:00:08.51 |   18493 |  21345 |   2910 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     15M|     15M|00:00:03.93 |   18493 |  21345 |   2910 |    48M|  6400K|   65M (1)|   25600 |
|   6 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.36 |    9233 |   9218 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.35 |    9233 |   9218 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.40 |    9257 |   9217 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.39 |    9257 |   9217 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")

Critically this plan shows us two uses of the temp space but only reports one of them as Used-Tmp. The “hash join” at operation 5 tells us that it reached 65MB of (tunable PGA) memory before going “1-pass”, eventually dumping 25,600 KB to disc. This space usage is corroborated by the 2,910 writes (which, at an 8KB block size, would be 23,280 KB). The missing Used-Tmp, however, is the space taken up by the materialized CTE. We can see that operation 2 is a “load as select” that writes 9,217 blocks to disc (subsequently read back twice – the tablescans shown in operations 7 and 9). That’s roughly 74,000 KB of temp space that doesn’t get reported as Used-Tmp.

If we take a look at the plan from 19.3 we see different numbers, but the same “error of omission”:

SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6624_E259E68 |      1 |        |      0 |00:00:01.26 |   23706 |      0 |   5593 |  2070K|  2070K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    907K|    907K|00:00:00.21 |   18024 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     25 |     25 |00:00:06.89 |   11193 |  13843 |   2655 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     14M|     14M|00:00:03.55 |   11193 |  13843 |   2655 |    44M|  6400K|   64M (1)|      23M|
|   6 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.26 |    5598 |   5594 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.25 |    5598 |   5594 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.34 |    5595 |   5594 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.33 |    5595 |   5594 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")


With slightly fewer rows in t1 (907K vs. 989K) we write 5,593 blocks for the materialized CTE (instead of 9,217) and spill 2,655 blocks during the hash join (instead of 2,910). But again it’s only the hash join spill that is reported under Used-Tmp. Note, by the way, that Used-Tmp was reported in KB in 12.2 but in MB in 19.3.0.0.

Side note: comparing the number of rows created and blocks written for the CTE, it looks as if 19.3 is using the data blocks much more efficiently than 12.2. There’s no obvious reason for this (though a first guess would be that the older mechanism writes a GTT with pctfree=10 while the newer mechanism avoids leaving any free space or transactional details) so, as ever, I now have another draft for a blog note reminding me to investigate (eventually) what differences there are in CTE storage on the upgrade. It’s something that might make a difference in a few special cases.

With the figures from the execution plans in mind we can now look at the results of the query against v$active_session_history. Conveniently the queries took a few seconds to complete, so we’re going to see several rows for each execution.

First the results from 12.2.0.1

SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
        14           22234   15306218 1cwabt12zq6zb       95,962,112            1,048,576
                             15306219 1cwabt12zq6zb       97,731,584           37,748,736
                             15306220 1cwabt12zq6zb      148,194,304           77,594,624
                             15306221 1cwabt12zq6zb      168,117,248           85,983,232
                             15306222 1cwabt12zq6zb      168,117,248           90,177,536
                             15306223 1cwabt12zq6zb      168,117,248           95,420,416
                             15306224 1cwabt12zq6zb      168,117,248           98,566,144
                             15306225 1cwabt12zq6zb      168,117,248          102,760,448
                             15306226 1cwabt12zq6zb      116,933,632          103,809,024
                             15306227 1cwabt12zq6zb      116,933,632          103,809,024
                             15306228 b66ycurnwpgud        8,602,624            1,048,576

I pointed out that we had 25,600 KB reported as Used-Tmp and roughly 74,000 KB unreported – a total of nearly 100,000 KB that is reasonably close to the 103,809,024 bytes reported by ASH. Moreover the timing of the plan (loading the CTE in the first 2 seconds) seems to agree with the growth of temp_space_allocated to 77,594,624 by the time we get to sample_id 15306220 in ASH. Then we have several seconds of slow growth as the hash join takes place and feeds its results up to the sort group by. At the end of the query we happen to have been lucky enough to catch one last sample just before the session had released all its temp space and ceased to be active. (Note, however, that the sql_id at that sample point was not the sql_id of our big query – and that’s a clue about one of the limitations of using ASH to find the greedy SQL.)

We see the same pattern of behaviour in 19.3.0.0:


SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
       136           42767    2217500 1cwabt12zq6zb      143,982,592           46,137,344
                              2217501 1cwabt12zq6zb      193,527,808           54,525,952
                              2217502 1cwabt12zq6zb      193,527,808           57,671,680
                              2217503 1cwabt12zq6zb      193,527,808           61,865,984
                              2217504 1cwabt12zq6zb      197,722,112           67,108,864
                              2217505 1cwabt12zq6zb      150,601,728           70,254,592
                              2217506 1cwabt12zq6zb      150,601,728           70,254,592

We start with an almost instantaneous jump to 46MB of temp_space_allocated in the first second of the query – that’s the 5,593 blocks of the CTE being materialized, then the slow growth of temp space as the hash join runs, spills to disc, and passes its data up to the sort group by. Again we can see that the peak usage was the CTE (46MB) plus the reported spill of 23MB (plus rounding errors and odd bits).

Preliminary Observations

Queries against ASH (v$active_session_history) can show us sessions that were holding space in the temporary tablespace at the moment a sample of active sessions was taken. This may allow us to identify greedy sessions that were causing other sessions to fail with ORA-01652 (unable to allocate temp segment).

We have seen that there is at least one case where we get better information about temp space allocation from ASH than we do from the variants on v$sql_plan that include the SQL Workarea information (v$sql_workarea, v$sql_workarea_active) because the space acquired during materialization of CTEs is not reported as a “tunable SQL workarea” but does appear in the ASH temp_space_allocated.

At first sight it looks as if we may be able to use the query against ASH to identify the statement (by sql_id) that was the one being run by the greedy session when it consumed all the space. As we shall see in a further article, there are various reasons why this may be over-optimistic; however, in many cases there’s a fair chance that when you see the same sql_id appearing in a number of consecutive rows of the report then that statement may be the thing that is responsible for the recent growth in temp space usage – and you can query v$sql to find the text and call dbms_xplan.display_cursor() to get as much execution plan information as possible.
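
As a sketch of that last step, using the sql_id from the example above (and remembering that the cursor may already have been flushed by the time you look):

select  sql_fulltext
from    v$sql
where   sql_id = '1cwabt12zq6zb'
;

select  *
from    table(dbms_xplan.display_cursor('1cwabt12zq6zb', null, 'allstats last'));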

Further questions

  • When does a session release the temp_space_allocated? Will the space be held (locked) as long as the cursor is open, or can it be released when it is no longer needed? Will it be held, but releasable, even after the cursor has (from the client program’s perspective) been closed?
  • Could we be fooled by a report that said a session was holding a lot of space when it didn’t need it and would have released it if the demand had appeared?
  • Under what conditions might the temp_space_allocated in an ASH sample have nothing to do with the sql_id reported in the same sample?
  • Are there any other reasons why ASH might report temp_space_allocated when an execution plan doesn’t?
  • Is temp_space_allocated only about the temporary tablespace, or could it include information about other (“permanent”) tablespaces?

Stay tuned for further analysis of the limitations of using v$active_session_history.temp_space_allocated to help identify the source of a space management ORA-01652 issue.


November 17, 2019

Parse Time

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 7:37 pm GMT Nov 17,2019

This is a note I started drafting in October 2012. It’s a case study from an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what the optimizer was doing that fooled it into spending a huge amount of time optimising a statement that ultimately executed very quickly.

Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.

I was prompted a couple of days ago to publish the notes so far because I was reminded, in a conversation with members of the Oak Table Network, of an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace. If some optimisation step takes “too long” (1 second, by default) then the optimizer will write a “TIMER:” line into the trace file telling you what the operation was, how long it took to complete, and how much CPU time it used. The default for “too long” can be adjusted by setting a “fix control”. This makes it a lot easier to find out where the time went if you see a very long parse time.
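
For reference, this is the fix control I use in the 19c example at the end of this note – the value sets the reporting threshold as a power of ten in microseconds:

alter session set "_fix_control"='16923858:4';   -- report any step taking longer than 1e4 microseconds (1/100th second)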

But let’s get back to the original trace file and the drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty performance issue.

=============================================================================================


What do you do about a parse time of 46 seconds? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144  
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)
       ... and lots more lines of plan

According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.

Things were actually worse than they seemed at first sight; when we saw more of the tkprof output the following also showed up:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE 
  NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') 
  NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"), 
  NVL(SUM(C2),:"SYS_B_01") 
FROM
 (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002") 
  NO_PARALLEL_INDEX("VAL_000002") */ :"SYS_B_02" AS C1, 
  CASE WHEN
    ...
  END AS C2 FROM "BISWEBB"."RECORDTEXTVALUE" 
  SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002" 
  WHERE ... 
 ) SAMPLESUB
 
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)

This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.

This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.

The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.

Since it’s not easy to process 200MB of trace file, the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.

My aim was to investigate a very long parse time, and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I had also noticed that the dynamic sampling code ran five times – so I was expecting to see some critical stage of the optimisation run 5 times (although I didn’t know why it should).

Step 1: Use grep (or find if you’re on Windows) to do a quick check for the number of join orders considered. I’m just searching for the text “Join order[” appearing at the start of a line and then counting how many times I find it:

[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc  | wc -l
6266

That’s 6,266 join orders considered – let’s take a slightly closer look:

[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):

Number of join permutations tried: 581

So a quick grep for “join permutations” might be a good idea. (In the absence of this line I’d have got to the same result by directing the earlier grep for “^Join order\[” to a file and playing around with the contents of the file.)

[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1

The key thing we see here is that there are five sections of long searches, and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 “tables”. This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for “^Join order[” – so where do the extra join orders come from? Let’s take a closer look at the file where we dumped all the Join order lines – the last 10 lines look like this:

4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again, we could zoom in a little closer: does the text near the start of the two occurrences of join order 581 give us any clues? We see the following just before the second one:

****** Recost for ORDER BY (using join row order) *******

The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:

[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******

So we’ve done a huge amount of recosting. Let’s check the arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (and we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).

We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.

[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989

Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or got very close to completing, over the whole 14 tables. (The optimizer remembers “partial results” from previous join order calculations, so doesn’t have to do 13 “Now joining” steps on every single join order.)

We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:

       Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************

As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two lines above each one. Here’s a suitable script (saved as cost.awk), followed by a call to use it:

{
        if (index($0,"Best so far") != 0) {print NR m2}   # print line number plus the line from two back
        m2 = m1; m1 = $0;                                 # keep a rolling history of the previous two lines
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt

There was a bit of a mess in the output – there were a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that caused irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets”, which suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.

Sidebar: I have an old note from OCIS (the precursor, or the precursor of the precursor, of MOS) that the optimizer will stop searching when the number of join order tests * the number of “non-single-row” tables (according to the single table access path) * 0.3 is greater than the best cost so far. I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula. The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed – the model didn’t seem to work when I ran it against 19.3 – but the principle probably still holds true.
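
If the formula still held as stated, a couple of sanity checks fall out of the numbers above: with 14 “non-single-row” tables and a best cost of about 206,000 the search could continue until roughly 206,000 / (14 * 0.3) ≈ 49,000 join orders had been tested, but once the best cost dropped to around 1,300 the cut-off would fall to about 1,300 / 4.2 ≈ 310 – which is at least consistent with each of the long searches ending within a few hundred permutations of the sharp drop in cost. (That’s my arithmetic applied to an old formula, though, not something the trace file states.)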

At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order and what it was about the better join order that the optimizer had missed. This might allow us to recognise some error in the statistics for either the “bad” starting order or the “good” starting order and allow us to solve the problem by (e.g.) creating a column group or gathering some specific statistics. We might simply decide that we’ll take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or take the entire outline and attach it to the query through a faked SQL Profile (or embedded set of hints).

However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.

The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” – so I wondered if this might be relevant and did a quick check near the start of the main body of the trace file for anything that might be a clue. I found the following just after the “QUERY BLOCK SIGNATURE”:

QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:   Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)

As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”

[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)

There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”


16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374

622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673

1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674

2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571

4458608:Number of join permutations tried: 581

At this point my notes end and I don’t know where I was going with the investigation. I know that I told the OP the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy.) I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.

The final plan did involve all 3 pushed predicates (which looks like it might have been from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.

Version 12c and beyond

That was then, and this is now. And something completely different might have appeared in 12c (or 19c) – but the one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a “TIMER:” line in the trace file, making it very easy to run grep -n “TIMER:” (or -nT as I recently discovered) against the trace file to pinpoint the issue very quickly.

Here’s an example from my “killer_parse.sql” query after setting “_fix_control”=’16923858:4′ (1e4 microseconds = 1/100th second) in an instance of 19c:


$ grep -nT TIMER or19_ora_21051.trc

16426  :TIMER:      bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758 :TIMER:     costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460 :TIMER:      bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584 :TIMER: CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974 :TIMER: Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716 :TIMER: Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036 :TIMER: Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207 :TIMER: Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208 :TIMER: Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948 :TIMER:      bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935 :TIMER: Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092 :TIMER: SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec

The closing 21.632 seconds (line 633,092) is largely the 7.756 seconds of final access path analysis (line 632,935) plus the 13.666 seconds of Cost-Based Transformation time (line 318,208); and that 13.666 seconds is mostly the 10.508 seconds for CBQT OR expansion (line 312,584) plus the 1.475 seconds for Complex View Merging (line 313,974) – so let’s try disabling OR expansion (alter session set "_no_or_expansion"=true;) and try again:


$ grep -nT TIMER or19_ora_22205.trc
14884  :TIMER:      bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228  :TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234  :TIMER: SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec

Not only was optimisation faster, the runtime was quicker too.

Warning – it’s not always that easy.


Library Cache Stats

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 9:36 am GMT Nov 17,2019

In response to a comment that one of my notes references a call to a package “snap_libcache”, I’ve posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and execute privilege granted to public. The package doesn’t report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode to convert indx numbers into names).


rem
rem     Script:         snap_11_libcache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001 (updated for 11gR2)
rem     Purpose:        Package to get snapshot start and delta of library cache stats
rem
rem     Notes
rem             Lots of changes needed for 11.2.x.x where x$kglst holds
rem             two types - TYPE (107) and NAMESPACE (84)
rem             
rem             Has to be run by SYS to create the package
rem             No longer needs a complex decode.
rem
rem     Usage:
rem             set serveroutput on size 1000000 format wrapped
rem             set linesize 144
rem             set trimspool on
rem             execute snap_libcache.start_snap
rem             -- do something
rem             execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
        procedure start_snap;
        procedure end_snap;
end;
/

create or replace package body snap_libcache as

        cursor c1 is
                select
                        indx,
                        kglsttyp        lib_type,
                        kglstdsc        name,
                        kglstget        gets,
                        kglstght        get_hits,
                        kglstpin        pins,
                        kglstpht        pin_hits,
                        kglstrld        reloads,
                        kglstinv        invalidations,
                        kglstlrq        dlm_lock_requests,
                        kglstprq        dlm_pin_requests,
--                      kglstprl        dlm_pin_releases,
--                      kglstirq        dlm_invalidation_requests,
                        kglstmiv        dlm_invalidations
                from x$kglst
                ;


        type w_type1 is table of c1%rowtype index by binary_integer;
        w_list1         w_type1;
        w_empty_list    w_type1;

        m_start_time    date;
        m_start_flag    char(1);
        m_end_time      date;

procedure start_snap is
begin

        m_start_time := sysdate;
        m_start_flag := 'U';
        w_list1 := w_empty_list;

        for r in c1 loop
                w_list1(r.indx).gets := r.gets;
                w_list1(r.indx).get_hits := r.get_hits;
                w_list1(r.indx).pins := r.pins;
                w_list1(r.indx).pin_hits := r.pin_hits;
                w_list1(r.indx).reloads := r.reloads;
                w_list1(r.indx).invalidations := r.invalidations;
        end loop;

end start_snap;


procedure end_snap is
begin

        m_end_time := sysdate;

        dbms_output.put_line('---------------------------------');
        dbms_output.put_line('Library Cache - ' || 
                                to_char(m_end_time,'dd-Mon hh24:mi:ss') 
        );

        if m_start_flag = 'U' then
                dbms_output.put_line('Interval:-      '  || 
                                trunc(86400 * (m_end_time - m_start_time)) ||
                                ' seconds'
                );
        else
                dbms_output.put_line('Since Startup:- ' || 
                                to_char(m_start_time,'dd-Mon hh24:mi:ss')
                );
        end if;

        dbms_output.put_line('---------------------------------');

        dbms_output.put_line(
                rpad('Type',10) ||
                rpad('Description',41) ||
                lpad('Gets',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Pins',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Invalidations',14) ||
                lpad('Reloads',10)
        );

        dbms_output.put_line(
                rpad('-----',10) ||
                rpad('-----',41) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('-------------',14) ||
                lpad('------',10)
        );

        for r in c1 loop
                if (not w_list1.exists(r.indx)) then
                    w_list1(r.indx).gets := 0;
                    w_list1(r.indx).get_hits := 0;
                    w_list1(r.indx).pins := 0;
                    w_list1(r.indx).pin_hits := 0;
                    w_list1(r.indx).invalidations := 0;
                    w_list1(r.indx).reloads := 0;
                end if;

                if (
                           (w_list1(r.indx).gets != r.gets)
                        or (w_list1(r.indx).get_hits != r.get_hits)
                        or (w_list1(r.indx).pins != r.pins)
                        or (w_list1(r.indx).pin_hits != r.pin_hits)
                        or (w_list1(r.indx).invalidations != r.invalidations)
                        or (w_list1(r.indx).reloads != r.reloads)
                ) then

                        dbms_output.put(rpad(substr(r.lib_type,1,10),10));
                        dbms_output.put(rpad(substr(r.name,1,41),41));
                        dbms_output.put(to_char( 
                                r.gets - w_list1(r.indx).gets,
                                        '999,999,990')
                        );
                        dbms_output.put(to_char( 
                                r.get_hits - w_list1(r.indx).get_hits,
                                        '999,999,990'));
                        dbms_output.put(to_char( 
                                (r.get_hits - w_list1(r.indx).get_hits)/
                                greatest(
                                        r.gets - w_list1(r.indx).gets,
                                        1
                                ),
                                        '999.0'));
                        dbms_output.put(to_char( 
                                r.pins - w_list1(r.indx).pins,
                                        '999,999,990')
                        );
                        dbms_output.put(to_char( 
                                r.pin_hits - w_list1(r.indx).pin_hits,
                                        '999,999,990'));
                        dbms_output.put(to_char( 
                                (r.pin_hits - w_list1(r.indx).pin_hits)/
                                greatest(
                                        r.pins - w_list1(r.indx).pins,
                                        1
                                ),
                                        '999.0'));
                        dbms_output.put(to_char( 
                                r.invalidations - w_list1(r.indx).invalidations,
                                        '9,999,999,990')
                        );
                        dbms_output.put(to_char( 
                                r.reloads - w_list1(r.indx).reloads,
                                        '9,999,990')
                        );
                        dbms_output.new_line;
                end if;

        end loop;

end end_snap;

begin
        select
                startup_time, 'S'
        into
                m_start_time, m_start_flag
        from
                v$instance;

end snap_libcache;
/


drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;


You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.
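
If you want to see the two classes for yourself (as SYS, since x$kglst isn’t exposed to ordinary users), here’s a quick sketch using the kglsttyp column that the package reports as lib_type:

select  kglsttyp, count(*)
from    x$kglst
group by
        kglsttyp
;

According to the notes in the script above, 11.2 should report 84 NAMESPACE rows and 107 TYPE rows.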
