Oracle Scratchpad

January 30, 2023

Lost Or-Expand

Filed under: 12c,19c,CBO,Oracle,Transformations,Upgrades — Jonathan Lewis @ 1:39 pm GMT Jan 30,2023

I’ve commented previously on the “new” cost-based Or-Expansion introduced in 12c to replace the “legacy” Concatenation transformation, and I’ve been re-running some of my concatenation scripts to see whether the most recent versions of the optimizer will use Or-expansion unhinted in places where I’ve previously had to use hints to force concatenation to appear.

The latest test has produced a surprising result – I’ve got an example where 19c and 21c will use concatenation when hinted with use_concat(), but will not obey the or_expand() hint, on the grounds that there’s “No valid predicate for OR expansion”.

It’s worth knowing that this could happen if you’re upgrading from 11g to 19c (as many people seem to be doing at present): you may find that some statements which used to use concatenation unhinted can’t switch to or-expansion, and won’t use concatenation unless hinted to do so.

tl;dr (the rest of the note is just a demonstration): when you upgrade from 11g to 19c (or later) you may find that some queries perform badly because they stop using the legacy “concatenation” operator but can’t be transformed by the new “cost-based Or Expand” operator, and need to be hinted with a use_concat() hint.

Here’s a statement I can use to demonstrate the effect – I’ll post the code to create the tables at the end of the note:

select  /*+ gather_plan_statistics */
        n1, n2, small_vc
from
        t1
where
        (n1 = 1 and n2 = 10000)
or      (n1 = 10000 and n2 = 1)
;

I’ve rigged the data so that there are 9,999 distinct values of n1 each with one row, and 10,001 rows with the value 10,000; and I’ve done the same with n2 – 9,999 distinct values with one row each and 10,001 rows with the value 10,000.

I’ve gathered stats that include histograms on n1 and n2 (separately) and I’ve created indexes on n1 and n2 (separately). As a result the ideal path for this query is to use the index on n1 to find rows for the first of the two compound predicates and use the index on n2 to find rows for the second of the predicates, which should be possible if the optimizer first transforms the query using OR-expansion.

You’ll notice I’ve included the hint to capture rowsource execution statistics, so I’ll be executing this query with various hints and reporting the actual execution plans and workload. Using 19.11.0.0 and 21.3.0.0 with no special parameter settings the execution plan that appeared used B-tree/bitmap conversion:

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |    45 (100)|      2 |00:00:00.01 |      50 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |    45   (3)|      2 |00:00:00.01 |      50 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |       |      1 |        |            |      2 |00:00:00.01 |      48 |
|   3 |    BITMAP OR                        |       |      1 |        |            |      1 |00:00:00.01 |      48 |
|   4 |     BITMAP AND                      |       |      1 |        |            |      1 |00:00:00.01 |      24 |
|   5 |      BITMAP CONVERSION FROM ROWIDS  |       |      1 |        |            |      1 |00:00:00.01 |       2 |
|*  6 |       INDEX RANGE SCAN              | T1_N1 |      1 |        |     1   (0)|      1 |00:00:00.01 |       2 |
|   7 |      BITMAP CONVERSION FROM ROWIDS  |       |      1 |        |            |      1 |00:00:00.01 |      22 |
|*  8 |       INDEX RANGE SCAN              | T1_N2 |      1 |        |    21   (0)|  10001 |00:00:00.01 |      22 |
|   9 |     BITMAP AND                      |       |      1 |        |            |      1 |00:00:00.01 |      24 |
|  10 |      BITMAP CONVERSION FROM ROWIDS  |       |      1 |        |            |      1 |00:00:00.01 |       2 |
|* 11 |       INDEX RANGE SCAN              | T1_N2 |      1 |        |     1   (0)|      1 |00:00:00.01 |       2 |
|  12 |      BITMAP CONVERSION FROM ROWIDS  |       |      1 |        |            |      1 |00:00:00.01 |      22 |
|* 13 |       INDEX RANGE SCAN              | T1_N1 |      1 |        |    21   (0)|  10001 |00:00:00.01 |      22 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("N1"=1)
   8 - access("N2"=10000)
  11 - access("N2"=1)
  13 - access("N1"=10000)

This is a fairly clever plan, but not what I wanted to test, so I set the hidden parameter ‘_b_tree_bitmap_plans’ to false for all subsequent tests. With this block in place the plan changed to a full tablescan:

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    99 (100)|      2 |00:00:00.01 |     349 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |    99   (2)|      2 |00:00:00.01 |     349 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter((("N1"=1 AND "N2"=10000) OR ("N1"=10000 AND "N2"=1)))
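(For reference, the “block” was nothing more sophisticated than a session-level setting of the hidden parameter:)

alter session set "_b_tree_bitmap_plans" = false;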


Definitely not what I wanted – so I added a hint telling the optimizer I wanted to see OR-expansion. The optimizer produced the same full tablescan! Since I had included the format option ‘hint_report’ in my call to dbms_xplan.display_cursor() I can show you the extra lines of output that explained why the optimizer “ignored” my hint:

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1
         U -  or_expand(@sel$1 (1) (2)) / No valid predicate for OR expansion

As you can see the hint was not “N – unresolved” or “E – Syntax error”. It was recognised, syntactically correct, and notionally applicable, but unused because the optimizer couldn’t see a way to use it (even though we can see an obvious way to use it).
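For anyone who wants to reproduce the test, the hinted statement and the reporting call looked something like this – the or_expand() parameters echo the form shown in the Hint Report above:

select  /*+ gather_plan_statistics or_expand(@sel$1 (1) (2)) */
        n1, n2, small_vc
from    t1
where   (n1 = 1 and n2 = 10000)
or      (n1 = 10000 and n2 = 1)
;

select * from table(dbms_xplan.display_cursor(format=>'allstats last hint_report'));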

Idle curiosity then prompted me to try the use_concat() hint, in the form: “use_concat(@sel$1 1)” – here’s the resulting execution plan:

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     4 (100)|      2 |00:00:00.01 |       7 |
|   1 |  CONCATENATION                       |       |      1 |        |            |      2 |00:00:00.01 |       7 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       4 |
|*  3 |    INDEX RANGE SCAN                  | T1_N2 |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       3 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|*  5 |    INDEX RANGE SCAN                  | T1_N1 |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N1"=10000)
   3 - access("N2"=1)
   4 - filter(("N2"=10000 AND (LNNVL("N2"=1) OR LNNVL("N1"=10000))))
   5 - access("N1"=1)

Exactly the plan I wanted to see from or_expand(), although the two branches appear in the reverse of the order I would expect from or_expand(). So the new cost-based or-expansion says there’s no valid predicate available for expansion, but the old, deprecated, heuristic concatenation transformation manages to find a disjunct (OR) that can be expanded.

Of course the next thing to do is look at the predicted cost and actual work (mostly buffer gets) that Oracle reported for each plan:

  • bitmap conversion: (cost 45, buffers 50)
  • full tablescan: (cost 99, buffers 349)
  • concatenation: (cost 4, buffers 7)

The predicted costs are actually fairly consistent with buffer gets (which, if I flushed the cache, would also be mostly disk reads). I had been fairly impressed that the optimizer picked bitmap conversion, but it would have been so much better if the optimizer could see that this (slightly complex) set of predicates included an opportunity for or-expansion.

Footnote 1

This query shows an example of disjunctive normal form (DNF), i.e. the where clause is a disjunct (OR) of conjuncts (ANDs). I understand that optimizers (in general) quite like this form, but there is another “nice” form, CNF (conjunctive normal form), where the where clause is a conjunct (AND) of disjuncts (ORs). So, for entertainment, I rewrote the where clause in conjunctive normal form. You have to be a little careful when you play the “normal form” game – it’s quite easy to get it wrong – so here are the steps I took (using A, B, C, D instead of my 4 atomic predicates):

(A and B) or (C and D) ==
        (A or (C and D)) and (B or (C and D)) ==               -- distributing the (A and B)
        (A or C) and (A or D) and (B or C) and (B or D)        -- distributing the two occurrences of (C and D)

Here’s the resulting query and unhinted execution plan after substituting the original predicates back into the symbolic presentation – A is “n1 = 1”, B is “n2 = 10000”, C is “n1 = 10000”, D is “n2 = 1” – and it probably gives you some idea why I played safe by starting with A, B, C, D:

select  /*+ gather_plan_statistics */
        n1, n2, small_vc
from
        t1
where
        (n1 = 1 or n2 = 1) 
and     (n1 = 1 or n1 = 10000) 
and     (n2 = 10000 or n2 = 1)
and     (n2 = 10000 or n1 = 10000)
;

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                 |      1 |        |     4 (100)|      2 |00:00:00.01 |       7 |
|   1 |  VIEW                                 | VW_ORE_BA8ECEFB |      1 |      2 |     4   (0)|      2 |00:00:00.01 |       7 |
|   2 |   UNION-ALL                           |                 |      1 |        |            |      2 |00:00:00.01 |       7 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1              |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       4 |
|*  4 |     INDEX RANGE SCAN                  | T1_N1           |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       3 |
|*  5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1              |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|*  6 |     INDEX RANGE SCAN                  | T1_N2           |      1 |      1 |     1   (0)|      1 |00:00:00.01 |       2 |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("N2"=10000)
   4 - access("N1"=1)
   5 - filter(("N1"=10000 AND LNNVL("N1"=1)))
   6 - access("N2"=1)

It’s the OR-expansion I wanted to see.

If I can do an algorithmic rewrite that produces the desired plan, the optimizer can be coded to do the rewrite – so I think you can expect to see this limitation removed at some future point. This plan, however, did still depend on my disabling B-tree/bitmap conversion; when I re-enabled B-tree/bitmap conversion the optimizer used it to produce the following plan:

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |     2 (100)|      2 |00:00:00.01 |       6 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |       |      1 |        |            |      2 |00:00:00.01 |       4 |
|   3 |    BITMAP OR                        |       |      1 |        |            |      1 |00:00:00.01 |       4 |
|   4 |     BITMAP CONVERSION FROM ROWIDS   |       |      1 |        |            |      1 |00:00:00.01 |       2 |
|*  5 |      INDEX RANGE SCAN               | T1_N1 |      1 |        |     1   (0)|      1 |00:00:00.01 |       2 |
|   6 |     BITMAP CONVERSION FROM ROWIDS   |       |      1 |        |            |      1 |00:00:00.01 |       2 |
|*  7 |      INDEX RANGE SCAN               | T1_N2 |      1 |        |     1   (0)|      1 |00:00:00.01 |       2 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter((INTERNAL_FUNCTION("N1") AND INTERNAL_FUNCTION("N2") AND ("N2"=10000 OR "N1"=10000)))
   5 - access("N1"=1)
   7 - access("N2"=1)

The thing to note in this case, though, is that the B-tree/bitmap conversion is logically the correct thing to choose when you compare the estimated cost and actual workload:

  • or-expansion: (cost 4, buffers 7)
  • bitmap conversion: (cost 2, buffers 6)

Footnote 2

Mohamed Houri wrote an article on Or-expansion a year ago explaining the possible settings for the hidden parameter “_optimizer_cbqt_or_expansion”, which can be set to off, on, linear, greedy or two_pass. I tried all the options to see if any of them would make a difference (apart from the obvious impact of “off”); they didn’t.
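If you want to repeat the experiment, the parameter is set like any other hidden parameter, e.g.:

alter session set "_optimizer_cbqt_or_expansion" = 'linear';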

Source code

If you want to do further experiments, here’s the script I used to generate the data:

rem
rem     Script:         concat_3b.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2008 / Jan 2023
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem             21.3.0.0
rem

create table t1
as
with generator as (
        select
                rownum  id
        from    dual
        connect by level <= 10000
)
select
        rownum                  n1,
        10000                   n2,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1
;

insert /*+ append */ into t1
select
        n2, n1, small_vc, padding
from
        t1
;

commit;

create index t1_n1 on t1(n1);
create index t1_n2 on t1(n2);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for columns size 100 n1, n2'
        );
end;
/


November 21, 2022

Row_number() sorts

Filed under: Oracle,Troubleshooting,Tuning,Upgrades — Jonathan Lewis @ 5:47 pm GMT Nov 21,2022

An email on the Oracle-L list server a few days ago described a performance problem that had appeared after an upgrade from 11.2.0.4 to 19c (19.15). A long-running statement (insert as select, running parallel 16) that had run to completion in 11g using about 20GB of temporary space (with 50GB read and written) had failed after running for a couple of hours in 19c and consuming 2.5TB of temporary space – even when the 11g execution plan was recreated through an SQL Profile.

When I took a look at the SQL Monitor report for 19c it turned out that a large fraction of the work done was in an operation called WINDOW CHILD PUSHED RANK which was there to deal with a predicate:

row_number() over(partition by t.ds_no, t.c_nbr order by c.cpcl_nbr desc) = 1

Checking the successful 11g execution, this operation had taken an input rowsource of 7 billion rows and produced an output rowsource of 70 million rows.

Checking the SQL Monitor report for the failed executions in 19c: the “pure” 19c plan had reported 7 billion input rows, 6GB memory allocated and 1TB of temp space at the same point; the plan with the 11g profile had reported 10 billion input rows but the operation had not yet reported any output rows, despite reporting 9GB as the maximum memory allocation and 1TB as the maximum temp space usage. (Differences in row counts were probably due to the reports being run for different dates.)

So, the question to the list server was: “is this a bug in 19c?”

Modelling

It’s a little unfortunate that I couldn’t model the problem in 19c at the time because my 19c VM kept crashing; but I built a very simple model to allow me to emulate the window sort and row_number() predicate in an 11g instance, then re-played the model in an instance of 21c.

For the model data I took 50 copies of the first 50,000 rows from the view all_objects to produce a table of 2,500,000 rows covering 35,700 blocks and 279 MB (55,000 blocks / 430 MB in 21c); then I ran the query below and reported its execution plan with a basic call to dbms_xplan.display_cursor():

select
        /*+ dynamic_sampling(0) */
        owner, max(object_name)
from    (
        select 
                /*+ no_merge */
                owner, object_name 
        from    (
                select 
                        owner, object_name,
                        row_number() over (partition by object_name order by object_type desc) orank 
                from 
                        t1
                )  where orank = 1
        )
group by 
        owner
order by
        owner
/

-------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       |       | 29491 (100)|          |
|   1 |  SORT GROUP BY             |      |     8 |   184 |       | 29491   (9)| 00:02:28 |
|   2 |   VIEW                     |      |  2500K|    54M|       | 28532   (6)| 00:02:23 |
|*  3 |    VIEW                    |      |  2500K|   112M|       | 28532   (6)| 00:02:23 |
|*  4 |     WINDOW SORT PUSHED RANK|      |  2500K|    95M|   124M| 28532   (6)| 00:02:23 |
|   5 |      TABLE ACCESS FULL     | T1   |  2500K|    95M|       |  4821   (8)| 00:00:25 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("ORANK"=1)
   4 - filter(ROW_NUMBER() OVER ( PARTITION BY "OBJECT_NAME" ORDER BY
              INTERNAL_FUNCTION("OBJECT_TYPE") DESC )<=1)

Oracle 21c produced the same execution plan – though the row estimate for the VIEW operations (numbers 2 and 3) was a more realistic 46,236 (num_distinct recorded for object_name) compared to the unchanged 2,500,000 from 11g. (Of course it should have been operation 4 that showed the first drop in cardinality.)

With my first build the timings weren’t what I expected: in 21c the query completed in 3.3 seconds, in 11g it took 11.7 seconds. Most of the difference was due to a large (55MB) spill to temp space that appeared in 11g but not in 21c. This would have been because my 11g session wasn’t allowed a large enough PGA, so I set the workarea_size_policy to manual and the sort_area_size to 100M, which looks as if it should have been enough to cover the 11g requirement – it wasn’t, and I had to grow the sort_area_size to 190MB before the 11g operation completed in memory, allocating roughly 155MB. By comparison 21c reported an increase of only 19MB of PGA to run the query, claiming that it needed only 4.7MB to handle the critical operation.
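For reference, the manual workarea settings take the following form – sort_area_size is specified in bytes, so 199,229,440 is the 190MB I finally needed:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 199229440;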

For comparison purposes here are the two run-time execution plans, with rowsource execution stats (which messed the timing up a little) and the column projection information:

Results for 11g

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 29491 (100)|      8 |00:00:03.96 |   35513 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |      8 |   184 |       | 29491   (9)|      8 |00:00:03.96 |   35513 |  3072 |  3072 | 2048  (0)|
|   2 |   VIEW                     |      |      1 |   2500K|    54M|       | 28532   (6)|  28575 |00:00:04.07 |   35513 |       |       |          |
|*  3 |    VIEW                    |      |      1 |   2500K|   112M|       | 28532   (6)|  28575 |00:00:03.93 |   35513 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|    95M|   124M| 28532   (6)|   1454K|00:00:08.82 |   35513 |   189M|  4615K|  168M (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|    95M|       |  4821   (8)|   2500K|00:00:10.85 |   35513 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("ORANK"=1)
   4 - filter(ROW_NUMBER() OVER ( PARTITION BY "OBJECT_NAME" ORDER BY INTERNAL_FUNCTION("OBJECT_TYPE") DESC )<=1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1) "OWNER"[VARCHAR2,30], MAX("OBJECT_NAME")[30]
   2 - "OWNER"[VARCHAR2,30], "OBJECT_NAME"[VARCHAR2,30]
   3 - "OWNER"[VARCHAR2,30], "OBJECT_NAME"[VARCHAR2,30], "ORANK"[NUMBER,22]
   4 - (#keys=2) "OBJECT_NAME"[VARCHAR2,30], INTERNAL_FUNCTION("OBJECT_TYPE")[19], "OWNER"[VARCHAR2,30], ROW_NUMBER() OVER ( PARTITION BY
       "OBJECT_NAME" ORDER BY INTERNAL_FUNCTION("OBJECT_TYPE") DESC )[22]
   5 - "OWNER"[VARCHAR2,30], "OBJECT_NAME"[VARCHAR2,30], "OBJECT_TYPE"[VARCHAR2,19]

It’s an interesting oddity, and possibly a clue about the excess memory and temp space, that the A-Rows column for the Window Sort operation reports 1,454K rows output when it surely ought to be the final 28,575 at that point. It’s possible to imagine a couple of strategies that Oracle might be following to do the window sort that would result in the excess volume appearing, and I’ll leave it to the readers to investigate that.

Results for 21c

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 48864 (100)|     12 |00:00:02.98 |   54755 |  54750 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |     12 |   852 |       | 48864   (1)|     12 |00:00:02.98 |   54755 |  54750 |  5120 |  5120 | 4096  (0)|
|   2 |   VIEW                     |      |      1 |  46236 |  3205K|       | 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |       |       |          |
|*  3 |    VIEW                    |      |      1 |  46236 |  6547K|       | 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|   131M|   162M| 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |  5297K|   950K| 4708K (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|   131M|       | 15028   (1)|   2500K|00:00:00.28 |   54755 |  54750 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("ORANK"=1)
   4 - filter(ROW_NUMBER() OVER ( PARTITION BY "OBJECT_NAME" ORDER BY INTERNAL_FUNCTION("OBJECT_TYPE") DESC )<=1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1; rowset=256) "OWNER"[VARCHAR2,128], MAX("OBJECT_NAME")[128]
   2 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128]
   3 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128], "ORANK"[NUMBER,22]
   4 - (#keys=2; rowset=256) "OBJECT_NAME"[VARCHAR2,128], "OBJECT_TYPE"[VARCHAR2,23], "OWNER"[VARCHAR2,128], ROW_NUMBER() OVER ( PARTITION BY
       "OBJECT_NAME" ORDER BY INTERNAL_FUNCTION("OBJECT_TYPE") DESC )[22]
   5 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128], "OBJECT_TYPE"[VARCHAR2,23]

In this case we see the A-rows from the Window Sort meeting our expectations – but that may be a beneficial side effect of the operation completing in memory.

Given the dramatically different demands for memory for a query that ought to do the same thing in both versions it looks as if 21c may be doing something clever that 11g doesn’t do, or maybe doesn’t do very well, or maybe tries to do but has a bug that isn’t dramatic enough to be obvious unless you’re looking closely.

Modelling

Here’s a script that I used to build the test data, with scope for a few variations in testing. You’ll notice that the “create table” statement includes an “order by” clause that is close to the sorting requirement of the over() clause that appears in the query. The results I’ve shown so far were for data that didn’t have this clause in place.

rem
rem     Script:         analytic_sort_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2022
rem
rem     Last tested
rem             21.3.0.0
rem             19.11.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 nologging 
as
select 
        ao.*
from
        (select * from all_objects where rownum <= 50000) ao,
        (select rownum from dual connect by rownum <= 50)
order by
        object_name, object_type -- desc
/

--
--      Stats collection to get histograms
--

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 254'
        );
end;
/

--
-- reconnect here to maximise visibility of PGA allocation
--

connect xxxxxxxx/xxxxxxxx

set linesize 180
set trimspool on
set tab off

-- alter session set workarea_size_policy = manual;
-- alter session set sort_area_size = 199229440;

alter session set events '10046 trace name context forever, level 8';
-- alter session set statistics_level = all;
-- alter session set "_rowsource_execution_statistics"= true;

spool analytic_sort_2

select
        /*  monitoring */
        owner, max(object_name)
from    (
        select 
                /*+ no_merge */
                owner, object_name 
        from    (
                select 
                        owner, object_name,
                        row_number() over (partition by object_name order by object_type desc) orank 
                from 
                        t1
                )  where orank = 1
        )
group by 
        owner
order by
        owner
/

select * from table(dbms_xplan.display_cursor(format=>'cost bytes allstats last projection'));

alter session set events '10046 trace name context off';
alter session set "_rowsource_execution_statistics"= false;
alter session set statistics_level = typical;
alter session set workarea_size_policy = auto;

spool off

The results I’m going to comment on now are the ones I got after running the script with the order by clause in place, then reconnecting and flushing the shared pool before repeating the second half of the script (i.e. without recreating the table).
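(The steps in between were nothing more sophisticated than a reconnect and a flush, along the lines of:)

connect xxxxxxxx/xxxxxxxx
alter system flush shared_pool;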

In 11g, going back to the automatic workarea sizing, the session used 37MB of memory and then spilled (only) 3MB to temp. The run time was approximately 3 seconds – which is a good match for the “unsorted” 21c run time. As with the original tests, the value reported in A-Rows is larger than we would expect (in this case suspiciously close to twice the correct value – but that’s more likely to be a coincidence than a clue). Interestingly, when I switched to the manual workarea_size_policy and set the sort_area_size to 190MB Oracle said “that’s the optimum memory” and used nearly all of it to complete in memory – for any value less than that (even down to 5MB) Oracle spilled just 3MB to disk in a one-pass operation. So it looks as if Oracle “knows” it doesn’t need to sort the whole data set, but still uses as much memory as is available to do something before it starts to get clever.

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 29491 (100)|      8 |00:00:01.76 |   35523 |   2145 |    331 |       |       |          |         |
|   1 |  SORT GROUP BY             |      |      1 |      8 |   184 |       | 29491   (9)|      8 |00:00:01.76 |   35523 |   2145 |    331 |  2048 |  2048 | 2048  (0)|         |
|   2 |   VIEW                     |      |      1 |   2500K|    54M|       | 28532   (6)|  28575 |00:00:02.00 |   35523 |   2145 |    331 |       |       |          |         |
|*  3 |    VIEW                    |      |      1 |   2500K|   112M|       | 28532   (6)|  28575 |00:00:01.83 |   35523 |   2145 |    331 |       |       |          |         |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|    95M|   124M| 28532   (6)|  57171 |00:00:02.10 |   35523 |   2145 |    331 |  2979K|   768K|   37M (1)|    3072 |
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|    95M|       |  4821   (8)|   2500K|00:00:11.84 |   35513 |   1814 |      0 |       |       |          |         |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In 21c there’s essentially no difference between the sorted and unsorted tests, which suggests that with my data the session had been able to apply its optimisation strategy at the earliest possible moment rather than waiting until it had no alternative but to spill to disc.

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 48864 (100)|     12 |00:00:00.98 |   54753 |  54748 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |     12 |   852 |       | 48864   (1)|     12 |00:00:00.98 |   54753 |  54748 |  4096 |  4096 | 4096  (0)|
|   2 |   VIEW                     |      |      1 |  46236 |  3205K|       | 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |       |       |          |
|*  3 |    VIEW                    |      |      1 |  46236 |  6547K|       | 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|   131M|   162M| 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |  5155K|   940K| 4582K (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|   131M|       | 15028   (1)|   2500K|00:00:00.42 |   54753 |  54748 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Bug description: possibly

Given the way that 11g reports a very small spill to disc (which stays fairly constant in size no matter how large or small the available PGA allocation is) when the input data is sorted to help the over() clause, and given how large the spill to disc can become when the data is not sorted, I feel that Oracle has an optimisation that discards input rows early in the analytic window sort. But we also have some evidence of a flaw in the code in versions prior to 21c that means Oracle fails to re-use memory that becomes available from rows that have been discarded.

This means the OP’s problem may have been just bad luck in terms of available memory and (relatively) tiny variations in demands for space between the 11g and 19c instances perhaps due to differences in the quantity or distribution of data.

Although the impact was dramatic in this case, a query that is supposed to return 70 million rows (irrespective of how many it starts with) is an extreme case, and one that deserves a strong justification and a significant investment in time spent on finding cunning optimisation strategies.

So maybe this is a bug that doesn’t usually get noticed that will go away on an upgrade to 21c; and maybe there’s a backport and patch already available if you can find a bug number in the 21c patch release notes.

Strategy

I’ve said in the past that if you’re using analytic functions you ought to minimise the size of the data you’re processing before you apply the analytic part. Another step that can help is to make sure you’ve got the data into a (fairly well) sorted order before you reach the analytic part.
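As a sketch of the first of those suggestions using my demo query – if (for example) you knew that some subset of the data could never contribute to the result you would filter it out before the analytic function gets to work; the owner predicate below is purely hypothetical:

select  owner, object_name
from    (
        select
                owner, object_name,
                row_number() over (partition by object_name order by object_type desc) orank
        from    t1
        where   owner != 'SYS'          -- hypothetical filter, applied before the window sort
        )
where   orank = 1
/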

In the case of versions of Oracle prior to 21c, it also seems to make sense (if you can arrange it) to minimise the amount of memory the session is allowed to use for a sort operation, as this will reduce the CPU used by the session and avoid grabbing excess redundant memory that could be used more effectively by other sessions.

Addendum

Just before publishing I found a way of keeping my 19.11.0.0 instance alive long enough to run the tests, then also ran them on an instance of 12.2.0.1. Both versions showed the same pattern of doing a large allocation of memory and large spill to disc when the data was not sorted, and a large allocation of memory but a small spill to disc when the data was sorted.

As a little sanity check I also exported the 19c data and imported it to 21c, in case it was a simple variation in the data that allowed 21c to operate more efficiently than 19c. The change in data made no difference to the way in which 21c handled it: in both cases it called for a small allocation of memory with no spill to disc.

July 26, 2022

Hinting

Filed under: Execution plans,Hints,Oracle,Upgrades — Jonathan Lewis @ 1:05 pm BST Jul 26,2022

This is just a lightweight note on the risks of hinting (which might also apply occasionally to SQL Plan Baselines). I’ve just rediscovered a little script I wrote (or possibly last tested/edited) in 2007 with a solution to the problem of how to structure a query to use an “index fast full scan” (index_ffs) followed by a “table access by rowid” – a path that is not available to the optimizer for select statements (even when hinted), though it became available (sometimes inappropriately) for deletes and updates in 12c.

It’s possible that this method was something I designed for a client using 9i, but the code still behaves as expected in 11.1.0.7. Here’s the setup and query:

rem
rem     Script:         wildcard.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2007
rem
rem     Last tested
rem             11.1.0.7
rem

create table t1
as
select
        cast(dbms_random.string('a',8) as varchar2(8))  str,
        rpad('x',100)                                   padding
from
        all_objects
where
        rownum <= 10000
;

alter table t1 modify str not null;
create index t1_i1 on t1(str);

begin
        dbms_stats.gather_table_stats(
                user, 't1', 
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

explain plan for
select  
        /*+ 
                qb_name(main) 
                unnest(@subq1)
                leading(@sel$2fc9d0fe t1@subq1 t1@main)
                index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
                use_nl(@sel$2fc9d0fe t1@main)
                rowid(@sel$2fc9d0fe t1@main)
        */
        * 
from    t1 
where   rowid in (
                select  /*+ qb_name(subq1) */
                        rowid 
                from    t1 
                where   upper(str) like '%CHD%'
)
;

select * from table(dbms_xplan.display(format=>'outline alias'));

As you can see, I’ve got an IN subquery (query block subq1) to generate a list of rowids from the table for the rows that match my predicate and then my main query (query block main) selects the rows identified by that list.

I’ve added hints to the main query block to unnest the subquery (which will result in a new query block appearing) then do a nested loop from the t1 referenced in subq1 (t1@subq1) to the t1 referenced in main (t1@main), starting with an index fast full scan of t1@subq1 and accessing t1@main by rowid.

The unnest hint was actually redundant – unnesting happened automatically and uncosted. You’ll notice all the other hints are directed at a query block called sel$2fc9d0fe which is the resulting query block name when subq1 is unnested into main.

Here’s the resulting execution plan showing, amongst other details in the Outline Data, that this really was running on 11.1.0.7:

Plan hash value: 1953350015

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |   500 | 65500 |   509   (0)| 00:00:07 |
|   1 |  NESTED LOOPS               |       |   500 | 65500 |   509   (0)| 00:00:07 |
|*  2 |   INDEX FAST FULL SCAN      | T1_I1 |   500 | 10500 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1    |     1 |   110 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$2FC9D0FE
   2 - SEL$2FC9D0FE / T1@SUBQ1
   3 - SEL$2FC9D0FE / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      USE_NL(@"SEL$2FC9D0FE" "T1"@"MAIN")
      LEADING(@"SEL$2FC9D0FE" "T1"@"SUBQ1" "T1"@"MAIN")
      ROWID(@"SEL$2FC9D0FE" "T1"@"MAIN")
      INDEX_FFS(@"SEL$2FC9D0FE" "T1"@"SUBQ1" ("T1"."STR"))
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      UNNEST(@"SUBQ1")
      OUTLINE_LEAF(@"SEL$2FC9D0FE")
      ALL_ROWS
      DB_VERSION('11.1.0.7')
      OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(UPPER("STR") LIKE '%CHD%')

Running the test under 19.11.0.0 (and adding the hint_report option to the dbms_xplan format) this is the resulting plan:

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   500 | 55000 |    47   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   500 | 55000 |    47   (0)| 00:00:01 |
--------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$48592A03 / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$48592A03" "T1"@"MAIN")
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      ELIMINATE_SQ(@"SUBQ1")
      OUTLINE_LEAF(@"SEL$48592A03")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(UPPER("T1"."STR") LIKE '%CHD%')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 5 (U - Unused (1), N - Unresolved (4))
---------------------------------------------------------------------------
   0 -  SEL$2FC9D0FE
         N -  index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
         N -  leading(@sel$2fc9d0fe t1@subq1 t1@main)
         N -  rowid(@sel$2fc9d0fe t1@main)
         N -  use_nl(@sel$2fc9d0fe t1@main)

   0 -  SUBQ1
         U -  unnest(@subq1)

Clearly the plan has changed – but the hint report says that Oracle has NOT ignored my hints; instead it tells us that they cannot be resolved. If we check the Query Block Name / Object Alias list and the Outline Data we see why: there is no query block named @sel$2fc9d0fe, and the reason it doesn’t exist is that the optimizer has applied a previously non-existent transformation, ‘eliminate_sq’ (which appeared in 12c), to subq1.

So, on the upgrade from 11.1.0.7 to 19.11.0.0 an SQL Plan Baseline that forced the path we wanted would no longer work (though it might be reported as “applied”) because there is a new transformation that we had (necessarily) not been blocking.

The solution is easy: add the hint no_eliminate_sq(@subq1) to our query and try again.

We still get the full tablescan even though the hint report tells us that the optimizer used the new hint. Here’s the new Outline Data, and the Hint Report showing that the hint was used.

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$8C456B9A" "T1"@"SUBQ1")
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      UNNEST(@"SUBQ1")
      OUTLINE(@"SEL$2FC9D0FE")
      ELIMINATE_JOIN(@"SEL$2FC9D0FE" "T1"@"MAIN")
      OUTLINE_LEAF(@"SEL$8C456B9A")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(UPPER("STR") LIKE '%CHD%')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 7 (U - Unused (4))
---------------------------------------------------------------------------
   0 -  SUBQ1
           -  no_eliminate_sq(@subq1)
           -  qb_name(subq1)

   1 -  SEL$8C456B9A
         U -  leading(@sel$2fc9d0fe t1@subq1 t1@main)
           -  qb_name(main)

   1 -  SEL$8C456B9A / T1@MAIN
         U -  rowid(@sel$2fc9d0fe t1@main)
         U -  use_nl(@sel$2fc9d0fe t1@main)

   1 -  SEL$8C456B9A / T1@SUBQ1
         U -  index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))

But now the Outline Data is showing us a new hint – eliminate_join(@sel$2fc9d0fe t1@main). So we’re not losing the subquery, but we’ve lost the join, thanks to a transformation that was actually available in 10.2 but presumably couldn’t be applied to our code pattern until at least 12.1. So let’s try again, adding in another blocking hint: no_eliminate_join(@sel$2fc9d0fe t1@main).

We still get the full tablescan – and this time the Outline Data tells us that the problem hint is now eliminate_join(@sel$2fc9d0fe t1@subq1) – which we might have anticipated, and can now address by adding no_eliminate_join(@sel$2fc9d0fe t1@subq1) to the query and having one more go. This finally gets us back to the path that we had previously seen in 11.1.0.7.
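Pulling the pieces together, this is the hint set that finally reproduced the 11.1.0.7 path in 19.11.0.0:

select
        /*+
                qb_name(main)
                unnest(@subq1)
                no_eliminate_sq(@subq1)
                no_eliminate_join(@sel$2fc9d0fe t1@main)
                no_eliminate_join(@sel$2fc9d0fe t1@subq1)
                leading(@sel$2fc9d0fe t1@subq1 t1@main)
                index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
                use_nl(@sel$2fc9d0fe t1@main)
                rowid(@sel$2fc9d0fe t1@main)
        */
        *
from    t1
where   rowid in (
                select  /*+ qb_name(subq1) */
                        rowid
                from    t1
                where   upper(str) like '%CHD%'
)
;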

(In fact, adding the hint optimizer_features_enable(‘11.1.0.7’) to the original set of hints would have been – in this case, at least – enough to get the original execution plan.)

Summary

This note is just another simple demonstration that hints do not guarantee plan stability across upgrades – and that it can take a few experimental steps to discover what’s new in the optimizer that is making your previous set of hints ineffective.

Typically the problem will be the availability of new transformations (or enhancements to existing transformations) which manage to invalidate the old hints before the optimizer has had a chance to consider them. This is (to some extent) why an SQL Plan Baseline always captures the value of optimizer_features_enable as part of the baseline.

April 11, 2022

Index Upgrade

Filed under: 19c,Indexing,Oracle,Statistics,Upgrades — Jonathan Lewis @ 4:55 pm BST Apr 11,2022

Sometimes wishes come true and in 19c – with fix_control QKSFM_DBMS_STATS_27268249 – one of mine did. The description of this fix (which is enabled by default) is: “use approximate ndv for computing leaf blocks and distinct keys”.

Here’s a key item in the output file from running tkprof against the trace file generated by a simple call to:

execute dbms_stats.gather_index_stats(user,'t1_i2')

You’ll notice that I haven’t included a value for the estimate_percent parameter in the call, which means it will default to dbms_stats.auto_sample_size (unless I’ve done something a little silly with set_table_prefs or set_global_prefs). The index is a two-column index on t1(x1, x2) with a size of roughly 16,000 blocks on a table of approximately 6 million rows.

select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,
  approx_count_distinct(sys_op_lbid(106818,'L',t.rowid)) as nlb,
  approx_count_distinct(sys_op_combined_hash("X1","X2")) as ndk,
  sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
 "TEST_USER"."T1" t where "X1" is not null or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE APPROX (cr=15821 pr=0 pw=0 time=2812116 us starts=1)
   6018750    6018750    6018750   INDEX FAST FULL SCAN T1_I2 (cr=15821 pr=0 pw=0 time=894658 us starts=1 cost=2117 size=192000000 card=6000000)(object id 106818)


The first point of interest is the appearance of the approx_count_distinct() function calls used for the nlb (number of leaf blocks) and ndk (number of distinct keys) columns. It’s also worth noting that the ndk value is derived from a call to sys_op_combined_hash() applied to the two base columns, which means the number of distinct keys for a multi-column index is calculated in exactly the same way as the number of distinct values for a column group.
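(If you want to confirm the equivalence you can create the matching column group, gather stats on it, and compare the num_distinct of the resulting hidden column with the distinct_keys of the index – a sketch:)

select dbms_stats.create_extended_stats(user, 't1', '(x1,x2)') extension_name from dual;

execute dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all hidden columns size 1')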

There are two more important details though: first, that the mechanism uses a fast full scan of the whole index; secondly, that the size of this index is about 16,000 blocks.

A final (unrelated) point is the little reminder in the query’s hints that 19c includes an automatic indexing mechanism. It’s easy to forget such things when your overnight batch job takes longer than usual.

For comparison purposes, the following shows the effect of disabling the feature:

alter session set "_fix_control"='27268249:0';


select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,count(distinct sys_op_lbid(106818,
  'L',t.rowid)) as nlb,count(distinct hextoraw(sys_op_descend("X1")
  ||sys_op_descend("X2"))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as
  clf
from
 "TEST_USER"."T1" sample block (  7.0114135742,1)  t where "X1" is not null
  or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT GROUP BY (cr=1132 pr=0 pw=0 time=460459 us starts=1)
    421761     421761     421761   INDEX SAMPLE FAST FULL SCAN T1_I2 (cr=1132 pr=0 pw=0 time=67203 us starts=1 cost=150 size=8413700 card=420685)(object id 106818)

The calculations for nlb and ndk are simple count()s, and the thing that ndk counts is a messy concatenation of the columns – hextoraw(sys_op_descend(“X1”) || sys_op_descend(“X2”)) – that Oracle has used to ensure that counts for values like ‘AB’||’CD’ and ‘ABC’||’D’ don’t get combined.

Perhaps most significant for some people is that the execution plan shows us that the index fast full scan was a SAMPLE, analyzing only (a fairly typical) 1,132 blocks out of 16,000 and 400,000 rows out of 6 million. The extra workload of the new full-scan approach looks a bit of a threat by comparison, of course; but there may be a few critical indexes where that extra workload will stop random variations in execution plans when it really matters.

As with so many details of Oracle there are likely to be cases where the new method is hugely beneficial, and some where it’s a nuisance, so it’s good to know that you can be a little selective about when it gets used.

Footnote

Don’t forget that it’s a good idea to think about setting the table preference “table_cached_blocks” to allow Oracle to produce a better value for the clustering_factor. This is another mechanism that increases the CPU required to gather index stats.
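Setting the preference is a one-line call – the value 16 below is purely illustrative, not a recommendation:

execute dbms_stats.set_table_prefs(user, 't1', 'table_cached_blocks', '16')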

It’s an odd little detail that the fix control appeared in 19.3.0.0, according to my archived copies of v$system_fix_control, and certainly wasn’t in 18.3.0.0 – but the entry in the 19.3.0.0 view lists it among controls that have been available since Oracle 8.0.0.0!

March 22, 2022

Upgrade Surprise

Filed under: 19c,Bugs,Oracle,Transformations,Upgrades — Jonathan Lewis @ 10:04 am GMT Mar 22,2022

Here’s a little surprise that showed up in the most recent (March 2022) article that I sent to Simpletalk for the series on transformations. I had been using 19c (19.11.0.0) to create and run my little demos but the editor had used 12.2.0.1 to check the examples and questioned a comment I had made about a “default plan”.

Here’s the query in question. I was using the emp and dept tables from the Scott schema to demonstrate a point about subquery execution:

rem
rem     Script:         unnest_demo_simpletalk_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2022
rem 

break on deptno skip 1

select
        /*+ 
                qb_name(main)
                gather_plan_statistics 
        */
        *
from    emp e1
where   e1.sal + nvl(e1.comm,0) > (
                select  /*+ qb_name(subq) */
                        avg(e2.sal + nvl(e2.comm,0))
                from    emp e2
                where   e2.deptno = e1.deptno
        )
order by
        e1.deptno, e1.empno
/

As you can see, I’ve used a correlated aggregate subquery to report all employees who earned more than the average for their department, where “earnings” is calculated as the sum of salary and commission.

Here’s the plan I got when I ran this query under 19c:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      6 |00:00:00.01 |      24 |       |       |          |
|   1 |  SORT ORDER BY       |      |      1 |      1 |      6 |00:00:00.01 |      24 |  2048 |  2048 | 2048  (0)|
|*  2 |   FILTER             |      |      1 |        |      6 |00:00:00.01 |      24 |       |       |          |
|   3 |    TABLE ACCESS FULL | EMP  |      1 |     14 |     14 |00:00:00.01 |       6 |       |       |          |
|   4 |    SORT AGGREGATE    |      |      3 |      1 |      3 |00:00:00.01 |      18 |       |       |          |
|*  5 |     TABLE ACCESS FULL| EMP  |      3 |      5 |     14 |00:00:00.01 |      18 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E1"."SAL"+NVL("E1"."COMM",0)>)
   5 - filter("E2"."DEPTNO"=:B1)

When my editor ran the query on 12.2.0.1, and when I started up an instance of 12.2.0.1 and ran the query, the plan looked like this:

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      6 |00:00:00.02 |      29 |      6 |       |       |          |
|*  1 |  FILTER              |      |      1 |        |      6 |00:00:00.02 |      29 |      6 |       |       |          |
|   2 |   SORT GROUP BY      |      |      1 |      4 |     14 |00:00:00.02 |      29 |      6 |  2048 |  2048 | 2048  (0)|
|*  3 |    HASH JOIN         |      |      1 |     70 |     70 |00:00:00.02 |      29 |      6 |  1922K|  1922K| 1053K (0)|
|   4 |     TABLE ACCESS FULL| EMP  |      1 |     14 |     14 |00:00:00.01 |       7 |      6 |       |       |          |
|   5 |     TABLE ACCESS FULL| EMP  |      1 |     14 |     14 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$9E18A930")
      MERGE(@"SEL$AA0D0E02" >"SEL$B4BE209F")
      OUTLINE(@"SEL$B4BE209F")
      UNNEST(@"SUBQ")
      OUTLINE(@"SEL$AA0D0E02")
      OUTLINE(@"SEL$D6166863")
      OUTLINE(@"SUBQ")
      OUTLINE(@"MAIN")
      FULL(@"SEL$9E18A930" "E2"@"SUBQ")
      FULL(@"SEL$9E18A930" "E1"@"MAIN")
      LEADING(@"SEL$9E18A930" "E2"@"SUBQ" "E1"@"MAIN")
      USE_HASH(@"SEL$9E18A930" "E1"@"MAIN")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("E1"."SAL"+NVL("E1"."COMM",0)>SUM("E2"."SAL"+NVL("E2"."COMM",0))/COUNT("E2"."SAL"+NVL("E2"."COMM",0))
              )
   3 - access("E2"."DEPTNO"="E1"."DEPTNO")

(I’ve added in a request for the ‘outline’ in the display_cursor() format.) The correlated subquery has been unnested and the resulting inline view has been subject to complex view merging. It was only at this point that I realised that the 19c plan was a little surprising and not what I should have expected.

After checking that the configuration and statistics (including the system stats) were the same on the two instances I re-ran the query on 12c with the /*+ no_unnest */ hint to make it use the plan that 19c had produced and I found (as expected) that the plan with filter subquery produced by 19c had a higher cost than the unnesting plan produced by 12c.

Next I re-ran the query on 19c with the /*+ unnest */ hint to make it use the plan that 12c had produced – but it didn’t! 19c “ignored” the hint and carried on using the filter subquery plan. It did, however, supply the following extra information when I added the ‘hint_report’ option to the display_cursor() format:

Total hints for statement: 3 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  MAIN
           -  qb_name(main)

   4 -  SUBQ
         U -  unnest / Failed basic validity checks
           -  qb_name(subq)

The code in 19c thinks that it would be illegal to unnest the subquery that 12c was unnesting (does this mean that some people may be getting wrong results in 12c?). So I checked the 10053 (CBO) trace file to see if there was any further information there that would “explain” the problem. This is what I found:

SU: Checking validity for Null Accepting Semi JoinUnnesting for query block MAIN(#1)
SU:   Checking validity of unnesting subquery SUBQ (#2)
SU:     SU bypassed: More than 1 column in connect condition.
SU:     SU bypassed: Failed basic validity checks.
SU:   Validity checks failed.

The reference to “Null accepting” looks a little suspect but prompted a few experiments (modifying the data to eliminate nulls, adding not null declarations to columns, simplifying the query etc.) that suggested that the problem was essentially that the optimizer did not want to unnest when the comparison was with the expression (sal + comm) regardless of the operator, and even when all the relevant columns had been populated, declared not null, and the nvl() function had been removed.
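
For reference, this is roughly the simplified shape I ended up testing – a sketch rather than the exact final script, assuming sal and comm have been declared not null so the nvl() calls can go – and even this refused to unnest:

select  /*+ qb_name(main) */
        e1.*
from    emp e1
where   e1.sal + e1.comm > (
                select  /*+ qb_name(subq) unnest */
                        avg(e2.sal + e2.comm)
                from    emp e2
                where   e2.deptno = e1.deptno
        )
;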

It doesn’t seem reasonable in this case, but possibly the block is a generic strategy to avoid errors in some more subtle cases, and perhaps the block will be refined and relaxed in future versions. (Or maybe it’s a bug that the wrong test is being called at this point – there was nothing in the query requiring “null acceptance” by the time I got through the last test.)

I did find a workaround that you could use to avoid any code rewrite:

alter table emp add nvl_earnings 
        invisible 
        generated always as (sal + nvl(comm,0)) 
        virtual
;

There seems to be no good reason why this should work – but it does. The subquery unnests and the Predicate Information in the plan doesn’t give any clue that it’s using a virtual column.
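
If you want to confirm that the virtual column is in place (and invisible) you can check the data dictionary – a quick sketch; expect virtual_column = 'YES' and, for an invisible column, hidden_column = 'YES':

select  column_name, virtual_column, hidden_column
from    user_tab_cols
where   table_name = 'EMP'
order by
        internal_column_id
;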

Summary:

When you upgrade from 12c there are some queries involving correlated subqueries that will no longer unnest the subquery. This may have a significant impact on performance, and it may not be possible to bypass the problem unless you rewrite the query to do a manual unnest – although I did find a virtual column workaround for my specific example. So far I’ve tested the query on 19.11.0.0 and 21.3.0.0; the behaviour is the same in both versions.

Footnote:

After failing to find anything on MOS about the problem I emailed a draft of this note to Nigel Bayliss at Oracle – who did find a promising match on MOS.

The failure to unnest may be the consequence of the fix for bug 30593046: “A query having a scalar subquery returned a wrong result”. The fix was introduced in 19.9.0.0 but was too restrictive, leading to the creation of bug 33325981: “Query Loses Subquery Unnesting From Plan in 19.9 and Above”.

The fix for 33325981 was distributed in 19.13.0.0 and 21.4.0.0 (plus a couple of earlier RURs, with patches available for various versions back to 12.2.0.1). Unfortunately the newer fix still doesn’t go far enough in reducing the restrictions and my example still doesn’t unnest.
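
If you want to check whether your system already has the relevant patch installed, the patch registry is the obvious first place to look. A sketch – the exact description text will depend on the patch you’ve been given:

select  patch_id, status, action, description
from    dba_registry_sqlpatch
where   description like '%33325981%'
order by
        action_time
;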

Make sure you check any code that depends on “expression-based” subquery unnesting before you upgrade to 19.9, as it may change plan, which may make a difference to performance and create a need for a workaround.

July 9, 2021

19c tweak 2

Filed under: 19c,CBO,Oracle,Performance,Upgrades — Jonathan Lewis @ 4:40 pm BST Jul 9,2021

Trying to find out why a plan had changed in the upgrade from 11g to 19c I came across this cunning little tweak that must have appeared in the 19c timeline. I’ll start with a simple query, then the execution plans (autotrace traceonly) from 19.11.0.0 – first with the parameter optimizer_features_enable set to 18.1.0, then with it set to 19.1.0. The table t1 is a copy of the first 10,000 rows of view all_objects:

SQL> alter session set optimizer_features_enable = '18.1.0';
SQL> select count(data_object_id) from t1 where f1(object_id) = 'Y';

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     7 |    38  (37)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   100 |   700 |    38  (37)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("F1"("OBJECT_ID")='Y')



SQL> alter session set optimizer_features_enable = '19.1.0';
SQL> select count(data_object_id) from t1 where f1(object_id) = 'Y';

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     7 |    26   (8)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     5 |    35 |    26   (8)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATA_OBJECT_ID" IS NOT NULL AND "F1"("OBJECT_ID")='Y')

Optimising with the optimizer features set back to 18.1 the cardinality estimate is 100 (that’s 1% of the rows in the table, the standard guess for “function() = constant”) with a cost of 38, of which 37% is CPU cost.

Running with the optimizer features of 19c enabled, the cardinality estimate drops to 5 and the cost drops to 26, with CPU making up 8% of the cost. Where does the difference come from?

As ever you have to look at the Predicate Information. Running as 18c Oracle has decided to call my function for every row in the table; running as 19c Oracle has decided that since I’m counting non-null entries of column data_object_id it need only call the function when data_object_id is not null, so it’s introduced an extra predicate to make that happen, and that extra predicate has reduced the cardinality and cost estimates. (In my sample data set there are 9,456 nulls and 544 distinct values for data_object_id – so the difference in workload is significant. And 1% of 544 is 5, which explains the cardinality estimate.)

This looks like fix control 24761824 “add is not null for high null column in set function” introduced in 19.1.0. The description suggests that the feature will only be used in cases where the column is “often” null, but we have no clue, yet, about what “often” means. [Update 12th July: thanks to comment #1 below from Andi Schloegl we now have a pretty good idea that the break point is at 5%.]
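
If you want to check whether your version exposes this fix control – or see what the plan looks like without it – you can query v$system_fix_control and disable the fix at session level. A sketch (disabling a fix control is, of course, something you’d only do for investigation):

select  bugno, value, optimizer_feature_enable, description
from    v$system_fix_control
where   bugno = 24761824
;

alter session set "_fix_control" = '24761824:0';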

Warning: this means that there may be cases where an execution plan changes on an upgrade to 19c because a tablescan has become cheaper or a cardinality estimate has been reduced.

Just as a confirmation of how the change in plan is echoing reality, here are the execution plans pulled from memory after executing them with the statistics_level set to all to enable collection of the rowsource execution statistics. First the 18c plan, then the 19c plan:

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:11.14 |    1780K|
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:11.14 |    1780K|
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    100 |  10000 |00:00:11.14 |    1780K|
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("F1"("OBJECT_ID")='Y')



-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.46 |   97010 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.46 |   97010 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |      5 |    544 |00:00:00.46 |   97010 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("DATA_OBJECT_ID" IS NOT NULL AND "F1"("OBJECT_ID")='Y'))

As you can see, the number of buffer gets has dropped from 1,780K in 18c to 97K in 19c (mainly because the function results in a tablescan of a table of 178 blocks and the number of calls has dropped from 10,000 to 544), and the run time has dropped from 11.14 seconds to 0.46 seconds.

Code

If you want to run and refine this test, here’s the code I used to generate the data.

rem
rem     Script:         19c_not_null_tweak.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem

create table t1 as select * from all_objects where rownum <= 10000;
create table t2 as select * from t1;

create or replace function f1(i_obj in number) return varchar2
is
        n1 number;
begin
        select count(*) into n1 from t2 where object_id = i_obj;

        if n1 = 0 then
                return 'N';
        else
                return 'Y';
        end if;
end;
/

set autotrace traceonly explain

alter session set optimizer_features_enable = '18.1.0';
select count(data_object_id) from t1 where f1(object_id) = 'Y';

alter session set optimizer_features_enable = '19.1.0';
select count(data_object_id) from t1 where f1(object_id) = 'Y';

set autotrace off

set serveroutput off
alter session set statistics_level = all;

alter session set optimizer_features_enable = '18.1.0';
select count(data_object_id) from t1 where f1(object_id) = 'Y';
select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

alter session set optimizer_features_enable = '19.1.0';
select count(data_object_id) from t1 where f1(object_id) = 'Y';
select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

alter session set statistics_level = typical;
set serveroutput on

April 5, 2021

Case Study

Filed under: lateral view,Oracle,Tuning,Upgrades — Jonathan Lewis @ 3:36 pm BST Apr 5,2021

A recent question on the Oracle Developer forum posed an interesting question on “finding the closest match” to a numeric value. The OP supplied SQL to create two tables, first a set of “valid” values each with an Id, then a set of measures. The requirement was to find, for each measure, the closest valid value and report its id.

In this note I’m going to make a few comments on three topics:

  • how the question was posed,
  • general thoughts on modelling,
  • some ideas on what to look for when testing possible solutions

We’ll start with the data (almost) as supplied:

rem
rem     Script:         closest_match.sql
rem     Author:         Jonathan Lewis / user626688
rem     Dated:          Apr 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4         (with event 22829)
rem
 
create table nom_val_lkp(
        lkp_id  number       not null,
        nom_val number(3,2)  primary key
)
-- organization index
/

insert into nom_val_lkp values(1, 0.1);
insert into nom_val_lkp values(2, 0.2);
insert into nom_val_lkp values(3, 0.3);
insert into nom_val_lkp values(4, 0.4);
insert into nom_val_lkp values(5, 0.5);
insert into nom_val_lkp values(6, 0.6);
insert into nom_val_lkp values(7, 0.7);
insert into nom_val_lkp values(8, 0.8);
insert into nom_val_lkp values(9, 0.9);
commit;

create table measure_tbl(
        id              number      not null, 
        measure_val     number(3,2) not null
)
/

insert into measure_tbl values(1, 0.24);
insert into measure_tbl values(2, 0.5);
insert into measure_tbl values(3, 0.14);
insert into measure_tbl values(4, 0.68);
commit;

insert into measure_tbl values(5, 1.38);
insert into measure_tbl values(6, 0.05);
commit;


execute dbms_stats.gather_table_stats(null,'measure_tbl')
execute dbms_stats.gather_table_stats(null,'nom_val_lkp')

There are a couple of differences between the original and the SQL I’ve listed above. Given the nature of the requirement I’ve added not null constraints to both the lkp_id and nom_val columns of the “valid values” table. I think it’s also reasonable to assume that both columns ought to be (individually) unique and could both be candidate keys for the table, although I’ve not bothered to add a uniqueness constraint to the lkp_id. I have made the nom_val (the interesting bit) the primary key because that’s potentially an important feature of a good solution. Obviously this is guesswork on my part, but I think they’re reasonable guesses of what the “real application” will look like, and they’re details that ought to have been included in the original specification.

You’ll see that I’ve also included the option for making the table an index organized table – but that’s a generic implementation choice for small look-up tables not something that you could call an omission in the specification of requirements.

One thing to note about the nom_val_lkp table is that the nom_val is strictly constrained to be 3 digits with 2 decimal places, which means values between -9.99 and +9.99. It’s going to be a pretty small table – no more than 1,999 rows. (In “real life” it’s possible that the measures all have to be positive – and if so that’s another detail that could have gone into the specification – so the column could also have a check constraint to that effect, as sketched below.)
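
If that guess were confirmed, the extra constraint would be a one-liner – a sketch, with an invented constraint name:

alter table nom_val_lkp
        add constraint nvl_positive check (nom_val > 0)
;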

Looking at the measure_tbl (which is the “big data” table) I’ve added not null constraints to both columns; I’ve also added a couple of extra rows to the table to make sure that we can test boundary conditions when we write the final SQL statement. We’re looking for “the closest match” so we’ll be looking in the nom_val_lkp table for values above and below the measure value – so we ought to have a measure row where there is no “below” value and one with no “above” value. A common oversight in modelling is to forget about checking special cases, and simple boundary conditions are often overlooked (or inadequately covered).

Thinking about the “above / below / closest” requirement, an immediate follow-up question springs to mind. What if there is no exact match and the valid values either side are the same distance from the measure? If there’s a tie should the query return the lower value or the higher value, or does it not matter? The specification is not complete, and the most efficient solution may depend on this detail.

Interestingly the measure_val column is constrained in exactly the same way as the nom_val column – 3 digits with 2 d.p. Apparently the requirement isn’t something like “take a measurement to 6 decimal places then give me a value to 2 d.p.”; no matter how large the measure_tbl table gets the number of distinct values it records is pretty small – which means caching considerations could become important. With this thought in mind I added a few more lines (before gathering stats) to make multiple copies of the supplied measures data to model (approximately, and with a very large bias) a large table with a small number of distinct measures.

insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
insert into measure_tbl select * from measure_tbl;
commit;

update measure_tbl set id = rownum;
commit;

execute dbms_stats.gather_table_stats(null,'measure_tbl')

This doubling-up code resulted in a total of 6 * 1,024 = 6,144 rows in the table. I only increased the data volume after I’d checked that I had a query that produced the correct results, of course.

A possible solution

By the time I saw the thread on the Oracle forum there were already three solutions on offer, but they all took the route of using analytic functions, including one that used keep(dense_rank …), and these all involved sorting the entire measures dataset; so I thought I’d try an approach that demonstrated a completely different method that was visibly following the strategy: “for each row do two high-precision lookups”. I implemented this by joining two lateral views of the lookup table to the measures table. Since I was sitting in front of a copy of 11.2.0.4 at the time I had to set the event 22829 to enable the feature – here’s the basic code with the plan produced by 11g:

select  /*+ qb_name(main) */
        mt.id,
        mt.measure_val,
        case
                when
                        nt_high.nom_val - mt.measure_val <=
                        mt.measure_val - nt_low.nom_val
                then    nvl(nt_high.lkp_id,nt_low.lkp_id)
                else    nvl(nt_low.lkp_id,nt_high.lkp_id)
        end     lkp_id,
        nt_low.nom_val  low_val,
        nt_low.lkp_id   low_lkp,
        nt_high.nom_val high_val,
        nt_high.lkp_id  high_lkp 
from
        measure_tbl     mt,
        lateral(
                select
                        /*+ qb_name(low) index_rs_desc(nt (nom_val)) */
                        nt.lkp_id, nt.nom_val
                from    nom_val_lkp nt
                where   nt.nom_val <= mt.measure_val
                and     rownum = 1
        )(+) nt_low,
        lateral(
                select
                        /*+ qb_name(high) index_rs_asc(nt (nom_val)) */
                        nt.lkp_id, nt.nom_val
                from    nom_val_lkp nt
                where   nt.nom_val >= mt.measure_val
                and     rownum = 1
        ) (+) nt_high
/

        ID MEASURE_VAL     LKP_ID    LOW_VAL    LOW_LKP   HIGH_VAL   HIGH_LKP
---------- ----------- ---------- ---------- ---------- ---------- ----------
         1         .24          2         .2          2         .3          3
         2          .5          5         .5          5         .5          5
         3         .14          1         .1          1         .2          2
         4         .68          7         .6          6         .7          7
         5        1.38          9         .9          9
         6         .05          1                               .1          1


6 rows selected.

You’ll notice that (for debugging purposes) I’ve included columns in my output for the lkp_id and nom_val just lower than (or matching) and just higher than (or matching) the measure_val. The blanks this produces in two of the rows conveniently highlight the cases where the measure is “out of bounds”.

With my tiny data set I had to include the index_rs_desc() hint. Of course I should really have included an “order by” clause in the two subqueries and used an extra layer of inline views to introduce the rownum = 1 predicate, viz:

        lateral(
                select  * 
                from    (
                        select  /*+ qb_name(low) */
                                nt.lkp_id, nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val <= mt.measure_val
                        order by
                                nom_val desc
                )
                where   rownum = 1
        )(+) nt_low,

There were two reasons I didn’t do this: first I wanted to keep the code short, secondly it wouldn’t have worked with 11g because it was only in 12c that a correlated subquery could correlate more than one level up – the predicate referencing mt.measure_val would have raised error “ORA-00904: invalid identifier”.

If you’re not familiar with lateral views, the idea is quite simple: as with any inline view in the from clause it’s just a query that returns a result set that looks like a table, but it has the special condition that the predicates in the query can reference columns from tables (or views) that have appeared further to the left (or, equivalently, further up) in the from clause. In this case both of my inline views query nom_val_lkp and both of them reference a column in measure_tbl, which was the first table in the from clause.

There are two distinguishing details that are a consequence of the lateral view. First, the view effectively has a join to the driving table built into it so my main query doesn’t have any where clause predicates joining the views to the rest of the query. Secondly, I want to do outer joins (to deal with the cases where there isn’t a nom_val higher/lower than the measure_val) so in the absence of a join predicate in the main query the necessary syntax simply adds Oracle’s traditional “(+)” to the lateral() operator itself. (If you want to go “full-ANSI” you would use outer apply() instead of lateral()(+) at this point – but 11g doesn’t support outer apply().)
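
For comparison, here’s a sketch of how the nt_low part of the query might look in the full-ANSI style (12c onwards) – the nt_high view would change in the same way:

select
        mt.id,
        mt.measure_val,
        nt_low.nom_val  low_val,
        nt_low.lkp_id   low_lkp
from
        measure_tbl     mt
        outer apply (
                select  /*+ qb_name(low) index_rs_desc(nt (nom_val)) */
                        nt.lkp_id, nt.nom_val
                from    nom_val_lkp nt
                where   nt.nom_val <= mt.measure_val
                and     rownum = 1
        ) nt_low
/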

Here’s the execution plan from 11g for this query – I’ve enabled rowsource execution stats and pulled the plan from memory using the ‘allstats last’ format option:

select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |              |      1 |        |      6 |00:00:00.01 |      29 |
|   1 |  NESTED LOOPS OUTER              |              |      1 |      6 |      6 |00:00:00.01 |      29 |
|   2 |   NESTED LOOPS OUTER             |              |      1 |      6 |      6 |00:00:00.01 |      18 |
|   3 |    TABLE ACCESS FULL             | MEASURE_TBL  |      1 |      6 |      6 |00:00:00.01 |       7 |
|   4 |    VIEW                          |              |      6 |      1 |      5 |00:00:00.01 |      11 |
|*  5 |     COUNT STOPKEY                |              |      6 |        |      5 |00:00:00.01 |      11 |
|   6 |      TABLE ACCESS BY INDEX ROWID | NOM_VAL_LKP  |      6 |      2 |      5 |00:00:00.01 |      11 |
|*  7 |       INDEX RANGE SCAN DESCENDING| SYS_C0072287 |      6 |      6 |      5 |00:00:00.01 |       6 |
|   8 |   VIEW                           |              |      6 |      1 |      5 |00:00:00.01 |      11 |
|*  9 |    COUNT STOPKEY                 |              |      6 |        |      5 |00:00:00.01 |      11 |
|  10 |     TABLE ACCESS BY INDEX ROWID  | NOM_VAL_LKP  |      6 |      1 |      5 |00:00:00.01 |      11 |
|* 11 |      INDEX RANGE SCAN            | SYS_C0072287 |      6 |      4 |      5 |00:00:00.01 |       6 |
-----------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter(ROWNUM=1)
   7 - access("NT"."NOM_VAL"<="MT"."MEASURE_VAL")
       filter("NT"."NOM_VAL"<="MT"."MEASURE_VAL")
   9 - filter(ROWNUM=1)
  11 - access("NT"."NOM_VAL">="MT"."MEASURE_VAL")


As you can see we’ve done a full tablescan of measure_tbl, then performed an outer join to each of two (unnamed) views for each row, and each time we’ve accessed a view we’ve done an index range scan (descending in one case) into nom_val_lkp, passing in (according to the Predicate Information) the measure_val from measure_tbl.

It’s a little oddity I hadn’t noticed before that the ascending and descending range scans behave slightly differently – the descending range scan says we’ve used the predicate as both an access and a filter predicate. I’ll have to check whether this is always the case or whether it’s version-dependent or whether it’s only true under some conditions.

The only other detail to mention is the expression I’ve used to report the closest match – which is a little messy to allow for “out of range” measures:

        case
                when
                        nt_high.nom_val - mt.measure_val <=
                        mt.measure_val - nt_low.nom_val
                then    nvl(nt_high.lkp_id,nt_low.lkp_id)
                else    nvl(nt_low.lkp_id,nt_high.lkp_id)
        end     lkp_id,

This case expression says that if the higher nom_val is closer to (or, to be precise, not further from) the measure_val than the lower nom_val then report the higher lkp_id, otherwise report the lower lkp_id. The ordering of the comparison means that when the differences are the same the higher value will always be reported; and the “cross-over” use of the nvl() function ensures that when the measure_val is out of range (which means one of the nom_val subqueries will have returned null) we see the nom_val that’s at the end of the range rather than a null.

Some bad news

At first sight the lateral() view looks as if it might be a candidate for scalar subquery caching – so when I create multiple copies of the 6 rows in the measure_tbl and run my query against the expanded data set I might hope to get excellent performance because Oracle might only have to call each lateral view once for each distinct input, caching the subquery inputs and results from that point onwards. But here are the stats I get from the 11g plan after expanding the data to 6,144 rows:

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |              |      1 |        |   6144 |00:00:00.82 |   22953 |
|   1 |  NESTED LOOPS OUTER              |              |      1 |   6144 |   6144 |00:00:00.82 |   22953 |
|   2 |   NESTED LOOPS OUTER             |              |      1 |   6144 |   6144 |00:00:00.47 |   11689 |
|   3 |    TABLE ACCESS FULL             | MEASURE_TBL  |      1 |   6144 |   6144 |00:00:00.03 |     425 |
|   4 |    VIEW                          |              |   6144 |      1 |   5120 |00:00:00.28 |   11264 |
|*  5 |     COUNT STOPKEY                |              |   6144 |        |   5120 |00:00:00.20 |   11264 |
|   6 |      TABLE ACCESS BY INDEX ROWID | NOM_VAL_LKP  |   6144 |      2 |   5120 |00:00:00.12 |   11264 |
|*  7 |       INDEX RANGE SCAN DESCENDING| SYS_C0072291 |   6144 |      5 |   5120 |00:00:00.04 |    6144 |
|   8 |   VIEW                           |              |   6144 |      1 |   5120 |00:00:00.32 |   11264 |
|*  9 |    COUNT STOPKEY                 |              |   6144 |        |   5120 |00:00:00.19 |   11264 |
|  10 |     TABLE ACCESS BY INDEX ROWID  | NOM_VAL_LKP  |   6144 |      2 |   5120 |00:00:00.11 |   11264 |
|* 11 |      INDEX RANGE SCAN            | SYS_C0072291 |   6144 |      3 |   5120 |00:00:00.04 |    6144 |
-----------------------------------------------------------------------------------------------------------


Look at the Starts column: the two views were called once each for every single row in the expanded measure_tbl – there’s no scalar subquery caching going on.

Bug time (1)

Of course, this is 11g and I’ve enabled lateral views by setting an event; it’s not an officially supported feature so maybe if I upgrade to 12c (or 19c), where the feature is official, Oracle will do better.

Here are the results of the original query against the original data set in 12c and 19c:

        ID MEASURE_VAL     LKP_ID    LOW_VAL    LOW_LKP   HIGH_VAL   HIGH_LKP
---------- ----------- ---------- ---------- ---------- ---------- ----------
         6         .05          1                               .1          1
         3         .14          1         .1          1
         1         .24          1         .1          1
         2          .5          1         .1          1
         4         .68          1         .1          1
         5        1.38          1         .1          1

On the upgrade I’ve got the wrong results! So what does the execution plan look like?

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                 |      1 |        |      6 |00:00:00.01 |      17 |       |       |          |
|   1 |  MERGE JOIN OUTER       |                 |      1 |      6 |      6 |00:00:00.01 |      17 |       |       |          |
|   2 |   SORT JOIN             |                 |      1 |      6 |      6 |00:00:00.01 |      12 |  2048 |  2048 | 2048  (0)|
|   3 |    MERGE JOIN OUTER     |                 |      1 |      6 |      6 |00:00:00.01 |      12 |       |       |          |
|   4 |     SORT JOIN           |                 |      1 |      6 |      6 |00:00:00.01 |       7 |  2048 |  2048 | 2048  (0)|
|   5 |      TABLE ACCESS FULL  | MEASURE_TBL     |      1 |      6 |      6 |00:00:00.01 |       7 |       |       |          |
|*  6 |     SORT JOIN           |                 |      6 |      1 |      5 |00:00:00.01 |       5 |  2048 |  2048 | 2048  (0)|
|   7 |      VIEW               | VW_DCL_A18161FF |      1 |      1 |      1 |00:00:00.01 |       5 |       |       |          |
|*  8 |       COUNT STOPKEY     |                 |      1 |        |      1 |00:00:00.01 |       5 |       |       |          |
|   9 |        TABLE ACCESS FULL| NOM_VAL_LKP     |      1 |      1 |      1 |00:00:00.01 |       5 |       |       |          |
|* 10 |   SORT JOIN             |                 |      6 |      1 |      1 |00:00:00.01 |       5 |  2048 |  2048 | 2048  (0)|
|  11 |    VIEW                 | VW_DCL_A18161FF |      1 |      1 |      1 |00:00:00.01 |       5 |       |       |          |
|* 12 |     COUNT STOPKEY       |                 |      1 |        |      1 |00:00:00.01 |       5 |       |       |          |
|  13 |      TABLE ACCESS FULL  | NOM_VAL_LKP     |      1 |      1 |      1 |00:00:00.01 |       5 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(INTERNAL_FUNCTION("NOM_VAL")<=INTERNAL_FUNCTION("MT"."MEASURE_VAL"))
       filter(INTERNAL_FUNCTION("NOM_VAL")<=INTERNAL_FUNCTION("MT"."MEASURE_VAL"))
   8 - filter(ROWNUM=1)
  10 - access("NOM_VAL">="MT"."MEASURE_VAL")
       filter("NOM_VAL">="MT"."MEASURE_VAL")
  12 - filter(ROWNUM=1)


Check what’s appeared in the Name column for the view operations 7 and 11: VW_DCL_A18161FF (DCL = “decorrelate”); I was expecting to see names starting with VW_LAT (LAT = “lateral”). And then I remembered reading this article by Sayan Malakshinov – Oracle (12c+) can decorrelate lateral views but gets the wrong results with rownum. So let’s add in a few hints to avoid decorrelation and check the results and execution plan.
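
(I haven’t repeated the full statement, but the extra hints were just directives blocking the transformation for the two named query blocks – in sketch form, at the start of the query:)

        /*+
                qb_name(main)
                no_decorrelate(@low)
                no_decorrelate(@high)
        */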

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |      1 |        |      6 |00:00:00.01 |      30 |       |       |          |
|   1 |  MERGE JOIN OUTER                        |                 |      1 |      6 |      6 |00:00:00.01 |      30 |       |       |          |
|   2 |   MERGE JOIN OUTER                       |                 |      1 |      6 |      6 |00:00:00.01 |      19 |       |       |          |
|   3 |    TABLE ACCESS FULL                     | MEASURE_TBL     |      1 |      6 |      6 |00:00:00.01 |       8 |       |       |          |
|   4 |    BUFFER SORT                           |                 |      6 |      1 |      5 |00:00:00.01 |      11 |  2048 |  2048 | 2048  (0)|
|   5 |     VIEW                                 | VW_LAT_D77DA787 |      6 |      1 |      5 |00:00:00.01 |      11 |       |       |          |
|*  6 |      COUNT STOPKEY                       |                 |      6 |        |      5 |00:00:00.01 |      11 |       |       |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| NOM_VAL_LKP     |      6 |      2 |      5 |00:00:00.01 |      11 |       |       |          |
|*  8 |        INDEX RANGE SCAN                  | SYS_C0055681    |      6 |      3 |      5 |00:00:00.01 |       6 |       |       |          |
|   9 |   BUFFER SORT                            |                 |      6 |      1 |      5 |00:00:00.01 |      11 |  2048 |  2048 | 2048  (0)|
|  10 |    VIEW                                  | VW_LAT_D77DA787 |      6 |      1 |      5 |00:00:00.01 |      11 |       |       |          |
|* 11 |     COUNT STOPKEY                        |                 |      6 |        |      5 |00:00:00.01 |      11 |       |       |          |
|  12 |      TABLE ACCESS BY INDEX ROWID BATCHED | NOM_VAL_LKP     |      6 |      2 |      5 |00:00:00.01 |      11 |       |       |          |
|* 13 |       INDEX RANGE SCAN DESCENDING        | SYS_C0055681    |      6 |      5 |      5 |00:00:00.01 |       6 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - filter(ROWNUM=1)
   8 - access("NT"."NOM_VAL">="MT"."MEASURE_VAL")
  11 - filter(ROWNUM=1)
  13 - access("NT"."NOM_VAL"<="MT"."MEASURE_VAL")
       filter("NT"."NOM_VAL"<="MT"."MEASURE_VAL")

Blocking decorrelation was sufficient to get the correct result but there’s still a funny little glitch in the execution plan: why do we have merge join (outer) for operations 1 and 2?

It’s not quite the threat you might think; we’re not multiplying up rows catastrophically. For each row in measure_tbl Oracle does a Cartesian merge join to (at most) one row in each view – so there’s no accidental explosion in data volume, and there’s no real sorting. Nevertheless there may be unnecessary CPU usage, so let’s try to get nested loops instead by adding the following hints to the start of the query:

        /*+
                qb_name(main)
                leading(@main mt@main nt_high@main nt_low@main)
                use_nl(@main nt_high@main)
                use_nl(@main nt_low@main)
        */

I was a little surprised at the benefit – roughly a 30% saving on CPU for the same data set.

But there’s more to investigate – I didn’t like the index hints that I’d had to use in 11g, but 12c allows for the more complex “two layer” lateral subquery with its deeply correlated predicate – so what happens if I use the following corrected query (with minimal hinting) in 12c or 19c:

select
        /*+
                qb_name(main)
--              leading(@main mt@main nt_high@main nt_low@main)
--              use_nl(@main nt_high@main)
--              use_nl(@main nt_low@main)
        */
        mt.id,
        mt.measure_val,
        case
                when
                        nt_high.nom_val - mt.measure_val <=
                        mt.measure_val - nt_low.nom_val
                then    nvl(nt_high.lkp_id,nt_low.lkp_id)
                else    nvl(nt_low.lkp_id,nt_high.lkp_id)
        end     lkp_id,
        nt_low.nom_val  low_val,
        nt_low.lkp_id   low_lkp,
        nt_high.nom_val high_val,
        nt_high.lkp_id  high_lkp 
from
        measure_tbl     mt,
        lateral(
                select  *
                from    (
                        select  /*+ qb_name(low) */
                                nt.lkp_id, nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val <= mt.measure_val
                        order by
                                nom_val desc
                        )
                where   rownum = 1
        )(+) nt_low,
        lateral(
                select  *
                from    (
                        select  /*+ qb_name(high) */
                                nt.lkp_id, nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val >= mt.measure_val
                        order by
                                nom_val
                )
                where   rownum = 1
        )(+) nt_high
/

First, Oracle doesn’t use decorrelation, so I get the right results; secondly, Oracle uses the correct descending index scan without hinting, which is an important part of getting the right results. Unfortunately I still see merge joins unless I include the use_nl() hints (with the leading() hint as an extra safety barrier) to get that 30% reduction in CPU usage.

The sad news is that I still don’t see scalar subquery caching. If I have 6,144 rows in measure_tbl I still see 6,144 executions of both the lateral subqueries.

Since 12c onwards supports “outer apply” it’s worth testing to see what happens if I replace my lateral()(+) operator with the outer apply() mechanism. It works, but behaves very much like the lateral approach (including the unexpected merge joins unless hinted), except it introduces another layer of lateral joins. Here’s the plan (12c and 19c) with 6,144 rows:

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |   6144 |00:00:00.14 |   22954 |       |       |          |
|   1 |  MERGE JOIN OUTER                   |                 |      1 |   6144 |   6144 |00:00:00.14 |   22954 |       |       |          |
|   2 |   MERGE JOIN OUTER                  |                 |      1 |   6144 |   6144 |00:00:00.08 |   11690 |       |       |          |
|   3 |    TABLE ACCESS FULL                | MEASURE_TBL     |      1 |   6144 |   6144 |00:00:00.01 |     426 |       |       |          |
|   4 |    BUFFER SORT                      |                 |   6144 |      1 |   5120 |00:00:00.06 |   11264 |  2048 |  2048 | 2048  (0)|
|   5 |     VIEW                            | VW_LAT_F8C248CF |   6144 |      1 |   5120 |00:00:00.04 |   11264 |       |       |          |
|   6 |      VIEW                           | VW_LAT_A18161FF |   6144 |      1 |   5120 |00:00:00.04 |   11264 |       |       |          |
|*  7 |       COUNT STOPKEY                 |                 |   6144 |        |   5120 |00:00:00.03 |   11264 |       |       |          |
|   8 |        VIEW                         |                 |   6144 |      2 |   5120 |00:00:00.03 |   11264 |       |       |          |
|   9 |         TABLE ACCESS BY INDEX ROWID | NOM_VAL_LKP     |   6144 |      6 |   5120 |00:00:00.02 |   11264 |       |       |          |
|* 10 |          INDEX RANGE SCAN DESCENDING| SYS_C0023500    |   6144 |      2 |   5120 |00:00:00.01 |    6144 |       |       |          |
|  11 |   BUFFER SORT                       |                 |   6144 |      1 |   5120 |00:00:00.06 |   11264 |  2048 |  2048 | 2048  (0)|
|  12 |    VIEW                             | VW_LAT_F8C248CF |   6144 |      1 |   5120 |00:00:00.04 |   11264 |       |       |          |
|  13 |     VIEW                            | VW_LAT_E88661A9 |   6144 |      1 |   5120 |00:00:00.04 |   11264 |       |       |          |
|* 14 |      COUNT STOPKEY                  |                 |   6144 |        |   5120 |00:00:00.03 |   11264 |       |       |          |
|  15 |       VIEW                          |                 |   6144 |      1 |   5120 |00:00:00.02 |   11264 |       |       |          |
|  16 |        TABLE ACCESS BY INDEX ROWID  | NOM_VAL_LKP     |   6144 |      1 |   5120 |00:00:00.02 |   11264 |       |       |          |
|* 17 |         INDEX RANGE SCAN            | SYS_C0023500    |   6144 |      4 |   5120 |00:00:00.01 |    6144 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   7 - filter(ROWNUM=1)
  10 - access("NT"."NOM_VAL"<="MT"."MEASURE_VAL")
  14 - filter(ROWNUM=1)
  17 - access("NT"."NOM_VAL">="MT"."MEASURE_VAL")

Note operations 5 and 6, then 12 and 13: the “ANSI” syntax outer apply seems to be another case of Oracle doing more work because it has to transform the query before optimising.

A Traditional Solution

Having worked through a few of the newer mechanisms in Oracle, why not think back to how the same pattern of implementation could have been achieved in older versions of Oracle. What’s wrong, for example, with using scalar subqueries in the select list? If we can expect plenty of scalar subquery caching this might be a very effective way of writing the query.

The immediate problem, though, is that scalar subqueries in the select list only allow one column to be returned (unless you want to fake things through by playing nasty games with user-defined types – or with string concatenation, as sketched below). So our two lateral views will have to change to four scalar subqueries to get all the data we need.
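
For the record, here’s a sketch of the string-packing variant of that game – pack the two values into one string in the subquery, then unpack with instr()/substr() outside. I’m showing the idea, not recommending it:

select
        id,
        measure_val,
        to_number(substr(high_pair, 1, instr(high_pair,',') - 1))      high_lkp,
        to_number(substr(high_pair, instr(high_pair,',') + 1))         high_val
from    (
        select
                mt.id,
                mt.measure_val,
                (
                        select  /*+ index_rs_asc(nt (nom_val)) */
                                nt.lkp_id || ',' || nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val >= mt.measure_val
                        and     rownum = 1
                )       high_pair
        from
                measure_tbl     mt
        )
/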

Here’s a possible solution (I’ve stuck with the shorter, hinted – but bad practice – “first row” mechanism for compactness) – with execution stats:

select
        id,
        measure_val,
        case
                when
                        nt_high_nom_val - measure_val <=
                        measure_val - nt_low_nom_val
                then    nvl(nt_high_lkp_id,nt_low_lkp_id)
                else    nvl(nt_low_lkp_id,nt_high_lkp_id)
        end     lkp_id,
        nt_low_nom_val,
        nt_low_lkp_id,
        nt_high_nom_val,
        nt_high_lkp_id
from    (
        select
                mt.id,
                mt.measure_val,
                (
                        select
                                /*+ index_rs_asc(nt (nom_val)) */
                                nt.lkp_id
                        from    nom_val_lkp nt
                        where   nt.nom_val >= mt.measure_val
                        and     rownum = 1
                ) nt_high_lkp_id,
                (
                        select
                                /*+ index_rs_asc(nt (nom_val)) */
                                nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val >= mt.measure_val
                        and     rownum = 1
                ) nt_high_nom_val,
                (
                        select
                                /*+ index_rs_desc(nt (nom_val)) */
                                nt.lkp_id
                        from    nom_val_lkp nt
                        where   nt.nom_val <= mt.measure_val
                        and     rownum = 1
                ) nt_low_lkp_id,
                (
                        select
                                /*+ index_rs_desc(nt (nom_val)) */
                                nt.nom_val
                        from    nom_val_lkp nt
                        where   nt.nom_val <= mt.measure_val
                        and     rownum = 1
                ) nt_low_nom_val
        from
                measure_tbl     mt
        )
/

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |      1 |        |   6144 |00:00:00.01 |     426 |
|*  1 |  COUNT STOPKEY                          |              |      6 |        |      5 |00:00:00.01 |       6 |
|*  2 |   INDEX RANGE SCAN                      | SYS_C0023507 |      6 |      1 |      5 |00:00:00.01 |       6 |
|*  3 |   COUNT STOPKEY                         |              |      6 |        |      5 |00:00:00.01 |       6 |
|*  4 |    INDEX RANGE SCAN DESCENDING          | SYS_C0023507 |      6 |      1 |      5 |00:00:00.01 |       6 |
|*  5 |    COUNT STOPKEY                        |              |      6 |        |      5 |00:00:00.01 |      11 |
|   6 |     TABLE ACCESS BY INDEX ROWID BATCHED | NOM_VAL_LKP  |      6 |      1 |      5 |00:00:00.01 |      11 |
|*  7 |      INDEX RANGE SCAN                   | SYS_C0023507 |      6 |      1 |      5 |00:00:00.01 |       6 |
|*  8 |     COUNT STOPKEY                       |              |      6 |        |      5 |00:00:00.01 |      11 |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| NOM_VAL_LKP  |      6 |      1 |      5 |00:00:00.01 |      11 |
|* 10 |       INDEX RANGE SCAN DESCENDING       | SYS_C0023507 |      6 |      1 |      5 |00:00:00.01 |       6 |
|  11 |  TABLE ACCESS FULL                      | MEASURE_TBL  |      1 |   6144 |   6144 |00:00:00.01 |     426 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM=1)
   2 - access("NT"."NOM_VAL">=:B1)
   3 - filter(ROWNUM=1)
   4 - access("NT"."NOM_VAL"<=:B1)
       filter("NT"."NOM_VAL"<=:B1)
   5 - filter(ROWNUM=1)
   7 - access("NT"."NOM_VAL">=:B1)
   8 - filter(ROWNUM=1)
  10 - access("NT"."NOM_VAL"<=:B1)
       filter("NT"."NOM_VAL"<=:B1)

I’ve left the index hints in place in this example so that the code can run on 11g and earlier (without setting any special events, of course); but in 12c and 19c if you replace the subqueries with the double-layer subqueries (inline order by, then rownum = 1) as shown further up the page the hints (specifically the descending hints) are no longer necessary.
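
For completeness, here’s a sketch of the first of the four subqueries rewritten in that unhinted “double-layer” form:

                (
                        select  lkp_id
                        from    (
                                select  nt.lkp_id
                                from    nom_val_lkp nt
                                where   nt.nom_val >= mt.measure_val
                                order by
                                        nt.nom_val
                                )
                        where   rownum = 1
                ) nt_high_lkp_id,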

The key performance benefit of this approach is visible in the Starts column – although I now have 4 subqueries to run (which should mean doing more work) each one starts only 6 times, once for each distinct measure_val, thanks to an extremely “lucky” level of scalar subquery caching.

This, really, is where this note takes us back to the beginning. Will this be a fantastic solution for the end-user, or does the pattern of the data mean that it’s going to be a total disaster? It’s nice to see the SQL that defines the tables and supplies a bit of test data – but there’s no point in trying to provide a solution without a better idea of what the data really looks like and what the critical usage is in production.

Bug time (2)

Nothing’s perfect, of course – and even though this last SQL statement is pretty simple and its execution plan is (for the right data pattern) very efficient, the shape of the plan is wrong – and in more complex plans you could be fooled into thinking that Oracle isn’t doing what you want it to do.

Operations 1, 3, 5, 8 and 11 should all be at the same depth (you’ll find that they all have parent_id = 0 if you look at the underlying data in v$sql_plan): there’s a defect in Oracle’s calculation of the depth column of v$sql_plan (et al.) that introduces a pattern of indentation that shouldn’t be there.
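
You can see the defect by looking past dbms_xplan at the underlying view – a sketch, substituting the appropriate sql_id and child number:

select  id, parent_id, depth,
        lpad(' ', depth) || operation   operation
from    v$sql_plan
where   sql_id = '&m_sql_id'
and     child_number = 0
order by
        id
;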

Summary

This has been a fairly informal ramble through the playing around that I did after I read the original post. It holds some comments about the way the question was asked, the test data as supplied and corrected, and the observations and tweaks as the testing progressed.

On the plus side, the OP has supplied code to create and populate a model, and described what they wanted to see as a result. However the requirement didn’t mention (and the model therefore didn’t cater for) a couple of special cases. There were also a few cases where unique and mandatory columns were likely to be appropriate but were not mentioned, even though they could affect the correctness or performance of any suggested solutions.

More importantly, although the model implied some fairly narrow restrictions on what the production data might look like, this information wasn’t presented explicitly, and there were no comments about the ultimate scale and distribution patterns of the data that might give some clues about the most appropriate features of SQL to use.

February 1, 2021

Pivot upgrade

Filed under: Oracle,Upgrades — Jonathan Lewis @ 1:34 pm GMT Feb 1,2021

I’ve hardly ever touched the pivot/unpivot feature in SQL, but a recent comment by Jason Bucata on a note I’d written about Java names and the effects of newer versions of Oracle allowing longer object and column names prompted me to look at a script I wrote several years ago for 11g.

As Jason pointed out, it’s another case where the output from a script might suffer some cosmetic changes because of an upgrade. Here’s the script to generate some data and run a query:

rem
rem     Script:         pivot_upgrade.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2021
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

set linesize 144
set trimspool on

create table t1
as
with generator as (
        select  --+ materialize
                rownum id 
        from dual 
        connect by 
                rownum <= 10000
)
select
        rownum                  id,
        rownum                  n1,
        rpad(chr(65 + mod(rownum,3)), 35, 
             chr(65 + mod(rownum,3))
        )                       state,
        mod(rownum,4)           flag,
        lpad(rownum,10,'0')     small_vc
from
        generator       v1,
        generator       v2
where
        rownum <= 10000
/

select  *
from
        (
        select
                flag, state, n1
        from
                t1
        )       piv
        pivot   (
                        avg(n1)
                 for    state in (
                                'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA',
                                'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB',
                                'CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC'
                        )
        )
order by
        flag
;

I’ve hijacked (cloned and hacked) a script I wrote for another little test so don’t read too much into the data that I’ve created and how I’ve created it. All that matters is that I have a column with three distinct values and I want a report that summarises the data across the page according to the value of those three columns.

To be awkward (and demonstrate the point of the blog note), the values in the state column are all 35-character strings – created using rpad(), but reported in the pivot() using the literal string value.

Here’s the result of the query from 12c (in my case 12.2.0.1) onwards:

      FLAG 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' 'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB' 'CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC'
---------- ------------------------------------- ------------------------------------- -------------------------------------
         0                                  5004                                  5002                                  5000
         1                                  5001                                  4999                                  4997
         2                                  4998                                  5002                                  5000
         3                                  5001                                  4999                                  5003

You’ll notice that the pivoted column headings include the single-quote marks, plus the 35 defining characters. Compare this with the result from 11.2.0.4:

      FLAG 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 'BBBBBBBBBBBBBBBBBBBBBBBBBBBBB 'CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
---------- ------------------------------ ------------------------------ ------------------------------
         0                           5004                           5002                           5000
         1                           5001                           4999                           4997
         2                           4998                           5002                           5000
         3                           5001                           4999                           5003

Including the initial single-quote mark the headings are exactly 30 characters long – the historical limit under Oracle’s naming conventions.

So if you’re still using 11g, an upgrade to a more recent version of Oracle could end up forcing you to do a few little adjustments to some of your code simply to ensure column widths (and subsequent line lengths) don’t change.
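
One way to protect your scripts from this type of cosmetic change is to supply your own aliases for the pivoted values, so that the column names no longer depend on the literal strings at all. A sketch, with invented alias names – the headings should then be stable across versions:

select  *
from
        (
        select
                flag, state, n1
        from
                t1
        )       piv
        pivot   (
                        avg(n1)
                 for    state in (
                                'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' as a_mean,
                                'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB' as b_mean,
                                'CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC' as c_mean
                        )
        )
order by
        flag
;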

December 23, 2020

19c tweak

Filed under: CBO,Hints,Oracle,Upgrades — Jonathan Lewis @ 2:15 pm GMT Dec 23,2020

There are times when an upgrade makes a big difference to performance because an enhancement to the optimizer code path changes the costing of a plan that was always possible, but badly costed. I noticed an example of this while testing the code in the email I mentioned in last month’s posting on the “Incremental Sort” that Postgres 13 can do. Here’s a model to create some data and demonstrate the principle – the code is a modified (reduced) version of the code published by Phil Florent describing the feature.

rem
rem     Script:         fetch_first_postgres.sql
rem     author:         Phil Florent
rem     Dated:          6th Nov 2020
rem
rem     Last tested
rem             19.3.0.0        Uses index descending unhinted at low cost
rem             18.3.0.0        Used index desc efficiently if hinted, but high cost
rem             12.2.0.1        Used index desc efficiently if hinted, but high cost
rem

create table lancers(dtl timestamp, idg integer not null, perf integer);

insert into lancers(dtl, idg, perf)
with serie(i) as (
        select 25e4 from dual
        UNION ALL
        select i - 1 from serie where i > 1
)
select
        current_timestamp - (i / 1440),
        trunc(dbms_random.value * 1e5 + 1),
        case
                when dbms_random.value <= 0.001 then 50000 + trunc(dbms_random.value * 50000 + 1) 
                else trunc(dbms_random.value * 50000 + 1) 
        end
from serie
/

execute dbms_stats.gather_table_stats(user,'lancers',method_opt=>'for all columns size 1')

create index perf_i1 on lancers(perf, dtl);
alter table lancers modify (perf not null, dtl not null);

This is the basic statement I want to execute – but in some versions of Oracle it will have to be hinted to produce the execution plan I want to see.

select  
        idg, perf 
from  
        lancers 
order by
        perf desc  
fetch first 5 rows only
/

If you check the order by clause and the definition of the index perf_i1 you’ll see that Oracle could (in principle) walk the index in descending order, stopping after just 5 rows, to produce the result.
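As the Predicate Information of the plans below shows, the optimizer implements “fetch first” by wrapping the query in an inline view with a row_number() analytic function. A hand-written sketch of the transformation (not the exact internal query text) would be:

select  idg, perf
from    (
        select
                idg, perf,
                row_number() over (order by perf desc) rn
        from
                lancers
        )
where   rn <= 5
order by
        perf desc
/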

But here are the execution plans from 19.3.0.0, 18.3.0.0, and 12.2.0.1, with their plans pulled from memory and showing the rowsource execution statistics (enabled by the gather_plan_statistics hint) so you can see what happens – starting with the newest first:

19.3.0.0: (unhinted)
--------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |     8 (100)|      5 |00:00:00.01 |       9 |
|*  1 |  VIEW                         |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|     8   (0)|      5 |00:00:00.01 |       9 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |      5 |     3   (0)|      5 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

You can see an index_desc() hint in the output, but it has been commented out. The key feature to note is that the optimizer has found the path I was hoping to see, and it’s a low-cost path, although there is one oddity in the plan – the E-rows (cardinality estimate) for the table access doesn’t allow for the stopkey and, since there are no predicates in the query, reports the 250K rows that exist in the table.

For 18.3.0.0 I had to include the hint, and you’ll see why:

18.3.0.0 (hinted with index_desc)
---------------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)



Again we see the plan is possible, but the optimizer’s cardinality estimate for the hinted index scan is 250K rows – the full size of the index, and it has allowed for that in the cost of the query. So the cost of this plan is high and in the absence of the hint the optimizer would have used a full tablescan with sort.

Finally we get down to 12.2.0.1 – and I’ve shown the hinted and unhinted plans.

12.2.0.1 (hinted index_desc)
-----------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)


12.2.0.1 Unhinted
------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 1374242431

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      1 |        |  1102 (100)|      5 |00:00:00.24 |     822 |       |       |          |
|*  1 |  VIEW                    |         |      1 |      5 |  1102  (10)|      5 |00:00:00.24 |     822 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|         |      1 |    250K|  1102  (10)|      5 |00:00:00.24 |     822 |  2048 |  2048 | 2048  (0)|
|   3 |    TABLE ACCESS FULL     | LANCERS |      1 |    250K|   132  (16)|    250K|00:00:00.13 |     822 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

As you can see, 12.2.0.1 and 18.3.0.0 behave exactly the same way when hinted – the path is acceptable, but the cost is high. Consequently when I remove the hint the optimizer switches to using a full tablescan with sort because its cost is lower (thanks, in part, to the pattern in the data) than the indexed access path.

Summary

Two thoughts to take away from this note.

  • First, there were two possible execution plans for the same query and the optimizer in versions below 19c was picking the one that was clearly a bad idea. The presence of alternatives, though, means that the patterns in the data, the index definition and statistics (especially the clustering_factor), the number of rows to fetch, and various other optimizer settings may leave you in the unlucky position that the optimizer’s arithmetic is on the boundary between the two plans and it switches randomly between them from day to day.
  • Secondly, when you upgrade to 19c the optimizer seems to be more likely to pick the indexed access path for a query like this – and that will probably be a good thing, but in a few cases it might turn out to be a bad thing.

December 21, 2020

Upgrade trivia

Filed under: 18c,Oracle,Upgrades — Jonathan Lewis @ 9:57 am GMT Dec 21,2020

Sometimes it’s the little things that catch you out (perhaps only briefly) on an upgrade. Here’s one that came up recently on the Oracle Developer Community Forum.

The problem was with a database trigger that had been doing home-grown auditing to catch any DDL changes to non-SYS objects. The code was quite simple:

create or replace trigger system.audit_ddl_trg 
after ddl on database
begin
        if (ora_sysevent='TRUNCATE') then

                null; -- I do not care about truncate

        elsif ora_dict_obj_owner!='SYS' then

                insert into audit_ddl(d, osuser,current_user,host,terminal,owner,type,name,sysevent)
                values(
                        sysdate,
                        sys_context('USERENV','OS_USER') ,
                        sys_context('USERENV','CURRENT_USER') ,
                        sys_context('USERENV','HOST') , 
                        sys_context('USERENV','TERMINAL') ,
                        ora_dict_obj_owner,
                        ora_dict_obj_type,
                        ora_dict_obj_name,
                        ora_sysevent
                );

        end if;
end;
/

The issue was that after an upgrade from 12c (release not specified) to Oracle 19c the trigger was failing.

Here’s the definition of the table used by the trigger as the target of the insert statement – can you see any reason why it might be failing?

create table audit_ddl (
        d               date,
        osuser          varchar2(255 byte),
        current_user    varchar2(255 byte),
        host            varchar2(255 byte),
        terminal        varchar2(255 byte),
        owner           varchar2(30 byte),
        type            varchar2(30 byte),
        name            varchar2(30 byte),
        sysevent        varchar2(30 byte)
)
/

If it’s not immediately obvious it’s probably because you’ve forgotten that object names (and various other identifiers) are allowed to be up to 128 characters in 19c (and a little earlier) – so defining the owner and name as varchar2(30) is an accident waiting to happen.
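The obvious fix is simply to widen the vulnerable columns – a sketch, with the sizes chosen to match the current 128-byte identifier limit:

alter table audit_ddl modify (
        owner   varchar2(128 byte),
        name    varchar2(128 byte)
)
/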

It didn’t take the user long to work out why there was a problem but the more interesting part of the issue was why there were now objects in the database with names exceeding the old 30 character limit. The OP supplied an (obfuscated) example: after the upgrade Oracle was reporting object names “using the full path name” like: “/some/path/name/object_name”.

The structure is a clue – for this user it’s all about Java classes. Here’s a little query against dba_objects with the results from 11.2.0.4 and 12.2.0.1

select  object_name 
from    dba_objects 
where   object_type = 'JAVA CLASS' 
and     object_name like '%TimeZoneNamesBundle'
/

OBJECT_NAME (11.2.0.4)
------------------------------
/2ea59ec_TimeZoneNamesBundle

OBJECT_NAME (12.2.0.1)
--------------------------------------
sun/util/resources/TimeZoneNamesBundle

Java is a particularly enthusiastic user of long object names in Oracle – but it’s not the only culprit, as we can see with another query against dba_objects – this time from 19c:

select  object_type, count(*)
from    dba_objects 
where   length(object_name) > 30 
group by object_type 
order by count(*)
/

OBJECT_TYPE               COUNT(*)
----------------------- ----------
PROCEDURE                        1
INDEX                            2
JAVA RESOURCE                 1286
SYNONYM                       4337
JAVA CLASS                   31739

If you’ve made much use of Java in the database before now you’re probably familiar with the call to dbms_java.longname(). Since Oracle historically had a limit of 30 characters for identifiers it trims the leading edge (and sometimes a bit of the trailing edge) of the long names used by the public Java libraries and uses a hashing function to create a short prefix. If you look in the sys.javasnm$ table (java short name?) in earlier versions of Oracle you’ll see that it has two columns – (short, longdbcs) – and we can see the relationship between them:

select  short, longdbcs, dbms_java.longname(short) long_name 
from    javasnm$ 
where   rownum <= 10
/

SHORT                          LONGDBCS                                           LONG_NAME
------------------------------ -------------------------------------------------- --------------------------------------------------
/2ea59ec_TimeZoneNamesBundle   sun/util/resources/TimeZoneNamesBundle             sun/util/resources/TimeZoneNamesBundle
/8acf0d3a_OpenListResourceBund sun/util/resources/OpenListResourceBundle          sun/util/resources/OpenListResourceBundle
/e3e70b06_LocaleNamesBundle    sun/util/resources/LocaleNamesBundle               sun/util/resources/LocaleNamesBundle
/cc11c9d8_SerialVerFrame       sun/tools/serialver/SerialVerFrame                 sun/tools/serialver/SerialVerFrame
/1f9f2fa_N2AFilter             sun/tools/native2ascii/N2AFilter                   sun/tools/native2ascii/N2AFilter
/b6b3d680_UnsupportedEncodingE java/io/UnsupportedEncodingException               java/io/UnsupportedEncodingException
/7994ade2_CharsetEncoder       java/nio/charset/CharsetEncoder                    java/nio/charset/CharsetEncoder
/73841741_IllegalCharsetNameEx java/nio/charset/IllegalCharsetNameException       java/nio/charset/IllegalCharsetNameException
/f494d94e_UnsupportedCharsetEx java/nio/charset/UnsupportedCharsetException       java/nio/charset/UnsupportedCharsetException
/3092d940_MissingResourceExcep java/util/MissingResourceException                 java/util/MissingResourceException

10 rows selected.

With the appearance of long(er) identifiers in 12.2, Oracle no longer needs to mess about with short names and a conversion function – it simply puts the “fully qualified” name into obj$. I doubt if this will catch anyone out for long, but it might be nice to know about in advance.

October 8, 2020

Direct Path

Filed under: Infrastructure,Oracle,Performance,Troubleshooting,Tuning,Upgrades — Jonathan Lewis @ 12:29 pm BST Oct 8,2020

This is a little addendum to a note I wrote a couple of days ago about serial direct path reads and KO (fast object checkpoint) enqueue waits.

The original note was prompted by a problem where someone had set the hidden parameter “_serial_direct_read” to ‘always’ because they were running 11g and wanted some “insert as select” statements to use direct path reads on the select portion – and 11g wasn’t co-operating.

Serial direct path reads were introduced as a possibility in (at least) the 8.1.7.4 timeline, but the parameter was set to false until 11gR2 where it changed to auto. (Legal values are: true, false, always, never, auto)

In 11.2, though, even though a simple select statement could use serial direct path reads for segment scans, Oracle would not use the mechanism for “insert as select”.

This note is just a little piece of code to demonstrate the point.  Run it on 11g and (unless your buffer cache is large enough to make the test table “small”) Oracle will use direct path reads on the pure select, but scattered reads for the insert as select. Upgrade to 12.1 and Oracle will use direct path reads on both.

rem
rem     Script:         serial_fail.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem

create table t1
as
select
        ao.*
from
        all_objects     ao,
        (select rownum from dual connect by level <= 16) mult
/

create table t2
as
select  *
from    t1
where   rownum = 0
/

alter system flush buffer_cache;

prompt  =============
prompt  Simple Select
prompt  =============

execute snap_events.start_snap
select * from t1 where object_id = 98765;
execute snap_events.end_snap

prompt  ================
prompt  Insert as select
prompt  ================

execute snap_events.start_snap
insert into t2
select * from t1 where object_id = 98765;
execute snap_events.end_snap

prompt  =====================
prompt  Insert as select with
prompt  _serial_direct=always
prompt  =====================

alter session set "_serial_direct_read"=always;

execute snap_events.start_snap
insert /* serial direct */ into t2
select * from t1 where object_id = 98765;
execute snap_events.end_snap

alter session set "_serial_direct_read"=auto;

The calls to the snap_events package are there to report the change in v$session_event for my session during the SQL.
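The package itself isn’t shown here; if you don’t have an equivalent you can approximate it by running a query like the following before and after each statement and comparing the two sets of results – a minimal sketch against v$session_event:

select  event, total_waits, time_waited
from    v$session_event
where   sid = sys_context('userenv','sid')
order by
        event
/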

You’ll notice I’ve included three main SQL statements rather than two – the third statement (2nd execution of the insert) is to demonstrate that it is possible to get direct path reads on the insert by setting the hidden parameter to ‘always’.

One detail to remember when testing this particular feature (and the same guideline applies to some other features): the “direct / not direct” choice becomes an attribute of the cursor, not an attribute of the execution plan. This is why I’ve added a comment to the 2nd insert; if I hadn’t done so Oracle would have reused the (identical text) cursor from the first insert, which would have resulted in scattered reads being used instead of direct path reads. This distinction between cursor and plan explains why there is no hint that will allow you to force direct path reads for a specific query (not even the infamous opt_param() hint).
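You can confirm that the comment really did produce a separate cursor by checking v$sql after the test – a quick sketch (the pattern in the like clause is just my guess at a suitable filter):

select  sql_id, child_number, sql_text
from    v$sql
where   sql_text like 'insert%into t2%'
order by
        sql_id, child_number
/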

Here are the three sets of results from a system running 11.2.0.4:

=============
Simple Select
=============

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                               1           0           0.10        .100           4
direct path read                                    114           0          20.86        .183           6
SQL*Net message to client                             4           0           0.00        .000           0
SQL*Net message from client                           4           0           0.11        .028     174,435

================
Insert as select
================

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                              22           0           0.60        .027           4
db file scattered read                              130           0          35.97        .277           5
SQL*Net message to client                             4           0           0.01        .002           0
SQL*Net message from client                           4           0           0.10        .025     174,435

=====================
Insert as select with
_serial_direct=always
=====================

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
direct path read                                    108           0          17.44        .161           6
SQL*Net message to client                             4           0           0.00        .000           0
SQL*Net message from client                           4           0           0.09        .022     174,435


Note the db file scattered read waits in the middle test. If you re-run the test on 12.1.0.x (or later) you’ll find that the middle set of results will change to direct path read waits.

For reference, this limitation is covered by MOS note 13250070.8: Bug 13250070 – Enh: Serial direct reads not working in DML. The actual bug note is not public.

Footnote (a couple of hours later):

A response from Roger MacNicol to my publication tweet has told us that the bug note says only that direct path reads had been restricted unnecessarily and the restriction has been removed.

September 30, 2020

Serial Bloom

Filed under: Execution plans,Oracle,Parallel Execution,Performance,Upgrades — Jonathan Lewis @ 1:35 pm BST Sep 30,2020

Following the recent note I wrote about an enhancement to the optimizer’s use of Bloom filters, I received a question by email asking about the use of Bloom filters in serial execution plans:

I’m having difficulty understanding the point of a Bloom filter when used in conjunction with a hash join where everything happens within the same process.

I believe you mentioned in your book (Cost Based Oracle) that hash joins have a mechanism similar to a Bloom filter where a row from the probe table is checked against a bitmap, where each hash table bucket is indicated by a single bit. (You have a picture on page 327 of the hash join and bitmap, etc).

The way that bitmap looks and operates appears to be similar to a Bloom filter to me…. So it looks (to me) like hash joins have a sort of “Bloom filter” already built into them.

My question is… What is the advantage of adding a Bloom filter to a hash join if you already have a type of Bloom filter mechanism thingy built in to hash joins?

I can understand where it would make sense with parallel queries having to pass data from one process to another, but if everything happens within the same process I’m just curious where the benefit is.

 

The picture on page 327 of CBO-F is a variation on an image showing the penultimate snapshot of the sequence of events in a multi-pass hash join. The key feature is the in-memory bitmap at the top of the image describing which buckets in the (partitioned and spilled) hash table hold rows from the build table. I believe that it is exactly this bitmap that is used as the Bloom filter.

The question of why it might be worth creating and using a Bloom filter in a simple serial hash join is really a question of scale. What is the marginal benefit of the Bloom filter when the basic hash join mechanism is doing all the hash arithmetic and comparing with a bitmap anyway?

If the hash join is running on an Exadata machine then the bitmap can be passed as a predicate to the cell servers and the hash function can be used at the cell server to minimise the volume of data that has to be passed back to the database server – with various optimisations dependent on the version of the Exadata software. Clearly minimising traffic through the interconnect is going to have some benefit.

Similarly, as the email suggests, for a parallel query where (typically) one set of parallel processes will read the probe table and distribute the data to the second set of parallel processes which then do the hash join, it’s clearly sensible to allow the first set of processes to apply the hash function and discard as many rows as possible before distributing the survivors – minimising inter-process communication.

In both these cases, of course, there’s a break-even point to consider: how effective does the Bloom filter need to be before it’s worth taking advantage of the technology? If the Bloom filter allows 99 rows out of every hundred to be passed to the database server / second set of parallel processes then Oracle has executed the hash function and checked the bitmap 100 times to avoid sending one row (and it may have to do the same hash function and bitmap check again to perform the hash join); on the other hand if the Bloom filter discards 99 rows and leaves only one row surviving then that’s a lot of traffic eliminated – and that’s likely to be a good thing. This is why there are a few hidden parameters defining the boundaries of when Bloom filters should be used – in particular there’s a parameter “_bloom_filter_ratio” which defaults to 35 and is, I suspect, a figure which says something like “use Bloom filtering only if it’s expected to reduce the probe data to 35% of the unfiltered size”.
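If you want to check the value on your own system, the usual x$ query for hidden parameters – run as SYS, and strictly for information only – looks something like this:

select  a.ksppinm        parameter_name,
        b.ksppstvl       session_value
from    x$ksppi  a,
        x$ksppcv b
where   a.indx = b.indx
and     a.ksppinm like '%bloom%'
order by
        a.ksppinm
/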

So the question then becomes: “how could you benefit from a serial Bloom filter when it’s the same process doing everything and there’s no “long distance” traffic going on between processes?” The answer is simply that we’re operating at a much smaller scale. I’ve written blog notes in the past where the performance of a query depends largely on the number of rows that are passed up a query plan before being eliminated (for example here, where the volume of data being moved accounts for a significant fraction of the total time).

If you consider a very simple hash join its plan is going to be shaped something like this:


-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    45 |   720 |    31 |
|*  1 |  HASH JOIN         |      |    45 |   720 |    31 |
|*  2 |   TABLE ACCESS FULL| T2   |    15 |   120 |    15 |
|   3 |   TABLE ACCESS FULL| T1   |  3000 | 24000 |    15 |
-----------------------------------------------------------

If you read Tanel Poder’s article on execution plans as a tree of Oracle function calls you’ll appreciate that you could translate this into informal English along the lines of:

  • Operation 1 calls a function (at operation 2) to do a tablescan of t1 and return all the relevant rows, building an in-memory hash table by applying a hashing function to the join column(s) of each row returned by the call to the tablescan. As the hash table is populated the operation also constructs a bitmap to flag buckets in the hash table that have been populated.
  • Operation 1 then calls a function (at operation 3) to start a tablescan and then makes repeated calls for it to return one row (or, in newer versions, a small rowset) at a time from table t2. For each row returned operation 1 applies the same hash function to the join column(s) and checks the bitmap to see if there’s a potential matching row in the relevant bucket of the hash table, and if there’s a potential match Oracle examines the actual contents of the bucket (which will be stored as a linked list) to see if there’s an actual match.

Taking the figures above, let’s imagine that Oracle is using a rowset size of 30 rows. Operation 1 will have to make 100 calls to Operation 3 to get all the data, and call the hashing function 3,000 times.  A key CPU component of the work done is that the function represented by operation 3 is called 100 times and (somehow) allocates and fills an array of 30 entries each time it is called.

Now assume operation 1 passes the bitmap to operation 3 as an input and it happens to be a perfect bitmap. Operation 3 starts its tablescan and will call the hash function 3,000 times, but at most 45 rows will get past the bitmap. So operation 1 will only have to call operation 3 twice.  Admittedly operation 1 will (possibly) call the hash function again for each row – but maybe operation 3 will supply the hash value in the return array. Clearly there’s scope here for a trade-off between the reduction in work due to the smaller number of calls and the extra work needed to take advantage of the bitmap technology.

Here’s an example that shows the potential for savings – if you want to recreate this test you’ll need about 800MB of free space in the database, the first table takes about 300MB and the second about 450MB.


rem
rem     Script:         bloom_filter_serial_02.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,30,'0')             v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7 -- > comment to avoid WordPress format issue
;

create table t2
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        round(rownum + 0.5,2)           id,
        mod(rownum,1e5)                 n1,
        lpad(rownum,10)                 v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7 -- > comment to avoid WordPress format issue
;


prompt  =================
prompt  With Bloom filter
prompt  =================

select 
        /*+ 
                px_join_filter(t1) 
                monitor
        */
        t1.v1, t2.v1
from 
        t2, t1
where 
        t2.n1 = 0
and 
        t1.id = t2.id
/

prompt  ===============
prompt  No Bloom filter
prompt  ===============

select 
        /*+
                monitor
        */
        t1.v1, t2.v1
from 
        t2, t1
where 
        t2.n1 = 0
and 
        t1.id = t2.id
/

I’ve created tables t1 and t2 with an id column that never quite matches, but the range of values is set so that the optimizer thinks the two tables might have a near-perfect 1 to 1 match. I’ve given t2 an extra column with 1e5 distinct values in its 1e7 rows, so it’s going to have 100 rows per distinct value. Then I’ve presented the optimizer with a query that looks as if it’s going to find 100 rows in t2 and needs to find a probable 100 rows of matches in t1. For my convenience, and to highlight a couple of details of Bloom filters, it’s not going to find any matches.
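If you want to verify the “no matches” claim the check is trivial – every t2.id ends in .5 while every t1.id is a whole number, so the following count should come back as zero:

select  count(*)
from    t1, t2
where   t1.id = t2.id
/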

In both runs I’ve enabled the SQL Monitor feature with the /*+ monitor */ hint, and in the first run I’ve also hinted the use of a Bloom filter. Here are the resulting SQL Monitor outputs. Bear in mind we’re looking at a reasonably large scale query – volume of input data – with a small result set.

First without the Bloom filter:


Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    3.00 |    2.24 |     0.77 |     1 |  96484 |  773 | 754MB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=2959412835)
==================================================================================================================================================
| Id |      Operation       | Name |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity | Activity Detail |
|    |                      |      | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |   (# samples)   |
==================================================================================================================================================
|  0 | SELECT STATEMENT     |      |         |       |         2 |     +2 |     1 |        0 |      |       |     . |          |                 |
|  1 |   HASH JOIN          |      |     100 | 14373 |         2 |     +2 |     1 |        0 |      |       |   2MB |          |                 |
|  2 |    TABLE ACCESS FULL | T2   |      99 |  5832 |         2 |     +1 |     1 |      100 |  310 | 301MB |     . |          |                 |
|  3 |    TABLE ACCESS FULL | T1   |     10M |  8140 |         2 |     +2 |     1 |      10M |  463 | 453MB |     . |          |                 |
==================================================================================================================================================

According to the Global Stats the query has taken 3 seconds to complete, of which 2.24 seconds is CPU. (The 750MB read in 0.77 seconds would be due to the fact that I’m running off SSD, and I’ve got a 1MB read size that helps.) A very large fraction of the CPU appears because of the number of calls from operation 1 to operation 3 (the projection information pulled from memory reports a rowset size of 256 rows, so that’s roughly 40,000 calls to the function).

When we force the use of a Bloom filter the plan doesn’t change much (though the creation and use of the Bloom filter has to be reported) – but the numbers do change quite significantly.

Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    1.97 |    0.99 |     0.98 |     1 |  96484 |  773 | 754MB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=4148581417)
======================================================================================================================================================
| Id |       Operation       |  Name   |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity | Activity Detail |
|    |                       |         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |   (# samples)   |
======================================================================================================================================================
|  0 | SELECT STATEMENT      |         |         |       |         1 |     +1 |     1 |        0 |      |       |     . |          |                 |
|  1 |   HASH JOIN           |         |     100 | 14373 |         1 |     +1 |     1 |        0 |      |       |   1MB |          |                 |
|  2 |    JOIN FILTER CREATE | :BF0000 |      99 |  5832 |         1 |     +1 |     1 |      100 |      |       |     . |          |                 |
|  3 |     TABLE ACCESS FULL | T2      |      99 |  5832 |         1 |     +1 |     1 |      100 |  310 | 301MB |     . |          |                 |
|  4 |    JOIN FILTER USE    | :BF0000 |     10M |  8140 |         1 |     +1 |     1 |    15102 |      |       |     . |          |                 |
|  5 |     TABLE ACCESS FULL | T1      |     10M |  8140 |         1 |     +1 |     1 |    15102 |  463 | 453MB |     . |          |                 |
======================================================================================================================================================


In this case, the elapsed time dropped to 1.97 seconds (depending on your viewpoint that’s either a drop of “only 1.03 seconds” or a drop of “an amazing 34.3%”), with the CPU time dropping from 2.24 seconds to 0.99 seconds (a 55.8% drop!).

In this case you’ll notice that the tablescan of t1 produced only 15,102 rows to pass up to the hash join at operation 1 thanks to the application of the predicate (not reported here): filter(SYS_OP_BLOOM_FILTER(:BF0000,”T1″.”ID”)). Instead of 40,000 calls for the next rowset the hash function has been applied during the tablescan and operation 5 has exhausted the tablescan after only about 60 calls. This is what has given us the (relatively) significant saving in CPU.

This example of the use of a Bloom filter highlights the two points I referred to earlier.

  • First, although we see operations 4 and 5 as Join (Bloom) filter use and Table access full respectively I don’t think the data from the tablescan is being “passed up” from operation 5 to 4; I believe operation 4 can be viewed as a “placeholder” in the plan to allow us to see the Bloom filter in action, with the hashing and filtering actually happening during the tablescan.
  • Secondly, we know that there are ultimately no rows in the result set, yet the application of the Bloom filter has not eliminated all the data. Remember that the bitmap that Oracle constructs of the hash table identifies used buckets, not actual values. Those 15,102 rows are rows that “might” find a match in the hash table because they belong in buckets that are flagged. A Bloom filter won’t discard any data that is needed, but it might fail to eliminate data that subsequently turns out to be unwanted.

How parallel is parallel anyway?

I’ll leave you with one other thought. Here’s an execution plan from 12c (12.2.0.1) which joins three dimension tables to a fact table. There are 343,000 rows in the fact table and the three joins individually identify about 4 percent of the data in the table. In a proper data warehouse we might have been looking at a bitmap star transformation solution for this query, but in a mixed system we might want to run warehouse queries against normalised data – this plan shows what Bloom filters can do to minimise the workload. The plan was acquired from memory after enabling rowsource execution statistics:

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |      1 |        |        |      |            |      1 |00:00:00.05 |      22 |      3 |       |       |          |
|   1 |  SORT AGGREGATE              |          |      1 |      1 |        |      |            |      1 |00:00:00.05 |      22 |      3 |       |       |          |
|   2 |   PX COORDINATOR             |          |      1 |        |        |      |            |      2 |00:00:00.05 |      22 |      3 | 73728 | 73728 |          |
|   3 |    PX SEND QC (RANDOM)       | :TQ10000 |      0 |      1 |  Q1,00 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   4 |     SORT AGGREGATE           |          |      2 |      1 |  Q1,00 | PCWP |            |      2 |00:00:00.09 |    6681 |   6036 |       |       |          |
|*  5 |      HASH JOIN               |          |      2 |     26 |  Q1,00 | PCWP |            |     27 |00:00:00.09 |    6681 |   6036 |  2171K|  2171K|     2/0/0|
|   6 |       JOIN FILTER CREATE     | :BF0000  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|*  7 |        TABLE ACCESS FULL     | T3       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|*  8 |       HASH JOIN              |          |      2 |    612 |  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6634 |   6026 |  2171K|  2171K|     2/0/0|
|   9 |        JOIN FILTER CREATE    | :BF0001  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 10 |         TABLE ACCESS FULL    | T2       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 11 |        HASH JOIN             |          |      2 |  14491 |  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6614 |   6022 |  2171K|  2171K|     2/0/0|
|  12 |         JOIN FILTER CREATE   | :BF0002  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 13 |          TABLE ACCESS FULL   | T1       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|  14 |         JOIN FILTER USE      | :BF0000  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  15 |          JOIN FILTER USE     | :BF0001  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  16 |           JOIN FILTER USE    | :BF0002  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  17 |            PX BLOCK ITERATOR |          |      2 |    343K|  Q1,00 | PCWC |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|* 18 |             TABLE ACCESS FULL| T4       |     48 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.05 |    6594 |   6018 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

It’s a parallel plan, but it’s used the 12c “PQ_REPLICATE” strategy. The optimizer has decided that all the dimension tables are so small that it’s going to allow every PX process to read every (dimension) table through the buffer cache and build its own hash tables from them. (In earlier versions you might have seen the query coordinator scanning and broadcasting the three small tables, or one set of PX processes scanning and broadcasting to the other set).

So every PX process has an in-memory hash table of all three dimension tables and then (operation 17) they start a tablescan of the fact table, picking non-overlapping rowid ranges to scan. But since they’ve each created three in-memory hash tables they’ve also been able to create three Bloom filters each, which can all be applied simultaneously as the tablescan takes place; so instead of 343,000 rows being passed up the plan and through the first hash join (where we see from operation 11 that the number of surviving rows would have been about 14,500) we see all but 27 rows discarded very early on in the processing. Like bitmap indexes, part of the power of Bloom filters lies in the fact that with the right plan the optimizer can combine them and identify a very small data set very precisely, very early.

The other thing I want you to realise about this plan, though, is that it’s not really an “extreme” parallel plan. It’s effectively running as a set of concurrent, non-interfering, serial plans. Since I was running (parallel 2) Oracle started just 2 PX processes: they both built three hash tables from the three dimension tables then split the fact table in half and took half each to do all the joins, and passed the nearly complete result to the query co-ordinator at the last moment. That’s as close as you can get to two serial, non-interfering, queries and still call it a parallel query. So, if you wonder why there might be any benefit in serial Bloom filters – Oracle has actually been benefiting from them under the covers for several years.

Summary

Bloom filters trade a decrease in messaging against an increase in preparation and hashing operations. For Exadata systems with predicate offloading it’s very easy to see the potential benefit; for parallel query execution, where inter-process messaging between two sets of PX processes can be resource-intensive, it’s also fairly easy to see the potential benefit; but even serial queries can see some benefit, though in absolute terms it is likely to be only a small saving in CPU.

 

September 9, 2020

Bloom Upgrade

Filed under: 18c,CBO,Joins,Oracle,Tuning,Upgrades — Jonathan Lewis @ 5:18 pm BST Sep 9,2020

It’s a common pattern of Oracle features that they start with various restrictions or limitations that disappear over time. This note is about an enhancement to Bloom filter processing that appeared in the 18.1 optimizer and, for some people, may be a good enough reason for upgrading to a newer version of Oracle. This enhancement came to my attention by way of the Oracle Developer forum in a thread with the title Bloom filters and view using UNION ALL asking how to get a Bloom filter pushed inside a UNION ALL view. The original requirement wasn’t a trivial one so I’ll demonstrate the problem with a very simple example – first the data set:

rem
rem     Script:         bloom_pushdown.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2020
rem
rem     Last tested:
rem             19.3.0.0
rem

create table t1 as select * from all_objects where rownum <= 50000;
create table t2 as select t1.* from t1, (select rownum n1 from dual connect by level <= 4);
create table t3 as select t1.* from t1, (select rownum n1 from dual connect by level <= 4); -- > comment to avoid wordpress format issue

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns object_type size 254'
        );
end;
/

I’ve been a bit lazy here, copying data from view all_objects. I’ve gathered stats on t1 so that I can generate a histogram on the object_type column because I’m going to query for a rare object_type and I want the optimizer to get a reasonable estimate of rows. I’m going to hint a parallel query to join t1 to t2 (aliased, trivially, as v1 for reasons that will become apparent soon):

select
        /*+ 
                parallel(2) 
        */
        t1.object_name, v1.object_name
from
        t1,
        t2 v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

In my case the optimizer chooses to do a hash join between these two tables, and creates a Bloom filter to try and minimise the data passing through the data flow operation. The result set in my 12.2.0.1 database is only 16 rows, so it would be nice if the parallel scan could eliminate most of the 200,000 rows in t2 early – here’s the execution plan pulled from memory after running the query with rowsource execution stats enabled:


----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |      1 |        |   371 (100)|     16 |00:00:00.06 |      20 |      0 |       |       |          |
|   1 |  PX COORDINATOR        |          |      1 |        |            |     16 |00:00:00.06 |      20 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)  | :TQ10000 |      0 |     16 |   371   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN           |          |      2 |     16 |   371   (5)|     16 |00:00:00.05 |    6278 |   3988 |  1250K|  1250K|     2/0/0|
|   4 |     JOIN FILTER CREATE | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|*  5 |      TABLE ACCESS FULL | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|   6 |     JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.04 |    4244 |   3988 |       |       |          |
|   7 |      PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.04 |    4244 |   3988 |       |       |          |
|*  8 |       TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|     16 |00:00:00.03 |    4244 |   3988 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
   8 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"V1"."OBJECT_ID"))

We see that Oracle has generated a Bloom filter at operation 4 from the data returned from t1 at operation 5, and then used that Bloom filter at operation 6 to eliminate most of the data from t2 before passing the remaining few rows up to the hash join.

Let’s make the query more interesting – what if you want to use a UNION ALL of t2 and t3 in the query (for example one might be “current data” while the other is “historic data”). Here’s the query and plan from 12.2.0.1:

select
        /*+ 
                parallel(2) 
        */
        t1.object_name, v1.object_name
from
        t1,
        (select * from t2 union all select * from t3) v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |      1 |        |   667 (100)|     32 |00:00:00.37 |      40 |      0 |       |       |          |
|   1 |  PX COORDINATOR         |          |      1 |        |            |     32 |00:00:00.37 |      40 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |      0 |     32 |   667   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN            |          |      1 |     32 |   667   (5)|     32 |00:00:00.34 |    5125 |   3860 |  1250K|  1250K|     2/0/0|
|*  4 |     TABLE ACCESS FULL   | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    2034 |      0 |       |       |          |
|   5 |     VIEW                |          |      2 |    400K|   584   (4)|    400K|00:00:00.52 |    8488 |   7976 |       |       |          |
|   6 |      UNION-ALL          |          |      2 |        |            |    400K|00:00:00.24 |    8488 |   7976 |       |       |          |
|   7 |       PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|    200K|00:00:00.11 |    4244 |   3988 |       |       |          |
|*  8 |        TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|    200K|00:00:00.07 |    4244 |   3988 |       |       |          |
|   9 |       PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|    200K|00:00:00.11 |    4244 |   3988 |       |       |          |
|* 10 |        TABLE ACCESS FULL| T3       |     32 |    200K|   292   (4)|    200K|00:00:00.03 |    4244 |   3988 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   4 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
   8 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
  10 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue

No Bloom filter – so all 400,000 rows feed up the plan and through the hash join. This won’t matter too much for my sub-second tiny data set but on a pair of 50GB tables, with the potential to offload the Bloom filter to storage in Exadata and, perhaps, eliminate 99% of the data at the cell servers, this could make a huge difference to performance.

Since Bloom filters are all about hashing data (in Oracle the standard Bloom filter is the bitmap summarising the build table in a hash join) let’s try pushing the optimizer into a hash distribution for the parallel join to see if that has any effect:


select
        /*+ 
                parallel(2) 
                gather_plan_statistics
                leading(@sel$1 t1@sel$1 v1@sel$1)
                use_hash(@sel$1 v1@sel$1)
                pq_distribute(@sel$1 v1@sel$1 hash hash)
        */
        t1.object_name, v1.object_name
from
        t1,
        (select * from t2 union all select * from t3) v1
where
        t1.object_type = 'SCHEDULE'
and     v1.object_id = t1.object_id
/

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |      1 |        |   667 (100)|     32 |00:00:00.43 |      60 |      0 |       |       |          |
|   1 |  PX COORDINATOR             |          |      1 |        |            |     32 |00:00:00.43 |      60 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)       | :TQ10002 |      0 |     32 |   667   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN BUFFERED       |          |      1 |     32 |   667   (5)|     32 |00:00:00.38 |    4000 |   3752 |  2290K|  2082K|     2/0/0|
|   4 |     JOIN FILTER CREATE      | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX RECEIVE             |          |      2 |      4 |    75   (4)|      8 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       PX SEND HYBRID HASH   | :TQ10000 |      0 |      4 |    75   (4)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   7 |        STATISTICS COLLECTOR |          |      2 |        |            |      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|   8 |         PX BLOCK ITERATOR   |          |      2 |      4 |    75   (4)|      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|*  9 |          TABLE ACCESS FULL  | T1       |     26 |      4 |    75   (4)|      4 |00:00:00.01 |    1517 |      0 |       |       |          |
|  10 |     PX RECEIVE              |          |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  11 |      PX SEND HYBRID HASH    | :TQ10001 |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  12 |       JOIN FILTER USE       | :BF0000  |      2 |    400K|   584   (4)|     66 |00:00:00.77 |    8488 |   7976 |       |       |          |
|  13 |        VIEW                 |          |      2 |    400K|   584   (4)|    400K|00:00:00.68 |    8488 |   7976 |       |       |          |
|  14 |         UNION-ALL           |          |      2 |        |            |    400K|00:00:00.59 |    8488 |   7976 |       |       |          |
|  15 |          PX BLOCK ITERATOR  |          |      2 |    200K|   292   (4)|    200K|00:00:00.18 |    4244 |   3988 |       |       |          |
|* 16 |           TABLE ACCESS FULL | T2       |     32 |    200K|   292   (4)|    200K|00:00:00.06 |    4244 |   3988 |       |       |          |
|  17 |          PX BLOCK ITERATOR  |          |      2 |    200K|   292   (4)|    200K|00:00:00.12 |    4244 |   3988 |       |       |          |
|* 18 |           TABLE ACCESS FULL | T3       |     32 |    200K|   292   (4)|    200K|00:00:00.08 |    4244 |   3988 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   9 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue 
       filter("T1"."OBJECT_TYPE"='SCHEDULE') 
  16 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue
  18 - access(:Z>=:Z AND :Z<=:Z)   -- > edit to avoid wordpress format issue

We’ve managed to introduce a Bloom filter (which is visible as :BF0000 in the plan, even though there’s no reference to sys_op_bloom_filter() in the predicate information) but there’s a problem: we’re still passing 400,000 rows up the plan, and the Bloom filter is only being applied at (or just after) the VIEW operator, discarding all but 66 rows before doing the hash join. It’s an improvement but not ideal; we’d like to see the Bloom filter applied to each of the two tables separately to eliminate rows as early as possible.

This can’t be done in 12.2; you’d have to rewrite the query, changing a “join with union” into a “union of joins” – and that’s not really a desirable strategy.
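
For reference, here’s a sketch of the “union of joins” shape – the join and filter predicates are taken from the plan above, but the select list is an assumption since it isn’t dictated by the predicates:

select  t1.object_id
from    t1, t2
where   t1.object_type = 'SCHEDULE'
and     t2.object_id   = t1.object_id
union all
select  t1.object_id
from    t1, t3
where   t1.object_type = 'SCHEDULE'
and     t3.object_id   = t1.object_id
;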

Next Steps

Searching MOS, though, you will be able to find the following note:

Doc ID 18849313.8 – ENH : bloom filters/pruning are pushed through union-all view

There’s an enhancement request to do what we want, and the enhancement got into the software in 18.1. Here’s the (unhinted) plan from 19.3 (the plan stays the same when optimizer_features_enable is set back to 18.1.0, but drops back to the 12.2 plan when OFE is set to 12.2.0.1):

------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |   666 (100)|     32 |00:00:00.11 |      10 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |            |     32 |00:00:00.11 |      10 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)    | :TQ10000 |      0 |     32 |   666   (5)|      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  3 |    HASH JOIN             |          |      2 |     32 |   666   (5)|     32 |00:00:00.05 |   10020 |   7958 |  1250K|  1250K|     2/0/0|
|   4 |     JOIN FILTER CREATE   | :BF0000  |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    1998 |      0 |       |       |          |
|*  5 |      TABLE ACCESS FULL   | T1       |      2 |      4 |    75   (4)|      8 |00:00:00.01 |    1998 |      0 |       |       |          |
|   6 |     VIEW                 |          |      2 |    400K|   583   (4)|     32 |00:00:00.04 |    8022 |   7958 |       |       |          |
|   7 |      UNION-ALL           |          |      1 |        |            |     12 |00:00:00.02 |    4011 |   3979 |       |       |          |
|   8 |       JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|   9 |        PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|* 10 |         TABLE ACCESS FULL| T2       |     32 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|  11 |       JOIN FILTER USE    | :BF0000  |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|  12 |        PX BLOCK ITERATOR |          |      2 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
|* 13 |         TABLE ACCESS FULL| T3       |     32 |    200K|   292   (4)|     16 |00:00:00.02 |    4011 |   3979 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - filter("T1"."OBJECT_TYPE"='SCHEDULE')
  10 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T2"."OBJECT_ID")) 
  13 - access(:Z>=:Z AND :Z<=:Z)      -- > edit to avoid wordpress format issue
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T3"."OBJECT_ID"))

As you can see, we create a Bloom filter at operation 4, and use it twice at operations 8 and 11 – with the sys_op_bloom_filter() functions clearly visible in the predicate information showing us that the Bloom filter is applied to the object_id column in both cases.

If you want to disable this enhancement for some reason there are two hidden parameters available (which you might set for a single query using the opt_param() hint). Both default to true, so you would set the relevant one to false:

  • _bloom_filter_setops_enabled = true
  • _bloom_pruning_setops_enabled = true

The first covers Bloom filters in the situation shown; I assume the second deals with Bloom filters for partition pruning.
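
For example, here’s a minimal sketch of the single-query approach – the opt_param() syntax is standard, but the query shape is an assumption based on the predicates shown earlier, and the parameter is hidden so you should check with Oracle Support before relying on it:

select  /*+ opt_param('_bloom_filter_setops_enabled' 'false') */
        t1.object_id
from    t1, v1
where   t1.object_type = 'SCHEDULE'     -- filter predicate from the plan
and     v1.object_id   = t1.object_id   -- join predicate from the plan
;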

Summary

In versions prior to 18.1 the optimizer is unable to push Bloom filters down to the individual tables in a UNION ALL view, but this limitation was removed in the 18.1 code set.

 

August 6, 2020

Case and Aggregate bug

Filed under: 12c,Bugs,Oracle,Troubleshooting,Upgrades — Jonathan Lewis @ 12:43 pm BST Aug 6,2020

[Fixed – but currently needs a patch for 19c]

The following description of a bug appeared on the Oracle Developer Community forum a little while ago – on an upgrade from 12c to 19c a query started producing the wrong results on a simple call to the avg() function. In fact it turned out to be a bug introduced in 12.2.0.1.

The owner of the thread posted a couple of zip files to build a test case – but I had to make a couple of edits, and change the nls_numeric_characters to ‘,.’ in order to get past a formatting error on a call to the to_timestamp() function. I’ve stripped the example to a minimum, and translated the column names from German (which was presumably the source of the nls_numeric_characters issue) to make it easier to demonstrate and play with the bug.

First the basic data – you’ll notice that I’ve tested this on 12.1.0.2, 12.2.0.1 and 19.3.0.0 to find out when the bug appeared:

rem
rem     Script:         case_aggregate_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table test(
        case_col        varchar2(11),
        duration        number(*,0),
        quarter         varchar2(6),
        q2h_knum_b      varchar2(10)
   )
/

insert into test values('OK',22,'1.2020','AB1234');
insert into test values('OK',39,'1.2020','AB1234');
insert into test values('OK',30,'1.2020','AB1234');
insert into test values('OK',48,'1.2020','AB1234');
commit;

execute dbms_stats.gather_table_stats(user,'test')

create or replace force view v_test
as
select
        q2h_knum_b,
        case
                when b.case_col not like 'err%'
                        then b.duration
        end     duration,
        case
                when b.case_col not like 'err%'
                        then 1
                        else 0
        end     status_ok
from
        test b
where
        substr(b.quarter, -4) = 2020
;

break on report
compute avg of duration on report
select * from v_test;

---------------------------------------------

Q2H_KNUM_B   DURATION  STATUS_OK
---------- ---------- ----------
AB1234             22          1
AB1234             39          1
AB1234             30          1
AB1234             48          1
           ----------
avg             34.75

I’ve created a table, loaded some data, gathered stats, then created a view over the table. The view includes a couple of columns that use a simple case expression, and both expressions are based in the same way on the same base column (this may, or may not, be significant in what’s coming). I’ve then run off a simple query with a couple of SQL*Plus commands to report the actual content of the view with the average of the duration column – which is 34.75.

So now we run a couple of queries against the view which aggregate the data down to a single row – including the avg() of the duration – using the coalesce() function – rather than the older nvl() function – to convert any nulls to zero.


select
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg,
        coalesce(sum(status_ok), 0)     ok_count
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

DURATION_COUNT DURATION_MED DURATION_AVG   OK_COUNT
-------------- ------------ ------------ ----------
             4         34.5            0          4

You’ll notice that the duration_avg is reported as zero (it would be the same if I used nvl(), and would be null if I omitted the coalesce()). This is clearly incorrect. This was the output from 19.3; 12.2 gives the same result, but 12.1.0.2 reports the average correctly as 34.75.

There are several ways in which you can modify this query to get the right average – here’s one: just put the ok_count column first in the select list:


select
        coalesce(sum(status_ok), 0)     ok_count,
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

  OK_COUNT DURATION_COUNT DURATION_MED DURATION_AVG
---------- -------------- ------------ ------------
         4              4         34.5        34.75

There’s no obvious reason why the error should occur, but there’s a little hint about what may be happening in the Column Projection Information from the execution plan. The basic plan is the same in both cases, so I’ll show it only once; it’s followed by two versions of the projection information (restricted to operation 1), which I’ve reformatted to improve readability:

Plan hash value: 2603667166

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     2 (100)|          |
|   1 |  SORT GROUP BY     |      |     1 |    20 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST |     1 |    20 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter((INSTR('AB1234',"Q2H_KNUM_B")>0 AND
              TO_NUMBER(SUBSTR("B"."QUARTER",(-4)))=2020))

Column Projection Information (Operation 1 only):  (Wrong result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22],
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22]

Column Projection Information (Operation 1 only):  (Right result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22]

As you can see, to report avg() Oracle has projected sum() and count().
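
In other words the reported avg() isn’t projected directly; in effect it’s derived at the top of the plan as sum()/count() over the same case expression – roughly:

select
        sum(case when case_col not like 'err%' then duration end) /
        count(case when case_col not like 'err%' then duration end)    duration_avg
from
        test
;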

When we get the right result the sum() for duration appears immediately after the count().

When we get the wrong result the sum() for ok_count comes between the count() and sum() for duration.

This makes me wonder whether Oracle is somehow just losing track of the sum() for duration and therefore dividing null by the count().

This is purely conjecture, of course, and may simply be a coincidence – particularly since 12.1.0.2 gets the right result and shows exactly the same projection information.

Readers are left to experiment with other variations to see if they can spot other suggestive patterns.
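
As a starting point, here’s one more permutation of the same select list – simply moving the avg() to the front (results left for you to check):

select
        coalesce(avg(duration), 0)      duration_avg,
        coalesce(median(duration), 0)   duration_med,
        coalesce(count(duration), 0)    duration_count,
        coalesce(sum(status_ok), 0)     ok_count
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/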

Update (Aug 2020)

This is now logged as Bug 31732779 – WRONG RESULT WITH CASE STATEMENT AGGREGATION, though it’s not yet publicly visible.

Update (May 2021)

The script produces the correct result in 19.11.0.0. 

Update (Aug 2022)

I’ve just had a follow-up on a continuation forum thread telling me that their original example still produces the wrong result even on 19.14.0.0 on LiveSQL, and I’ve just re-run my script on 19.11.0.0 and got the wrong results. I don’t know what I did to fool myself into thinking it had produced the right results in May 2021 (except that I ran it again the same afternoon after restarting the instance and started getting the right results!)

The bug, however, is part of a more general issue and has been flagged as a duplicate of (unpublished) Bug 31567719 – WRONG RESULT ISSUE WITH AVG AND COUNT AGGREGATES ON EXPRESSIONS, which is marked as fixed in 21.1, with patches already available for some versions of 19 (up to 19.10 at present).

A MOS search for 31567719 will report the list of currently available patches under the “Recommended links” heading.

July 13, 2020

Min/Max costing

Filed under: Oracle,Performance,subqueries,Tuning,Upgrades — Jonathan Lewis @ 1:07 pm BST Jul 13,2020

A question about the min/max index scan appeared on the Oracle Developer Community forum recently. The query supplied in the thread was a little odd – you might ask why anyone would run it as it stands – and I’ve modified it to make it even stranger to demonstrate a range of details.

I’ll start with a simple data set, not bothering to collect stats because that will be done automatically on create for my versions:

rem
rem     Script:         min_max_cost_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 
as 
select  * 
from    all_objects 
where   rownum <= 50000 -- > comment to avoid wordpress format issue
;

create index t1_i1 on t1(object_name);

Now a few simple queries – for which I’ll capture and display the in-memory execution plans a little further on:


set linesize 156
set pagesize 60
set trimspool on
set serveroutput off
alter session set statistics_level = all;

prompt  =====================
prompt  Baseline select max()
prompt  =====================

select max(object_name) from t1;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ============================
prompt  select max() with dummy join
prompt  ============================

select max(object_name) from t1, dual where dummy is not null;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  =============================================
prompt  select max() with dummy join and index() hint
prompt  =============================================

select /*+ index(t1) */  max(object_name) from t1, dual where dummy is not null;
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ============================================
prompt  select max() with dummy join and inline view
prompt  ============================================

select  obj
from    (
        select  max(object_name)  obj
        from    t1
        ),
        dual 
where   dummy is not null
/

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

prompt  ====================================
prompt  select max() with existence subquery
prompt  ====================================

select max(object_name) from t1 where exists (select null from dual where dummy is not null);
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

prompt  ============================================
prompt  select max() with failing existence subquery
prompt  ============================================

select max(object_name) from t1 where exists (select null from dual where dummy is null);
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last alias'));

With 50,000 rows and the appropriate index to allow Oracle to find the maximum value very quickly we expect the optimizer to invoke the “index full scan (min/max)” operation, visiting only the extreme leaf block of the index – and, indeed, we are not disappointed: that’s exactly what the baseline query shows us:

=====================
Baseline select max()
=====================
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       3 |
|   1 |  SORT AGGREGATE            |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|   2 |   INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
-----------------------------------------------------------------------------------------------------------

However, when we introduce the (as yet unjustified) join to dual something very different happens – the optimizer forgets all about the min/max optimisation and does an index fast full scan of the t1_i1 index, passing all 50,000 rows up to the parent operation.


============================
select max() with dummy join
============================
-------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |      1 |        |    50 (100)|      1 |00:00:00.02 |     360 |
|   1 |  SORT AGGREGATE        |       |      1 |      1 |            |      1 |00:00:00.02 |     360 |
|   2 |   NESTED LOOPS         |       |      1 |  50000 |    50   (6)|  50000 |00:00:00.01 |     360 |
|*  3 |    TABLE ACCESS FULL   | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   4 |    INDEX FAST FULL SCAN| T1_I1 |      1 |  50000 |    48   (7)|  50000 |00:00:00.01 |     357 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("DUMMY" IS NOT NULL)

We could, of course, try hinting an index range (full) scan to see what happens – and the result is even more surprising: Oracle takes the hint, uses the min/max optimisation, and shows us that it didn’t take that path by default because it had “forgotten” how to cost it correctly.

Note the cost of 354 at operation 5 when the original min/max cost was 3; note also that the optimizer thinks we have to visit all 50,000 index entries even though, at run-time, Oracle correctly uses a path that visits only one index entry:


=============================================
select max() with dummy join and index() hint
=============================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |   356 (100)|      1 |00:00:00.01 |       6 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       6 |
|   2 |   NESTED LOOPS               |       |      1 |  50000 |   356   (2)|      1 |00:00:00.01 |       6 |
|*  3 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   4 |    FIRST ROW                 |       |      1 |  50000 |   354   (2)|      1 |00:00:00.01 |       3 |
|   5 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |  50000 |   354   (2)|      1 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("DUMMY" IS NOT NULL)

Of course we could recognise that the t1 access and the access to dual could be de-coupled – and hope that the optimizer doesn’t use complex view merging to turn our inline view back into the simple join (maybe we should have included a /*+ no_merge */ hint – see the sketch after the plan). Fortunately the optimizer doesn’t try merging the two query blocks, so it optimises the max(object_name) query block correctly, giving us the benefit of the min/max optimisation. I’ve included the ‘alias’ format option in this call to dbms_xplan() so that we can see the two query blocks that are optimised separately.


============================================
select max() with dummy join and inline view
============================================

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       6 |
|   1 |  NESTED LOOPS                |       |      1 |      1 |     5   (0)|      1 |00:00:00.01 |       6 |
|*  2 |   TABLE ACCESS FULL          | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 |
|   3 |   VIEW                       |       |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|   4 |    SORT AGGREGATE            |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|   5 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$1 / DUAL@SEL$1
   3 - SEL$2 / from$_subquery$_001@SEL$1
   4 - SEL$2
   5 - SEL$2 / T1@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DUMMY" IS NOT NULL)
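
Since the threat here is complex view merging, you might write the inline-view version defensively. Here’s a sketch with a no_merge hint inside the view’s query block – unnecessary in my tests, but harmless:

select  obj
from    (
        select  /*+ no_merge */
                max(object_name)  obj
        from    t1
        ),
        dual
where   dummy is not null
/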

There is a maxim (or guideline, or rule of thumb) that if the from clause of a query includes tables that don’t get referenced in the select list then those tables should (probably) appear in subqueries. Of course this guideline sometimes turns out to be a very bad idea, and sometimes it just means the optimizer unnests the subqueries and recreates the joins we started with, but let’s try the approach with this query. I’ve included the ‘alias’ option again so that you can see that this plan is optimised as two query blocks, allowing the max(object_name) query block to find the min/max strategy.


====================================
select max() with existence subquery
====================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       6 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       6 |
|*  2 |   FILTER                     |       |      1 |        |            |      1 |00:00:00.01 |       6 |
|   3 |    FIRST ROW                 |       |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|   4 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      1 |      1 |     3   (0)|      1 |00:00:00.01 |       3 |
|*  5 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       3 | 
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$2 / DUAL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   5 - filter("DUMMY" IS NOT NULL)

There’s a very important detail in the execution plan above. At first sight it looks like the optimizer has a plan using a simple filter subquery operation – which means you might be fooled into reading it as “for each row returned by operation 3 call operation 5”. This is not the case.

Because the subquery is not a correlated subquery – it’s an example that I sometimes call a “fixed” or (slightly ambiguously) “constant” subquery – Oracle can execute it once and use the resulting rowsource to decide whether or not to call the main query. It’s a case where (if you didn’t realise the plan consisted of two separate query blocks) you would say that Oracle was calling the second child first.

To prove this point I’ve set up one last variation of the query – the “failed subquery” version – where my select from dual returns no rows. Check the numbers of Starts shown for each line of the plan:


============================================
select max() with failing existence subquery
============================================
-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     5 (100)|      1 |00:00:00.01 |       3 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |            |      1 |00:00:00.01 |       3 |
|*  2 |   FILTER                     |       |      1 |        |            |      0 |00:00:00.01 |       3 |
|   3 |    FIRST ROW                 |       |      0 |      1 |     3   (0)|      0 |00:00:00.01 |       0 |
|   4 |     INDEX FULL SCAN (MIN/MAX)| T1_I1 |      0 |      1 |     3   (0)|      0 |00:00:00.01 |       0 |
|*  5 |    TABLE ACCESS FULL         | DUAL  |      1 |      1 |     2   (0)|      0 |00:00:00.01 |       3 |
-------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T1@SEL$1
   5 - SEL$2 / DUAL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   5 - filter("DUMMY" IS NULL)

The filter at operation 3 calls operation 5 – the query against dual – which runs once returning no rows. The min/max scan of t1_i1 at operation 4 doesn’t run. Operation 5 was called before operation 4 was considered.

Finally

This brings us back to the question – why would anyone run a strange query like this?

Perhaps the answer is that it’s just a demonstration of one part of a more complex query and what we’re trying to do is say: “if a certain record exists in a control table then include some information from table X”.

This note tells us that if there’s a possibility of a min/max optimisation to find the data then we should avoid using a join; instead we should use a “fixed subquery” to check the control table, and maybe we’ll also have to write the part of our query that collects (or isn’t required to collect) the interesting bit of data as an inline view – as sketched below.
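
Putting those two ideas together with the demo objects, the suggested shape might look like the following sketch (I haven’t shown a plan for this exact variant, so treat it as a starting point rather than a guarantee):

select  obj
from    (
        select  max(object_name)  obj
        from    t1
        )
where   exists (
                select  null
                from    dual
                where   dummy is not null
        )
/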

 
