Oracle Scratchpad

October 15, 2021

use_nl redux

Filed under: CBO,Execution plans,Hints,Ignoring Hints,Oracle — Jonathan Lewis @ 2:58 pm BST Oct 15,2021

A question has just appeared on a note I wrote in 2012 about the incorrect use of the use_nl() hint in some sys-recursive SQL, linking forward to an explanation I wrote in 2017 of the use_nl() hint – particularly the interpretation of the form use_nl(a,b), which does not mean “use a nested loop from table A to table B”.

The question essentially turns into: “does Oracle pick the join order before it looks at the hints?”

I’m going to look at one of the queries (based on the 2017 table creation code) that was supplied in the question and explain how Oracle gets to the plan it uses on my 21.3 system; here’s the query, followed by the plan:

select
        /*+ use_nl(b) */
        a.v1, b.v1, c.v1, d.v1
from
        a, b, c, d
where
        d.n100 = 0
and     a.n100 = d.id
and     b.n100 = a.n2
and     c.id   = a.id
/


| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      | 20000 |  1347K|   105   (5)| 00:00:01 |
|*  1 |  HASH JOIN           |      | 20000 |  1347K|   105   (5)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | C    | 10000 |   146K|    26   (4)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 20000 |  1054K|    78   (4)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL | D    |   100 |  1800 |    26   (4)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 20000 |   703K|    52   (4)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| B    | 10000 |   136K|    26   (4)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| A    | 10000 |   214K|    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$1" "D"@"SEL$1")
      USE_HASH(@"SEL$1" "C"@"SEL$1")
      USE_HASH(@"SEL$1" "D"@"SEL$1")
      USE_HASH(@"SEL$1" "A"@"SEL$1")
      LEADING(@"SEL$1" "B"@"SEL$1" "A"@"SEL$1" "D"@"SEL$1" "C"@"SEL$1")
      FULL(@"SEL$1" "C"@"SEL$1")
      FULL(@"SEL$1" "D"@"SEL$1")
      FULL(@"SEL$1" "A"@"SEL$1")
      FULL(@"SEL$1" "B"@"SEL$1")
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      DB_VERSION('21.1.0')
      OPTIMIZER_FEATURES_ENABLE('21.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("C"."ID"="A"."ID")
   3 - access("A"."N100"="D"."ID")
   4 - filter("D"."N100"=0)
   5 - access("B"."N100"="A"."N2")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   6 -  SEL$1 / "B"@"SEL$1"
         U -  use_nl(b)

Note
-----
   - this is an adaptive plan

Points to note:

  • The Hint Report says the final plan did not use the use_nl(b) hint.
  • Whatever you may think the join order is by looking at the body of the plan, the leading() hint in the Outline Data tells us that the join order was (B A D C) – and that explains why the use_nl(b) hint could not be used: B was never “the next table in the join order”.
  • The “visible” order of activity displayed in the plan is C D B A, but that’s because swap_join_inputs(D) puts D above the (B,A) join, then swap_join_inputs(C) puts C above that.

So did Oracle completely pre-empt any plans that allowed B to be “the next table”, thus avoiding the hint; or did it consider some plans where B wasn’t the first table in the join order, and if so, would it have used a nested loop into B if such a plan had had a low enough cost?

The only way to answer these questions is to look at the CBO (10053) trace file; and for very simple queries it’s often enough to pick out a few lines as a starting point.
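
If you want to reproduce the trace, here’s a minimal sketch (the tracefile_identifier is just a convenience I’ve added to make the file easy to find; event 10053 fires when the statement is optimized, which “explain plan” always forces):

alter session set tracefile_identifier = 'cbo_trace';
alter session set events '10053 trace name context forever, level 1';

explain plan for
select
        /*+ use_nl(b) */
        a.v1, b.v1, c.v1, d.v1
from
        a, b, c, d
where
        d.n100 = 0
and     a.n100 = d.id
and     b.n100 = a.n2
and     c.id   = a.id;

alter session set events '10053 trace name context off';

In my case it was enough to pick out the key lines using egrep: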

egrep -e "^Join order" -e"Best so far" or21_ora_15956.trc

Join order[1]:  D[D]#0  A[A]#1  B[B]#2  C[C]#3
Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
Join order[2]:  D[D]#0  A[A]#1  C[C]#3  B[B]#2
Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
Join order[3]:  D[D]#0  B[B]#2  A[A]#1  C[C]#3
Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
Join order[4]:  D[D]#0  B[B]#2  C[C]#3  A[A]#1
Join order aborted2: cost > best plan cost
Join order[5]:  D[D]#0  C[C]#3  A[A]#1  B[B]#2
Join order aborted2: cost > best plan cost
Join order[6]:  D[D]#0  C[C]#3  B[B]#2  A[A]#1
Join order aborted2: cost > best plan cost

Join order[7]:  A[A]#1  D[D]#0  B[B]#2  C[C]#3
Join order aborted2: cost > best plan cost
Join order[8]:  A[A]#1  D[D]#0  C[C]#3  B[B]#2
Join order aborted2: cost > best plan cost
Join order[9]:  A[A]#1  B[B]#2  D[D]#0  C[C]#3
Join order aborted2: cost > best plan cost
Join order[10]:  A[A]#1  C[C]#3  D[D]#0  B[B]#2
Join order aborted2: cost > best plan cost
Join order[11]:  A[A]#1  C[C]#3  B[B]#2  D[D]#0
Join order aborted2: cost > best plan cost

Join order[12]:  B[B]#2  D[D]#0  A[A]#1  C[C]#3
Join order aborted2: cost > best plan cost
Join order[13]:  B[B]#2  A[A]#1  D[D]#0  C[C]#3
Best so far:  Table#: 2  cost: 25.692039  card: 10000.000000  bytes: 140000.000000
Join order[14]:  B[B]#2  A[A]#1  C[C]#3  D[D]#0
Join order aborted2: cost > best plan cost
Join order[15]:  B[B]#2  C[C]#3  D[D]#0  A[A]#1
Join order aborted2: cost > best plan cost

Join order[16]:  C[C]#3  D[D]#0  A[A]#1  B[B]#2
Join order aborted2: cost > best plan cost
Join order[17]:  C[C]#3  A[A]#1  D[D]#0  B[B]#2
Join order aborted2: cost > best plan cost
Join order[18]:  C[C]#3  A[A]#1  B[B]#2  D[D]#0
Join order aborted2: cost > best plan cost
Join order[19]:  C[C]#3  B[B]#2  D[D]#0  A[A]#1
Join order aborted2: cost > best plan cost

Oracle has considered 19 possible join orders (out of a maximum of 24 = 4!). In theory we should see 6 plans starting with each of the 4 tables. In fact we see that the optimizer’s first choice started with table D, producing 6 join orders, then switched to starting with table A, producing only 5 join orders.

The “missing” order is (A, B, C, D), which should have appeared between join orders 9 and 10. If we check the trace file in more detail we’ll see that the optimizer aborted after calculating the join from A to B because the cost had already exceeded the “Best so far” by then, so it didn’t carry on to calculate the cost of going on to D. Clearly, then, there was no point in considering any other order that started with (A, B), hence the absence of (A, B, C, D).

I’ve highlighted all the join orders where the optimizer didn’t abort. The “Best so far” line that I have reported (for ease of searching and reporting) is misleading – it’s only the cost of the first table in the join order; this is what the 4 non-aborted summaries look like:

egrep -A+3 -e"Best so far" or21_ora_15956.trc

Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
              Table#: 1  cost: 51.767478  card: 10000.000000  bytes: 400000.000000
              Table#: 2  cost: 30137.036118  card: 20000.000000  bytes: 1080000.000000
              Table#: 3  cost: 30163.548157  card: 20000.000000  bytes: 1380000.000000
--
Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
              Table#: 1  cost: 51.767478  card: 10000.000000  bytes: 400000.000000
              Table#: 3  cost: 78.079517  card: 10000.000000  bytes: 550000.000000
              Table#: 2  cost: 30163.348157  card: 20000.000000  bytes: 1380000.000000
--
Best so far:  Table#: 0  cost: 25.752439  card: 100.000000  bytes: 1800.000000
              Table#: 2  cost: 2483.956340  card: 1000000.000000  bytes: 32000000.000000
              Table#: 1  cost: 2530.068379  card: 20000.000000  bytes: 1080000.000000
              Table#: 3  cost: 2556.580418  card: 20000.000000  bytes: 1380000.000000
--
Best so far:  Table#: 2  cost: 25.692039  card: 10000.000000  bytes: 140000.000000
              Table#: 1  cost: 52.204078  card: 20000.000000  bytes: 720000.000000
              Table#: 0  cost: 78.479517  card: 20000.000000  bytes: 1080000.000000
              Table#: 3  cost: 104.991556  card: 20000.000000  bytes: 1380000.000000

As you can see, when we start with (B A) the estimated cost drops dramatically.

Now that we’ve seen that Oracle looks at many plans (though not a completely exhaustive set) on the way to the one it picks, the thing we need to do (in theory) is check that for every single calculation where B is “the next table” Oracle obeys our hint. Each time the optimizer joins “the next” table it usually considers the cost of a Nested Loop, a Sort Merge, and a Hash Join, in that order; if the optimizer is obeying the hint it will only consider the nested loop join. Here’s a suitable call to egrep, with the output for the first four join orders:

egrep -e "^Join order" -e "^Now joining" -e"^NL Join" -e"^SM Join" -e"^HA Join" or21_ora_15956.trc

Join order[1]:  D[D]#0  A[A]#1  B[B]#2  C[C]#3
Now joining: A[A]#1
NL Join
SM Join
SM Join (with index on outer)
HA Join
Now joining: B[B]#2
NL Join
Now joining: C[C]#3
NL Join
SM Join
HA Join

Join order[2]:  D[D]#0  A[A]#1  C[C]#3  B[B]#2
Now joining: C[C]#3
NL Join
SM Join
HA Join
Now joining: B[B]#2
NL Join

Join order[3]:  D[D]#0  B[B]#2  A[A]#1  C[C]#3
Now joining: B[B]#2
NL Join
Now joining: A[A]#1
NL Join
SM Join
HA Join
Now joining: C[C]#3
NL Join
SM Join
HA Join

Join order[4]:  D[D]#0  B[B]#2  C[C]#3  A[A]#1
Now joining: C[C]#3
NL Join
Join order aborted2: cost > best plan cost


As you can see, the only join considered when “Now joining” B is a nested loop join; for all other tables the three possible joins (and sometimes two variants of the Sort Merge join) are evaluated.

You may also notice another of the clever strategies the optimizer uses to minimise its workload. On the second join order the optimizer goes straight to “Now joining C” because it has remembered the result of joining A from the previous join order.

This is only a very simple example and analysis, but I hope it’s given you some idea of how the optimizer works, and how clever it tries to be about minimising the work; and how it can obey a hint while still producing an execution plan that appears to have ignored the hint.
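
As a footnote (my illustration, not part of the original test): if you actually wanted a nested loop into B you would have to dictate a suitable join order as well as the join method, e.g.:

select
        /*+ leading(a b) use_nl(b) */
        a.v1, b.v1, c.v1, d.v1
from
        a, b, c, d
where
        d.n100 = 0
and     a.n100 = d.id
and     b.n100 = a.n2
and     c.id   = a.id
/

With the join order pinned so that B really is “the next table” after A, the use_nl(b) hint can be (and has to be) obeyed.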

October 11, 2021

Adaptive Study

Filed under: Execution plans,Oracle — Jonathan Lewis @ 11:57 am BST Oct 11,2021

This is a little case study of adaptive optimisation in Oracle 19c with a surprising side-effect showing up when the optimizer gave the execution engine the option to “do the right thing” and the execution engine took it – except the “right thing” turned out to be a wrong thing.

We started with a request to the Oracle-L list server asking about the difference between the operations “table access by rowid” and “table access by rowid batched”, and why changing the parameter optimizer_adaptive_reporting_only should make a plan switch from one to the other. The supplementary question was how much of a performance impact this could have, because it was the only change that showed up in a plan that went from fast (enough) to very slow when the parameter was changed from true to false.
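
For context, here’s a sketch of the switch in question (my recap, not the OP’s script). In “reporting only” mode the optimizer still works out the adaptive parts of the plan, but the run-time engine always executes the default plan:

alter session set optimizer_adaptive_reporting_only = true;   -- report, don't adapt (the fast run)
alter session set optimizer_adaptive_reporting_only = false;  -- allow run-time adaptation (the slow run)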

The batching (or not) of the table access really shouldn’t make much difference; the batch option tends to appear if there’s a “blocking” operation (such as a hash join) further up the execution plan, but the mechanism by which a rowsource is produced and passed up the tree is only likely to be affected very slightly. So there had to be something else going on.
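
As an aside, if you want to test the effect of batching in isolation you can dictate it with a pair of hints – a sketch, where t1 is a stand-in table of my own, not one of the OP’s:

select  /*+ batch_table_access_by_rowid(t1) */
        t1.*
from    t1
where   t1.n1 = 15;

select  /*+ no_batch_table_access_by_rowid(t1) */
        t1.*
from    t1
where   t1.n1 = 15;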

Fortunately the OP had the SQL Monitor reports available from a fast / non-batched / “reporting only = true” run and a slow / batched / “reporting only = false” run. I’ve shown these below:
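
(For reference: text reports like the two that follow can be pulled from memory with a call like this – a sketch, using the sql_id from the first report’s header:)

set long 1000000 longchunksize 1000000 linesize 250 pagesize 0

select
        dbms_sql_monitor.report_sql_monitor(
                sql_id => '791qwn38bq6gv',
                type   => 'TEXT'
        )
from    dual;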

Fast plan (reporting only):

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)         
 Instance ID         :  2                       
 Session             :  XXXXX (510:5394) 
 SQL ID              :  791qwn38bq6gv           
 SQL Execution ID    :  33554432                
 Execution Started   :  10/07/2021 11:46:56     
 First Refresh Time  :  10/07/2021 11:46:56     
 Last Refresh Time   :  10/07/2021 11:51:36     
 Duration            :  280s                    
 Module/Action       :  SQL*Plus/-              
 Service             :  XXXXX.XXXXX.com 
 Program             :  sqlplus.exe             
 Fetch Calls         :  370                     

Global Stats
===========================================================================
| Elapsed |   Cpu   |    IO    | Cluster  | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
===========================================================================
|     252 |     170 |       71 |       11 |   370 |    39M | 251K |   2GB |
===========================================================================

SQL Plan Monitoring Details (Plan Hash Value=250668601)
===============================================================================================================================================================================
| Id |                      Operation                       |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  |  Mem  |
|    |                                                      |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | (Max) |
===============================================================================================================================================================================
|  0 | SELECT STATEMENT                                     |                               |         |       |       279 |     +2 |     1 |       2M |       |       |     . |
|  1 |   FILTER                                             |                               |         |       |       279 |     +2 |     1 |       2M |       |       |     . |
|  2 |    NESTED LOOPS OUTER                                |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  3 |     NESTED LOOPS OUTER                               |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  4 |      HASH JOIN OUTER                                 |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  5 |       NESTED LOOPS OUTER                             |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  6 |        STATISTICS COLLECTOR                          |                               |         |       |       279 |     +2 |     1 |       2M |       |       |     . |
|  7 |         NESTED LOOPS OUTER                           |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  8 |          HASH JOIN OUTER                             |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
|  9 |           NESTED LOOPS OUTER                         |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
| 10 |            STATISTICS COLLECTOR                      |                               |         |       |       279 |     +2 |     1 |       2M |       |       |     . |
| 11 |             NESTED LOOPS OUTER                       |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
| 12 |              NESTED LOOPS OUTER                      |                               |       1 |    3M |       279 |     +2 |     1 |       2M |       |       |     . |
| 13 |               NESTED LOOPS                           |                               |    272K |    2M |       279 |     +2 |     1 |       2M |       |       |     . |
| 14 |                NESTED LOOPS OUTER                    |                               |    272K |    2M |       279 |     +2 |     1 |       2M |       |       |     . |
| 15 |                 NESTED LOOPS                         |                               |    272K |    2M |       279 |     +2 |     1 |       2M |       |       |     . |
| 16 |                  NESTED LOOPS OUTER                  |                               |    272K |    1M |       279 |     +2 |     1 |       2M |       |       |     . |
| 17 |                   NESTED LOOPS                       |                               |    272K |    1M |       279 |     +2 |     1 |       2M |       |       |     . |
| 18 |                    FILTER                            |                               |         |       |       279 |     +2 |     1 |       2M |       |       |     . |
| 19 |                     NESTED LOOPS OUTER               |                               |    272K |  598K |       279 |     +2 |     1 |       2M |       |       |     . |
| 20 |                      VIEW                            | index$_join$_006              |    276K | 48299 |       279 |     +2 |     1 |       2M |       |       |     . |
| 21 |                       HASH JOIN                      |                               |         |       |       279 |     +2 |     1 |       2M |       |       | 132MB |
| 22 |                        HASH JOIN                     |                               |         |       |         2 |     +1 |     1 |       2M |       |       | 124MB |
| 23 |                         INDEX STORAGE FAST FULL SCAN | TET_IX2                       |    276K |  8505 |         1 |     +2 |     1 |       2M |       |       |     . |
| 24 |                         INDEX STORAGE FAST FULL SCAN | TET_IX4                       |    276K | 13077 |         1 |     +2 |     1 |       2M |       |       |     . |
| 25 |                        INDEX STORAGE FAST FULL SCAN  | TET_PK                        |    276K | 11889 |       279 |     +2 |     1 |       2M |   149 |  62MB |     . |
| 26 |                      TABLE ACCESS BY INDEX ROWID     | TT                            |       1 |     2 |       279 |     +2 |    2M |       2M |  2347 |  18MB |     . |
| 27 |                       INDEX UNIQUE SCAN              | TT_PK                         |       1 |     1 |       279 |     +2 |    2M |       2M |    11 | 90112 |     . |
| 28 |                    TABLE ACCESS BY INDEX ROWID       | TM                            |       1 |     2 |       279 |     +2 |    2M |       2M | 12476 |  97MB |     . |
| 29 |                     INDEX UNIQUE SCAN                | TM_PK                         |       1 |     1 |       279 |     +2 |    2M |       2M |  1683 |  13MB |     . |
| 30 |                   TABLE ACCESS BY INDEX ROWID        | TU                            |       1 |     1 |       257 |    +21 |    2M |    17764 |   137 |   1MB |     . |
| 31 |                    INDEX UNIQUE SCAN                 | TU_PK                         |       1 |       |       257 |    +21 |    2M |    17764 |     1 |  8192 |     . |
| 32 |                  TABLE ACCESS BY INDEX ROWID         | TEP                           |       1 |     2 |       279 |     +2 |    2M |       2M |  155K |   1GB |     . |
| 33 |                   INDEX UNIQUE SCAN                  | TEP_PK                        |       1 |     1 |       279 |     +2 |    2M |       2M |  1729 |  14MB |     . |
| 34 |                 TABLE ACCESS BY INDEX ROWID          | TLIM                          |       1 |     1 |       279 |     +2 |    2M |       2M |       |       |     . |
| 35 |                  INDEX UNIQUE SCAN                   | TLIM_PK                       |       1 |       |       279 |     +2 |    2M |       2M |       |       |     . |
| 36 |                TABLE ACCESS BY INDEX ROWID           | TLPSE                         |       1 |     1 |       279 |     +2 |    2M |       2M |       |       |     . |
| 37 |                 INDEX UNIQUE SCAN                    | TLPSE_PK                      |       1 |       |       279 |     +2 |    2M |       2M |       |       |     . |
| 38 |               INDEX RANGE SCAN                       | TCX_IX2                       |       1 |     2 |       279 |     +2 |    2M |       2M |  8870 |  69MB |     . |
| 39 |              TABLE ACCESS BY INDEX ROWID             | TC                            |       1 |     2 |       279 |     +2 |    2M |       2M | 14648 | 114MB |     . |
| 40 |               INDEX UNIQUE SCAN                      | TC_PK                         |       1 |     1 |       279 |     +2 |    2M |       2M |   157 |   1MB |     . |
| 41 |            INDEX RANGE SCAN                          | TCX_PK                        |       1 |     2 |       279 |     +2 |    2M |       2M |       |       |     . |
| 42 |           INDEX RANGE SCAN                           | TCX_PK                        |       1 |     2 |           |        |       |          |       |       |     . |
| 43 |          TABLE ACCESS BY INDEX ROWID                 | TC                            |       1 |     2 |       279 |     +2 |    2M |       2M | 16037 | 125MB |     . |
| 44 |           INDEX UNIQUE SCAN                          | TC_PK                         |       1 |     1 |       279 |     +2 |    2M |       2M |   224 |   2MB |     . |
| 45 |        TABLE ACCESS BY INDEX ROWID                   | TP                            |       1 |     3 |       279 |     +2 |    2M |       2M |       |       |     . |
| 46 |         INDEX RANGE SCAN                             | TP_PK                         |      15 |     1 |       279 |     +2 |    2M |      28M |       |       |     . |
| 47 |       TABLE ACCESS BY INDEX ROWID                    | TP                            |       1 |     3 |           |        |       |          |       |       |     . |
| 48 |        INDEX RANGE SCAN                              | TP_PK                         |      15 |     1 |           |        |       |          |       |       |     . |
| 49 |      TABLE ACCESS STORAGE FULL FIRST ROWS            | TLIET                         |       1 |     3 |       279 |     +2 |    2M |       2M |       |       |     . |
| 50 |     VIEW PUSHED PREDICATE                            | TEB_VW                        |       1 |    57 |       256 |    +24 |    2M |     1459 |       |       |     . |
| 51 |      NESTED LOOPS OUTER                              |                               |       1 |    57 |       272 |     +8 |    2M |     1459 |       |       |     . |
| 52 |       NESTED LOOPS                                   |                               |       1 |    55 |       256 |    +24 |    2M |     1459 |       |       |     . |
| 53 |        NESTED LOOPS                                  |                               |       1 |    53 |       256 |    +24 |    2M |     1459 |       |       |     . |
| 54 |         NESTED LOOPS                                 |                               |       1 |    51 |       272 |     +9 |    2M |     1459 |       |       |     . |
| 55 |          NESTED LOOPS                                |                               |       5 |    41 |       279 |     +2 |    2M |     6965 |       |       |     . |
| 56 |           NESTED LOOPS                               |                               |       1 |     7 |       279 |     +2 |    2M |     770K |       |       |     . |
| 57 |            NESTED LOOPS                              |                               |       1 |     4 |       279 |     +2 |    2M |     770K |       |       |     . |
| 58 |             NESTED LOOPS                             |                               |       1 |     3 |       279 |     +2 |    2M |     770K |       |       |     . |
| 59 |              TABLE ACCESS BY INDEX ROWID             | TEP                           |       1 |     3 |       279 |     +2 |    2M |     770K |       |       |     . |
| 60 |               INDEX UNIQUE SCAN                      | TEP_PK                        |       1 |     2 |       279 |     +2 |    2M |       2M |       |       |     . |
| 61 |              INDEX RANGE SCAN                        | TLP_IX1                       |       1 |       |       279 |     +2 |  770K |     770K |       |       |     . |
| 62 |             VIEW                                     |                               |       1 |     1 |       279 |     +2 |  770K |     770K |       |       |     . |
| 63 |              SORT AGGREGATE                          |                               |       1 |       |       279 |     +2 |  770K |     770K |       |       |     . |
| 64 |               TABLE ACCESS BY INDEX ROWID            | TPR                           |       1 |     1 |       279 |     +2 |  770K |     770K |       |       |     . |
| 65 |                INDEX UNIQUE SCAN                     | TPR_PK                        |       1 |       |       279 |     +2 |  770K |     770K |       |       |     . |
| 66 |            TABLE ACCESS BY INDEX ROWID               | TET                           |       1 |     3 |       279 |     +2 |  770K |     770K | 28892 | 226MB |     . |
| 67 |             INDEX RANGE SCAN                         | TET_Ix1                       |       1 |     2 |       279 |     +2 |  770K |     899K |  6957 |  54MB |     . |
| 68 |           TABLE ACCESS BY INDEX ROWID                | TWE                           |       5 |    34 |       272 |     +9 |  770K |     6965 |   890 |   7MB |     . |
| 69 |            INDEX RANGE SCAN                          | TWE_IDX1                      |      35 |     2 |       272 |     +9 |  770K |     6965 |    22 | 176KB |     . |
| 70 |          TABLE ACCESS BY INDEX ROWID                 | TT                            |       1 |     2 |       272 |     +9 |  6965 |     1459 |       |       |     . |
| 71 |           INDEX UNIQUE SCAN                          | TT_PK                         |       1 |     1 |       272 |     +9 |  6965 |     6965 |       |       |     . |
| 72 |         INDEX RANGE SCAN                             | TCX_IX2                       |       1 |     2 |       256 |    +24 |  1459 |     1459 |   932 |   7MB |     . |
| 73 |        TABLE ACCESS BY INDEX ROWID                   | TC                            |       1 |     2 |       256 |    +24 |  1459 |     1459 |       |       |     . |
| 74 |         INDEX UNIQUE SCAN                            | TC_PK                         |       1 |     1 |       256 |    +24 |  1459 |     1459 |       |       |     . |
| 75 |       TABLE ACCESS BY INDEX ROWID                    | TLS                           |       1 |     2 |       256 |    +24 |  1459 |     1451 |       |       |     . |
| 76 |        INDEX SKIP SCAN                               | TLS_PK                        |       1 |     1 |       256 |    +24 |  1459 |     1451 |       |       |     . |
| 77 |    SORT AGGREGATE                                    |                               |       1 |       |       279 |     +2 |    2M |       2M |       |       |     . |
| 78 |     FIRST ROW                                        |                               |       1 |     3 |       279 |     +2 |    2M |       2M |       |       |     . |
| 79 |      INDEX RANGE SCAN (MIN/MAX)                      | TCX_IX2                       |       1 |     3 |       279 |     +2 |    2M |       2M |       |       |     . |
===============================================================================================================================================================================

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1))
---------------------------------------------------------------------------
    0 -  STATEMENT
         U -  first_rows / hint overridden by another in parent query block
           -  first_rows
 
  56 -  SEL$5
           -  no_merge
 
Note
-----
   - this is an adaptive plan

Slow plan (runtime adapted):

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)          
 Instance ID         :  2                        
 Session             :  XXXXX (509:27860) 
 SQL ID              :  8t19y7v5j9ztg            
 SQL Execution ID    :  33554432                 
 Execution Started   :  10/07/2021 07:56:09      
 First Refresh Time  :  10/07/2021 07:56:09      
 Last Refresh Time   :  10/07/2021 08:07:17      
 Duration            :  668s                     
 Module/Action       :  SQL*Plus/-               
 Service             :  XXXXX.XXXXX.com  
 Program             :  sqlplus.exe              
 Fetch Calls         :  370                      

Global Stats
==========================================================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Cluster  | Fetch | Buffer | Read | Read  | Write | Write |    Offload     |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Calls |  Gets  | Reqs | Bytes | Reqs  | Bytes | Returned Bytes |
==========================================================================================================================
|     705 |     280 |      270 |        0.00 |      155 |   370 |    40M | 984K |  11GB |  6422 |   3GB |            6GB |
==========================================================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3015036808)
========================================================================================================================================================================================================
| Id |                      Operation                       |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write |  Mem  | Temp  |
|    |                                                      |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes | (Max) | (Max) |
========================================================================================================================================================================================================
|  0 | SELECT STATEMENT                                     |                               |         |       |       512 |   +157 |     1 |       2M |       |       |       |       |     . |     . |
|  1 |   FILTER                                             |                               |         |       |       512 |   +157 |     1 |       2M |       |       |       |       |     . |     . |
|  2 |    NESTED LOOPS OUTER                                |                               |       1 |    3M |       512 |   +157 |     1 |       2M |       |       |       |       |     . |     . |
|  3 |     NESTED LOOPS OUTER                               |                               |       1 |    3M |       512 |   +157 |     1 |       2M |       |       |       |       |     . |     . |
|  4 |      HASH JOIN OUTER                                 |                               |       1 |    3M |       538 |   +131 |     1 |       2M |  3387 |   2GB |  3387 |   2GB | 450MB |   2GB |
|  5 |       NESTED LOOPS OUTER                             |                               |       1 |    3M |        27 |   +131 |     1 |       2M |       |       |       |       |     . |     . |
|  6 |        STATISTICS COLLECTOR                          |                               |         |       |        27 |   +131 |     1 |       2M |       |       |       |       |     . |     . |
|  7 |         NESTED LOOPS OUTER                           |                               |       1 |    3M |        27 |   +131 |     1 |       2M |       |       |       |       |     . |     . |
|  8 |          HASH JOIN OUTER                             |                               |       1 |    3M |       155 |     +3 |     1 |       2M |  3035 |   1GB |  3035 |   1GB | 309MB |   1GB |
|  9 |           NESTED LOOPS OUTER                         |                               |       1 |    3M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 10 |            STATISTICS COLLECTOR                      |                               |         |       |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 11 |             NESTED LOOPS OUTER                       |                               |       1 |    3M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 12 |              NESTED LOOPS OUTER                      |                               |       1 |    3M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 13 |               NESTED LOOPS                           |                               |    272K |    2M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 14 |                NESTED LOOPS OUTER                    |                               |    272K |    2M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 15 |                 NESTED LOOPS                         |                               |    272K |    2M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 16 |                  NESTED LOOPS OUTER                  |                               |    272K |    1M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 17 |                   NESTED LOOPS                       |                               |    272K |    1M |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 18 |                    FILTER                            |                               |         |       |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 19 |                     NESTED LOOPS OUTER               |                               |    272K |  598K |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 20 |                      VIEW                            | index$_join$_006              |    276K | 48299 |       129 |     +3 |     1 |       2M |       |       |       |       |     . |     . |
| 21 |                       HASH JOIN                      |                               |         |       |       129 |     +3 |     1 |       2M |       |       |       |       | 132MB |     . |
| 22 |                        HASH JOIN                     |                               |         |       |         3 |     +1 |     1 |       2M |       |       |       |       | 124MB |     . |
| 23 |                         INDEX STORAGE FAST FULL SCAN | TET_IX2                       |    276K |  8505 |         1 |     +1 |     1 |       2M |   129 |  54MB |       |       |     . |     . |
| 24 |                         INDEX STORAGE FAST FULL SCAN | TET_IX4                       |    276K | 13077 |         3 |     +1 |     1 |       2M |   167 |  81MB |       |       |     . |     . |
| 25 |                        INDEX STORAGE FAST FULL SCAN  | TET_PK                        |    276K | 11889 |       129 |     +3 |     1 |       2M |   198 |  61MB |       |       |     . |     . |
| 26 |                      TABLE ACCESS BY INDEX ROWID     | TT                            |       1 |     2 |       129 |     +3 |    2M |       2M |  1488 |  12MB |       |       |     . |     . |
| 27 |                       INDEX UNIQUE SCAN              | TT_PK                         |       1 |     1 |       129 |     +3 |    2M |       2M |     7 | 57344 |       |       |     . |     . |
| 28 |                    TABLE ACCESS BY INDEX ROWID       | TM                            |       1 |     2 |       129 |     +3 |    2M |       2M |  9875 |  77MB |       |       |     . |     . |
| 29 |                     INDEX UNIQUE SCAN                | TM_PK                         |       1 |     1 |       129 |     +3 |    2M |       2M |  1235 |  10MB |       |       |     . |     . |
| 30 |                   TABLE ACCESS BY INDEX ROWID        | TU                            |       1 |     1 |       119 |    +11 |    2M |    17764 |       |       |       |       |     . |     . |
| 31 |                    INDEX UNIQUE SCAN                 | TU_PK                         |       1 |       |       119 |    +11 |    2M |    17764 |       |       |       |       |     . |     . |
| 32 |                  TABLE ACCESS BY INDEX ROWID         | TEP                           |       1 |     2 |       129 |     +3 |    2M |       2M |  140K |   1GB |       |       |     . |     . |
| 33 |                   INDEX UNIQUE SCAN                  | TEP_PK                        |       1 |     1 |       129 |     +3 |    2M |       2M |  1478 |  12MB |       |       |     . |     . |
| 34 |                 TABLE ACCESS BY INDEX ROWID          | TLIM                          |       1 |     1 |       129 |     +3 |    2M |       2M |       |       |       |       |     . |     . |
| 35 |                  INDEX UNIQUE SCAN                   | TLIM_PK                       |       1 |       |       129 |     +3 |    2M |       2M |       |       |       |       |     . |     . |
| 36 |                TABLE ACCESS BY INDEX ROWID           | TLPSE                         |       1 |     1 |       129 |     +3 |    2M |       2M |       |       |       |       |     . |     . |
| 37 |                 INDEX UNIQUE SCAN                    | TLPSE_PK                      |       1 |       |       129 |     +3 |    2M |       2M |       |       |       |       |     . |     . |
| 38 |               INDEX RANGE SCAN                       | TCX_IX2                       |       1 |     2 |       129 |     +3 |    2M |       2M |  4642 |  36MB |       |       |     . |     . |
| 39 |              TABLE ACCESS BY INDEX ROWID             | TC                            |       1 |     2 |       129 |     +3 |    2M |       2M | 22307 | 174MB |       |       |     . |     . |
| 40 |               INDEX UNIQUE SCAN                      | TC_PK                         |       1 |     1 |       129 |     +3 |    2M |       2M |   546 |   4MB |       |       |     . |     . |
| 41 |            INDEX RANGE SCAN                          | TCX_PK                        |       1 |     2 |           |        |       |          |       |       |       |       |     . |     . |
| 42 |           INDEX RANGE SCAN                           | TCX_PK                        |       1 |     2 |         1 |   +131 |     1 |     976K |       |       |       |       |     . |     . |
| 43 |          TABLE ACCESS BY INDEX ROWID                 | TC                            |       1 |     2 |        27 |   +131 |    2M |       2M | 21549 | 168MB |       |       |     . |     . |
| 44 |           INDEX UNIQUE SCAN                          | TC_PK                         |       1 |     1 |        27 |   +131 |    2M |       2M |   959 |   7MB |       |       |     . |     . |
| 45 |        TABLE ACCESS BY INDEX ROWID BATCHED           | TP                            |       1 |     3 |           |        |       |          |       |       |       |       |     . |     . |
| 46 |         INDEX RANGE SCAN                             | TP_PK                         |      15 |     1 |           |        |       |          |       |       |       |       |     . |     . |
| 47 |       TABLE ACCESS BY INDEX ROWID BATCHED            | TP                            |       1 |     3 |        36 |   +157 |     1 |       15 |       |       |       |       |     . |     . |
| 48 |        INDEX RANGE SCAN                              | TP_PK                         |      15 |     1 |        36 |   +157 |     1 |       15 |       |       |       |       |     . |     . |
| 49 |      TABLE ACCESS STORAGE FULL FIRST ROWS            | TLIET                         |       1 |     3 |       512 |   +157 |    2M |       2M |       |       |       |       |     . |     . |
| 50 |     VIEW PUSHED PREDICATE                            | TEB_VW                        |       1 |    57 |       506 |   +163 |    2M |     1459 |       |       |       |       |     . |     . |
| 51 |      NESTED LOOPS OUTER                              |                               |       1 |    57 |       506 |   +163 |    2M |     1459 |       |       |       |       |     . |     . |
| 52 |       NESTED LOOPS                                   |                               |       1 |    55 |       506 |   +163 |    2M |     1459 |       |       |       |       |     . |     . |
| 53 |        NESTED LOOPS                                  |                               |       1 |    53 |       506 |   +163 |    2M |     1459 |       |       |       |       |     . |     . |
| 54 |         NESTED LOOPS                                 |                               |       1 |    51 |       506 |   +163 |    2M |     1459 |       |       |       |       |     . |     . |
| 55 |          NESTED LOOPS                                |                               |       5 |    41 |       510 |   +159 |    2M |     6965 |       |       |       |       |     . |     . |
| 56 |           NESTED LOOPS                               |                               |       1 |     7 |       510 |   +159 |    2M |     770K |       |       |       |       |     . |     . |
| 57 |            NESTED LOOPS                              |                               |       1 |     4 |       510 |   +159 |    2M |     770K |       |       |       |       |     . |     . |
| 58 |             NESTED LOOPS                             |                               |       1 |     3 |       510 |   +159 |    2M |     770K |       |       |       |       |     . |     . |
| 59 |              TABLE ACCESS BY INDEX ROWID             | TEP                           |       1 |     3 |       512 |   +157 |    2M |     770K |  661K |   5GB |       |       |     . |     . |
| 60 |               INDEX UNIQUE SCAN                      | TEP_PK                        |       1 |     2 |       512 |   +157 |    2M |       2M |  2934 |  23MB |       |       |     . |     . |
| 61 |              INDEX RANGE SCAN                        | TLP_IX1                       |       1 |       |       510 |   +159 |  770K |     770K |       |       |       |       |     . |     . |
| 62 |             VIEW                                     |                               |       1 |     1 |       510 |   +159 |  770K |     770K |       |       |       |       |     . |     . |
| 63 |              SORT AGGREGATE                          |                               |       1 |       |       510 |   +159 |  770K |     770K |       |       |       |       |     . |     . |
| 64 |               TABLE ACCESS BY INDEX ROWID            | TPR                           |       1 |     1 |       510 |   +159 |  770K |     770K |       |       |       |       |     . |     . |
| 65 |                INDEX UNIQUE SCAN                     | TPR_PK                        |       1 |       |       510 |   +159 |  770K |     770K |       |       |       |       |     . |     . |
| 66 |            TABLE ACCESS BY INDEX ROWID BATCHED       | TET                           |       1 |     3 |       511 |   +158 |  770K |     770K | 79759 | 623MB |       |       |     . |     . |
| 67 |             INDEX RANGE SCAN                         | TET_Ix1                       |       1 |     2 |       510 |   +159 |  770K |     899K | 15834 | 124MB |       |       |     . |     . |
| 68 |           TABLE ACCESS BY INDEX ROWID BATCHED        | TWE                           |       5 |    34 |       506 |   +163 |  770K |     6965 |  2080 |  16MB |       |       |     . |     . |
| 69 |            INDEX RANGE SCAN                          | TWE_IDX1                      |      35 |     2 |       506 |   +163 |  770K |     6965 |   118 | 944KB |       |       |     . |     . |
| 70 |          TABLE ACCESS BY INDEX ROWID                 | TT                            |       1 |     2 |       506 |   +163 |  6965 |     1459 |   208 |   2MB |       |       |     . |     . |
| 71 |           INDEX UNIQUE SCAN                          | TT_PK                         |       1 |     1 |       506 |   +163 |  6965 |     6965 |       |       |       |       |     . |     . |
| 72 |         INDEX RANGE SCAN                             | TCX_IX2                       |       1 |     2 |       506 |   +163 |  1459 |     1459 |  1388 |  11MB |       |       |     . |     . |
| 73 |        TABLE ACCESS BY INDEX ROWID                   | TC                            |       1 |     2 |       506 |   +163 |  1459 |     1459 |   936 |   7MB |       |       |     . |     . |
| 74 |         INDEX UNIQUE SCAN                            | TC_PK                         |       1 |     1 |       506 |   +163 |  1459 |     1459 |    75 | 600KB |       |       |     . |     . |
| 75 |       TABLE ACCESS BY INDEX ROWID BATCHED            | TLS                           |       1 |     2 |       506 |   +163 |  1459 |     1451 |     1 |  8192 |       |       |     . |     . |
| 76 |        INDEX SKIP SCAN                               | TLS_PK                        |       1 |     1 |       506 |   +163 |  1459 |     1451 |     1 |  8192 |       |       |     . |     . |
| 77 |    SORT AGGREGATE                                    |                               |       1 |       |       512 |   +157 |    2M |       2M |       |       |       |       |     . |     . |
| 78 |     FIRST ROW                                        |                               |       1 |     3 |       512 |   +157 |    2M |       2M |       |       |       |       |     . |     . |
| 79 |      INDEX RANGE SCAN (MIN/MAX)                      | TCX_IX2                       |       1 |     3 |       512 |   +157 |    2M |       2M |  9356 |  73MB |       |       |     . |     . |
=======================================================================================================================================================================================================

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1))
---------------------------------------------------------------------------
   0 -  STATEMENT
         U -  first_rows / hint overridden by another in parent query block
           -  first_rows
 
  56 -  SEL$5
           -  no_merge
 
Note
-----
   - this is an adaptive plan

If you want to pull these plans into separate windows and compare (nothing but) the Operations and Names line by line you’ll find that the only differences appear at operations 45, 47, 66, 68, and 75 – non-batched in the fast plan, batched in the slow plan:

| 45 |        TABLE ACCESS BY INDEX ROWID                   | TP                            |
| 47 |       TABLE ACCESS BY INDEX ROWID                    | TP                            |
| 66 |            TABLE ACCESS BY INDEX ROWID               | TET                           |
| 68 |           TABLE ACCESS BY INDEX ROWID                | TWE                           |
| 75 |       TABLE ACCESS BY INDEX ROWID                    | TLS                           |

| 45 |        TABLE ACCESS BY INDEX ROWID BATCHED           | TP                            |
| 47 |       TABLE ACCESS BY INDEX ROWID BATCHED            | TP                            |
| 66 |            TABLE ACCESS BY INDEX ROWID BATCHED       | TET                           |
| 68 |           TABLE ACCESS BY INDEX ROWID BATCHED        | TWE                           |
| 75 |       TABLE ACCESS BY INDEX ROWID BATCHED            | TLS                           |

So what could possibly make one plan so much slower than the other?

There are all sorts of bits and pieces in these plans that you need to be able to spot “in passing” if you want to become fluent at understanding execution plans. It’s something that takes a lot of practice, but there’s one general tip (or warning, perhaps) that I can offer.

If you start out by looking for one particular thing you’ll miss lots of important clues; on a first pass through the plan just try to notice anything that looks a little interesting or informative, then go back for a more detailed examination on a second pass through the plan.

I won’t go through the entire pattern of thought that went through my mind as I started looking at these plans, but here are a couple of flags I raised:

  • Adaptive plans in SQL Monitor – we’re likely to see some “statistics collector” operations and that’s the “obvious” source of the anomaly, but reading plans that include their adaptive bits can be a mind-bending experience (there’s a dbms_xplan sketch for displaying the adaptive bits after this list).
  • The Global Stats of the slow run report 984K read requests (compared with 251K for the fast plan) – that might explain the difference in timing – so keep an eye out for where the big numbers appear. (NB Don’t, at this point, go looking for them, as that may lead you into missing the real issue.)
  • The slow plan shows the top operation with a Start Active of +157 while the fast plan has a Start Active of +2: that’s pretty consistent with a comment the user made (but I hadn’t mentioned) about the response time from the user’s perspective; and it tells us that there’s a blocking operation somewhere in the slow plan. That’s nice, because that’s what we might expect from seeing an adaptive plan switch from a nested loop to a hash join. (So we already think we’re a clever bunny – too bad it wasn’t quite the right answer.)
  • There are two places in the plan which report Statistics Collector, with one Nested Loop Outer then a Hash Join Outer immediately above them as the two candidate consumers for the rowsource supplied by the collector’s child operation. (Note: in some cases you will see two Nested Loop operations above a Statistics Collector before you get to the Hash Join, but that’s just because Oracle may implement a nested loop join in two steps, first to the index, second to the table)
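
Reading adaptive plans is a little easier if you can see which operations were inactive: the “+adaptive” format option of dbms_xplan marks the inactive operations of an adaptive plan with a leading hyphen. A sketch (the child number here is my assumption):

select  *
from    table(dbms_xplan.display_cursor(
                sql_id          => '8t19y7v5j9ztg',
                cursor_child_no => 0,
                format          => 'typical +adaptive'
        ));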

First thoughts

With the warning about not digging “too deep too soon” in mind, here are a few potentially significant observations. In the following notes I’ll simply refer to the plans as the “fast” plan and the “slow” plan.

  • The slow plan shows a large hash join with spill to disc at operation 4, and that’s one of the options triggered by the statistics collector at operation 6. It’s a little confusing that the nested loop join following it also reports execution statistics since only one of the two lines would have been doing anything, but we’ll assume for now that that’s a reporting error and postpone making a fuss about it. The same pattern appears at operations 8, 9 and 10.
  • A cross-check to the fast plan shows that the hash joins don’t do any work at the corresponding operations (although it also displays the oddity of an operation that (we assume) didn’t actually run reporting a long Active Time).
  • The slow plan has an oddity at operation 41 – it’s an index range scan of a primary key (TCX_PK) that doesn’t report any execution statistics. Cross-checking to the fast plan we see that it’s operation 42 (which is also an index range scan, and using the same index!) that doesn’t report any execution statistics. We note that the fast plan “Starts” its range scan 2 million times, while the slow plan starts just once, starting at time +131 and having an active time of 1 second. [Side note: I am a little suspicious of that number – it looks to me as if it ought to be reporting 27 seconds.]
  • Keep going – because just a bit further down we see that the slow plan has no stats for operations 45 and 46 (index range scan of TP_PK with table access of TP) while the fast plan has no stats for operations 47 and 48 (also an index range scan of TP_PK with table access of TP). Again we see the same pattern: the slow plan executes the operation just once while the fast plan executes its operations 2M times.
  • Keep going – the previous two observations are interesting and probably need further investigation, but they might not be critical. The very next line (operation 49) in both plans shows us a “TABLE ACCESS STORAGE FULL FIRST ROWS” that executes 2 million times – that’s got to hurt, surely, but let’s not get side-tracked just yet.
  • Operation 50 is a “VIEW PUSHED PREDICATE” – that almost certainly means it’s the second child of a nested loop join with a join predicate pushed into a non-mergeable view (and the view name is TEB_VW so it’s not a view created by an internal transformation) and the operation has, like so many other lines in the plan, started 2 million times.
  • Looking at the rest of the plan, there are no more statistics collectors and the plans have an exact match on operations. Unfortunately we don’t have a Predicate Information section, so we can’t tell whether matching operations were really doing the same thing – e.g. was an index range scan in one plan using more columns to probe the index than the corresponding index range scan in the other plan. However, we can check times:
  • The View Pushed Predicate in the fast plan starts at time +24 [another slightly suspicious time, given all the +2 starts in the locale] and is active for 256 seconds, while in the slow plan it starts at time +163 and is active for 506 seconds. So it looks as if a lot of the excess run time of the query is spent in this part of the plan – for no logical reason that we can see.
  • Again we take a quick note, and move on. The final observation is that the last three lines of the plan look like the plan for a subquery block (executed very efficiently) of the “find the most recent / highest / lowest” type, and a quick check to the top of the plan shows that its parent is a FILTER operation, corroborating our first guess.

Starting to dig

Reviewing the first pass we can see that we lose a lot of “startup” time to the two hash joins where the build table in each case has to be completed before any further tables can be joined. This is in the order of 160 seconds, which is consistent with the OP’s observations, and it’s happening because adaptive plans are activated, triggering a change from nested loop joins to hash joins.

More significant, from the perspective of time, is that the nested loop join into the View Pushed Predicate is active for twice as long in the slow plan as it is in the fast plan – so that’s a place to look a little more closely, revealing that operation 59 is probably the reason for the difference: 661 thousand read requests in the slow plan but none in the fast plan.

Unfortunately we don’t have any Activity Stats (i.e. active session history data) in the report, but since the access to the table is reported as unique access by unique index in both cases we can be fairly sure that the difference isn’t due to a difference in the Predicate Information (that isn’t in the report).
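
If you can get at the instance, though, the predicate section can usually be recovered from memory – a sketch using the slow run’s sql_id (the child number is my assumption):

select  id, access_predicates, filter_predicates
from    v$sql_plan
where   sql_id = '8t19y7v5j9ztg'
and     child_number = 0
and     (access_predicates is not null or filter_predicates is not null)
order by
        id;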

Clearly we need to stop the adaptive plan from being taken, to avoid the start-up delay – e.g. add a /*+ no_adaptive_plan */ hint to the query (see the sketch after this list) – but that still leaves two puzzles:

  1. why are the row estimates so bad (at least part of the reason turned out to be that the query was being optimized with optimizer_mode = first_rows – that’s the legacy first_rows, not a cost-based first_rows_N);
  2. how could the same sub-plan result in far more physical reads in one case compared to the other when the critical operation is a unique index access.
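
For the record, a sketch of the two obvious ways of blocking the adaptive switch (t1 and t2 are stand-in tables, not the OP’s):

alter session set optimizer_adaptive_plans = false;

select  /*+ no_adaptive_plan */
        t1.v1, t2.v1
from    t1, t2
where   t2.id = t1.id;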

The answer to the second question could be in an observation I first published 14 years ago – and it could indicate a generic threat to adaptive optimisation.

If you have an execution plan which, stripped to a minimum, looks like this:

Join Operation
        Table_X
        Table_Y

The order in which the result appears is likely to change depending on the join mechanism that Oracle chooses, viz Hash Join, Merge Join or Nested Loop Join.

Under “non-adaptive” conditions, if you have a join that’s border-line between a hash join and a nested loop join it frequently means that the optimizer will flip-flop between two plans like the following (leading to the classic question: “nothing changed, why did the plan change?”):

Hash Join
        Table_X
        Table_Y

Nested Loop Join
        Table_Y
        Table_X

Note that the order in which the tables appear is reversed.

As it says in another article of mine: “all joins are nested loop joins, just with different startup costs”. In both the plans above Oracle picks a row from Table_Y and looks for a match in Table_X, so the order of the result set is dictated by Table_Y regardless of whether the join is a hash join or a nested loop join. However, if Oracle has decided to use an adaptive plan and starts with the nested loop (Y -> X) and then decides to switch to a hash join, it doesn’t swap the join order as the join mechanism changes, so a result set whose order would have been dictated by Table_Y turns into the same result set (we hope) but in an order dictated by Table_X.
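
Sketching the point in the same notation as before – the adaptive switch keeps the child order fixed:

Nested Loop Join                Hash Join
        Table_Y                         Table_Y    (driving / build)
        Table_X                         Table_X    (probed)

so the order of the result set flips from “dictated by Table_Y” to “dictated by Table_X” even though the shape of the plan looks unchanged.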

Consequences:

If you’re using very big tables and Oracle produces an adaptive nested loop join early in the plan, this may result in a later nested loop being lucky and inducing lots of “self-caching” because its driving rowsource is in a nice order. If the plan adapts to a hash join the driving data set may appear in a completely different order that makes the later nested loop jump randomly around a very large table, inducing a lot of “self-flushing” as one table block is discarded from the buffer cache to make space for another. (I published an article several years ago about how a similar – though far from identical – type of anomaly could happen with Exadata and compression: an unlucky order of data access causing a massive extra workload.)

Conclusion and further thoughts

In this note I’ve tried to present my thoughts as I’ve read through an execution plan trying to understand what it’s doing and why it’s showing the performance characteristics it does.

In this case the interpretation was made harder because the plan was an adaptive plan – and there doesn’t appear to be an option in the procedure in dbms_sql_monitor to choose between hiding and revealing the adaptive parts [ed: this statement may apply only to the text option – see comment #1 for a counter-example using the “Active HTML” option]; moreover there was no Activity (ASH) information supplied and we didn’t have the Predicate Information.

The performance “issue” was that when adaptive plans were allowed (as opposed to reported only) we could see that two nested loops changed to hash joins. It was fairly clear that this explained the huge delay before results started to appear, but didn’t explain why the query took so much longer to complete.

We have a hypothesis that the extra run time of the query was due to “bad luck” because we can see very clearly that a nested loop into a non-mergeable view with pushed predicate reports a huge number of single block read requests; and we know that changing a join from a nested loop to a hash join without changing the order of the child operations will change the order in which the join’s rowsource is generated.

In this case the query was executing under the legacy first_rows optimizer mode, and it’s possible that if first_rows_N had been used the optimizer would have behaved differently, especially since we have a query that is returning 2M rows and we only want the first few rows.

Next Steps

The obvious “next step” in this investigation is to check whether first_rows_N co-operates nicely with adaptive optimisation. After all, the only significant thing that adaptive optimisation does to (serial) execution plans is set an inflexion point to dictate when a nested loop should change to a hash join – and a hash join is a blocking operation which is rarely a good thing for a first_rows_N plan.

So, does first_rows_N disable this adaptive plan analysis, does it move the inflexion point significantly, or does the optimizer simply forget that hash joins are less desirable under first_rows_N optimisation? And if you’re running a system in first_rows_N mode, should you disable adaptive plans by default and enable them only for special cases?
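
A starting point for that test could be as simple as the following sketch (using first_rows_10 as a representative first_rows_N setting, and asking dbms_xplan to expose any adaptive detail):

alter session set optimizer_mode = first_rows_10;

-- run the candidate query here, then report the in-memory plan:
select * from table(dbms_xplan.display_cursor(null, null, 'adaptive'));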

I also have an urge to test a couple of ideas about why the two timing anomalies I mentioned have appeared, but it’s already taken me several hours to write notes (including a few replies to the list server) about the 30 minutes I’ve spent looking at an execution plan, so that’s just another couple of items on my to-do list.

October 7, 2021

Hints and Costs

Filed under: 12c,CBO,Conditional SQL,Execution plans,Oracle — Jonathan Lewis @ 12:06 pm BST Oct 7,2021

This note is one I drafted three years ago, based on a question from the Oracle-L list server. It doesn’t directly address that question because at the time I was unable to create a data set that reproduced the problem, but it did highlight a detail that’s worth mentioning, so I’ve finally got around to completing it (and testing on a couple of newer versions of Oracle).

I’ll start with a model that was supposed to demonstrate the problem behind the question:


rem
rem     Script:         122_or_expand.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2018
rem     Purpose:        
rem
rem     Last tested
rem             21.3.0.0
rem             19.11.0.0
rem             12.2.0.1
rem

create table t1
segment creation immediate
pctfree 80 pctused 20
nologging
as
select
        *
from
        all_objects
where
        rownum <= 50000
;

alter table t1 add constraint t1_pk
        primary key(object_id)
        using index pctfree 80
;

variable b1 number
variable b2 number
variable b3 number
variable b4 number
variable b5 number
variable b6 number

exec :b1 := 100
exec :b2 := 120
exec :b3 := 1100
exec :b4 := 1220
exec :b5 := 3100
exec :b6 := 3320

set serveroutput off

select
        object_name
from 
        t1
where
        object_id between :b1 and :b2
or      object_id between :b3 and :b4
or      object_id between :b5 and :b6
;

select * from table(dbms_xplan.display_cursor(null,null,'outline'));


The critical feature of the query is the list of disjuncts (ORs) which all specify a range for object_id. The problem was that the query used a plan with an index full scan when there were no statistics on the table (or its indexes), but switched to a plan  that used index range scans when statistics were gathered – and the performance of the plan with the full scan was unacceptable.  (Clearly the “proper” solution is to have some suitable statistics in place – but sometimes such things are out of the control of the people who have to solve the problems.)
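
For the record, the “proper” fix is usually nothing more sophisticated than a call like the following sketch – assuming, of course, that you’re allowed to gather stats at all:

begin
        dbms_stats.gather_table_stats(
                ownname  => user,
                tabname  => 'T1',
                cascade  => true        -- gather stats on the index(es) as well
        );
end;
/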

The /*+ index() */ and (undocumented) /*+ index_rs_asc() */ hints had no effect on the plan. The reason why the /*+ index() */ hint made no difference is that an index full scan is one of the ways in which the /*+ index() */ hint can be obeyed – the hint doesn’t instruct the optimizer to pick an index range scan. The hint /*+ index_rs_asc() */ specifically tells the optimizer to pick an index Range Scan ASCending if the hint has been specified correctly and the choice is available and legal. So why was the optimizer not doing as it was told? Without seeing the execution plan or CBO trace file from a live example I can’t guarantee that the following hypothesis is correct, but I think it’s in the right ball park.
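
For reference, a correctly specified version of the hint against these test objects would look something like the following sketch – though, as the hypothesis below suggests, the hint could still end up reported as Unused if a transformation got in the way:

select
        /*+ index_rs_asc(t1 t1_pk) */
        object_name
from
        t1
where
        object_id between :b1 and :b2
or      object_id between :b3 and :b4
or      object_id between :b5 and :b6
;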

I think the optimizer was probably using the (new to 12c) cost-based “OR expansion” transformation, which basically transformed the query into a UNION ALL of several index range scans – and that’s why its outline would show /*+ index_rs_asc() */ hints. The hint would only become valid after the transformation had taken place, so if Oracle didn’t consider (or considered and discarded) the transformation when there were no stats in place then the hint would have to be “Unused” (as the new 19c hint-report would say).

When I tried to model the problem the optimizer kept doing nice things with my data, so I wasn’t able to demonstrate the OP’s problem. However in one of my attempts to get a silly plan I did something silly – that can happen by accident if your client code isn’t careful! I’ll tell you what that was in a moment – first, a couple of plans.

As it stands, with the data and bind variables as shown, the optimizer used “b-tree / bitmap conversion” to produce an execution plan that did three separate index range scans, converted the rowids to bitmaps, OR-ed the bitmaps, then converted back to rowids before accessing the table:

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |       |       |    84 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   291 | 12804 |    84   (5)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|   3 |    BITMAP OR                        |       |       |       |            |          |
|   4 |     BITMAP CONVERSION FROM ROWIDS   |       |       |       |            |          |
|   5 |      SORT ORDER BY                  |       |       |       |            |          |
|*  6 |       INDEX RANGE SCAN              | T1_PK |       |       |     2   (0)| 00:00:01 |
|   7 |     BITMAP CONVERSION FROM ROWIDS   |       |       |       |            |          |
|   8 |      SORT ORDER BY                  |       |       |       |            |          |
|*  9 |       INDEX RANGE SCAN              | T1_PK |       |       |     2   (0)| 00:00:01 |
|  10 |     BITMAP CONVERSION FROM ROWIDS   |       |       |       |            |          |
|  11 |      SORT ORDER BY                  |       |       |       |            |          |
|* 12 |       INDEX RANGE SCAN              | T1_PK |       |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

So the first thing I had to do was disable this feature, which I did by adding the hint /*+ opt_param('_b_tree_bitmap_plans','false') */ to the query.
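
In sketch form the query was now:

select
        /*+ opt_param('_b_tree_bitmap_plans','false') */
        object_name
from
        t1
where
        object_id between :b1 and :b2
or      object_id between :b3 and :b4
or      object_id between :b5 and :b6
;

This adjustment left Oracle doing the OR-expansion that I didn’t want to see: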


----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |   297 (100)|          |
|   1 |  VIEW                                  | VW_ORE_BA8ECEFB |   288 | 19008 |   297   (1)| 00:00:01 |
|   2 |   UNION-ALL                            |                 |       |       |            |          |
|*  3 |    FILTER                              |                 |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| T1              |    18 |   792 |    20   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                  | T1_PK           |    18 |       |     2   (0)| 00:00:01 |
|*  6 |    FILTER                              |                 |       |       |            |          |
|   7 |     TABLE ACCESS BY INDEX ROWID BATCHED| T1              |    97 |  4268 |   100   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN                  | T1_PK           |    97 |       |     2   (0)| 00:00:01 |
|*  9 |    FILTER                              |                 |       |       |            |          |
|  10 |     TABLE ACCESS BY INDEX ROWID BATCHED| T1              |   173 |  7612 |   177   (1)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN                  | T1_PK           |   173 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

You’ll notice that the three range scans have different row estimates and costs – that’s the effect of bind variable peeking and my careful choice of bind variables to define different sized ranges. Take note, by the way, of the three filter predicates flagged at operations 3, 6, and 9. These are the “conditional plan” filters that say things like: “don’t run the sub-plan if the runtime value of :b5 is greater than :b6”.

Since I didn’t want to see OR-expansion just yet I then added the hint /*+ no_or_expand(@sel$1) */ to the query, and that gave me a plan with a tablescan:

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   617 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |   291 | 12804 |   617   (4)| 00:00:01 |
--------------------------------------------------------------------------

This was a shame because I really wanted to see the optimizer produce an index full scan at this point – so I decided to add an “unnamed index” hint to the growing list of hints – specifically: /*+ index_(@sel$1 t1@sel$1) */
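
At this point the query, with its growing list of hints, looked like this (a sketch):

select
        /*+
                opt_param('_b_tree_bitmap_plans','false')
                no_or_expand(@sel$1)
                index_(@sel$1 t1@sel$1)
        */
        object_name
from
        t1
where
        object_id between :b1 and :b2
or      object_id between :b3 and :b4
or      object_id between :b5 and :b6
;

With that in place the plan became: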

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |       |       |   405 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   291 | 12804 |   405   (2)| 00:00:01 |
|*  2 |   INDEX FULL SCAN                   | T1_PK |   291 |       |   112   (7)| 00:00:01 |
---------------------------------------------------------------------------------------------

This, of course, is where things started to get a little interesting – the index full scan costs less than the tablescan but didn’t appear until hinted. But after a moment’s thought you can dismiss this one (possibly correctly) as an example of the optimizer being cautious about the cost of access paths that are dictated by bind variables or unpeekable inputs. (But these bind variables were peekable – so maybe there’s more to it than that – I was still trying to get to a point where my model would behave more like the OP’s, so I didn’t follow up on this detail: maybe in a couple of years time … ).

One last tweak – and that will bring me to the main point of this note. In my original code I was using three ranges dictated by three pairs of bind variables, for example [:b5, :b6]. What would happen if I made :b5 greater than :b6, say by swapping their values?
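
In other words (a sketch, reversing the values set at the top of the script):

exec :b5 := 3320
exec :b6 := 3100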

The original btree/bitmap plan didn’t change, but where I had simply blocked btree/bitmap plans and seen OR-expansion as a result the plan changed to a full tablescan (with the cost you saw above of 617). So I tried again, adding the hint /*+ or_expand(@sel$1) */ to see why; and this is the plan I got:

----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |   735 (100)|          |
|   1 |  VIEW                                  | VW_ORE_BA8ECEFB |   116 |  7656 |   735   (3)| 00:00:01 |
|   2 |   UNION-ALL                            |                 |       |       |            |          |
|*  3 |    FILTER                              |                 |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| T1              |    18 |   792 |    20   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                  | T1_PK           |    18 |       |     2   (0)| 00:00:01 |
|*  6 |    FILTER                              |                 |       |       |            |          |
|   7 |     TABLE ACCESS BY INDEX ROWID BATCHED| T1              |    97 |  4268 |   100   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN                  | T1_PK           |    97 |       |     2   (0)| 00:00:01 |
|*  9 |    FILTER                              |                 |       |       |            |          |
|* 10 |     TABLE ACCESS FULL                  | T1              |     1 |    44 |   615   (4)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

I still get the same three branches in the expansion, but look what’s happened to the sub-plan for the third pair of bind variables. The optimizer still has the FILTER at operation 9 – and that will evaluate to FALSE for the currently peeked values; but the optimizer has decided that it should use a tablescan for this part of the query if it ever gets a pair of bind variables in the right order; and the cost of the tablescan has echoed up the plan to make the total cost of the plan 735, which is (for obvious reasons) higher than the cost of running the whole query as a single tablescan.

The same anomaly appears in 19.11.0.0 and 21.3.0.0. On the plus side, it’s possible that if you have code like this the optimizer will be using the btree/bitmap conversion anyway.

tl;dr

As a generic point it’s worth remembering that if you’re using bind variables in client code to define ranges then you’ve got to get the values in the right order, otherwise one day the answer to the question “nothing changed, why is the query running so slowly?” will be “someone got in first with the bound values the wrong way round”.
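
One possible defensive measure – a sketch, not something from the original test – is to write range predicates that don’t care about the order of the bound values:

select
        object_name
from
        t1
where
        object_id between least(:b1,:b2) and greatest(:b1,:b2)
or      object_id between least(:b3,:b4) and greatest(:b3,:b4)
or      object_id between least(:b5,:b6) and greatest(:b5,:b6)
;

Bear in mind, though, that wrapping bind variables in functions may change the optimizer’s arithmetic, so this is a trade-off to test rather than a free fix.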

September 3, 2021

Ordered hint

Filed under: Execution plans,Hints,Oracle — Jonathan Lewis @ 6:49 pm BST Sep 3,2021

It’s been such a long time since Oracle deprecated the /*+ ordered */ hint that I can’t remember when it happened. The hint you should be using is the /*+ leading(…) */ hint which initially – maybe some time in 9i – would only allow you to specify the first table that the optimizer should use when examining join orders, but which soon changed to allow you to specify a complete join order.

I’ve written a few notes about the need to get rid of any /*+ ordered */ hints in production SQL, because it can produce a join order you’re not expecting. I’ve just found an extreme case of this while running a quick test on 19.11.0.0 and then 21.3.0.0.

I’m not going to bother with the data setup for the query, but it’s a simple parent/child query that exhibits a surprising pattern. Here’s the query:

select
        /*+
                no_adaptive_plan
                ordered
                use_nl(ch)
        */
        par.n1,
        par.small_vc,
        sum(ch.n1)
from
        parent par,
        child ch
where
        par.n1 <= 20
and     ch.id_par = par.id
group by
        par.n1,
        par.small_vc
;

And here’s the plan, pulled from memory with a call to dbms_xplan.display_cursor() with ordered hint in place. I’ve included the outline information, hint report and (since this is from 21c) the query block registry:

-----------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |       |       |    32 (100)|          |
|   1 |  HASH GROUP BY         |          |    20 |   780 |    32   (7)| 00:00:01 |
|*  2 |   HASH JOIN            |          |    20 |   780 |    31   (4)| 00:00:01 |
|   3 |    JOIN FILTER CREATE  | :BF0000  |    20 |   440 |     8   (0)| 00:00:01 |
|   4 |     VIEW               | VW_GBF_6 |    20 |   440 |     8   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL | PARENT   |    20 |   380 |     8   (0)| 00:00:01 |
|   6 |    VIEW                | VW_GBC_5 |  1000 | 17000 |    23   (5)| 00:00:01 |
|   7 |     HASH GROUP BY      |          |  1000 |  8000 |    23   (5)| 00:00:01 |
|   8 |      JOIN FILTER USE   | :BF0000  |  4000 | 32000 |    22   (0)| 00:00:01 |
|*  9 |       TABLE ACCESS FULL| CHILD    |  4000 | 32000 |    22   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('21.1.0')
      DB_VERSION('21.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$D2EA58F1")
      ELIM_GROUPBY(@"SEL$FFB6458A")
      OUTLINE_LEAF(@"SEL$FE9D3122")
      OUTLINE_LEAF(@"SEL$E2E47E3A")
      PLACE_GROUP_BY(@"SEL$1" ( "PAR"@"SEL$1" ) ( "CH"@"SEL$1" ) 5)
      OUTLINE(@"SEL$FFB6458A")
      ELIM_GROUPBY(@"SEL$1D9E464A")
      OUTLINE(@"SEL$E26B953F")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$1D9E464A")
      OUTLINE(@"SEL$E132E821")
      NO_ACCESS(@"SEL$E2E47E3A" "VW_GBF_6"@"SEL$E132E821")
      NO_ACCESS(@"SEL$E2E47E3A" "VW_GBC_5"@"SEL$E26B953F")
      LEADING(@"SEL$E2E47E3A" "VW_GBF_6"@"SEL$E132E821"
              "VW_GBC_5"@"SEL$E26B953F")
      USE_HASH(@"SEL$E2E47E3A" "VW_GBC_5"@"SEL$E26B953F")
      PX_JOIN_FILTER(@"SEL$E2E47E3A" "VW_GBC_5"@"SEL$E26B953F")
      USE_HASH_AGGREGATION(@"SEL$E2E47E3A" GROUP_BY)
      FULL(@"SEL$D2EA58F1" "PAR"@"SEL$1")
      FULL(@"SEL$FE9D3122" "CH"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$FE9D3122" GROUP_BY)
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ITEM_1"="ITEM_1")
   5 - filter("PAR"."N1"<=20)
   9 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"CH"."ID_PAR"))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1))
---------------------------------------------------------------------------

   0 -  STATEMENT
           -  no_adaptive_plan

   1 -  SEL$E2E47E3A
           -  ordered

   9 -  SEL$FE9D3122 / "CH"@"SEL$1"
         U -  use_nl(ch)

Query Block Registry:
---------------------

  SEL$1 (PARSER)
    SEL$E26B953F (QUERY BLOCK TABLES CHANGED SEL$1)
      SEL$E132E821 (QUERY BLOCK TABLES CHANGED SEL$E26B953F)
        SEL$1D9E464A (SPLIT/MERGE QUERY BLOCKS SEL$E132E821)
          SEL$FFB6458A (ELIMINATION OF GROUP BY SEL$1D9E464A)
            SEL$D2EA58F1 (ELIMINATION OF GROUP BY SEL$FFB6458A) [FINAL]
      SEL$FE9D3122 (SPLIT/MERGE QUERY BLOCKS SEL$E26B953F) [FINAL]
    SEL$E2E47E3A (PLACE GROUP BY SEL$1) [FINAL]

The optimizer seems to have got rather carried away with how clever it can be; so I switched from /*+ ordered */ to /*+ leading(par ch) */, leaving the rest of the query untouched – i.e. (a sketch):
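
select
        /*+
                no_adaptive_plan
                leading(par ch)
                use_nl(ch)
        */
        par.n1,
        par.small_vc,
        sum(ch.n1)
from
        parent par,
        child ch
where
        par.n1 <= 20
and     ch.id_par = par.id
group by
        par.n1,
        par.small_vc
;

Here’s the resulting plan – I won’t bother with all the extras since it’s a very simple plan: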

----------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |       |       |   109 (100)|          |
|   1 |  HASH GROUP BY                |        |    80 |  2160 |   109   (1)| 00:00:01 |
|   2 |   NESTED LOOPS                |        |    80 |  2160 |   108   (0)| 00:00:01 |
|   3 |    NESTED LOOPS               |        |    80 |  2160 |   108   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL         | PARENT |    20 |   380 |     8   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | CHI_PK |     4 |       |     1   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID| CHILD  |     4 |    32 |     5   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("PAR"."N1"<=20)
   5 - access("CH"."ID_PAR"="PAR"."ID")


tl;dr

You should not be using the /*+ ordered */ hint in any recent version of Oracle.

August 23, 2021

Distributed Query

Filed under: distributed,Execution plans,Hints,Oracle,subqueries,Transformations,Troubleshooting — Jonathan Lewis @ 5:24 pm BST Aug 23,2021

Here’s an example that appeared on the Oracle Developer Community forum about a year ago that prompted me to do a little investigative work. The question involved a distributed query that was “misbehaving” – the interesting points were the appearance of the /*+ rule */ and /*+ driving_site() */ hints in the original query when combined with a suggestion to address the problem using the /*+ materialize */ hint with factored subqueries (common table expressions – CTEs), or when combined with my suggestion to use the /*+ no_merge */ hint.

If you don’t want to read the whole article there’s a tl;dr summary just before the end.

The original question was posed with a handful of poorly constructed code fragments that were supposed to describe the problem, viz:


select /*+ DRIVING_SITE (s1) */ * from  Table1 s1 WHERE condition in (select att1 from local_table) ; -- query n°1

select /*+ RULE DRIVING_SITE (s2) */  * from  Table2 s2 where  condition in (select att1 from local_table); -- query n°2

select * from
select /*+ DRIVING_SITE (s1) */ * from  Table1 s1 WHERE condition in (select att1 from local_table) ,
select /*+ RULE DRIVING_SITE (s2) */  * from  Table2 s2 where  condition in (select att1 from local_table)
where att_table_1 = att_table_2  -- sic

The crux of the problem was that the two separate statements individually produced an acceptable execution plan but the attempt to use the queries in inline views with a join resulted in a plan that (from the description) sounded like the result of Oracle merging the two inline views and running the two IN subqueries as FILTER (existence) subqueries.

We weren’t shown any execution plans and only had the title of the question (“Distributed sql query through multiple databases”) to give us the clue that there might be three different databases involved.

Obviously there are several questions worth asking when presented with this problem, the first being “can we have a more realistic piece of code”, followed by “which version of Oracle” and “where are the execution plans”. I can’t help feeling that there’s more to the problem than just the three tables that seem to be suggested by the fragments supplied.

More significant, though, was the surprise that rule and driving_site should work together. There’s a long-standing (but incorrect) assertion that “any other hint invalidates the RULE hint”. I think I’ve published an example somewhere showing that /*+ unnest */ would affect an execution plan where the optimizer still obeyed the /*+ rule */ hint, and there’s an old post on this blog which points out that transformation and optimisation are (or were, at the time) independent of each other, implying that you could combine the rule hint with “transformational” hints and still end up with a rule-based execution plan.

Despite old memories suggesting the contrary my first thought was that the rule and driving_site hints couldn’t be working together – and that made it worth running a little test. Then one of the other specialists on the forums suggested using subquery factoring with the materialize hint – and I thought that probably wouldn’t help because when you insert into a global temporary table the driving site has to become the site that holds the global temporary tables (in fact this isn’t just a feature of GTTs). So there was another thing prompting me to run a test. (And then I suggested using the /*+ no_merge */ hint – but thought I’d check if that idea was going to work before I suggested it.)

So here’s a code sample to create some data, and the first two simple queries with calls for their predicted execution plans:

rem
rem     Script:         distributed_multi.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

rem     create public database link test@loopback using 'test';
rem     create public database link test2@loopback using 'test2';

rem     create public database link orcl@loopback using 'orcl';
rem     create public database link orcl2@loopback using 'orcl2';

rem     create public database link orclpdb@loopback using 'orclpdb';
rem     create public database link orclpdb2@loopback using 'orclpdb2';

define m_target=test@loopback
define m_target2=test2@loopback

define m_target=orcl@loopback
define m_target2=orcl2@loopback

define m_target=orclpdb@loopback
define m_target2=orclpdb2@loopback

create table t0
as
select  *
from    all_objects
where   mod(object_id,4) = 1
;

create table t1
as
select  *
from    all_objects
where   mod(object_id,11) = 0
;

create table t2
as
select  *
from    all_Objects
where   mod(object_id,13) = 0
;

explain plan for
select  /*+ driving_site(t1) */
        t1.object_name, t1.object_id
from    t1@&m_target    t1
where
        t1.object_id in (
                select  t0.object_id
                from    t0
        )
;

select * from table(dbms_xplan.display);

explain plan for
select
        /*+ rule driving_site(t2) */
        t2.object_name, t2.object_id
from    t2@&m_target2   t2
where
        t2.object_id in (
                select  t0.object_id
                from    t0
        )
;

select * from table(dbms_xplan.display);

Reading from the top down – t0 is in the local database, t1 is in remote database 1, t2 is in remote database 2. I’ve indicated the creation and selection of a pair of public database links at the top of the script – in this case both of them are loopback links to the local database, but I’ve used substitution variables in the SQL to allow me to adjust which databases are the remote ones. Since there are no indexes on any of the tables the optimizer is very limited in its choice of execution plans, which are as follows in 19.3 (using the orclpdb/orclpdb2 links).

First, the query against t1@orclpdb – which will run cost-based:


-----------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|      |  5168 |   287K|    57   (8)| 00:00:01 |        |      |
|*  1 |  HASH JOIN SEMI        |      |  5168 |   287K|    57   (8)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL    | T1   |  5168 |   222K|    16   (7)| 00:00:01 | ORCLP~ |      |
|   3 |   REMOTE               | T0   | 14058 |   178K|    40   (5)| 00:00:01 |      ! | R->S |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("A1"."OBJECT_ID"="A2"."OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "OBJECT_ID" FROM "T0" "A2" (accessing '!' )

Note
-----
   - fully remote statement

You’ll note that operation 3 is simply REMOTE, and t0 is the object accessed – which means this query is behaving as if the (local) t0 table is the remote one as far as the execution plan is concerned. The IN-OUT column tells us that this operation is “Remote to Serial” (R->S), and the instance called is named “!”, which is how the local database is identified in the plan from a remote database.

We can also see that the execution plan gives us the “Remote SQL Information” for operation 3 – and that’s the text of the query that gets sent by the driving site to the instance that holds the object of interest. In this case the query simply selects the object_id values from all the rows in t0.

Now the plan for the query against t2@orclpdb2 which includes a /*+ rule */ hint:

-----------------------------------------------------------
| Id  | Operation              | Name     | Inst   |IN-OUT|
-----------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|          |        |      |
|   1 |  MERGE JOIN            |          |        |      |
|   2 |   SORT JOIN            |          |        |      |
|   3 |    TABLE ACCESS FULL   | T2       | ORCLP~ |      |
|*  4 |   SORT JOIN            |          |        |      |
|   5 |    VIEW                | VW_NSO_1 | ORCLP~ |      |
|   6 |     SORT UNIQUE        |          |        |      |
|   7 |      REMOTE            | T0       |      ! | R->S |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("A1"."OBJECT_ID"="OBJECT_ID")
       filter("A1"."OBJECT_ID"="OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   7 - SELECT /*+ RULE */ "OBJECT_ID" FROM "T0" "A2" (accessing '!' )

Note
-----
   - fully remote statement
   - rule based optimizer used (consider using cbo)

The most striking feature of this plan is that it is an RBO (rule based optimizer) plan, not a cost-based plan – and the Note section confirms that observation. We can also see that the Remote SQL Information is echoing the /*+ RULE */ hint back in its query against t0. Since the query is operating rule-based the hash join mechanism is not available (it’s a costed path – it needs to know the size of the data that will be used in the build table), and that’s why the plan is using a sort/merge join.

Following the “incremental build” strategy for writing SQL all we have to do as the next step of producing the final code is put the two queries into separate views and join them:


explain plan for
select  v1.*, v2.*
from    (
        select  /*+ driving_site(t1) */
                t1.object_name, t1.object_id
        from    t1@&m_target    t1
        where
                t1.object_id in (
                        select  t0.object_id
                        from    t0
                )
        )       v1,
        (
        select
                /*+ rule driving_site(t2) */
                t2.object_name, t2.object_id
        from    t2@&m_target2 t2
        where
                t2.object_id in (
                        select  t0.object_id
                        from    t0
                )
        )       v2
where
        v1.object_id = v2.object_id
;

select * from table(dbms_xplan.display);

And here’s the execution plan – which, I have to admit, gave me a bit of a surprise on two counts when I first saw it:


-----------------------------------------------------------
| Id  | Operation              | Name     | Inst   |IN-OUT|
-----------------------------------------------------------
|   0 | SELECT STATEMENT       |          |        |      |
|   1 |  MERGE JOIN            |          |        |      |
|   2 |   MERGE JOIN           |          |        |      |
|   3 |    MERGE JOIN          |          |        |      |
|   4 |     SORT JOIN          |          |        |      |
|   5 |      REMOTE            | T2       | ORCLP~ | R->S |
|*  6 |     SORT JOIN          |          |        |      |
|   7 |      REMOTE            | T1       | ORCLP~ | R->S |
|*  8 |    SORT JOIN           |          |        |      |
|   9 |     VIEW               | VW_NSO_1 |        |      |
|  10 |      SORT UNIQUE       |          |        |      |
|  11 |       TABLE ACCESS FULL| T0       |        |      |
|* 12 |   SORT JOIN            |          |        |      |
|  13 |    VIEW                | VW_NSO_2 |        |      |
|  14 |     SORT UNIQUE        |          |        |      |
|  15 |      TABLE ACCESS FULL | T0       |        |      |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
       filter("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   8 - access("T2"."OBJECT_ID"="OBJECT_ID")
       filter("T2"."OBJECT_ID"="OBJECT_ID")
  12 - access("T1"."OBJECT_ID"="OBJECT_ID")
       filter("T1"."OBJECT_ID"="OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   5 - SELECT /*+ RULE */ "OBJECT_NAME","OBJECT_ID" FROM "T2" "T2"
       (accessing 'ORCLPDB2.LOCALDOMAIN@LOOPBACK' )

   7 - SELECT /*+ RULE */ "OBJECT_NAME","OBJECT_ID" FROM "T1" "T1"
       (accessing 'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

Note
-----
   - rule based optimizer used (consider using cbo)

The two surprises were that (a) the entire plan was rule-based, and (b) the driving_site() selection has disappeared from the plan.

Of course as soon as I actually started thinking about what I’d written (instead of trusting the knee-jerk “just stick the two bits together”) the flaw in the strategy became obvious.

  • Either the whole query runs RBO or it runs CBO – you can’t split the planning.
  • In the words of The Highlander, “there can be only one” (driving site, that is) – only one of the databases involved will decide how to decompose and distribute the query.

It’s an interesting detail that the /*+ rule */ hint seems to have pushed the whole query into the arms of the RBO despite being buried somewhere in the depths of the query rather than being in the top level query block – but we’ve seen that before in some old data dictionary views.

The complete disregard for the driving_site() hints is less interesting – there is, after all, a comment in the manuals somewhere to the effect that when two hints contradict each other they are both ignored. (But I did wonder why the Hint Report that should appear with 19.3 plans didn’t tell me that the hints had been observed but not used.)

The other problem (from the perspective of the OP) is that the two inline views have been merged so the join order no longer reflects the two isolated components we used to have. So let’s fiddle around a little bit to see how close we can get to what the OP wants. The first step would be to add the /*+ no_merge */ hint to both inline views, eliminate one of the /*+ driving_site() */ hints to see what happens, and – since we’re modern – get rid of the /*+ rule */ hint:


explain plan for
select  v1.*, v2.*
from    (
        select  /*+ qb_name(subq1) no_merge driving_site(t1) */
                t1.object_name, t1.object_id
        from    t1@&m_target    t1
        where
                t1.object_id in (
                        select  t0.object_id
                        from    t0
                )
        )       v1,
        (
        select
                /*+ qb_name(subq2) no_merge */
                t2.object_name, t2.object_id
        from    t2@&m_target2 t2
        where
                t2.object_id in (
                        select  t0.object_id
                        from    t0
                )
        )       v2
where
        v1.object_id = v2.object_id
;

select * from table(dbms_xplan.display);

-----------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|      |  4342 |   669K|    72   (9)| 00:00:01 |        |      |
|*  1 |  HASH JOIN             |      |  4342 |   669K|    72   (9)| 00:00:01 |        |      |
|   2 |   VIEW                 |      |  4342 |   334K|    14   (8)| 00:00:01 |        |      |
|   3 |    REMOTE              |      |       |       |            |          |      ! | R->S |
|   4 |   VIEW                 |      |  5168 |   398K|    57   (8)| 00:00:01 |        |      |
|*  5 |    HASH JOIN SEMI      |      |  5168 |   287K|    57   (8)| 00:00:01 |        |      |
|   6 |     TABLE ACCESS FULL  | T1   |  5168 |   222K|    16   (7)| 00:00:01 | ORCLP~ |      |
|   7 |     REMOTE             | T0   | 14058 |   178K|    40   (5)| 00:00:01 |      ! | R->S |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("A2"."OBJECT_ID"="A1"."OBJECT_ID")
   5 - access("A3"."OBJECT_ID"="A6"."OBJECT_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - EXPLAIN PLAN INTO "PLAN_TABLE" FOR SELECT /*+ QB_NAME ("SUBQ2") NO_MERGE */
       "A1"."OBJECT_NAME","A1"."OBJECT_ID" FROM  (SELECT DISTINCT "A3"."OBJECT_ID"
       "OBJECT_ID" FROM "T0" "A3") "A2","T2"@ORCLPDB2.LOCALDOMAIN@LOOPBACK "A1" WHERE
       "A1"."OBJECT_ID"="A2"."OBJECT_ID" (accessing '!' )

   7 - SELECT "OBJECT_ID" FROM "T0" "A6" (accessing '!' )

Note
-----
   - fully remote statement

In this plan we can see that the /*+ driving_site() */ hint has been applied – the plan is presented from the point of view of orclpdb (the database holding t1). The order of the two inline views has apparently been reversed as we move from the statement to its plan – but that’s just a minor side effect of the hash join (picking the smaller result set as the build table).

Operations 5 – 7 tell us that t1 is treated as the local table and used for the build table in a hash semi-join, and then t0 is accessed by a call back to our database and its result set is used as the probe table.

From operation 3 (in the body of the plan, and in the Remote SQL Information) we see that orclpdb has handed off the entire t2 query block to a remote operation – which is accessing ‘!’. But there’s a problem (in my opinion) in the SQL that it’s handing off – the text is NOT the text of our inline view; it’s already been through a heuristic transformation that has unnested the IN subquery of our original text into a “join distinct view” – if we had used a hint to force this transformation it would have been the /*+ unnest(UNNEST_INNERJ_DISTINCT_VIEW) */ variant.

SELECT /*+ NO_MERGE */
        "A1"."OBJECT_NAME","A1"."OBJECT_ID"
FROM
       (SELECT DISTINCT "A3"."OBJECT_ID" "OBJECT_ID" FROM "T0" "A3") "A2",
       "T2"@ORCLPDB2.LOCALDOMAIN@LOOPBACK "A1"
WHERE
        "A1"."OBJECT_ID"="A2"."OBJECT_ID"

I tried to change this by adding alternative versions of the /*+ unnest() */ hint to the original query, following the query block names indicated by the outline information (not shown), but it looks as if the code path that constructs the Remote SQL operates without considering the main query hints – perhaps the decomposition code is simply following the code path of the old heuristic “I’ll do it if it’s legal” unnest. The drawback to this is that if the original form of the text had been sent to the other site the optimizer handling it could have used cost-based query transformation and might have come up with a better plan.

You may be wondering why I left the /*+ driving_site() */ hint in one of the inline views rather than inserting it in the main query block. The answer is simple – it didn’t seem to work (even in 19.3) when I put /*+ driving_site(t1@subq1) */ in the main query block.

tl;dr

The optimizer has to operate rule-based or cost-based, it can’t do a bit of both in the same query – so if you’ve got a /*+ RULE */ hint that takes effect anywhere in the query the entire query will be optimised under the rule-based optimizer.

There can be only one driving site for a query, and if you manage to get multiple driving_site() hints in a query that contradict each other the optimizer will ignore all of them.

When the optimizer decomposes a distributed query and produces non-trivial components to send to remote sites you may find that some of the queries constructed for the remote sites have been subject to transformations that you cannot influence by hinting.

Footnote

I mentioned factored subqueries and the /*+ materialize */ option in the opening notes. In plans where the attempt to specify the driving site failed (i.e. when the query ran locally) the factored subqueries did materialize. In any plans where the driving site was a remote site the factored subqueries were always inline. This may well be related to the documented (though not always implemented) restriction that temporary tables cannot take part in distributed transactions.
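
For reference, the shape of the code under discussion is the familiar pattern below – a sketch only, reusing the tables from this article:

with w1 as (
        select  /*+ materialize */
                t1.object_name, t1.object_id
        from    t1@&m_target    t1
        where   t1.object_id in (
                        select  t0.object_id
                        from    t0
                )
)
select  *
from    w1
;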

March 12, 2021

Distributed Sequences

Filed under: distributed,Execution plans,Oracle,Performance,Problem Solving — Jonathan Lewis @ 9:09 am GMT Mar 12,2021

A request for help came up some time ago on ODC reporting a query that was hanging when it included a sequence.nextval. In fact the initial “query” was an “insert as select” with a select that was a join of two remote tables.

Making the fairly automatic assumption that many people say “hanging” when they really mean “hasn’t finished yet” the first thought I had about the structure of the statement was that it was just a variation of the standard problem of distributed DML. I haven’t written anything previously about how using sequences can introduce the problem so here’s a note to demonstrate the issue and suggest a workaround:

We start with a database link and a few tables:


rem
rem     Script:         distributed_sequence.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             18.3.0.0
rem             12.2.0.1
rem

rem     create public database link orcl@loopback using 'orcl';

define m_target=orcl@loopback

create sequence s1 cache 10000;
select s1.nextval from dual;

create table t1
segment creation immediate
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
;

create table t2
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n2,
        lpad(rownum,10,'0')             v2,
        lpad('x',100,'x')               padding
from
        generator       v1
;

create table t3(
        id1     number(6,0),
        id2     number(6,0),
        n0      number(6,0),
        n1      number(6,0),
        n2      number(6,0),
        v1      varchar2(10),
        v2      varchar2(10)
)
segment creation immediate
;

create or replace view v1 as
select
        t1.id id1,
        t2.id id2,
        t1.n1,
        t2.n2,
        t1.v1,
        t2.v2
from
        t1, t2
where
        t2.id = t1.id
;

The command to create a public database link (one example of the many optional commands in my original source) has to be run by a suitably privileged schema as a one-off event. The definition of the substitution variable m_target that I’ll be using as my database link (again with many possibilities in my original script) means I only have to edit my choice of database link once in my script as I change Oracle instances. You’ll notice I’ve done my usual trick of using a loopback database link to emulate a distributed system.

I’ve then created two populated tables (t1, t2) and a third empty table (t3) that will be the target of an insert. I’ve also created a view (v1) that joins the first two tables and a sequence (s1) that I’ve primed with a single call to nextval. When I get to the tests I’ll be using t1, t2 and v1 as if they had been created in the remote database (referenced through my loopback database link) while t3 and s1 will be local objects.

So let’s run a couple of statements and see what their execution plans look like:


set serveroutput off

prompt  =======================
prompt  Insert without sequence
prompt  =======================

insert into t3 (id1, id2, n0, n1, n2, v1, v2)
select
        t1.id,
        t2.id,
        0,
        t1.n1,
        t2.n2,
        t1.v1,
        t2.v2
from
        t1@&m_target    t1,
        t2@&m_target    t2
where
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(format=>'-plan_hash'));

prompt  =======================
prompt  Insert WITH sequence
prompt  =======================

insert into t3 (id1, id2, n0, n1, n2, v1, v2)
select
        t1.id,
        t2.id,
        s1.nextval,
        t1.n1,
        t2.n2,
        t1.v1,
        t2.v2
from
        t1@&m_target    t1,
        t2@&m_target    t2
where
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(format=>'-plan_hash'));
commit;

The code joins t1 and t2, selects a couple of columns, and inserts them into t3 without, then with, a call to s1.nextval (the local sequence). Here’s the output (with minor cosmetic changes) from executing this code under 19.3.0.0 (the effects are the same in 18.3.0.0 and 12.2.0.1):


=======================
Insert without sequence
=======================

10000 rows created.


SQL_ID  373nz810u3frv, child number 0
-------------------------------------
insert into t3 (id1, id2, n0, n1, n2, v1, v2) select  t1.id,  t2.id,
0,  t1.n1,  t2.n2,  t1.v1,  t2.v2 from  t1@orclpdb@loopback t1,
t2@orclpdb@loopback t2 where  t2.id = t1.id

-----------------------------------------------------------------
| Id  | Operation                | Name | Cost  | Inst   |IN-OUT|
-----------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |        |      |
|   1 |  LOAD TABLE CONVENTIONAL | T3   |       |        |      |
|   2 |   REMOTE                 |      |       | ORCLP~ | R->S |
-----------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------
   2 - SELECT "A2"."ID","A1"."ID",0,"A2"."N1","A1"."N2","A2"."V1","A1"."
       V2" FROM "T1" "A2","T2" "A1" WHERE "A1"."ID"="A2"."ID" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

Note
-----
   - cpu costing is off (consider enabling it)


=======================
Insert WITH sequence
=======================

10000 rows created.


SQL_ID  8jg23arujnh01, child number 0
-------------------------------------
insert into t3 (id1, id2, n0, n1, n2, v1, v2) select  t1.id,  t2.id,
s1.nextval,  t1.n1,  t2.n2,  t1.v1,  t2.v2 from  t1@orclpdb@loopback
t1,  t2@orclpdb@loopback t2 where  t2.id = t1.id

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |       |       |    54 (100)|          |        |      |
|   1 |  LOAD TABLE CONVENTIONAL | T3   |       |       |            |          |        |      |
|   2 |   SEQUENCE               | S1   |       |       |            |          |        |      |
|*  3 |    HASH JOIN             |      | 10000 |   937K|    54   (8)| 00:00:01 |        |      |
|   4 |     REMOTE               | T1   | 10000 |   468K|    26   (4)| 00:00:01 | ORCLP~ | R->S |
|   5 |     REMOTE               | T2   | 10000 |   468K|    26   (4)| 00:00:01 | ORCLP~ | R->S |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."ID"="T1"."ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   4 - SELECT /*+ OPAQUE_TRANSFORM */ "ID","N1","V1" FROM "T1" "A2" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

   5 - SELECT /*+ OPAQUE_TRANSFORM */ "ID","N2","V2" FROM "T2" "A1" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

The key thing to notice is that when we want to insert the local sequence as a column in the select list Oracle breaks the hash join into two separate accesses to the remote database and pulls all the data we might need from the two tables before trying to join them locally. When the query is “fully remote” the local database can let the remote database deal with the join, when the query is distributed – which is a side effect of introducing the sequence – the local site becomes the driving site and has to work out the least worst way of handling the join, which might be much slower than the remote join.  (It’s an odd little quirk that when the select is fully remote the optimizer thinks that it’s not using CPU costing. Possibly that’s because all the arithmetic happens somewhere else and the local cost of the query never gets above zero.)

This is one of those cases where we might work around the problem by creating a remote view to handle the join – hence the creation of the view v1; here’s a suitable statement and the resulting execution plan:

prompt  ==============================
prompt  Insert using view and sequence
prompt  ==============================

set serveroutput off

insert into t3 (id1, id2, n0, n1, n2, v1, v2)
select
        v1.id1,
        v1.id2,
        s1.nextval,
        v1.n1,
        v1.n2,
        v1.v1,
        v1.v2
from
        v1@&m_target v1
;

select * from table(dbms_xplan.display_cursor(format=>'-plan_hash'));
commit;



==============================
Insert using view and sequence
==============================

10000 rows created.


SQL_ID  4tz0rrqt87nb8, child number 0
-------------------------------------
insert into t3 (id1, id2, n0, n1, n2, v1, v2) select  v1.id1,  v1.id2,
s1.nextval,  v1.n1,  v1.n2,  v1.v1,  v1.v2 from  v1@orclpdb@loopback v1

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |       |       |    27 (100)|          |        |      |
|   1 |  LOAD TABLE CONVENTIONAL | T3   |       |       |            |          |        |      |
|   2 |   SEQUENCE               | S1   |       |       |            |          |        |      |
|   3 |    REMOTE                | V1   | 10000 |   937K|    27   (8)| 00:00:01 | ORCLP~ | R->S |
-------------------------------------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT /*+ OPAQUE_TRANSFORM */ "ID1","ID2","N1","N2","V1","V2" FROM "V1" "V1"
       (accessing 'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

As you can see, the local optimizer doesn’t know enough about the remote view to be able to split it into components and make a mess of the execution plan, it simply sees a query against a “single table” and leaves the remote database to worry about optimising it. But, of course, we don’t always have the luxury of being able to create objects on someone else’s database, so what’s the alternative?

Try rewriting the query to use an inline view with the /*+ no_merge() */ hint:


prompt  =======================================
prompt  Insert from a no-merge inline view with
prompt  a sequence.nextval in the outer query
prompt  ========================================

set serveroutput off

insert into t3 (id1, id2, n0, n1, n2, v1, v2)
select
        id1, id2, s1.nextval, n1, n2, v1, v2
from    (
        select  /*+ no_merge */
                t1.id   id1,    
                t2.id   id2,
                0       n0,
                t1.n1,
                t2.n2,
                t1.v1,
                t2.v2
        from
                t1@&m_target    t1,
                t2@&m_target    t2
        where
                t2.id = t1.id
        )       v1
;

select * from table(dbms_xplan.display_cursor(format=>'-plan_hash'));
commit;


=======================================
Insert from a no-merge inline view with
a sequence.nextval in the outer query
========================================

10000 rows created.


SQL_ID  20z81g550tbsk, child number 0
-------------------------------------
insert into t3 (id1, id2, n0, n1, n2, v1, v2) select  id1, id2,
s1.nextval, n1, n2, v1, v2 from (  select /*+ no_merge */   t1.id id1,
 t2.id id2,   0 n0,   t1.n1,   t2.n2,   t1.v1,   t2.v2  from
t1@orclpdb@loopback t1,   t2@orclpdb@loopback t2  where   t2.id = t1.id
 ) v1

-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |       |       |    54 (100)|          |        |      |
|   1 |  LOAD TABLE CONVENTIONAL | T3   |       |       |            |          |        |      |
|   2 |   SEQUENCE               | S1   |       |       |            |          |        |      |
|   3 |    VIEW                  |      | 10000 |   937K|    54   (8)| 00:00:01 |        |      |
|   4 |     REMOTE               |      |       |       |            |          | ORCLP~ | R->S |
-------------------------------------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------
   4 - SELECT /*+ NO_MERGE */ "A2"."ID","A1"."ID",0,"A2"."N1","A1"."N2","A2"."V1","A1"."V
       2" FROM "T1" "A2","T2" "A1" WHERE "A1"."ID"="A2"."ID" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )


The hint has done exactly what we needed: it has passed the text of the inline view to the remote database for optimisation so the join happens remotely, and the sequence number is then included after the result set comes back from the remote database. The SQL is a little messier, of course, mostly thanks to the doubled appearance of the columns in the select list.

Summary

Using a local sequence in DML that accesses a remote database makes the optimizer treat the underlying query as a distributed query, and this may mean it can’t find an efficient execution path unless you do some re-engineering of the code. If you can make the insert efficient by using a constant in place of the sequence, then re-using that version of the code as an in-line no_merge view, with one extra layer that brings the sequence into play, may be all you need to do to make the DML operate efficiently.

March 4, 2021

use_nl_with_index

Filed under: Execution plans,Index skip scan,Indexing,Nested Loop,Oracle — Jonathan Lewis @ 3:59 pm GMT Mar 4,2021

One of the less well-known hints is the hint /*+ use_nl_with_index() */  (link to 19c reference manual) which appeared in the 10g timeline. I may be wrong but I don’t think I saw a good description of this hint in the manuals until the 19c manual supplied the following:

The USE_NL_WITH_INDEX hint will cause the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table but only under the following condition. If no index is specified, the optimizer must be able to use some index with at least one join predicate as the index key. If an index is specified, the optimizer must be able to use that index with at least one join predicate as the index key.

It looks like a fairly redundant hint, really, since it could easily (and with greater safety, perhaps) be replaced by the pair /*+ use_nl() index() */ with the necessary details of query block, object alias etc. on the hints. In fact I think I’ve only ever seen the hint “in the wild” once, and that was in an internal view definition where it had been used incorrectly (see this recent update to a note on one of the dynamic performance views that I wrote a few years ago).
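As a sketch of that replacement – using the t1/t2 join and the t1_i1000 index that appear in the test case below – the pair of hints might look like this:

select
        /*+
                leading(t2 t1)
                use_nl(t1)
                index(t1 t1_i1000)
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;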

The note I’ve just referenced prompted me to take a closer look at the hint to see how accurate the definition was. Here’s a data set I created for testing:

rem
rem     Script:         use_nl_with_index.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2021
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,10)                  n10,
        mod(rownum,1000)                n1000,
        mod(rownum,2000)                n2000,
        lpad(mod(rownum,1000),10,'0')   v1000,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5   -- > comment to avoid WordPress format issue
;

create table t2 as
select distinct
        n10, n1000, v1000
from
        t1
;

create index t1_i1000 on t1(n1000);
create index t1_i10_1000 on t1(n10,n1000);
create index t1_i2000 on t1(n2000);
create bitmap index t1_b1 on t1(n1000, n10);

I’ve set up the data to do a join between t2 and t1, and I’m going to hint a query to force the join order t2 -> t1, and thanks to the data pattern the default path should be a hash join. Once I’ve established the default path I’m going to use the use_nl_with_index() hint to see how it behaves with respect to the various indexes I’ve created. So here’s the query with the default path:

set autotrace traceonly explain

select  
        /*+ leading(t2 t1) */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Note
-----
   - this is an adaptive plan

So the join order is as required, and the default is a hash join. The join predicate is t1.n1000 = t2.n1000, and if you examine the indexes I’ve created you’ll see I’ve got

  • t1_i1000 on t1(n1000) – the perfect index
  • t1_i10_1000 on t1(n10, n1000) – which could be used for a skip scan
  • t1_i2000 on t1(n2000) – which doesn’t include the column in the join predicate
  • t1_b1 on t1(n1000, n10) – which is a bitmap index

So here is the first batch of tests – all rolled into a single statement with optional hints included:

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1) 
--              use_nl_with_index(t1 t1_i1000)
--              use_nl_with_index(t1(n1000))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Execution Plan
----------------------------------------------------------
Plan hash value: 3315267048

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   2 |   NESTED LOOPS               |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | T2       |   100 |   700 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | T1_I1000 |   100 |       |     1   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1       |   100 | 12800 |   101   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N10"=1)
   4 - access("T1"."N1000"="T2"."N1000")

If I don’t specify an index the optimizer picks the best possible index; alternatively I can specify the index on (n1000) by name or by description and the optimizer will still use it. So what do I get if I reference the index on (n2000):

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1(n2000))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   3 -  SEL$1 / T1@SEL$1
         U -  use_nl_with_index(t1(n2000))

Note
-----
   - this is an adaptive plan


I’m back to the tablescan with hash join – and since I’m testing on 19.3.0.0 Oracle kindly tells me in the Hint Report that I have an unused hint: the one that can’t be used because the referenced index doesn’t have any columns that are join predicates.

So what about the skip scan option:

select  
        /*+ 
                leading(t2 t1) 
                use_nl_with_index(t1(n10, n1000))
--              use_nl_with_index(t1(n10))
--              index_ss(t1 (n10))
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;


Even though the index I’ve specified in the hint does contain a column in the join predicate, the execution plan reports a full tablescan and hash join – unless I include an explicit index_ss() hint: but in that case I might as well have used the vanilla flavoured use_nl() hint. I did have a look at the 10053 (CBO) trace file for this example, and found that if I didn’t include the index_ss() hint the optimizer calculated the cost of using an index full scan (and no other option) for every single index on t1 before choosing the tablescan with hash join.
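For completeness, here’s a sketch of that “vanilla flavoured” combination – hinting the join method and the skip scan separately – which ought to produce the nested loop with a skip scan of t1_i10_1000:

select
        /*+
                leading(t2 t1)
                use_nl(t1)
                index_ss(t1 t1_i10_1000)
        */
        t1.*
from    t2, t1
where
        t2.n10 = 1
and     t1.n1000 = t2.n1000
;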

Finally, and without repeating the query, I’ll just note that when I referenced t1_b1 (n1000, n10) in the hint Oracle was happy to use the index in a nested loop join:

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                 |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   2 |   NESTED LOOPS                |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL          | T2    |   100 |   700 |     2   (0)| 00:00:01 |
|   4 |    BITMAP CONVERSION TO ROWIDS|       |       |       |            |          |
|*  5 |     BITMAP INDEX RANGE SCAN   | T1_B1 |       |       |            |          |
|   6 |   TABLE ACCESS BY INDEX ROWID | T1    |   100 | 12800 |  2182   (1)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T2"."N10"=1)
   5 - access("T1"."N1000"="T2"."N1000")
       filter("T1"."N1000"="T2"."N1000")

Summary

The use_nl_with_index() hint generally works as described in the manuals – with the exception that it doesn’t consider an index skip scan as a valid option when trying to match the join predicate. That exception is one of those annoying little details that could waste a lot of your time.

Since it’s so easy to replace use_nl_with_index() with a pair of hints – including an index hint that could be an index_desc(), index_ss(), or index_combine() hint – I can’t come up with a good reason for using the use_nl_with_index() hint.

January 26, 2021

Index Hints

Filed under: CBO,dbms_xplan,Execution plans,Hints,Ignoring Hints,Indexing,Oracle — Jonathan Lewis @ 4:28 pm GMT Jan 26,2021

At the end of the previous post on index hints I mentioned that I had been prompted to complete a draft from a few years back because I’d been sent an email by Kaley Crum showing the optimizer ignoring an index_rs_asc() hint in a very simple query. Here, with some cosmetic changes, is the example he sent me.

rem
rem     Script: index_rs_kaley.sql
rem     Dated:  Dec 2020
rem     Author: Kaley Crum
rem
rem     Last tested
rem             19.3.0.0
rem

create table range_scan_me(
        one,
        letter 
)
compress
nologging
as
with rowgen_cte as (
        select  null
        from    dual
        connect by level <=  11315
)
select
        1 one,
        case 
                when rownum <=  64e5     then 'A'
                when rownum  =  64e5 + 1 then 'B'
                when rownum <= 128e5     then 'C' 
        end     letter
from 
        rowgen_cte a
cross join 
        rowgen_cte b 
where 
        rownum <= 128e5
;

create index one_letter_idx on range_scan_me(one, letter) nologging;

The table has 12.8 million rows. Of the two columns the first always holds the value 1, the second has one row holding the value ‘B’, and 6.4M rows each holding ‘A’ and ‘C’. On my laptop it took about 20 seconds to create the table and 26 seconds to create the index; using a total of roughly 376 MB (29,000 blocks for the index, 18,500 blocks for the (compressed) table).

Since this is running on 19.3 Oracle will have created basic statistics on the table and index as it created them. Significantly, though, the statistics created during data loading do not include histograms, so the optimizer will not know that ‘B’ is a special case; all it knows is that there are three possible values for letter.
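(As an aside – the demonstration depends on the absence of that histogram. If you wanted the optimizer to know that ‘B’ was special you could gather one explicitly; a minimal sketch:

exec dbms_stats.gather_table_stats(user, 'range_scan_me', method_opt=>'for columns letter size 3')

Everything that follows assumes this has not been done.)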

Time now to query the data:

set serveroutput off
alter session set statistics_level=all;

select 
        /*+ index_rs_asc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last'));

I’ve told the optimizer to use an index range scan, using the “description” method to specify the index I want it to use. The hint is definitely valid, and the index can definitely be used in this way to get the correct result. But here’s the execution plan:

------------------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.01 |       8 |      4 |
|*  1 |  INDEX SKIP SCAN | ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |      4 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_asc(t1 (one, letter))

The plan gives us two surprises: first, it ignores (and reports that it is ignoring) a perfectly valid hint; secondly, it claims to be using an index skip scan even though the common understanding of a skip scan is that it will be used when “the first column of the index doesn’t appear in the where clause”.

We can infer that the plan is truthful because it has taken only 8 buffer visits to get the result – that’s probably a probe down to the (1,’B’) index entry, then another probe to see if the last index leaf block has any entries in it where column one is greater than 1.

But there are a couple of little oddities about this “ignoring the hint” line. First, if we hadn’t hinted the query at all it would have done a tablescan, so the “index” bit of the hint is being obeyed even if the “rs” bit isn’t. Then there’s this:

select 
        /*+ index_rs_desc(t1 (one, letter)) */ 
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
/

-------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |      1 |        |      1 |00:00:00.01 |       8 |
|*  1 |  INDEX SKIP SCAN DESCENDING| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_rs_desc(t1 (one, letter))

If we change the index_rs_asc() to index_rs_desc(), the optimizer still ignores the “range scan” bit of the hint, but honours the “descending” bit – we get an index skip scan descending.

Of course this example is a very extreme case – nevertheless it is a valid example of the optimizer behaving in a way that doesn’t seem very user-friendly. If we add ‘outline’ to the format options for the call to dbms_xplan.display_cursor() we’ll find that the index_ss_asc() and index_ss_desc() hints have been substituted for our attempted index_rs_asc() and index_rs_desc().

So, if we really are confident that an index range scan would work a lot better than an index skip scan, what could we do? We could try telling it to use an index (possibly even an index range scan ascending), but not to do an index skip scan. Let’s test that and include the Outline Information in the execution plan:

select 
        /*+ index(t1) no_index_ss(t1) */
        letter, one
from 
        range_scan_me t1
where   one >= 1
and     letter = 'B'
;


select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last outline'));


---------------------------------------------------------------------------------------------
| Id  | Operation        | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |      1 |        |      1 |00:00:00.78 |   14290 |
|*  1 |  INDEX RANGE SCAN| ONE_LETTER_IDX |      1 |   4266K|      1 |00:00:00.78 |   14290 |
---------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ONE">=1 AND "LETTER"='B' AND "ONE" IS NOT NULL)
       filter("LETTER"='B')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)
           -  no_index_ss(t1)

It worked – we can see the index range scan, and we can see in the Buffers column of the plan why an index range scan was a bad idea – it’s taken 14,290 buffer visits to get the right result. If you check the index size I mentioned further up the page, and think about how I defined the data, you’ll realise that Oracle has started an index range scan at the leaf block holding (1,’B’) – which is half way along the index – and then walked every leaf block from there to the end of the index in an attempt to find any index entries with column one greater than 1.

The other thing to notice here is that the hint in the Outline Information is given as:

INDEX(@"SEL$1" "T1"@"SEL$1" ("RANGE_SCAN_ME"."ONE" "RANGE_SCAN_ME"."LETTER"))

This was the hint that appeared in the outline whether I used the index() hint or the index_rs_asc() hint in the query. Similarly, when I tried index_desc() or index_rs_desc() as the hint the outline reported index_desc() in both cases.

If I try adding just this hint to the query the plan goes back to a skip scan. It’s another case where the hints in the Outline Information (hence, possibly, an SQL Plan Baseline) don’t reproduce the plan that the outline claims to be describing.

Summary

Does Oracle ignore hints?

It looks as if the answer is still no, except it seems to think that a skip scan is just a special case of a range scan (and, from the previous article, a range scan is just a special case of a skip scan). So if you want to ensure that Oracle uses your preferred index strategy you may have to think about including various “no_index” hints to block the indexes you don’t want Oracle to use, and then no_index_ss() and no_index_ffs() to make sure it doesn’t use the wrong method for the index you do want to use. Even then you may find you don’t have quite enough hints available to block every access path you’d like to block.
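As a sketch of what that belt-and-braces hinting might look like for this example – and the list of blocking hints grows with every index and access path you need to exclude:

select
        /*+
                index(t1 one_letter_idx)
                no_index_ss(t1)
                no_index_ffs(t1)
        */
        letter, one
from
        range_scan_me t1
where   one >= 1
and     letter = 'B'
;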

January 20, 2021

Hint Errors

Filed under: 19c,dbms_xplan,Execution plans,Hints,Ignoring Hints,Oracle — Jonathan Lewis @ 11:06 am GMT Jan 20,2021

This is a list of possible explanations of errors that you might see in the Hint Report section of an execution plan. It’s just a list of the strings extracted from a chunk of the 19.3 executable around the area where I found something I knew could be reported, so it may have some errors and omissions – but there are plenty of things there that might give you some idea why (in earlier versions of Oracle) you might have seen Oracle “ignoring” a hint:

internally generated hint is being cleared
hint conflicts with another in sibling query block
hint overridden by another in parent query block
conflicting optimizer mode hints
duplicate hint
all join methods are excluded by hints
index specified in the hint doesn't exist
index specified in hint cannot be parallelized
incorrect number of indexes for AND_EQUAL
partition view set up
FULL hint is same as INDEX_FFS for IOT
access path is not supported for IOT
hint on view cannot be pushed into view
hint is discarded during view merging
duplicate tables in multi-table hint
conditions failed for array vector read
same QB_NAME hints for different query blocks
rejected by IGNORE_OPTIM_EMBEDDED_HINTS
specified number must be positive integer
specified number must be positive number
specified number must be >= 0 and <= 1
hint is only valid for serial SQL
hint is only valid for slave SQL
hint is only valid for dyn. samp. query
hint is only valid for update join ix qry
opt_estimate() without value list
opt_estimate() with conflicting values spec
hint overridden by NO_QUERY_TRANSFORMATION
hinted query block name is too long
hinted bitmap tree wasn't fully resolved
bitmap tree specified was invalid
Result cache feature is not enabled
Hint is valid only for select queries
Hint is not valid for this query block
Hint cannot be honored
Pred reorder hint has semantic error
WITH_PLSQL used in a nested query
ORDER_SUBQ with less than two subqueries
conflicting OPT_PARAM hints
conflicting optimizer_feature_enable hints
because of _optimizer_ignore_parallel_hints
conflicting JSON_LENGTH hints

Update August 2021 – New items in 21.3

Hint id larger than number of union groups
ORDER_KEY_VECTOR_USE with less than two IDs
ORDER_SUBQ referenced query block name, which cannot be found
Same table referenced in both lists
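If you want to see which (if any) of these messages applies to a hint in one of your own queries you can request the Hint Report explicitly through dbms_xplan – for example, the call used earlier in these notes:

select * from table(dbms_xplan.display_cursor(format=>'hint_report allstats last'));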

CBO Example

Filed under: CBO,Execution plans,Oracle,Statistics — Jonathan Lewis @ 10:01 am GMT Jan 20,2021

A little case study based on an example just in on the Oracle-L list server. This was supplied with a complete, working, test case that was small enough to understand and explain very quickly.

The user created a table, and used calls to dbms_stats to fake some statistics into place. Here, with a little cosmetic editing, is the code they supplied.

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

drop table t1 purge;

create table t1 (id number(20), v varchar2(20 char));
create unique index pk_id on t1(id);
alter table t1 add (constraint pk_id primary key (id) using index pk_id enable validate);
exec dbms_stats.gather_table_stats(user, 't1');
 
declare
        srec               dbms_stats.statrec;
        numvals            dbms_stats.numarray;
        charvals           dbms_stats.chararray;
begin
  
        dbms_stats.set_table_stats(
                ownname => user, tabname => 't1', numrows => 45262481, numblks => 1938304, avgrlen => 206
        );

        numvals := dbms_stats.numarray (1, 45262481);
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, numvals);
        dbms_stats.set_column_stats (
                ownname => user, tabname => 't1', colname => 'id', 
                distcnt => 45262481, density => 1/45262481,
                nullcnt => 0, srec => srec, avgclen => 6
        );

        charvals := dbms_stats.chararray ('', '');
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, charvals);
        dbms_stats.set_column_stats(
                ownname => user, tabname => 't1', colname => 'v', 
                distcnt => 0,  density => 0, 
                nullcnt => 45262481, srec => srec, avgclen => 0
        );
        dbms_stats.set_index_stats( 
                ownname => user, indname =>'pk_id', numrows => 45607914, numlblks => 101513,
                numdist => 45607914, avglblk => 1, avgdblk => 1, clstfct => 33678879, indlevel => 2
        );
end;
/
 
variable n1 nvarchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/
 

select 
        /*+ gather_plan_statistics */ 
        * 
from    ( 
        select  a.id col0,a.id col1
        from    t1 a
        where   a.v = :n1 
        and     a.id > 1
        order by 
                a.id 
        ) 
where 
        rownum <= :n2 
;
 
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds '));

From Oracle’s perspective the table has 45M rows, with a unique sequential key starting at 1 in the id column. The query looks like a pagination query, asking for 50 rows, ordered by id. But the in-line view asks for rows where id > 1 (which, initially, means all of them) and applies a filter on the v column.

Of course we know that v is always null, so in theory the predicate a.v = :n1 is always going to return false (or null, but not true) – so the query will never return any data. However, if you read the code carefully you’ll notice that the bind variable n1 has been declared as an nvarchar2() not a varchar2().

Here’s the execution plan I got on an instance running 19.3 – and it’s very similar to the plan supplied by the OP:

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |  3747 (100)|      0 |00:00:00.01 |
|*  1 |  COUNT STOPKEY                |       |      1 |        |            |      0 |00:00:00.01 |
|   2 |   VIEW                        |       |      1 |     50 |  3747   (1)|      0 |00:00:00.01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    452K|  3747   (1)|      0 |00:00:00.01 |
|*  4 |     INDEX RANGE SCAN          | PK_ID |      0 |   5000 |    14   (0)|      0 |00:00:00.01 |
----------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   2 - :2 (NUMBER): 50

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=:N2)
   3 - filter(SYS_OP_C2C("A"."V")=:N1)
   4 - access("A"."ID">1)

The question we were asked was this: “Why does the optimizer estimate that it will return 5,000 entries from the index range scan at operation 4?”

The answer is the result of combining two observations.

First: In the Predicate Information you can see that Oracle has applied a character-set conversion to the original predicate “a.v = :n1” to produce filter(SYS_OP_C2C(“A”.”V”)=:N1). The selectivity of “function of something = bind value” is one of those cases where Oracle uses one of its guesses, in this case 1%. Note that the E-rows estimate for operation 3 (table access) is 452K, which is 1% of the 45M rows in the table.

In real life if you had optimizer_dynamic_sampling set at level 3, or had added the hint /*+ dynamic_sampling(3) */ to the query, Oracle would sample some rows to avoid the need for guessing at this point.
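A sketch of what that hinted version of the test query might look like (everything else unchanged):

select
        /*+ dynamic_sampling(3) gather_plan_statistics */
        *
from    (
        select  a.id col0, a.id col1
        from    t1 a
        where   a.v = :n1
        and     a.id > 1
        order by
                a.id
        )
where
        rownum <= :n2
;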

Secondly: the optimizer has peeked the bind variable for the rownum predicate, so it is optimizing for 50 rows (basically doing the arithmetic of first_rows(50) optimisation). The optimizer “knows” that the filter predicate at the table will eliminate all but 1% of the rows acquired, and it “knows” that it has to do enough work to find 50 rows in total – so it can calculate that (statistically speaking) it has to walk through 5,000 (= 50 * 100) index entries to visit enough rows in the table to end up with 50 rows.

Next Steps (left as exercise)

Once you’ve got the answer to the question “Why is this number 5,000?”, you might go back and point out that the estimate for the table access was 95 times larger than the estimate for the number of rowids selected from the index and wonder how that could be possible. (Answer: that’s just one of the little defects in the code for first_rows(n).)

You might also wonder what would have happened in this model if the bind variable n1 had been declared as a varchar2() rather than an nvarchar2() – and that might have taken you on to ask yet another question about what the optimizer was playing at.

Once you’ve modelled something that is a little puzzle there’s always scope for pushing the model a little further and learning a little bit more before you file the model away for testing on the next version of Oracle.

January 14, 2021

Between

Filed under: CBO,Conditional SQL,Execution plans,Oracle — Jonathan Lewis @ 11:07 am GMT Jan 14,2021

Reading Richard Foote’s latest blog note about automatic indexing and “non-equality” predicates I was struck by a whimsical thought about how the optimizer handles “between” predicates. (And at the same time I had to worry about the whimsical way that WordPress treats “greater than” and “less than” symbols.)

It’s probably common knowledge that if your SQL has lines like this:

columnA between {constant1} and {constant2}

the optimizer will transform them into lines like these:

    columnA >= {constant1}
and columnA <= {constant2}

The question that crossed my mind – and it was about one of those little details that you might never look at until someone points it out – was this: “does the optimizer get clever about which constant to use first?”

The answer is yes (in the versions I tested). Here’s a little demonstration:

rem
rem     Script:         between.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2021
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
select
        rownum  rn,
        ao.*
from
        all_objects ao
where
        rownum <= 50000
;

set autotrace traceonly explain

select  object_name
from    t1
where
        rn between 45 and 55
;


select  object_name
from    t1
where
        rn between 49945 and 49955
;


select  object_name
from    t1
where
        rn between 24945 and 24955
;

select  object_name
from    t1
where
        rn between 25045 and 25055
;

set autotrace off

All I’ve done is create a table with 50,000 rows and a column that is basically a unique sequence number between 1 and 50,000. Then I’ve checked the execution plans for a simple query for 11 rows based on the sequence value – but for different ranges of values.

Two of the ranges are close to the low and high values for the sequence; two of the ranges are close to, but either side of, the mid-point value (25,000) of the sequence. The big question is: “does the execution plan change with choice of range?”. The answer is Yes, and No.

No … because the only possible execution path is a full tablescan

Yes … because when you examine the plan properly you’ll notice a change in the Predicate Information. Here are the first two execution plans produced by the calls to dbms_xplan.display():

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN"<=55 AND "RN">=45)

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   528 |   140   (5)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    12 |   528 |   140   (5)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=49945 AND "RN"<=49955)

Notice how the order of the filter predicates has changed as we move from one end of the range to the other. The optimizer has decided to do the test that is more likely to fail first, and the test that is more likely to succeed second – which means there won’t be many rows where it has to run both tests, and that will make a small difference in the CPU usage.

Picking out just the filter predicate line from the output for this script (host grep filter between.lst) you can see the same pattern appear when the values supplied are very close to the mid-point (25,000).

SQL> host grep filter between.lst
   1 - filter("RN"<=55 AND "RN">=45)
   1 - filter("RN">=49945 AND "RN"<=49955)
   1 - filter("RN"<=24955 AND "RN">=24945)
   1 - filter("RN">=25045 AND "RN"<=25055)

My code has used literal values to demonstrate an effect. It’s worth checking whether we would still see the same effect if we were using bind variables (and bind variable peeking were enabled). So here’s a little more of the script:

set serveroutput off

variable b1 number
variable b2 number

exec :b1 := 45
exec :b2 := 55

select
        /* low_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

exec :b1 := 49945
exec :b2 := 49955

select
        /* high_test */
        object_name
from    t1
where
        rn between :b1 and :b2
/

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));
set serveroutput on

Since autotrace simply calls “explain plan” and doesn’t know anything about bind variables (treating them as unpeekable character strings) I’ve used code that executes the statements and pulls the plans from memory. Here are the results (with some of the script’s output deleted):

EXPLAINED SQL STATEMENT:
------------------------
select  /* low_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN"<=:B2 AND "RN">=:B1))


EXPLAINED SQL STATEMENT:
------------------------
select  /* high_test */  object_name from t1 where  rn between :b1 and :b2

Plan hash value: 3332582666

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  FILTER            |      |
|*  2 |   TABLE ACCESS FULL| T1   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:B2>=:B1)
   2 - filter(("RN">=:B1 AND "RN"<=:B2))

As you can see, when we query the low value the first comparison is made against :b2, when we query the high range the first comparison is made against :b1.

It is actually worth knowing that this can happen. How many times have you heard the question: “the plan’s the same, why is the performance different?”. Maybe the body of the plan looks the same and has the same plan_hash_value, but today the first person to execute the query supplied bind values that made the optimizer choose to apply the filters in the opposite order to usual. This probably won’t make much difference to CPU usage in most cases, but there are bound to be a few cases where it matters.

You’ll notice, by the way, that the plan with bind variables includes a FILTER operation that doesn’t appear in the plans with literal values. This is an example of “conditional SQL” – if you check the predicate information for operation 1 you’ll see that it’s checking that :b2 is greater than :b1, if this test doesn’t evaluate to true then operation 1 will not make a call to operation 2, i.e. the tablescan is in the plan but won’t happen at run-time.

(I believe that there may be some RDBMS which will treat (e.g.) “X between 20 and 10” as being identical to “X between 10 and 20” – Oracle doesn’t.)

Left as an exercise

The test data was created as a completely evenly spaced (by value) and evenly distributed (by count) set of values. How would things change if the data were sufficiently skewed that the optimizer would default to creating a histogram when gathering stats?

Left as another exercise**

There are lots of little bits of arithmetic that go into the CPU_COST component of an execution plan – including a tiny factor to allow for the number of columns that Oracle has to “step over” (by counting bytes) as it projects the columns needed by the query. So, if you had a second “between” predicate on another column in the table, could you manage to get all 24 possible orders for the 4 transformed predicates by adjusting the ranges of the between clauses and/or moving the two columns to different positions in the row?

** For those in lockdown who need something to do to fill the time.

December 8, 2020

Hash Joins

Filed under: Hash Join,Joins,Oracle — Jonathan Lewis @ 11:55 am GMT Dec 8,2020

This is a follow-up to a note that’s 10 years old [opens in new tab], prompted by a thread on the Oracle Developer Community forum asking about the order of operation in an execution plan, and following up with a question about PGA memory use that I had left open in the original note.

The original note pointed out that the serial execution plan for a 4 table join that used hash joins for every join and had a leading() hint dictating a single join order could still have 8 distinct execution plans (which I then corrected to 4, because half of them were eliminated by an odd little inconsistency of the leading() hint).

The source of the number 8 was the fact that when you hint a hash join with /*+ use_hash(next_alias) */ the optimizer will consider a /*+ swap_join_inputs(next_alias) */ – in other words, it will choose which of the two inputs should be the “build” table and which the “probe” table, even though the human eye will probably assume that next_alias was clearly intended as the probe table. (The “little inconsistency” in the leading() hint is that it blocks the option for a swap for the first – and only the first – join unless it’s explicitly hinted.)

In the article I showed all 8 possible plans, and described how the two plans at the extreme ends of my list would operate at run-time. I’ve reproduced these two plans below, preceded by the hints that produced them:

leading(t1 t2 t3 t4)
use_hash(t2) no_swap_join_inputs(t2)
use_hash(t3) no_swap_join_inputs(t3)
use_hash(t4) no_swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|*  3 |    HASH JOIN          |      |    70 |  1260 |     8 |
|*  4 |     HASH JOIN         |      |    70 |   840 |     5 |
|   5 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
|   6 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   7 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|   8 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
--------------------------------------------------------------

leading(t1 t2 t3 t4)
use_hash(t2)    swap_join_inputs(t2)
use_hash(t3)    swap_join_inputs(t3)
use_hash(t4)    swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|   3 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
|*  4 |    HASH JOIN          |      |    70 |  1260 |     8 |
|   5 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|*  6 |     HASH JOIN         |      |    70 |   840 |     5 |
|   7 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   8 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
--------------------------------------------------------------

Side note: one of the comments on the original post raised the point that (e.g.) the second of the plans above could have been generated by the join order represented by the hint leading(t2 t1 t3 t4), and that’s absolutely correct. In fact, if you see the plan for a 4 table join consisting of nothing but 3 hash joins you cannot be certain what join order the optimizer was examining when it produced that plan unless you look at the CBO trace file (or check to see if it’s been explicitly and completely hinted in the code).
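(For reference, a common way of getting that trace file is the 10053 event – a sketch, to be run in the session before the query is parsed:

alter session set events '10053 trace name context forever, level 1';
-- run (or force a re-parse of) the query, then:
alter session set events '10053 trace name context off';

The trace appears in the session’s trace file in the diagnostic trace directory.)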

Having set the scene, we come to the question that prompted this note. The question related to the second plan above, and said:

“However, this one is the plan that takes the most memory in the PGA because it builds 3 hash tables before joining any table, right?”

The quick answer to the question is: “Not necessarily.”

In the original article I had pointed out that plans following the pattern of the first plan above with N tables and N – 1 joins would have at most two build tables in memory at any one moment while the bottom plan would create N – 1 build tables in memory before any join results could be created. This does rather invite the inference that the bottom plan has to be the one that is going to use most PGA memory, but what I had said in the original article was (emphasis added):

“Notice: the number of in-memory hash (build) tables we have in the first of the 8 plans at any instant after the first join starts is two and (no matter how many tables are involved in this pattern) the number of in-memory hash tables will always be two. The actual size of the two hash tables is a little unpredictable and, as a very crude guideline, you might expect the size to grow as more tables are joined into the result set.”

As a thought experiment, consider 3 small dimension tables and one big fact table. If Oracle were to create in-memory hash tables from the three dimension tables and then start scanning the fact table (following the pattern of the second plan above with t1 in the role of the fact table) probing each of the dimension tables in turn, it could deliver the first result row very quickly without requiring more memory to store intermediate results.

Conversely, if Oracle were to create a tiny in-memory hash table from the first dimension and probe it with the fact table (following the pattern of the first plan above with t2 in the role of the fact table) Oracle would then have to build a very large in-memory hash table before probing it with the second dimension table, and as that second join takes place it would be generating a new result set that would become the next big in-memory hash table.

In this thought experiment we would probably find that the optimizer did the right thing without prompting and constructed three tiny in-memory hash tables – but it’s not always so clear-cut, and even the “right” decision can result in very large intermediate build tables (and if those intermediate build tables spill to disc in an Exadata environment the change in performance can be huge).

To finish off, here’s a script to turn the thought experiment into a concrete example (by careful, but deliberately silly, hinting).

rem
rem     Script:         c_treblehash_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem 

create table t1 
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) swap_join_inputs(t1)
                use_hash(t2) full(t2) swap_join_inputs(t2)
                use_hash(t3) full(t3) swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) no_swap_join_inputs(t1)
                use_hash(t2) full(t2) no_swap_join_inputs(t2)
                use_hash(t3) full(t3) no_swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

All I’ve done is create three small “dimension” tables of 70 rows each then created a table which is their Cartesian join, which produces a “fact” table of 343,000 rows. Then I’ve written a simple query to join the three dimension tables to the fact table.

I’ve used swap_join_inputs() for all the joins in one version of the query, and no_swap_join_inputs() in the other version; and I’ve reconnected to the database before running each query to make it easier to see the different impact on the PGA of the two plans.

I’ve used dbms_xplan.display_cursor() to pull the execution plans from memory after running the queries, and since I’ve not set statistics_level to all, or added the hint /*+ gather_plan_statistics */ to the queries, the only rowsource execution statistics I’ll get are the PGA usage figures: estimates and actuals.

The final step of each test reports the maximum PGA memory usage the session reached in the course of the test.

Here are the two plans, each followed by the PGA maximum memory size.

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|  2171K|  2171K| 1684K (0)|
|   3 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
|*  4 |    HASH JOIN          |      |    343K|  2171K|  2171K| 1681K (0)|
|   5 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|*  6 |     HASH JOIN         |      |    343K|  2171K|  2171K| 1711K (0)|
|   7 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   8 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   4 - access("T2"."ID"="T4"."ID2")
   6 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 13,859,925

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|    23M|  3727K|   29M (0)|
|*  3 |    HASH JOIN          |      |    343K|    23M|  3727K|   29M (0)|
|*  4 |     HASH JOIN         |      |    343K|    23M|  3667K|   29M (0)|
|   5 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
|   6 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   7 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|   8 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   3 - access("T2"."ID"="T4"."ID2")
   4 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 52,984,917

As you can see, each of the hash joins in the first plan required roughly 1.7MB of memory. All three hash tables would have been in existence at the same time, giving a total of about 5.1MB of memory for the query. The session’s maximum PGA usage shows up as 13MB, of which 5MB was my basic “startup” PGA, leaving 3MB “unaccounted”.

In comparison, each of the hash joins in the second plan required roughly 29MB, although only two of the hash tables would have been in existence at any one moment. That’s still an allocation of 58MB for the same basic 4 table join. In fact things aren’t quite as bad as they seem in this case since the maximum PGA allocated was only about 52MB (again with 5MB of startup PGA). The apparent contradiction may be due to the way that Oracle allocates PGA in increasing chunks – the 29MB reported may have been the result of the session doing something like: “I’ve reached 23MB, my next allocation will be 4MB – oops, I only needed another 128KB”.

As a final check of activity, I’ve also run a couple of queries against v$sql_workarea – a convenient memory structure you can examine to get some “recent history” of queries that have been using large amounts of memory, or spilling to disc. In this case I’ve queried the structure by sql_id for the two queries, reporting just a little detail about the last execution and memory usage.

SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'dcc01q28gcbmy';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                 1724416 OPTIMAL
           4 HASH-JOIN                                 1721344 OPTIMAL
           6 HASH-JOIN                                 1752064 OPTIMAL


SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'b52uwjz07fwhk';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                30930944 OPTIMAL
           3 HASH-JOIN                                30945280 OPTIMAL
           4 HASH-JOIN                                30650368 OPTIMAL

As you can see, the view reports each plan operation (with id and type) that needed memory for an SQL workarea – and the numbers confirm the 1.7MB and 29MB reported by the execution plans. (Which is not surprising since it’s exactly these figures that are used to supply the details in the plans.)

You need to be a little cautious with this view in older versions of Oracle – it used to do a “full tablescan” of the library cache, which resulted in a lot of library cache latch activity and could cause some contention if you ran it frequently, but it’s a very useful view for finding queries that are doing unexpectedly large sorts or hash joins, and one you might query occasionally if you see any space management threats in the temporary tablespace.
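As a sketch of that kind of occasional check – a generic query, not part of the original test, that picks out the workareas that most recently used the largest amounts of memory (the row-limiting clause needs 12c or later):

select
        sql_id, operation_id, operation_type,
        last_memory_used, last_tempseg_size, last_execution
from
        v$sql_workarea
order by
        last_memory_used desc nulls last
fetch first 10 rows only
;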

November 12, 2020

rowsets

Filed under: Execution plans,Infrastructure,Oracle,Performance — Jonathan Lewis @ 12:35 pm GMT Nov 12,2020

Here’s a little demonstration of the benefit of rowsets. It started with a very simple question that arrived in my in-box from Kaley Crum.

  • “Why does the query with the UNION ALL take so much longer than the query without the UNION ALL?”

Here are the two queries – surely they should take virtually the same amount of time.

select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me);


select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me
      --
      union all
      --
      select 1 from dual);

Look closely – the difference between the two queries is just the addition through a UNION ALL of “select 1 from dual”. The first query took about 3.5 seconds to complete (there were 70M rows in the table), and the second took nearly 69 seconds.

Here are the execution plans showing the rowsource execution statistics (and Query Block // Object Alias information) – first the baseline query:

SQL_ID  0ph1vfuuxkbqb, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me)
 
Plan hash value: 3609429292
 
--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        | 26465 (100)|          |      1 |00:00:03.34 |   95896 |  95891 |
|   1 |  SORT AGGREGATE     |         |      1 |      1 |            |          |      1 |00:00:03.34 |   95896 |  95891 |
|   2 |   VIEW              |         |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.32 |   95896 |  95891 |
|   3 |    TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.27 |   95896 |  95891 |
--------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$2 / from$_subquery$_001@SEL$1
   3 - SEL$2 / DROP_ME@SEL$2

And then the UNION ALL query:

SQL_ID  0chdajr28y0ub, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me       --       union all       --       select 1 from
dual)
 
Plan hash value: 3408528233
 
---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |         |      1 |        | 26467 (100)|          |      1 |00:01:18.58 |   95896 |  95891 |
|   1 |  SORT AGGREGATE      |         |      1 |      1 |            |          |      1 |00:01:18.58 |   95896 |  95891 |
|   2 |   VIEW               |         |      1 |     70M| 26467   (2)| 00:00:02 |     70M|00:01:10.84 |   95896 |  95891 |
|   3 |    UNION-ALL         |         |      1 |        |            |          |     70M|00:00:53.13 |   95896 |  95891 |
|   4 |     TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:19.28 |   95896 |  95891 |
|   5 |     FAST DUAL        |         |      1 |      1 |     2   (0)| 00:00:01 |      1 |00:00:00.01 |       0 |      0 |
---------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SET$1 / from$_subquery$_001@SEL$1
   3 - SET$1
   4 - SEL$2 / DROP_ME@SEL$2
   5 - SEL$3 / DUAL@SEL$3

This is not a trick question – I’ve got the SQL to create the data set and run the test; and I’ve run the test through 19.3.0.0 and 12.2.0.1 with results very similar to the above. Here’s a slightly modified version of the script – you’ll notice the column name has changed because I’ve used my usual “large data set” generator rather than scaling up a clone of all_objects:

rem
rem     Script:         rowset_kaley.sql
rem     Author:         Kaley Crum / JP Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem

create table drop_me
pctfree 0
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum  n1
from
        generator
cross join
        generator
where rownum <= 7e7
;

alter session set statistics_level = all;

set timing on
set serveroutput off

prompt  ===========================================
prompt  Baseline: Simple query without a union all.
prompt  Runs in 3-4 seconds
prompt  ===========================================

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

prompt  ==========================================
prompt  Add a UNION ALL for one record from dual.
prompt  Runs in over a minute
prompt  ==========================================

pause Press return

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me
      --
      union all
      --
      select 1 from dual);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

There are two factors involved in the massive variation in performance. The first is one that the test case itself will report (though I didn’t show that part of the output above); the second is that the test case has enabled rowsource execution statistics.

Here’s the big clue – from the Column Projection Information, which is one of the things that appears with the “advanced” format option (or when you add the “projection” format option) in the call to dbms_xplan.display_xxx().
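If you don’t want the whole of the “advanced” output you can ask for just the extra section; a minimal sketch of the call:

select * from table(dbms_xplan.display_cursor(format => 'allstats last +projection'));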

First for the fast query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]
   2 - (rowset=1019)
   3 - (rowset=1019)

The rowsource passes 1,019 rows at a time from the tablescan operation to the view operation and from the view operation to the sort operation, for a total of roughly 70,000 calls for each of those two steps of the plan.

Compare this to the projection information for the slow UNION ALL query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]

I haven’t missed a bit in copying – this really is it. The rows move up the stack one at a time, not in a rowset array. That’s 70,000,000 subroutine calls for each of the two steps of the plan.

I’m sure most of us have heard the mantra “row by row is slow by slow” (or some variant on the theme). This is true all the way down to the internal levels of an execution plan.

Of course 70M calls vs. 70K calls shouldn’t really explain a difference of more than 60 seconds – but when the calls to the O/S for timing information for rowsource execution stats are repeated for each call (or even if it’s only a 1% sample of the calls) down the stack that’s where a lot of time can disappear.

On my laptop, running Oracle 19.3 in a VM, this is how my timing went:

  • Fast query: 1.94 seconds, dropping to 1.79 seconds when I disabled rowsource execution stats.
  • Slow query: 65.91 seconds, dropping to a far less astounding 3.26 seconds when I disabled rowsource execution stats.

So there really is a noticeable difference between row-by-row and array-processing but most of the difference in the original test came from using rowsource execution statistics to measure how much of a difference there would be.

It’s also worth mentioning that this is probably the most extreme case you could produce to show the difference – using the largest possible rowset size with the smallest possible rows when you want to maximise the gap between (internal) array processing and single row processing – and then processing a very large number of rows. [But isn’t processing ridiculously large numbers of rows exactly what you do with Exadata?]

If you want further evidence that the difference is due to the rowset size you can always alter session set “_rowsets_enabled”=false; and watch the fast query slow down. It will take about half the time of the slow query as it only has to pass 70M rows up one step of the plan rather than the two steps that are in the UNION ALL plan.

In theory it looks as if you could also restrict the size of the rowset by setting _rowsets_max_rows or _rowsets_target_maxsize, but since the default value for the former is 256 the results above suggest that the settings might be ignored, and when I tried adjusting them at the session level nothing changed.

Another test you could run is to adjust (alter session) the parameter “_rowsource_statistics_sampfreq”. When I set this to 100 the time for the slow query dropped to about 21 seconds (and the fast query – with rowsets disabled – dropped to about 11 seconds).
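For convenience, here are the session-level commands for the two experiments just described. Both parameters are hidden, so this is strictly for test systems and the behaviour may vary with version:

-- disable rowsets, then re-run the fast query
alter session set "_rowsets_enabled" = false;

-- sample the timing calls 1 in 100 rather than timing every call
alter session set "_rowsource_statistics_sampfreq" = 100;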

Footnote

It’s always easy to hit an accidental special case without realising it, so when you’re testing something it’s important to think about what’s special in the example. I came up with two possibilities in this case – dual is always going to be a very special case in any circumstances, and I’ve got a table in my query that is guaranteed to return exactly one row. So I repeated the test with:

  • a real table with one row and a primary key in case the reference to dual was disabling rowsets
  • a real table with 100 rows in case this enabled a rowset size of 100

In neither case did Oracle start using array processing.

Footnote 2

One of the patterns I see occasionally follows the form of the slow query above. Some front-end tool executes a select statement that starts with a select {list of desired column headings} from dual so that the front-end code doesn’t have to be programmed to create headings as it receives the data.
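The pattern typically looks something like this (purely an illustration – the table and column names are invented):

select 'CUSTOMER NAME' col1, 'ORDER TOTAL' col2 from dual
union all
select cust_name, to_char(order_total) from order_summary;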

Although the impact this has on rowsets accounts for a large percentage of the work done in the database in this extreme case, in more realistic cases it would probably be irrelevant – until someone starts testing with the statistics_level set to all, or _rowsource_execution_statistics explicitly enabled.

There is a related effect, though, from the SQL Monitor feature (which activates automatically for a statement if it’s executing in parallel, or if it’s predicted to take more than 5 seconds to complete). When I added the /*+ monitor */ hint to the two queries (and didn’t fiddle with the rowsets parameter) the fast query averaged 2.26 seconds instead of 1.79 seconds, and the slow query averaged 3.51 seconds instead of 3.26 seconds. I have to say, though, that the variation between executions of the same statement was quite significant compared with the variation produced by adding or eliminating the hint.
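For reference, the monitored variant of the fast query was simply the original statement with the hint added; a minimal sketch:

select /*+ monitor */
        count(*)
from    (
        select /*+ no_merge */ n1 from drop_me
        );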

November 9, 2020

I wish

Filed under: CBO,Execution plans,Oracle,sorting,Wishlist — Jonathan Lewis @ 12:01 pm GMT Nov 9,2020

Here’s a lovely little mechanism new to Postgres 13 that can minimise sorting costs: the “incremental sort”. It would be nice to see it in Oracle as well, since it could make an enormous difference to “fetch first N” queries.

The concept is simple – if a rowsource moving up a plan is known to be in “partially sorted” order when it reaches a sort operation, the optimizer can choose whether to sort the entire rowsource in one go or to sort it in batches as it arrives.

For example if you have a query which has “order by cola, colb” as its final clause and the plan has used an index range scan on an index on just (cola) then the “sort order by” operation can accept the rows for the first value of cola, sort them by colb and pass them on, then accept the rows for the second value of cola, sort them by colb and pass them on, and so on.

Better still, if you have an “order by table1.colA, table2.colB” and the optimizer used an indexed access path on table1.colA and a nested loop into table2, then the optimizer will still recognize that the generated data is already partially sorted, and sort batches for table1.colA to order them by table2.colB. (Oracle has a mechanism for dealing with sorted hash clusters that is roughly similar.)

Obviously the benefit is that you avoid doing a very large sort that might spill to disc; slightly less obvious is that you might avoid sorting the whole data set if you have a “fetch first N” query. Here’s an Oracle model setting up a demonstration of the principle:

rem
rem     Script:         fetch_first_postgres_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem

create table t1 as
with generator(id) as (
        select 1 from dual
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id,10)             small_vc,
        rpad('x',50,'x')        padding
from
        generator
/

alter table t1 modify n1 not null;

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(n1);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

drop index t1_i1;
create index t1_i1 on t1(n1, n2);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

I’ve created a table with 100,000 rows where each value of n1 appears in 10 consecutive rows (so the column is well-clustered), while for each value of n1 the n2 column has 10 distinct values (not necessarily in order, thanks to the mod(,13)). Using this data set I’ve executed the same query three times – selecting the rows for 6 consecutive values of n1, ordering by n1, n2.
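A couple of quick queries give a sanity check of that description of the data:

select  count(*) total_rows, count(distinct n1) distinct_n1
from    t1;

select  min(ct) min_rows_per_n1, max(ct) max_rows_per_n1
from    (select n1, count(*) ct from t1 group by n1);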

The first test will have to do a tablescan, the second can use the index I’ve created on (n1) but will have to do a sort (after visiting the table), the third can walk the index I’ve created on (n1,n2) and complete without sorting.

Here, in order, are the three execution plans:

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    70 |  4830 |   145   (9)| 00:00:01 |
|   1 |  SORT ORDER BY     |      |    70 |  4830 |   145   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T1   |    70 |  4830 |   144   (8)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N1"<=30 AND "N1">=25)


----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                       |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    70 |  4830 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1">=25 AND "N1"<=30)


-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    70 |  4830 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    70 |  4830 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1">=25 AND "N1"<=30)

Having created the Oracle model I asked Phil Florent (who had prompted this note by emailing me with a question about why Postgres 13 was executing a “Fetch First” so much faster than Oracle 19 – the basic answer is at this URL) if he would run it under Postgres and send me the execution plans.

The code required two changes – the first to handle the change in dialect, the second to supply a change in scale because my model produced such a small output that Postgres didn’t bother to use the new feature. Here’s the modified SQL to generate the original test data:

create table t1 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)       small_vc,
        rpad('x',50,'x')        padding
from
        generator;

And the three plans (so that you can compare the content and style of output with Oracle plans) that went with this small data set – first the no-index plan, then the plan for the (n1) index, then the plan with (n1,n2) indexed:

                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
Sort  (cost=2835.85..2836.00 rows=62 width=74) (actual time=19.534..19.600 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Seq Scan on t1  (cost=0.00..2834.00 rows=62 width=74) (actual time=0.067..19.417 rows=60 loops=1)
         Filter: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))
         Rows Removed by Filter: 99940

Planning Time: 0.351 ms
Execution Time: 19.703 ms


                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Sort  (cost=11.50..11.66 rows=62 width=74) (actual time=0.224..0.289 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Index Scan using t1_i1 on t1  (cost=0.42..9.66 rows=62 width=74) (actual time=0.024..0.113 rows=60 loops=1)
         Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.665 ms
Execution Time: 0.391 ms



                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Index Scan using t1_i1 on t1  (cost=0.42..114.66 rows=62 width=74) (actual time=0.022..0.155 rows=60 loops=1)
   Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.690 ms
Execution Time: 0.259 ms
 

As you can see, there’s very little difference between Oracle’s plans and Postgres’ plans in this example. (Though it’s rather nice to see what extra details appear in the Postgres plans, which were generated with the equivalent of Oracle’s “statistics_level = all” setting.)

The middle plan shows us that Postgres didn’t use the “incremental sort” – but it’s useful to see it anyway so that we can compare the structure of the plan when we increase the volume of data – which is what the following script is for:

create table t2 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 10000000
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)       small_vc,
        rpad('x',50,'x')        padding
from
        generator;
 
create index t2_i1 on t2(n1);

explain analyze
select  *
from    t2
where   n1 between 25000 and 30000
order by
        n1, n2
;

analyze t2;

Now we have 10M rows, still with 10 rows per value of n1, and our query requests 5,001 values of n1, so 50,010 rows in total. With that much data the optimizer decided to use the incremental sort rather than sorting the whole result set in one go:

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Incremental Sort  (cost=0.49..4532.57 rows=52677 width=74) (actual time=0.160..164.125 rows=50010 loops=1)
   Sort Key: n1, n2
   Presorted Key: n1
   Full-sort Groups: 1251  Sort Method: quicksort  Average Memory: 30kB  Peak Memory: 30kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.055..61.663 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.197 ms
Execution Time: 204.490 ms

Note, particularly, the “Presorted Key” line immediately below the “Sort Key” line. We can also check the plan without the incremental sort by using a set command to disable the feature:

set enable_incremental_sort = off;

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Sort  (cost=8812.64..8944.33 rows=52677 width=74) (actual time=181.487..227.735 rows=50010 loops=1)
   Sort Key: n1, n2
   Sort Method: external merge  Disk: 4128kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.070..74.975 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.239 ms
Execution Time: 266.325 ms

Comparing the times for these two queries there isn’t really much difference – but you will notice that the old approach has had to dump 4MB to disc while the incremental sort gets the job done in 30kB of memory, which may be an important difference in other circumstances.

More important than a tiny time difference in this trivial example is the fact that Phil Florent’s original question was:

“Why is this two-table join with ‘fetch first 5’ taking 28 seconds to complete in Oracle when it takes less than one millisecond to complete in Postgres?”

The answer was:

  • Oracle is doing a hash join that produces a 25M row result set, sorting it to do an analytic row_number() call to get the first 5 rows.
  • Postgres is doing a nested loop join, fetching the first row(s) from the first table in the right order by index range scan then fetching the matching rows from the second table – then doing an incremental sort on those rows; then fetching the next row(s) and repeating the process – stopping after it’s got through a total of 5 rows instead of generating and sorting 25M rows.

Here’s the plan (with obfuscated table names):

                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Limit  (cost=284.36..287.21 rows=5 width=10)
   ->  Incremental Sort  (cost=284.36..14261803.18 rows=25000000 width=10)
         Sort Key: table1.col1 DESC, table2.col2 DESC
         Presorted Key: table1.col1
         ->  Nested Loop  (cost=1.00..13294209.39 rows=25000000 width=10)
               ->  Index Scan using table1_col1_idx on table1  (cost=0.56..1300593.47 rows=25000000 width=8)
               ->  Index Scan using table2.pk_col on table2 (cost=0.43..0.48 rows=1 width=6)
                     Index Cond: (id = table1.id)

You’ll notice in this example that the incremental sort can take advantage of the optimizer’s knowledge of the index definitions whether the required sort order is ascending or descending.

You’ll also notice that Postgres has the same problem as Oracle when it comes to coping with Fetch First (or, in Oracle’s case, rownum <= N and optimizer_mode = first_rows_N). Even when it “knows” that a query is going to stop fetching data very early the plan still reports 25M rows as the expected volume of data.
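For reference, the query behind that plan presumably had roughly the following shape – this is a reconstruction from the plan using the same obfuscated names, not the original SQL:

select  t1.col1, t2.col2
from    table1 t1
join    table2 t2 on t2.id = t1.id
order by
        t1.col1 desc, t2.col2 desc
fetch first 5 rows only;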

Summary

Postgres 13 has a wonderful mechanism for optimising sorts that can make a huge difference to “first rows” queries and even basic “order by” clauses involving join query result sets.

Footnote

Browsing the Internet for documentation and comment on the incremental sort I found the following. Given my level of ignorance about Postgres I can’t comment on the completeness or correctness of the information, but it looked good to me, and I found its comments about this feature very informative.

October 10, 2020

Interval Oddity

Filed under: CBO,Execution plans,Oracle,Partitioning — Jonathan Lewis @ 2:51 pm BST Oct 10,2020

Interval partitioning is a popular strategy for partitioning date-based data. It’s an enhanced variant of range partitioning that allows you to define a starting partition and an interval that should be used to derive the high values for all subsequent partitions – and Oracle doesn’t even have to create intervening partitions if you insert data that goes far beyond the current partition: it automatically creates exactly the right partition (with the correct high_value and correctly inferred lower boundary) for the incoming data and behaves as if the intervening partitions will become available when they’re needed at some later point in time. So there’s no need for DBAs to work out actual partition high_values, no need to ensure that all the partitions you need out into the future have been pre-created, and no more batch processes crashing with Oracle error ORA-14400: inserted partition key does not map to any partition.

But there’s a surprising difference between traditional range partitioning and the newer interval partitioning that will increase CPU usage in some cases and may (though I haven’t yet investigated this in sufficient detail to create an example) lead to variations in execution plans.

To demonstrate the difference I’m going to create two tables with the same structure and content then run the same query against them and show you the resulting execution plans. Here’s the code to create and populate the tables:

rem
rem     Script:         interval_or_range.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem


create table t_interval(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
interval (numtoyminterval(1,'MONTH'))
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy'))
        )
;

create table t_range(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy')),
                partition start_p2 values less than (to_date('01-Feb-2020','dd-mon-yyyy')),
                partition start_p3 values less than (to_date('01-Mar-2020','dd-mon-yyyy')),
                partition start_p4 values less than (to_date('01-Apr-2020','dd-mon-yyyy')),
                partition start_p5 values less than (to_date('01-May-2020','dd-mon-yyyy')),
                partition start_p6 values less than (to_date('01-Jun-2020','dd-mon-yyyy')),
                partition start_p7 values less than (to_date('01-Jul-2020','dd-mon-yyyy')),
                partition start_p8 values less than (to_date('01-Aug-2020','dd-mon-yyyy')),
                partition start_p9 values less than (to_date('01-Sep-2020','dd-mon-yyyy'))
        )
;

insert into t_range select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue  
;

insert into t_interval select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_RANGE',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_INTERVAL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve used composite partitioned tables in this example but the same anomaly appears with simple partitioning – which you can test by commenting out the “subpartition by ..” lines. The two tables have partitions defined to hold a month’s data. I’ve inserted a few rows into most of the partitions, and ensured that I haven’t accidentally attempted to insert data that falls outside the legal range of the table with the predefined partitions.

To show that the tables are nominally identical here’s the list of partitions with their high values and number of rows:

column table_name       format a15
column partition_name   format a15
column high_value       format a80

break on table_name skip 1

select
        table_name, partition_name, num_rows, high_value
from
        user_tab_partitions
where
        table_name in ('T_INTERVAL','T_RANGE')
order by
        table_name, partition_name
;


TABLE_NAME      PARTITION_NAME    NUM_ROWS HIGH_VALUE
--------------- --------------- ---------- --------------------------------------------------------------------------------
T_INTERVAL      START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10722              30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10723              29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10724              31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10725              30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10726              31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10727              30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10728              31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10729              28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

T_RANGE         START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P2                30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P3                29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P4                31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P5                30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P6                31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P7                30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P8                31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P9                28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


18 rows selected.

With this setup we can now run a simple query against the two tables using a where clause that is an exact match for the partition definition for the July data:

set serveroutput off

select 
        count(*) 
from 
        t_range 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

select 
        count(*) 
from 
        t_interval 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

I’ve left in two versions of the date predicates – the “ANSI” style, and a strictly formatted “to_date()” style that is still the approach used most commonly in Oracle systems. The effect on the execution plans is the same for both variations of the date declaration, and here are the two execution plans – reported from an instance of 19.3.0.0:

---------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |         |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|         |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|   3 |    TABLE ACCESS FULL    | T_RANGE |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
---------------------------------------------------------------------------------------------------


------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |            |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|            |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|*  3 |    TABLE ACCESS FULL    | T_INTERVAL |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(("ORDER_DATE">=TO_DATE(' 2020-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DATE"<TO_DATE(' 2020-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

It’s quite possible that you won’t notice the difference between these plans, even though I’ve put them directly one below the other; and it’s very likely that most people wouldn’t think about the difference if they didn’t have the two plans available at the same time to compare.

If you can’t spot the difference it’s because I’ve deliberately arranged them in a way that might fool you into not looking carefully enough.

If you’ve spotted the difference it’s probably because you’ve examined the Predicate Information section. There isn’t one for the t_range example but there is one for the t_interval example – and it was a little sneaky of me to make it easy for you to assume that the one set of predicates I showed was common to the two plans.

Since the plans have been pulled from memory (v$sql_plan) it seems likely that they are truthful and the query based on the interval partitioning is actually checking every row in the partition against two date values. In my example that’s not going to make much difference to performance, but in a data warehouse with millions of rows per partition there’s scope for a noticeable increase in CPU between the two queries – especially since the block reads are likely to be direct path, bypassing much of the usual (CPU-intensive) buffer cache activity.

Interestingly when I checked the 10053 trace files for the two queries the CPU cost for the pair was identical. Normally you expect to see at least a little CPU cost (even if it’s only tens of units out of millions) for checking a predicate. This raises the question – does the optimizer include a cost for the t_range table for predicate checks that aren’t going to happen at run-time, or does it fail to cost the predicate checks that are going to happen for the t_interval table? [Edit: easy enough to check by doing the test on a simple heap table that clones the data from that one partition, and checking the calculated cost with and without the predicate]
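Here’s a minimal sketch of that check (t_heap is an invented name; the partition name comes from the earlier listing, and the predicate matches the July boundaries):

create table t_heap as
select * from t_range partition (start_p8);

execute dbms_stats.gather_table_stats(null,'t_heap',method_opt=>'for all columns size 1')

explain plan for
select count(*) from t_heap;

select * from table(dbms_xplan.display);

explain plan for
select  count(*)
from    t_heap
where   order_date >= date '2020-07-01'
and     order_date <  date '2020-08-01';

select * from table(dbms_xplan.display);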

There’s a further ramification to this anomaly, relating to the question on the Oracle-L list server that prompted the investigation. What happens on Exadata where the predicates can be offloaded to storage?

The original question wasn’t actually about the appearance (or not) of the predicates, it was about an unexpected cardinality estimate for a query involving two equi-partitioned tables, so this note has gone completely off-topic from the question; but another strange detail about the predicates showed up when I suggested the creation of a column group on the join columns. A (redundant) predicate that had been offloaded to storage stopped being offloaded; here, from the original posting with the original tables, are the two different Predicate Information sections that appeared – the first without the column group, the second after the column group had been created and its stats collected:

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - storage("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Predicate Information (identified by operation id):
---------------------------------------------------
   4 -
access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<TO_DATE(' 2020-08-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

Notice how the storage() predicate that appears at operation 8 in the first set has disappeared from the second even though operation 6 manages to retain its storage() predicate throughout.

I’ve modelled a further example of odd behaviour using two pairs of tables – one pair using range/hash partitioning the other interval/hash partitioning. I won’t reproduce it here but the problem of redundant predicates appearing and then disappearing gets even stranger.

I haven’t yet produced an example where the unexpected predicate behaviour has affected the costs or cardinalities – but I’ve only spent a couple of hours playing around with well-formed examples: it’s possible that in badly formed examples (e.g. with statistical inconsistencies) the side effect could mean that two notionally identical queries produce different numbers and different plans because they end up with different predicates in the final transformed query.

Footnote

Following an email from David Kurtz, it occurred to me that I should have made it clear that the disappearance of predicates on the partition key is expected behaviour when the predicates are clearly synchronised with the partition boundaries. The behaviour for the interval partitioning is the oddity, the behaviour for the “normal” range partitioning is the standard.
