## December 22, 2015

### Predicates

Filed under: Execution plans,Oracle,Tuning — Jonathan Lewis @ 12:58 pm BST Dec 22,2015

I received an email recently that started with the sort of opening sentence that I see far more often than I want to:

I have come across an interesting scenario that I would like to run by you, for your opinion.

It’s not that I object to being sent interesting scenarios, it’s just that they are rarely interesting – and this wasn’t one of those rare interesting ones. On the plus side it reminded me that I hadn’t vented one of my popular rants for some time.

Here’s the problem – see if you can work out the error before you get to the rant:

“I’ve got a table and a view on that table; and I’ve got a query that is supposed to use the view. Whether I use the table or the view in the query the optimizer uses the primary key on the table to access the table – but when I use the table the query takes about 30 ms, when I use the view the query takes about 903 ms”.

The email included a stripped-down version of the problem (which I’ve stripped even further) – so score some brownie points on that one.  Here, in order, are the table, the view, and two variations of the query:

```
create table table_a (
col_1  varchar2(20)	not null,
col_2  number(10)	not null,
col_3  varchar2(20)	not null,
col_4  varchar2(100)
);

insert /*+ append */ into table_a
select
        substr(object_type,1,20),       -- col_1: a plausible select list; the original was lost in transcription
        object_id,                      -- col_2
        substr(object_name,1,20),       -- col_3
        substr(object_name,1,100)       -- col_4
from
        all_objects
where
        rownum <= 10000  -- > comment to avoid wordpress formatting issue
;

commit;

alter table table_a add constraint ta_pk primary key(col_1, col_2, col_3);
execute dbms_stats.gather_table_stats(user,'table_a',method_opt=>'for all columns size 1')

create or replace view view_a (
col1,
col2,
col3,
col4
)
as
select
col_1 as col1,
cast(col_2 as number(9)) as col2,
col_3 as col3,
col_4 as col4
from
table_a
;

variable b1 varchar2(10)
variable b2 number

exec :b2 := 0

select /*+ index(table_a) tracking_t2 */
*
from	table_a
where
col_1 = :b1
and	col_2 = :b2
;

select /*+ index(view_a.table_a) tracking_v2 */
*
from	view_a
where
col1 = :b1
and	col2 = :b2
;

```

Question 1 (for no points): Why would there be a difference (though very small in this example) in performance ?

Question 2 (for a virtual pat on the head): What did the author of the email not do that made him think this was an interesting problem ?

Just to muddy the water for those who need a hint (that’s a hint hint, not an Oracle hint) – here are the two execution plans reported from v$sql_plan in version 12.1.0.2:

```
SQL_ID  514syc2mcb1wp, child number 0
-------------------------------------
select /*+ index(table_a) tracking_t2 */   * from table_a where  col_1
= :b1 and col_2 = :b2

Plan hash value: 3313752691

---------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |      1 |        |     10 |00:00:00.01 |      13 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_A |      1 |      1 |     10 |00:00:00.01 |      13 |
|*  2 |   INDEX RANGE SCAN                  | TA_PK   |      1 |      1 |     10 |00:00:00.01 |       3 |
---------------------------------------------------------------------------------------------------------

SQL_ID  ck0y3v9833wrh, child number 0
-------------------------------------
select /*+ index(view_a.table_a) tracking_v2 */  * from view_a where
col1 = :b1 and col2 = :b2

Plan hash value: 3313752691

---------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |      1 |        |     10 |00:00:00.01 |      13 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_A |      1 |      1 |     10 |00:00:00.01 |      13 |
|*  2 |   INDEX RANGE SCAN                  | TA_PK   |      1 |      1 |     10 |00:00:00.01 |       3 |
---------------------------------------------------------------------------------------------------------

```

I’ve even shown you the Plan Hash Values for the two queries so you can check that the execution plans were the same.

So what have I just NOT done in my attempt to make it harder for you to understand what is going on ?

Give yourself a pat on the head if you’ve been thinking “Where’s the predicate section for these plans ?”  (9 years old today).

Here are the two predicate sections (in the same order as the plans above):

```
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COL_1"=:B1 AND "COL_2"=:B2)

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COL_1"=:B1)
    filter(CAST("COL_2" AS number(9))=:B2)

```

Notice how the optimizer can use both predicates to probe the index when we query the table but, thanks to the function applied to the column in the view, can only probe the index on the first column of the view and has to check every index entry for the first input value to see if the result of the cast matches the second input value. The size of the range scan in the second case could be much larger than the size of the range scan in the first case – the difference in performance could simply be a reflection that col_1 is very repetitive with many different values of col_2 for every value of col_1.
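The difference between an access predicate and a filter predicate can be sketched with a toy model (a minimal Python sketch, not Oracle internals – the "index" is just a sorted list of key tuples, and the entry counts are illustrative):

```python
import bisect

# Toy model: the primary key index is a sorted list of (col_1, col_2) key
# tuples; count how many index entries each strategy has to touch.
index = sorted(("A", n) for n in range(1000))    # one col_1 value, 1000 col_2 values

def probe_both(c1, c2):
    # access("COL_1"=:B1 AND "COL_2"=:B2) - probe on the full key prefix
    lo = bisect.bisect_left(index, (c1, c2))
    hi = bisect.bisect_right(index, (c1, c2))
    return hi - lo                               # entries touched == rows wanted

def probe_then_filter(c1, c2):
    # access("COL_1"=:B1) plus filter(cast(col_2)=:B2) - walk every entry
    # for c1, applying the "cast" to each one before comparing
    lo = bisect.bisect_left(index, (c1,))
    hi = bisect.bisect_right(index, (c1, float("inf")))
    touched = index[lo:hi]
    matched = [e for e in touched if int(e[1]) == c2]
    return len(touched), len(matched)

print(probe_both("A", 42))          # 1
print(probe_then_filter("A", 42))   # (1000, 1)
```

The second strategy touches every entry for the leading column value to return the same single row – which is exactly why the view-based query gets slower as col_1 becomes more repetitive.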

### Interesting

While the problem itself isn’t interesting – it does raise a couple of points worth mentioning (and I’m not going to ask why the view has that surprising cast() in it – but if pushed I could invent a reason).

First, what steps have been taken to ensure that a query against the view won’t crash with Oracle error 1438:

```
SQL> insert into table_a values(:b1, 1e9,'x','x');

1 row created.

SQL> select * from view_a where col1 = :b1;
ERROR:
ORA-01438: value larger than specified precision allowed for this column

```

Possibly there’s a check constraint on the column restricting it to values that can survive the cast to number(9).

Secondly, it’s often possible to use constraints or virtual columns (or both together) that allow the optimizer to get clever with expression substitution and come up with optimal execution plans even when there are traps like this put in the way. In this case I couldn’t manage to make the usual tricks work. Possibly the only way to get the hoped-for performance is to create a second index on (col_1, cast(col_2 as number(9)), col_3).

## December 9, 2015

### 12c Scalar Subquery

Filed under: 12c,Execution plans,Oracle — Jonathan Lewis @ 2:25 pm BST Dec 9,2015

Every version of the optimizer enhances existing mechanisms and introduces new features, and 12c has introduced some of the most sophisticated transformations to date. In this note I want to demonstrate an enhancement to subquery unnesting that could give a significant performance boost to a certain query pattern but which might, unfortunately, result in worse performance.

Historically subquery unnesting turned subqueries (correlated or not) in the where clause into joins. In 12c subquery unnesting can also turn scalar subqueries in the select list into joins – we’ll discuss why this could be a good thing but might occasionally be a bad thing later on in the article, but let’s start with a test case.

### Sample data.

In my demonstration I’m going to use three tables which, for convenience, are three clones of the same data.

```
rem
rem     Script: 12c_scalar_subq.sql
rem     Dated:  April 2014
rem     Author: J.P.Lewis
rem

create table t1
as
with generator as (
select
rownum id
from dual
connect by
level <= 1e4
)
select
rownum				id,
mod(rownum-1,100)		mod100,
trunc((rownum - 1)/100)		trunc100,
rownum				n1,
rownum				n2
from
generator
where
rownum <= 10000
;

create table t2 as select * from t1;
create table t3 as select * from t1;

create index t1_i1 on t1(id);
create index t2_i1 on t2(id,mod100);
create index t3_i1 on t3(id,trunc100);

begin
dbms_stats.gather_table_stats(user,'t1');
dbms_stats.gather_table_stats(user,'t2');
dbms_stats.gather_table_stats(user,'t3');
end;
/
```

I’ll be examining a query against t1 that includes two correlated scalar subqueries in the select list that reference one each of t2 and t3:

```
explain plan for
select
/*+
qb_name(main)
*/
n1, n2,
(
select	/*+ qb_name(sq1) */
max(mod100)
from	t2
where	t2.id = t1.id
) new_n1,
(
select	/*+ qb_name(sq2) */
max(trunc100)
from	t3
where	t3.id = t1.id
) new_n2
from
t1
where
t1.id between 101 and 200
;

select * from table(dbms_xplan.display);
```

### 11g Plan

This is the execution plan you might expect to see from 11g – in my case, with my system stats etc. and running 11.2.0.4:

```
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |   101 |  1212 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |     7 |            |          |
|   2 |   FIRST ROW                  |       |     1 |     7 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)| T2_I1 |     1 |     7 |     2   (0)| 00:00:01 |
|   4 |  SORT AGGREGATE              |       |     1 |     7 |            |          |
|   5 |   FIRST ROW                  |       |     1 |     7 |     2   (0)| 00:00:01 |
|*  6 |    INDEX RANGE SCAN (MIN/MAX)| T3_I1 |     1 |     7 |     2   (0)| 00:00:01 |
|   7 |  TABLE ACCESS BY INDEX ROWID | T1    |   101 |  1212 |     4   (0)| 00:00:01 |
|*  8 |   INDEX RANGE SCAN           | T1_I1 |   101 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("T2"."ID"=:B1)
6 - access("T3"."ID"=:B1)
8 - access("T1"."ID">=101 AND "T1"."ID"<=200)
```

As you can see, the operations for the subqueries appear first in the plan (lines 1-3, and 4-6), with the operations for the main query appearing as the last section of the plan (lines 7-8). You might note that the total cost of the plan doesn’t cater for the cost of the subqueries – technically we might expect to see the optimizer producing a cost of something like 408 on the basis that it’s going to run each subquery an estimated 101 times and each subquery has a cost of 2, and the 101 rows are generated from a query with a cost of 4 giving: 4 + 101 * (2 + 2) = 408.

### 12c Plan

On the upgrade to 12c, the same code produces the following plan:

```
--------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |   101 |  6464 |     8   (0)| 00:00:01 |
|*  1 |  HASH JOIN OUTER                      |          |   101 |  6464 |     8   (0)| 00:00:01 |
|*  2 |   HASH JOIN OUTER                     |          |   101 |  3838 |     6   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |   101 |  1212 |     4   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_I1    |   101 |       |     2   (0)| 00:00:01 |
|   5 |    VIEW                               | VW_SSQ_2 |   101 |  2626 |     2   (0)| 00:00:01 |
|   6 |     HASH GROUP BY                     |          |   101 |   707 |     2   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                 | T2_I1    |   101 |   707 |     2   (0)| 00:00:01 |
|   8 |   VIEW                                | VW_SSQ_1 |   101 |  2626 |     2   (0)| 00:00:01 |
|   9 |    HASH GROUP BY                      |          |   101 |   707 |     2   (0)| 00:00:01 |
|* 10 |     INDEX RANGE SCAN                  | T3_I1    |   101 |   707 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("ITEM_1"(+)="T1"."ID")
2 - access("ITEM_2"(+)="T1"."ID")
4 - access("T1"."ID">=101 AND "T1"."ID"<=200)
7 - access("T2"."ID">=101 AND "T2"."ID"<=200)
10 - access("T3"."ID">=101 AND "T3"."ID"<=200)
```

As you can see, the separate plans for the subqueries have disappeared and the plan is showing a three-table join (with two outer joins, and two of the “tables” being the non-mergeable views vw_ssq_2 and vw_ssq_1).

There are several details to pick up in this plan (apart from the unnesting). The cost is only 8 – but in this case it isn’t the effect of the optimizer “ignoring” the cost of the subqueries, it’s the optimizer correctly working out the cost of the unnested subqueries with joins. The cost happens to be low in this case because the optimizer has used transitive closure to pass the predicate from the driving query into the subqueries – so we need only do a couple of short index range scans to get all the data we need in the unnested subqueries.

The outer joins are needed because it is valid for the original scalar subquery mechanism to return no data for a subquery and still report a row (with nulls) for t1. If the rewrite didn’t introduce the outer join then t1 rows for which there were no matching t2 or t3 rows would disappear from the result set.
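The equivalence can be sketched in a few lines of (hypothetical, illustrative) Python: the scalar-subquery form does one lookup per t1 row and supplies a NULL on a miss, which is exactly what “aggregate first, then left outer join” preserves:

```python
# Toy model (made-up data, not Oracle) of why the rewrite needs OUTER joins:
# a scalar subquery returns NULL, not "no row", when nothing matches, so
# every t1 row must survive the unnested form as well.
t1 = [101, 102, 103]
t2 = {101: [7, 9], 102: [5]}       # id -> mod100 values; no t2 rows for id 103

# scalar-subquery form: one lookup per t1 row, NULL (None) on a miss
scalar = [(r, max(t2[r]) if r in t2 else None) for r in t1]

# unnested form: aggregate t2 first, then LEFT OUTER join back to t1
agg    = {r: max(v) for r, v in t2.items()}
joined = [(r, agg.get(r)) for r in t1]

print(scalar == joined, scalar)    # True [(101, 9), (102, 5), (103, None)]
```

With an inner join instead of `agg.get()` the row for id 103 would vanish – which is the wrong answer for the original query.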

### Threats and workarounds

In this (lightweight) example it looks as if this transformation is a good idea, but it’s always possible that the optimizer might choose to do this when it’s a bad idea. In fact, a quick check of the optimizer trace (10053) suggests that this is an uncosted transformation that will take place “because it can”. Here are six highly suggestive consecutive lines from the trace file:

```
SU: Unnesting query blocks in query block MAIN (#1) that are valid to unnest.
Subquery Unnesting on query block MAIN (#1)
SU: Performing unnesting that does not require costing.
SU: Considering subquery unnest on query block MAIN (#1).
SU:   Unnesting  scalar subquery query block SQ2 (#2)
Registered qb: SEL$2E540226 0x50b1a950 (SUBQ INTO VIEW FOR COMPLEX UNNEST SQ2)
```

Even if this transformation is cost-based rather than heuristic it’s always possible for the optimizer to make a very poor estimate of cost and do the wrong thing. Fortunately it’s possible to block the unnesting with the “traditional” /*+ no_unnest */ hint:

```
select
/*+
qb_name(main)
no_unnest(@sq1)
no_unnest(@sq2)
*/
n1, n2,
...
```

With these hints in place the execution plan changes back to the 11g form – though there is a notable change in the estimated final cost of the query:

```
---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |   101 |  1212 |   206   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                     |       |     1 |     7 |            |          |
|   2 |   FIRST ROW                         |       |     1 |     7 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)       | T2_I1 |     1 |     7 |     2   (0)| 00:00:01 |
|   4 |  SORT AGGREGATE                     |       |     1 |     7 |            |          |
|   5 |   FIRST ROW                         |       |     1 |     7 |     2   (0)| 00:00:01 |
|*  6 |    INDEX RANGE SCAN (MIN/MAX)       | T3_I1 |     1 |     7 |     2   (0)| 00:00:01 |
|   7 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   101 |  1212 |     4   (0)| 00:00:01 |
|*  8 |   INDEX RANGE SCAN                  | T1_I1 |   101 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("T2"."ID"=:B1)
6 - access("T3"."ID"=:B1)
8 - access("T1"."ID">=101 AND "T1"."ID"<=200)
```

It’s a little surprising that the final cost is (4 + 202) rather than the (4 + 404) that we calculated earlier, but a few variations on the test suggest that the optimizer is using half the cost of each of the scalar subqueries in the final cost estimate – perhaps as a nod in the direction of scalar subquery caching.
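A quick sanity check of the arithmetic (plain Python, reproducing the figures quoted above):

```python
# Costing arithmetic from the text: a driving query of cost 4 returning an
# estimated 101 rows, with two scalar subqueries of cost 2 each.
naive  = 4 + 101 * (2 + 2)        # charge each subquery at full cost per row
halved = 4 + 101 * (2 + 2) // 2   # charge each subquery at half cost per row
print(naive, halved)              # 408 206
```

The 206 is exactly the cost reported by the hinted 12c plan, which is what suggests the “half cost per subquery” interpretation.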

As always it is important to remember that you cannot look at a query like this and know which plan is going to perform better unless you know the data content, the pattern of data distribution, and the probable effect of scalar subquery caching. In some cases it may be that an extremely “lucky” data pattern will mean that a scalar subquery will run a very small number of times while, with the same data arranged in a different order, the benefit of scalar subquery caching will disappear and the unnesting approach may be much better.
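Here’s a toy Python model (purely illustrative – Oracle’s actual cache is a small in-memory hash table whose behaviour is version dependent) of how data ordering interacts with a size-limited scalar subquery cache:

```python
# Toy model of scalar subquery caching: a small cache keyed on the correlation
# value. Clustered input runs the subquery once per distinct value; the same
# rows in an unlucky order can defeat a size-limited cache completely.
def run_query(ids, cache_size):
    cache, executions = {}, 0
    for i in ids:
        if i not in cache:
            executions += 1                      # subquery actually executed
            if len(cache) >= cache_size:
                cache.pop(next(iter(cache)))     # crude oldest-entry eviction
            cache[i] = i * 10                    # stand-in for the subquery result
    return executions

clustered = [v for v in range(100) for _ in range(10)]   # 1,000 rows, values grouped
shuffled  = list(range(100)) * 10                        # same rows, interleaved

print(run_query(clustered, cache_size=8))   # 100: one execution per distinct value
print(run_query(shuffled, cache_size=8))    # 1000: cache thrashes, runs every row
```

Same rows, same values, a 10x difference in subquery executions – which is why the same query can be a winner or a loser depending only on data ordering.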

It is useful to be able to recognize that there is a new optimizer feature in 12c that might give you a performance boost but might hit you with an unlucky performance penalty (especially in cases, perhaps, where you have previously engineered your code very carefully to take advantage of scalar subquery caching) and to know that there is a workaround available. In your search for pre-empting production performance problems, this code structure is another pattern to scan for in your source code.

## November 6, 2015

### Filter Hash

Filed under: Execution plans,Hints,Oracle — Jonathan Lewis @ 6:43 am BST Nov 6,2015

One of the most irritating features of solving problems for clients is that the models I build to confirm my diagnosis and test my solutions often highlight further anomalies, or make me ask questions that might produce some useful answers to future problems.

Recently I had cause to ask myself if Oracle would push a filter subquery into the second tablescan of a hash join – changing a plan from this:

```
filter
        hash join
                table access full t1
                table access full t2
        table access by rowid t3
                index range scan t3_i1
```

to this:

```
hash join
        table access full t1
        filter
                table access full t2
                table access by rowid t3
                        index range scan t3_i1
```

or, perhaps more likely, to this:

```
hash join
        table access full t1
        table access full t2
                table access by rowid t3
                        index range scan t3_i1
```

The final variation here is an example where the FILTER operation itself is swallowed up in line 3 of the plan, twisting the body of the plan in a way that makes the “first child first” rule of thumb lead to an incorrect interpretation. I’ve discussed this pattern of behaviour before, but in the earlier cases the “missing filter” has either applied to an index or to the first table of the hash join.

The type of query where the strategy for pushing a filter subquery into the second table of a hash join might be appropriate would be something like the following (although in this simple case we’d probably expect Oracle to unnest the subquery and turn it into a semi-join):

```
select
        t1.n1,
        t2.n1
from
        t1, t2
where
        mod(t1.n1,100) = 0
and     t2.id = t1.id           -- join condition with a possible order t1 -> t2
and     exists (
                select          -- subquery that could be pushed against t2
                        null
                from    t3
                where   t3.id = t2.n1
        )
;
```

The benefit of using a filter subquery and pushing it would only appear in specific circumstances – you would need the number of executions of the subquery to be significantly larger AFTER the hash join than BEFORE in order for the early subquery filter to be a good idea.
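A back-of-envelope model (illustrative Python, not Oracle) of the execution counts in the two strategies, using the row counts from the data set built below, and ignoring scalar subquery caching:

```python
# Count EXISTS-subquery executions for filter-late (above the hash join)
# vs filter-early (pushed into the t2 scan). With this data every t2 row
# is scanned, but only 1,000 rows survive the join.
t2_rows = range(100_000)                                  # full tablescan of t2
t1_ids  = {r for r in range(100_000) if r % 100 == 0}     # mod(t1.n1,100) = 0

# filter-late: join first, then probe t3 once per surviving row
late_probes  = sum(1 for r in t2_rows if r in t1_ids)
# filter-early: probe t3 once per t2 row during the scan, before the join
early_probes = sum(1 for r in t2_rows)

print(late_probes, early_probes)   # 1000 100000
```

These are exactly the Starts figures (1,000 vs 100,000) that show up in the execution plans further down – pushing only wins when the join would multiply, rather than shrink, the number of probes.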

Since there are always special cases that can be improved by carefully selected optimisation strategies I created three tables to find out what plans I could produce by blocking unnesting and trying to push the filter subquery. Here’s the code I used for the tables:

```
create table t1 nologging
as
with generator as (
select  --+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
rownum                  id,
rownum                  n1
from
generator       v1,
generator       v2
where
rownum <= 1e5
;

create table t2 nologging as
select * from t1;

create table t3 nologging as
select * from t1;

create index t3_i1 on t3(id);

-- gather stats if needed (version dependent) with no histograms

```

With this data in place I can experiment with hinting the path I want to see; there are basically two parts to the hints I need: the first, in the main query, to control the join: /*+ leading (t1 t2) use_hash(t2) no_swap_join_inputs(t2) */; the second in the subquery: /*+ no_unnest push_subq */. So here are a couple of plans – first without the push_subq hint:

```
Plan hash value: 2281699686

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |      1 |        |   926 (100)|   1000 |00:00:00.94 |    5409 |       |       |          |
|*  1 |  FILTER             |       |      1 |        |            |   1000 |00:00:00.94 |    5409 |       |       |          |
|*  2 |   HASH JOIN         |       |      1 |   1000 |   425   (5)|   1000 |00:00:00.91 |    3295 |  1888K|  1888K| 1502K (0)|
|*  3 |    TABLE ACCESS FULL| T1    |      1 |   1000 |   214   (6)|   1000 |00:00:00.03 |    1614 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2    |      1 |    100K|   209   (3)|    100K|00:00:00.23 |    1681 |       |       |          |
|*  5 |   INDEX RANGE SCAN  | T3_I1 |   1000 |      1 |     1   (0)|   1000 |00:00:00.02 |    2114 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( IS NOT NULL)
2 - access("T2"."ID"="T1"."ID")
3 - filter(MOD("T1"."N1",100)=0)
5 - access("T3"."ID"=:B1)

```

In the absence of the push_subq hint the optimizer has taken the hash join (operations 2 – 4) and filtered late (operations 1 and 5).

When I included the push_subq hint this is what I got in 11.2.0.4:

```
Plan hash value: 2281699686

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |      1 |        |   424 (100)|   1000 |00:00:00.94 |    5409 |       |       |          |
|*  1 |  FILTER             |       |      1 |        |            |   1000 |00:00:00.94 |    5409 |       |       |          |
|*  2 |   HASH JOIN         |       |      1 |   1000 |   423   (5)|   1000 |00:00:00.91 |    3295 |  1888K|  1888K| 1535K (0)|
|*  3 |    TABLE ACCESS FULL| T1    |      1 |   1000 |   214   (6)|   1000 |00:00:00.03 |    1614 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2    |      1 |   5000 |   209   (3)|    100K|00:00:00.23 |    1681 |       |       |          |
|*  5 |   INDEX RANGE SCAN  | T3_I1 |   1000 |      1 |     1   (0)|   1000 |00:00:00.02 |    2114 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( IS NOT NULL)
2 - access("T2"."ID"="T1"."ID")
3 - filter(MOD("T1"."N1",100)=0)
5 - access("T3"."ID"=:B1)

```

The plan hasn’t changed!

Clearly the shape of the plan hasn’t changed, the numbers for Starts and A-rows haven’t changed, the Buffers haven’t changed, the Time hasn’t changed – in fact the session stats for the two queries were virtually identical. Subquery pushing has clearly NOT taken place. But take a look at the E-rows and Cost: operation 4 in the “pushed” plan reports E-Rows = 5,000 which is the classic 5% for an existence subquery when compared with the E-rows = 100K in the first plan; the cost of the hash join is slightly smaller, and the cost of the whole query has halved – but the run-time engine is doing the same amount of work and following the same plan. The optimizer seems to have pushed the arithmetic, without pushing the subquery!

I could force subquery pushing to take place if I reversed the join order – and all I have to do is change the main hint to /*+ leading (t2 t1) use_hash(t1) no_swap_join_inputs(t1) */ to see this happen; here’s the resulting plan:

```
------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |   424 (100)|   1000 |00:00:02.02 |     104K|       |       |          |
|*  1 |  HASH JOIN         |       |      1 |   1000 |   423   (5)|   1000 |00:00:02.02 |     104K|  5984K|  2337K| 5601K (0)|
|*  2 |   TABLE ACCESS FULL| T2    |      1 |   5000 |   209   (3)|    100K|00:00:01.31 |     102K|       |       |          |
|*  3 |    INDEX RANGE SCAN| T3_I1 |    100K|      1 |     1   (0)|    100K|00:00:00.58 |     101K|       |       |          |
|*  4 |   TABLE ACCESS FULL| T1    |      1 |   1000 |   214   (6)|   1000 |00:00:00.03 |    1681 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T2"."ID"="T1"."ID")
2 - filter( IS NOT NULL)
3 - access("T3"."ID"=:B1)
4 - filter(MOD("T1"."N1",100)=0)

```

You can see (as I implied earlier on) that it was a bad idea to push the subquery with this data set; the subquery has now run 100,000 times adding an extra 1.08 seconds of CPU to the run-time activity; but I’m only trying to establish a principle, so I’m not worried about that. Perhaps, having got subquery pushing in this plan, I could change that no_swap_join_inputs(t1) hint to a swap_join_inputs(t1) to see the plan I want with lines 2 and 3 below line 4 – and here’s what I get when I do:

```
------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |   424 (100)|   1000 |00:00:01.97 |     104K|       |       |          |
|*  1 |  HASH JOIN         |       |      1 |   1000 |   423   (5)|   1000 |00:00:01.97 |     104K|  1888K|  1888K| 1499K (0)|
|*  2 |   TABLE ACCESS FULL| T1    |      1 |   1000 |   214   (6)|   1000 |00:00:00.02 |    1614 |       |       |          |
|*  3 |   TABLE ACCESS FULL| T2    |      1 |   5000 |   209   (3)|    100K|00:00:01.28 |     103K|       |       |          |
|*  4 |    INDEX RANGE SCAN| T3_I1 |    100K|      1 |     1   (0)|    100K|00:00:00.56 |     101K|       |       |          |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T2"."ID"="T1"."ID")
2 - filter(MOD("T1"."N1",100)=0)
3 - filter( IS NOT NULL)
4 - access("T3"."ID"=:B1)

```

So we can get where we want to be by starting backwards and reversing the join order! You might notice, by the way, that in the last two plans the optimizer “thinks” it will have to run the subquery 5,000 (or possibly 100,000) times, but the cost of the query is still less than the initial case where the optimizer thought it would have to run the subquery just 1,000 times. (You can see these numbers by looking at the E-rows that feed the filter operation.)

### Summary

In this particular case it doesn’t make sense to force the plan I’ve managed to achieve – when filter subqueries are involved the patterns in the data can make a huge difference to performance – but in demonstrating that I can get to a plan that I want I’ve had to work through the option of starting with the wrong join order and then swapping sides on the hash join, and I’ve demonstrated in passing that there is a curious costing anomaly that could affect the optimizer’s choice in more complex execution plans.

Reference script: filter_hash.sql

## September 2, 2015

### IN/EXISTS bugs

Filed under: 12c,Bugs,CBO,Execution plans,Oracle,subqueries — Jonathan Lewis @ 8:11 am BST Sep 2,2015

Here’s a simple data set – I’m only interested in three of the columns in the work that follows, but it’s a data set that I use for a number of different models:

```
execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
select	--+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
trunc(dbms_random.value(0,1000))	n_1000,
trunc(dbms_random.value(0,750))		n_750,
trunc(dbms_random.value(0,600))		n_600,
trunc(dbms_random.value(0,400))		n_400,
trunc(dbms_random.value(0,90))		n_90,
trunc(dbms_random.value(0,72))		n_72,
trunc(dbms_random.value(0,40))		n_40,
trunc(dbms_random.value(0,3))		n_3
from
generator	v1,
generator	v2
where
rownum <= 1e6
;
```
```
create table t2 nologging
as
select * from t1
;

begin
dbms_stats.gather_table_stats(
ownname		 => user,
tabname		 =>'T1',
method_opt 	 => 'for all columns size 1'
);

dbms_stats.gather_table_stats(
ownname		 => user,
tabname		 =>'T2',
method_opt 	 => 'for all columns size 1'
);
end;
/

```

The columns I want to consider are n_3, n_400, and n_1000. As their names suggest the columns have 3, 400, and 1000 distinct values respectively and since I’ve used the dbms_random.value() function to generate the data the distinct values are fairly evenly spread across the million rows of the table.

Consider, then, the following two queries:

```
select
*
from
t1
where
exists (
select  null
from    t2
where   n_1000 = 0
and     t2.n_400 = t1.n_400
and     t2.n_3 = t1.n_3
)
;

select
*
from
t1
where
(t1.n_400, t1.n_3) in (
select  t2.n_400, t2.n_3
from    t2
where   t2.n_1000 = 0
)
;

```

The first point to check is that these two queries are logically equivalent.

Once you’re happy with that idea we can work out, informally, how many rows we should expect the queries ought to return: there are 1,200 combinations for (n_400, n_3) so each combination should return roughly 833 rows; if we pick 1,000 rows from the 1 million available we can expect to see 679 of those combinations (that’s Alberto Dell’Era’s “selection without replacement” formula that Oracle uses for adjusting num_distinct to allow for filter predicates). So we might reasonably suggest that the final number of rows is 833 * 679 = 565,607. It turns out that that’s a pretty good estimate – when I ran the query the result was actually 567,018 rows.
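For anyone who wants to check the arithmetic, here’s the calculation in Python (using one common statement of the “selection without replacement” formula – treat the exact algebraic form as my assumption):

```python
# D distinct (n_400, n_3) combinations; pick n rows at random from N rows and
# ask how many distinct combinations we expect to see, then turn that into
# the cardinality estimate quoted in the text.
D = 400 * 3          # 1,200 combinations
N = 1_000_000        # rows in t2
n = 1_000            # rows expected to survive n_1000 = 0

distinct_seen  = D * (1 - (1 - n / N) ** (N / D))
rows_per_combo = N / D                     # ~833 t1 rows per combination
estimate = round(rows_per_combo) * round(distinct_seen)

print(round(distinct_seen), estimate)   # 679 565607
```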

So what does Oracle produce for the two execution plans – here are the results from 12c (EXISTS first, then IN):

```
===================
Multi-column EXISTS
===================
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T2"."N_400"="T1"."N_400" AND "T2"."N_3"="T1"."N_3")
2 - filter("N_1000"=0)

===================
Equivalent IN query
===================
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."N_400"="T2"."N_400" AND "T1"."N_3"="T2"."N_3")
2 - filter("T2"."N_1000"=0)

```

The first thing to note is that the shape of the plans is identical, and the predicate sections are identical – but the final cardinalities are different. Clearly at least one of the cardinalities has to be wrong by a significant amount (9.5% or 10.4%, depending which way round you want to look at it). If you run the test on 11.2.0.4 you find that both plans give the same estimated row count – and it’s the 920,000 rows; so arguably 12c has “fixed” the IN subquery calculation, bringing it closer to a reasonable prediction, but it hasn’t fixed the EXISTS subquery calculation. That 833K prediction, by the way, is what you would expect to see with this data with a basic join – and a semi-join shouldn’t be able to produce more data than a join.

But both predictions are way off the (informal) expectation, so how have they appeared ?

Working backwards it’s easy to spot that 833K = 833 * 1,000: Oracle is behaving as if every single row identified in the subquery will produce a separate combination of (n_400, n_3). If we reverse engineer 920K we get 920K / 833 = 1,104 – it would appear that the optimizer thinks the 1,000 rows produced by the subquery will produce 1,104 distinct combinations of (n_400, n_3). So how did the impossible 1,104 appear in the arithmetic ?

If you apply the “selection without replacement” formula to picking 1,000 rows with 400 distinct values from 1,000,000 rows the expected number of distinct values (with rounding) will be 368; if you apply the formula for picking 1,000 rows with 3 distinct values from 1,000,000 rows the expected number will be 3. And 3 * 368 = 1,104. (Remember that in my original estimate I applied the formula after multiplying out the combination of distinct values). The optimizer is using its standard methods, but combining the intermediate results in an unsuitable fashion.
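Using the same approximation as before we can reconstruct the 1,104. (The ceil() here is my assumption, made to match the “with rounding” figure of 368 quoted above – this is a reconstruction of the arithmetic, not the optimizer’s actual code path.)

```python
from math import ceil

def distinct_after_selection(d, N, n):
    # approximate distinct values when selecting n of N rows holding d values
    return d * (1 - (1 - n / N) ** (N / d))

N, n = 1_000_000, 1000
nd_400 = distinct_after_selection(400, N, n)   # about 367.21
nd_3 = distinct_after_selection(3, N, n)       # effectively 3

print(round(nd_400, 2))                  # 367.21
print(ceil(nd_400) * round(nd_3))        # 368 * 3 = 1104
print(ceil(nd_400) * round(nd_3) * 833)  # about 920K
```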

It’s impossible to say what the impact of this particular code path – and the change on the upgrade – might be. The optimizer has over-estimated by 47% in one case and 62% in the other but (a) there may be something about my data that exaggerated an effect that few people will see in the wild and (b) in many cases getting in the right ballpark is enough to get a reasonable plan, and a factor of 2 is the right ballpark.

Of course, a few people will be unlucky with a few queries on the upgrade where the estimate changes – after all a single row difference in the estimate can cause the optimizer to flip between a hash join and a nested loop – but at least you’ve got a little extra information that might help when you see a bad estimate on an important semi-join.

So is there a workaround ? Given that I’ve got 12c, the obvious thing to try is to create a column group at both ends of the semi-join and see what happens. It shouldn’t really make any difference because column groups are targeted at the problems of correlated column – but we might as well try it:

```
execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns (n_400,n_3) size 1')
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for columns (n_400,n_3) size 1')

```

Unfortunately when I did this the final cardinality estimate for both queries dropped to just 833 (the absence of a K on the end isn’t a typo!).

Manually unnesting got me closer:

```
select
*
from
(
select  distinct n_3, n_400
from    t2
where   n_1000 = 0
)       sq,
t1
where
sq.n_400 = t1.n_400
and     sq.n_3 = t1.n_3
;

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   649K|    33M|  1260  (11)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   649K|    33M|  1260  (11)| 00:00:01 |
|   2 |   VIEW               |      |   779 | 20254 |   612   (8)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |   779 |  8569 |   612   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("SQ"."N_400"="T1"."N_400" AND "SQ"."N_3"="T1"."N_3")
4 - filter("N_1000"=0)

```

The cardinality of 649K is (allowing for rounding) 833 * 779; so we need to know where the 779 came from. It’s the optimizer’s standard arithmetic for “distinct” – multiply the N individual selectivities together, then divide by sqrt(2) “N-1” times. So we apply the “selection without replacement” formula twice:

• adjusted selectivity of n_400 = 367.21
• adjusted selectivity of n_3 = 3
• 367.21 * 3 / sqrt(2) = 779
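A couple of lines confirm the bullet arithmetic and tie it back to the plan’s 649K (the 833 rows per combination comes from the original informal estimate):

```python
from math import sqrt

nd_400 = 367.21     # adjusted number of distinct values for n_400
nd_3 = 3            # adjusted number of distinct values for n_3

distinct_combos = nd_400 * nd_3 / sqrt(2)
print(round(distinct_combos))         # 779
print(round(distinct_combos) * 833)   # 648,907 - i.e. the 649K estimate
```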

If you create column group statistics for (n_400, n_3) this doesn’t change the optimizer’s estimate for the number of distinct combinations after selection – maybe that’s another enhancement in the pipeline – but, at least in this case, the manual unnesting has got us a little closer to the right estimates without any statistical intervention.

### Footnote:

Just for the sake of completeness, here are the plans (with yet more cardinality predictions) that you get if you block the unnesting:

```
select
*
from
t1
where
exists (
select
/*+ no_unnest */
null
from	t2
where	n_1000 = 0
and	t2.n_400 = t1.n_400
and	t2.n_3 = t1.n_3
)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1179 | 33012 |   766K (12)| 00:00:30 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|    26M|   632  (11)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T2" "T2" WHERE
"N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
3 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)

=====================================
Unnesting blocked and subquery pushed
=====================================
select
*
from
t1
where
exists (
select
/*+ no_unnest push_subq */
null
from	t2
where	n_1000 = 0
and	t2.n_400 = t1.n_400
and	t2.n_3 = t1.n_3
)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 50000 |  1367K|  1271  (12)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL | T1   | 50000 |  1367K|   632  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST */ 0 FROM "T2"
"T2" WHERE "N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
2 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)

```

The 1179 comes from the magic of sqrt(2):  1179 = 1,000,000 / (400 * 3 / sqrt(2)).

The 50,000 is just the basic “I dunno, let’s call it 5%”.
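Both footnote figures drop out of one-line calculations:

```python
from math import sqrt

N = 1_000_000

# unnesting blocked: distinct (n_400, n_3) taken as 400 * 3 / sqrt(2)
filter_est = N / (400 * 3 / sqrt(2))
print(round(filter_est))    # 1179

# unnesting blocked and subquery pushed: the flat 5% guess
print(int(N * 0.05))        # 50000
```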

Reference script: aggregate_selectivity_c.sql

## July 29, 2015

### Existence

Filed under: Execution plans,Oracle,subqueries,Subquery Factoring,Tuning — Jonathan Lewis @ 1:05 pm BST Jul 29,2015

I need to check if at least one record present in table before processing rest of the statements in my PL/SQL procedure. Is there an efficient way to achieve that considering that the table is having huge number of records like 10K.

I don’t think many readers of the forum would consider 10K to be a huge number of records; nevertheless it is a question that could reasonably be asked, and should prompt a little discussion.

First question to ask, of course, is: how often do you do this and how important is it to be as efficient as possible ? We don’t want to waste a couple of days of coding and testing to save five seconds every 24 hours. Some context is needed before charging into high-tech geek solution mode.

Next question is: what’s wrong with writing code that just does the job, and if it finds that the job is complete after zero rows then you haven’t wasted any effort. This seems reasonable in (say) a PL/SQL environment where we might discuss the following pair of strategies:

```
Option 1:
=========
-- execute a select statement to see if any rows exist

if (flag is set to show rows) then
for r in (select all the rows) loop
do something for each row
end loop;
end if;

Option 2:
=========
for r in (select all the rows) loop
do something for each row;
end loop;

```

If this is the type of activity you have to do then it does seem reasonable to question the sense of putting in an extra statement to see if there are any rows to process before processing them. But there is a possible justification for doing this. The query to find just one row may produce a very efficient execution plan, while the query to find all the rows may have to do something much less efficient even when (eventually) it finds that there is no data. Think of the differences you often see between a first_rows_1 plan and an all_rows plan; think about how Oracle can use index-only access paths and table elimination – if you’re only checking for existence you may be able to produce a MUCH faster plan than you can for selecting the whole of the first row.

Next question, if you think that there is a performance benefit from the two-stage approach: is the performance gain worth the cost (and risk) of adding a near-duplicate statement to the code – that’s two statements that have to be maintained every time you make a change. Maybe it’s worth “wasting” a few seconds on every execution to avoid getting the wrong results (or an odd extra hour of programmer time) once every few months. Bear in mind, also, that the optimizer now has to optimize two statements instead of one – you may not notice the extra CPU usage in testing but perhaps in the live environment the execution benefit will be eroded by the optimization cost.

Next question, if you still think that the two-stage process is a good idea: will it result in an inconsistent database state ?! If you select and find a row, then run the main query and find that there are no rows to process because something modified and “hid” the row you found on the first pass – what are you going to do ? Will this make the program crash ? Will it produce an erroneous result on this run, or will a silent side effect be that the next run produces the wrong results ? (See Billy Verreynne’s comment on the original post). Should you set the session to serializable before you start the program, or maybe lock a critical table to make sure it can’t change ?

So, assuming you’ve decided that some form of “check for existence then do the job” is both desirable and safe, what’s the most efficient strategy ? Here’s one of the smarter solutions, one that minimises risk and effort (in this case using a PL/SQL environment).

```
select  count(*)
into    m_counter
from    dual
where   exists ({your original driving select statement})
;

if m_counter = 0 then
null;
else
for c1 in {your original driving select statement} loop
-- do whatever
end loop;
end if;

```

The reason I describe this solution as smarter, with minimum risk and effort, is that (a) you use EXACTLY the same SQL statement in both locations so there should be no need to worry about making the same effective changes twice to two slightly different bits of SQL and (b) the optimizer will recognise the significance of the existence test and run in first_rows_1 mode with maximum join elimination and avoidance of redundant table visits. Here’s a little data set I can use to demonstrate the principle:

```
create table t1
as
select
mod(rownum,200)         n1,     -- scattered data
mod(rownum,200)         n2      -- scattered data
from
dual
connect by
level <= 10000  -- > comment to avoid WordPress formatting problem
;

delete from t1 where n1 = 100;
commit;

create index t1_i1 on t1(n1);

begin
dbms_stats.gather_table_stats(
user,
't1',
method_opt => 'for all columns size 1'
);
end;
/

```

It’s just a simple table with an index, but the index isn’t very good for finding the data – it’s repetitive data widely scattered through the table: 10,000 rows with only 200 distinct values. But check what happens when you do the dual existence test – first we run our “driving” query to show the plan that the optimizer would choose for it, then we run with the existence test to show the different strategy the optimizer takes when the driving query is embedded:
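For scale, the optimizer’s row estimate for “n1 = 100” in the tablescan plan is essentially rows divided by distinct values, even though the matching rows have been deleted:

```python
rows = 10_000           # rows loaded into t1
distinct_n1 = 200       # distinct values of n1
est = rows / distinct_n1
print(est)              # 50.0 - the E-Rows figure, though the actual count is 0
```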

```
alter session set statistics_level = all;

select  *
from    t1
where   n1 = 100
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

select  count(*)
from    dual
where   exists (
select * from t1 where n1 = 100
)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

```

Notice how I’ve enabled rowsource execution statistics and pulled the execution plans from memory with their execution statistics. Here they are:

```
select * from t1 where n1 = 100

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    38 (100)|      0 |00:00:00.01 |     274 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     50 |    38   (3)|      0 |00:00:00.01 |     274 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("N1"=100)

select count(*) from dual where exists (   select * from t1 where n1 = 100  )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE    |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |
|*  2 |   FILTER           |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|   3 |    FAST DUAL       |       |      0 |      1 |     2   (0)|      0 |00:00:00.01 |       0 |
|*  4 |    INDEX RANGE SCAN| T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter( IS NOT NULL)
4 - access("N1"=100)

```

For the original query the optimizer did a full tablescan – that was the most efficient path. For the existence test the optimizer decided it didn’t need to visit the table for “*” and it would be quicker to use an index range scan to access the data and stop after one row. Note, in particular, that the scan of the dual table didn’t even start – in effect we’ve got all the benefits of a “select {minimum set of columns} where rownum = 1” query, without having to work out what that minimum set of columns was.

But there’s an even more cunning option – remember that we didn’t scan dual when there were no matching rows:

```
for c1 in (

with driving as (
select  /*+ inline */
*
from    t1
)
select  /*+ track this */
*
from
driving d1
where
n1 = 100
and     exists (
select
*
from    driving d2
where   n1 = 100
)
) loop

end loop;

```

In this specific case the factored subquery would automatically be copied inline so the hint here is actually redundant; in general you’re likely to find the optimizer materializing your subquery and bypassing the cunning strategy if you don’t use the hint. (This example is one of the special cases where subquery factoring doesn’t automatically materialize – there’s no where clause in the subquery.)

Here’s the execution plan pulled from memory (after running this SQL through an anonymous PL/SQL block):

```
SQL_ID  7cvfcv3zarbyg, child number 0
-------------------------------------
WITH DRIVING AS ( SELECT /*+ inline */ * FROM T1 ) SELECT /*+ track
this */ * FROM DRIVING D1 WHERE N1 = 100 AND EXISTS ( SELECT * FROM
DRIVING D2 WHERE N1 = 100 )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |    39 (100)|      0 |00:00:00.01 |       2 |
|*  1 |  FILTER            |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|*  2 |   TABLE ACCESS FULL| T1    |      0 |     50 |    38   (3)|      0 |00:00:00.01 |       0 |
|*  3 |   INDEX RANGE SCAN | T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( IS NOT NULL)
2 - filter("T1"."N1"=100)
3 - access("T1"."N1"=100)

```

You’ve got just one statement – and you’ve only got one version of the complicated text because you put it into a factored subquery; but the optimizer manages to use one access path for one instantiation of the text and a different one for the other. You get an efficient test for existence and only run the main query if some suitable data exists, and the whole thing is entirely read-consistent.

I have to say, though, I can’t quite make myself 100% enthusiastic about this code strategy – there’s just a nagging little doubt that the optimizer might come up with some insanely clever trick to try and transform the existence test into something that’s supposed to be faster but does a lot more work; but maybe that’s only likely to happen on an upgrade, which is when you’d be testing everything very carefully anyway (wouldn’t you) and you’ve got the “dual/exists” fallback position if necessary.

### Footnote:

Does anyone remember the thing about reading execution plans “first child first” ? This particular usage of an existence test is one of the interesting cases where it’s not the first child of a parent operation that runs first: it’s the case I often refer to as the “constant subquery”.

## July 13, 2015

### Missing Bloom

Filed under: CBO,Execution plans,Oracle,Partitioning — Jonathan Lewis @ 1:37 pm BST Jul 13,2015

Here’s a little surprise that came up on the OTN database forum a few days ago. Rather than describe it I’m just going to create a data set to demonstrate it, initially using 11.2.0.4 although the same thing happens on 12.1.0.2. The target is a query that joins to a range/hash composite partitioned table and uses a Bloom filter to do partition pruning at the subpartition level.  (Note to self: is it possible to see Bloom filters that operate at both the partition and subpartition level from a single join? I suspect not.). Here’s my code to create and populate both the partitioned table and a driving table for the query:

```
rem
rem     Script: bloom_filter_fail.sql
rem     Dated:  4th July 2015
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem

create table pt_composite_1 (
part_key        number(8),
subp_key        number(8),
small_vc        varchar2(40)
)
nologging
partition by range(part_key)
subpartition by hash (subp_key)
subpartition template (
subpartition g1,
subpartition g2,
subpartition g3,
subpartition g4
)
(
partition p01 values less than ( 10),
partition p02 values less than ( 20),
partition p03 values less than ( 30),
partition p04 values less than ( 40),
partition p05 values less than ( 50),
partition p06 values less than ( 60),
partition p07 values less than ( 70),
partition p08 values less than ( 80),
partition p09 values less than ( 90),
partition p10 values less than (100),
partition p11 values less than (110),
partition p12 values less than (120)
)
;

insert into pt_composite_1 (
part_key, subp_key, small_vc
)
select
trunc(dbms_random.value(0,120)) part_key,
trunc(dbms_random.value(0,50))  subp_key,
to_char(trunc((rownum-1)/20))   small_vc
from
dual
connect by
rownum <= 25000
;

insert /*+ append */ into pt_composite_1 select * from pt_composite_1;
commit;

insert /*+ append */ into pt_composite_1 select * from pt_composite_1;
commit;

insert /*+ append */ into pt_composite_1 select * from pt_composite_1;
commit;

insert /*+ append */ into pt_composite_1 select * from pt_composite_1;
commit;

create table driver (
part_key        number(8),
subp_key        number(8),
test            number(4)
)
;

execute dbms_random.seed(0)

insert into driver
select
trunc(dbms_random.value(0,120)) part_key,
trunc(dbms_random.value(0,50))  subp_key,
mod(rownum - 1, 30)
from
dual
connect by
level <= 60
;

begin
dbms_stats.gather_table_stats(
ownname         => user,
tabname         => 'driver',
method_opt      => 'for all columns size 1'
);
dbms_stats.gather_table_stats(
ownname         => user,
tabname         => 'pt_composite_1',
method_opt      => 'for all columns size 1',
granularity     => 'all'
);
end;
/

```

So I’ve got a table with 12 partitions, each hash subpartitioned into 4 subpartitions, a total of 400,000 rows, and a driving table with 60 rows with two rows per value for column test, which probably means two separate subpartitions identified for most values of test. I set this data up to do a number of different experiments but the only result I’m going to report here is about the sub-partition key. Here’s a query that selects all the data from the partitioned table that matches the subp_key value from a subset of the driver table:
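To put a number on “probably two separate subpartitions”: if we assume the hash function spreads subp_key values evenly across the 4 subpartitions (an idealising assumption on my part), two independent values from the driver table land in the same subpartition about one time in four:

```python
buckets = 4              # hash subpartitions per range partition
values = 2               # driver rows per value of "test"

# expected distinct subpartitions hit by 2 independent, uniformly hashed values
expected = buckets * (1 - (1 - 1 / buckets) ** values)
print(expected)          # 1.75 - so usually 2, occasionally just 1
```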

```
select
ptc.part_key, ptc.subp_key, count(*), max(ptc.small_vc)
from
pt_composite_1  ptc
where
(ptc.subp_key) in (
select  subp_key
from    driver
where   test = 0
)
group by
ptc.part_key, ptc.subp_key
;

```

The optimizer has the option to unnest the subquery and turn the query into a semi-join (specifically a right semi-join), and we might hope to see a hash join with Bloom filtering being used to restrict the hash subpartitions that we visit. (We’ve (probably) picked two values for the subp_key, so we don’t expect to visit more than 2 of the hash subpartitions from each of the range partitions.) Here’s the execution plan I got, with rowsource execution statistics:

```
-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name           | Starts | E-Rows | Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                |      1 |        |       |       |    238 |00:00:01.74 |    2329 |       |       |          |
|   1 |  HASH GROUP BY                |                |      1 |    238 |       |       |    238 |00:00:01.74 |    2329 |   960K|   960K| 1377K (0)|
|*  2 |   HASH JOIN RIGHT SEMI        |                |      1 |  15997 |       |       |  15856 |00:00:01.69 |    2329 |  2440K|  2440K|  905K (0)|
|   3 |    PART JOIN FILTER CREATE    | :BF0000        |      1 |      2 |       |       |      2 |00:00:00.01 |      23 |       |       |          |
|*  4 |     TABLE ACCESS FULL         | DRIVER         |      1 |      2 |       |       |      2 |00:00:00.01 |      23 |       |       |          |
|   5 |    PARTITION RANGE ALL        |                |      1 |    400K|     1 |    12 |    104K|00:00:01.04 |    2306 |       |       |          |
|   6 |     PARTITION HASH JOIN-FILTER|                |     12 |    400K|:BF0000|:BF0000|    104K|00:00:00.63 |    2306 |       |       |          |
|   7 |      TABLE ACCESS FULL        | PT_COMPOSITE_1 |     12 |    400K|     1 |    48 |    104K|00:00:00.22 |    2306 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("PTC"."SUBP_KEY"="SUBP_KEY")
4 - filter("TEST"=0)

```

Oracle has unnested the subquery and converted it to a right semi-join using a hash join. While building the in-memory hash table it has constructed a Bloom filter at operation 3 of the plan to help it eliminate hash subpartitions, and used that Bloom filter at operation 6 of the plan. Our query does nothing to eliminate any of the range partitions so we can see operation 5 is a “partition range all”, and the application of the Bloom filter at operation 6 starts 12 times, once for each range partition. As we can see from operation 7, the Bloom filter generated by our selection from the driver table happened to identify just one subpartition – we start the TABLE (subpartition) ACCESS FULL 12 times, once for each range partition. If our driver data (and the Bloom filter) had identified 2 subpartitions we would have seen operation 7 start 24 times.

So we’ve met our first target – demonstrating that we can get a Bloom filter to eliminate at the subpartition level. Now we need to break things – the OP had a problem with a query that used Bloom filters on one system but didn’t use them for (nominally) the same setup on another system. Here’s my first attempt, with the resulting execution plan:

```
alter table pt_composite_1 add partition p13 values less than (130) subpartitions 8;

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Starts | E-Rows | Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |      1 |        |       |       |    238 |00:00:01.75 |    2628 |       |       |          |
|   1 |  HASH GROUP BY             |                |      1 |   3310 |       |       |    238 |00:00:01.75 |    2628 |   960K|   960K| 2529K (0)|
|*  2 |   HASH JOIN RIGHT SEMI     |                |      1 |  16000 |       |       |  15856 |00:00:01.71 |    2628 |  2440K|  2440K|  743K (0)|
|*  3 |    TABLE ACCESS FULL       | DRIVER         |      1 |      2 |       |       |      2 |00:00:00.01 |      23 |       |       |          |
|   4 |    PARTITION RANGE ALL     |                |      1 |    400K|     1 |    13 |    104K|00:00:01.05 |    2605 |       |       |          |
|   5 |     PARTITION HASH SUBQUERY|                |     13 |    400K|KEY(SQ)|KEY(SQ)|    104K|00:00:00.64 |    2605 |       |       |          |
|   6 |      TABLE ACCESS FULL     | PT_COMPOSITE_1 |     13 |    400K|     1 |    56 |    104K|00:00:00.22 |    2306 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

```

I’ve added a new partition – with a different number of subpartitions from the table default. The Bloom filter has disappeared and the optimizer has decided to do subquery pruning instead. Drop the partition and recreate it with 2 subpartitions and the same thing happens; drop it and recreate it with 4 subpartitions and we’re back to a Bloom filter. It seems that the Bloom filter depends on every partition having the same number of subpartitions. (That’s not too surprising – the code to handle a Bloom filter when there are a variable number of subpartitions could get a little messy, and there probably aren’t many sites that use variable numbers of subpartitions.)

You might note from the Starts value for operation 5 (the subquery line) that the subquery had to run 13 times. Checking the 10046 trace file we can see the following SQL:

```
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 0, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 1, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 2, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 3, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 4, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 5, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 6, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 7, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 8, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 9, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 10, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 11, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1
SELECT distinct TBL$OR$IDX$PART$NUM("PT_COMPOSITE_1", 0, 2, 12, "SUBP_KEY") FROM (SELECT "DRIVER"."SUBP_KEY" "SUBP_KEY" FROM "DRIVER" "DRIVER" WHERE "DRIVER"."TEST"=0) ORDER BY 1

```

This is the optimizer trying to work out, for each of the 13 partitions, which subpartitions it needs to visit. In my case this resulted in a full tablescan of the driver table for each partition. For hash subpartitions, at least, this does seem to be overkill (and can anyone say “bind variables”) – wouldn’t it be possible to run the query once for the partition with the most subpartitions and then derive the correct subpartition number for all the other cases? Maybe, but perhaps that’s just too much special-case code, or maybe it’s on the to-do list. Realistically we might guess that a driver table would be very small compared to the size of the subpartitions you were eventually going to scan, so the extra work may be a tiny fraction of the total workload – so the added complexity might be seen as too much investment (and risk) for too little return. Maybe a future release will include a bit of patching to reduce this overhead.
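Since the Bloom filter seems to depend on every partition having the same number of subpartitions, a quick sanity check before relying on one might look like this – a hedged sketch, using the standard user_tab_partitions view and the table name from this demo:

```
-- One row per distinct subpartition count: a single row back means
-- every partition of the table has the same number of subpartitions.
select  subpartition_count, count(*) partitions
from    user_tab_partitions
where   table_name = 'PT_COMPOSITE_1'
group by
        subpartition_count;
```

If this returns more than one row you might expect subquery pruning rather than a Bloom filter on the subpartition key.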

### Conclusion

You may find that some execution plans involving hash subpartitions become less efficient if you don’t keep the number of subpartitions per partition constant across the entire table. I’ve only tested with range/hash composites but there may be other variations of composite partitioning where a similar change in plans occurs.

### Footnote

I haven’t done any exhaustive investigation yet, but so far I haven’t been able to create a data set, or perhaps a query, that allows the optimizer to create a Bloom filter (or two) from the driving table and then filter both the range partitions and the hash subpartitions. The closest I’ve come is a plan that shows a Bloom filter being used to filter the range partitions followed by a pruning subquery for the hash subpartitions.

## June 2, 2015

### Predicate Order

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 7:10 pm BST Jun 2,2015

A recent OTN post demonstrated a very important point about looking at execution plans – especially when you don’t use the right data types. The question was:

We’ve this query which throws invalid number

```SELECT * FROM table A
WHERE A.corporate_id IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';```

However it works fine if we use not in instead of in

```SELECT * FROM table A
WHERE A.corporate_id  NOT IN (59375,54387) AND TRUNC(created_dt) BETWEEN '19-DEC-14' AND '25-DEC-14';```

A follow-up post told us that corporate_id was a varchar() type – so the root cause of the ORA-01722: invalid number error is simply that you shouldn’t be mixing data types. Either the corporate_id should have been defined as numeric or the in-list should have been a list of varchar2() values. (And, of course, the character strings that look like dates should have been converted explicitly to date data types using either the to_date() function with a 4-digit year or the date ‘yyyy-mm-dd’ literal syntax; and using “created_dt >= 19th Dec and created_dt < 26th Dec” would have given the optimizer a chance of a better cardinality estimate.)
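A hedged sketch of the cleaned-up query – the table name is a placeholder, and I’m assuming corporate_id really is character data and created_dt is a date column:

```
-- Literals now match the column types, and the open-ended date range
-- replaces trunc(created_dt), so an index on created_dt stays usable.
select  *
from    some_table a         -- placeholder for the OP's table
where   a.corporate_id in ('59375', '54387')
and     a.created_dt >= date '2014-12-19'
and     a.created_dt <  date '2014-12-26';
```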

The answer to the slightly more specific problem – why does changing NOT IN to IN allow the query to run rather than crashing – is (probably) one that I first addressed in an article in Oracle Magazine just over eleven years ago: with CPU costing enabled Oracle can change the order in which it applies filter predicates to a table. It’s also a question that can easily be answered by my commonest response to many of the optimizer questions that appear on OTN – look at the execution plan.

In this example it’s a fairly safe bet that there’s a reasonably small volume of data (according to the optimizer’s estimate) where to_number(corporate_id) is one of the required values, and a much larger volume of data where it is not; with some intermediate volume of data where the created_dt falls in the required date range. With CPU costing enabled (optional in 9i, enabled by default in 10g) the optimizer would then do some arithmetic to calculate the most cost-effective order of applying the filter predicates, based on things like:

• the number of CPU cycles it takes to walk along a row to find a particular column;
• the number of CPU cycles it takes to convert a character column to a number and compare it with a number;
• the number of CPU cycles it takes to truncate a date column and compare it with a string;
• the number of rows that would pass the numeric test and hence require the first-applied date test, compared with the number of rows that would survive the first-applied date test and hence require either the second date test or the numeric test to take place.
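The CPU-cycle inputs to that arithmetic come from the system statistics, which you can inspect directly – a hedged sketch (the dictionary table is sys.aux_stats$, so you need appropriate privileges):

```
-- CPUSPEEDNW / CPUSPEED, SREADTIM, MREADTIM etc. are the values
-- that feed the optimizer's CPU-costing arithmetic.
select  pname, pval1
from    sys.aux_stats$
where   sname = 'SYSSTATS_MAIN';
```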

Here’s some code to demonstrate the point. It may require the system stats to be set to particular values to ensure that it is repeatable, but there’s probably some flexibility in the range, which is why I’ve called dbms_stats.set_system_stats() in the first few lines:

```drop table t1 purge;

create table t1 (
v1      varchar2(10),
d1      date
)
;

insert into t1 values(1,'01-Jan-2015');

insert into t1 values('x','02-Jan-2015');

insert into t1 values(3,'03-Jan-2015');
insert into t1 values(4,'04-Jan-2015');
insert into t1 values(5,'05-Jan-2015');
insert into t1 values(6,'06-Jan-2015');
insert into t1 values(7,'07-Jan-2015');
insert into t1 values(8,'08-Jan-2015');
insert into t1 values(9,'09-Jan-2015');
insert into t1 values(10,'10-Jan-2015');

execute dbms_stats.gather_table_stats(user,'t1');

```

First we create a table, load some data, and gather stats. You’ll notice that I’ve got a varchar2(10) column into which I’ve inserted numbers for all rows except one where it holds the value ‘x’. Now we just run some code to check the execution plans for a couple of queries.

```
explain plan for
select
*
from    t1
where   v1 in (4,6)
and     d1 between '03-Jan-2015' and '09-Jan-2015'
;

select * from table(dbms_xplan.display);

explain plan for
select
*
from    t1
where   v1 not in (4,6)
and     d1 between '03-Jan-2015' and '&1-Jan-2015'
;

select * from table(dbms_xplan.display);

```

As with the original question I’ve taken a query with an IN operator and changed it to NOT IN. The in-list is numeric even though the relevant column is varchar2(10). The first query crashes with ORA-01722: invalid number, the second one runs and returns the correct result. You’ll notice, of course, that the “bad” value for v1 is not in the set of rows where d1 is between 3rd and 9th Jan 2015. You’ll also notice that in my code I’ve used &1 for the end day in the query with the NOT IN clause so that I can re-run the query a few times to show the effects of changing the date range. Here are the execution plans – first with the IN clause:

```
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    20 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    20 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter((TO_NUMBER("V1")=4 OR TO_NUMBER("V1")=6) AND
"D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

```

The optimizer arithmetic predicts 2 rows returned using a full tablescan – it doesn’t know the query is going to crash. Notice the predicate information, though. The first predicate says Oracle will attempt to convert v1 to a number and compare it with 4 and then (if the first test fails) with 6. The query will crash as soon as it hits a row with a non-numeric value for v1. In outline, the optimizer has decided that the numeric conversion and test is very cheap (on CPU) and only a few rows will survive to take the more expensive date comparison; whereas either of the (expensive) date comparisons would leave a lot of rows that would still have to be checked with the numeric test. It makes sense to do the numeric comparison first.

Here’s the plan for the query with the NOT IN clause when I set the date range to be 3rd Jan to 7th Jan.

```
Execution plan for NOT IN:  7th Jan
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     5 |    50 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     5 |    50 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("D1"<=TO_DATE(' 2015-01-07 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

```

The plan is still a full tablescan – there are no indexes available – and the estimated number of rows has gone up to 5. The important thing, though, is the predicate section. In this case the optimizer has decided that the first thing it will apply is the (relatively expensive) predicate “d1 >= 3rd Jan” before worrying about the “NOT IN” numeric predicate. The optimizer has worked out that almost all the data will survive the NOT IN predicate, so it’s not efficient to apply it before using other predicates that eliminate more data.

By a stroke of luck my simple example happened to be a very good example. Here’s what happened when I set the end date to 8th Jan:

```
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     6 |    60 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     6 |    60 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "D1"<=TO_DATE(' 2015-01-08 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6)

```

The estimated rows has gone up to 6 – but the interesting thing, as before, is the predicate section: in the previous example Oracle did the tests in the order “upper bound”, “lower bound”, “numeric”; in this test it has done “lower bound”, “upper bound”, “numeric”.

And this is what I got when I ran the test with 9th Jan:

```
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     7 |    70 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     7 |    70 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("D1">=TO_DATE(' 2015-01-03 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND TO_NUMBER("V1")<>4 AND TO_NUMBER("V1")<>6 AND
"D1"<=TO_DATE(' 2015-01-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
```

Again the estimated rows has gone up by one, but the ever-interesting predicate section now shows the evaluation order as: “lower bound”, “numeric”, “upper bound”.

There are only 6 possible orders for the predicate evaluation for the query with the NOT IN clause, and we’ve seen three of them. Three will fail, three will succeed – and I got all three of the successful orders. It wouldn’t take much fiddling around with the data (careful choice of duplicate values, variation in ranges of low and high values, and so on) and I could find a data set where small changes in the requested date range would allow me to reproduce all six variations. In fact when I changed my nls_date_format to “dd-mon-yy” and used a 2 digit year for testing I got two of the three possible failing predicate evaluation orders – “numeric”, “lower bound”, “higher bound” and “higher bound”, “numeric”, “lower bound” – without changing the data set. (To be able to get all six orders with a single data set I’d probably need a data set where the “bad” v1 value corresponded to a d1 value somewhere near the middle of the d1 range.)

The bottom line – use the correct data types; make sure your date literals are dates with a 4-digit year; check the predicate section to see if the optimizer did any implicit conversions with your predicates and what order it used them in. If you don’t do this you may find that a query can work perfectly for months, then crash because you finally got unlucky with the arithmetic.
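Applied to the demo above, the type-safe version of the query can’t crash whatever evaluation order the optimizer picks – a sketch, assuming the t1 definition shown earlier:

```
-- String literals for the varchar2 column, date literals for the
-- date column: no implicit to_number(), so no ORA-01722.
select  *
from    t1
where   v1 not in ('4', '6')
and     d1 between date '2015-01-03' and date '2015-01-09';
```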

## May 21, 2015

### Understanding SQL

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 6:12 pm BST May 21,2015

From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix. Then, after you’ve made a couple of suggestions and explained a couple of ideas the provider simply fades into the distance and doesn’t tell you any more about the query, or whether they’ve taken advantage of your advice, or found some other way to address the problem.

Such a query, with its execution plan, appeared a couple of weeks ago:

```UPDATE FACETS_CUSTOM.MMR_DTL
SET
CAPITN_PRCS_IND = 2,
FIL_RUN_DT = Current_fil_run_dt,
ROW_UPDT_DT = dta_cltn_end_dttm
WHERE
CAPITN_PRCS_IND = 5
AND	HSPC_IND ='Y'
AND	EXISTS (
SELECT	1
FROM	FACETS_STAGE.CRME_FUND_DTL_STG STG_CRME
WHERE	STG_CRME.MBR_CK = MMR_DTL.MBRSHP_CK
AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN STG_CRME.ERN_FROM_DT AND STG_CRME.ERN_THRU_DT
AND	STG_CRME.FUND_ID IN ('AAB1', '1AA2', '1BA2', 'AAB2', '1AA3', '1BA3', '1B80', '1A80')
)
AND	EXISTS (
SELECT	1
FROM	FACETS_CUSTOM.FCTS_TMS_MBRID_XWLK XWLK
WHERE	XWLK.MBR_CK = MMR_DTL.MBRSHP_CK
AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN XWLK.HSPC_EVNT_EFF_DT AND XWLK.HSPC_EVNT_TERM_DT
)
;

-------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|   1 |  UPDATE                       | MMR_DTL               |       |       |            |          |
|   2 |   NESTED LOOPS SEMI           |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|*  3 |    HASH JOIN RIGHT SEMI       |                       |    49 |  5488 | 12375   (2)| 00:02:29 |
|   4 |     TABLE ACCESS FULL         | FCTS_TMS_MBRID_XWLK   |  6494 | 64940 |    24   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL         | MMR_DTL               |   304K|    29M| 12347   (2)| 00:02:29 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CRME_FUND_DTL_STG     |     1 |    36 |     5   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | IE1_CRME_FUND_DTL_STG |     8 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("XWLK"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
filter("XWLK"."HSPC_EVNT_EFF_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
"XWLK"."HSPC_EVNT_TERM_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
5 - filter("CAPITN_PRCS_IND"=5 AND "HSPC_IND"='Y')
6 - filter(("STG_CRME"."FUND_ID"='1A80' OR "STG_CRME"."FUND_ID"='1AA2' OR
"STG_CRME"."FUND_ID"='1AA3' OR "STG_CRME"."FUND_ID"='1B80' OR "STG_CRME"."FUND_ID"='1BA2' OR
"STG_CRME"."FUND_ID"='1BA3' OR "STG_CRME"."FUND_ID"='AAB1' OR "STG_CRME"."FUND_ID"='AAB2') AND
"STG_CRME"."ERN_FROM_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
"STG_CRME"."ERN_THRU_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
7 - access("STG_CRME"."MBR_CK"="MMR_DTL"."MBRSHP_CK")

```

The most informative bit of narrative that went with this query said:

“The table MMR_DTL doesnt have index on these columns CAPITN_PRCS_IND , HSPC_IND .. Since this is an update stmt which will update 85k records, In a dilema whether to index these columns or not .. And this table MMR_DTL is an ever growing table. Worried about the update performance. “

This was in response to an observation that there was a full tablescan on MMR_DTL at operation 5 despite the predicate “CAPITN_PRCS_IND”=5 AND “HSPC_IND”=’Y’. You’ll note that the predicted cardinality for that scan is 304K and the update statement is going to change CAPITN_PRCS_IND from the value 5 to the value 2 – so it’s not entirely unreasonable to be worried about the impact of creating an index that included the column capitn_prcs_ind.

What more can we say about this query, given the limited information? Quite a lot – unfortunately the owner of the query isn’t giving anything else away.

I’m going to leave this note unfinished to give people a little chance to think about the clues in the request, the questions they might ask, reasons why there might be a performance problem, and strategies they might investigate, then I’ll update the posting with a few ideas some time in the next 24 hours.

### Update 1 – 24th May

There are so many ideas that spring up from a small amount of information that it’s very hard to write a concise and coherent description of what you’ve noticed, when and how far you pursued it, and how relevant the ideas might be to the problem in hand – especially when most of the thoughts require you to ask for more information. Unfortunately the time I had available to write this note has just disappeared, so I’m just going to have to complete it in rapidly written installments. The first bit is an outline of the immediate response I had to the initial presentation of the problem and the execution plan that went with it.

The only comment from the OP on this statement and plan was: “I couldn’t optimize this query for better performance and optimized cost.. Can some one guide me on this.”

We have no idea how many rows would be updated, how long it took, or how long the OP thinks it ought to take; it’s not until a subsequent post that we learn that the number of rows targeted for update is 85,000 – which tells us that the optimizer has run into some problems with its cardinality estimates. This suggests that IF there’s a serious performance problem then POSSIBLY there’s a better execution plan and we might get the optimizer to find it automatically if we could tell it how to adjust its cardinality estimates. It would be nice, however, to know where the query spent its time (i.e. can we re-run it with rowsource execution stats or monitoring enabled, and see the actual run-time work in the plan?).
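If the statement can be re-run (and rolled back) on a test copy, getting the actual run-time figures is straightforward – a hedged sketch using the standard gather_plan_statistics / dbms_xplan.display_cursor mechanism:

```
-- After running the statement with the /*+ gather_plan_statistics */
-- hint (or with statistics_level = all), report the actual Starts,
-- A-Rows and A-Time alongside the optimizer's estimates.
select  *
from    table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Comparing A-Rows with E-Rows line by line would tell us immediately which cardinality estimates are the dangerous ones.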

If it took a couple of minutes to update that 85,000 rows, I probably wouldn’t want to spend time making it go faster; if it took 2 hours, of which 1 hour 50 minutes was spent waiting for a transaction (row) lock then I’d want to look at why the update collision could happen and see if that problem could be avoided – it might then be the case that the last 10 minutes was spent rolling back and restarting an update that ought to have taken 2 minutes “in vacuo”. Underlying all this, I would want to be sure (as I’ve implicitly, and I think reasonably, assumed) that it’s an update that runs only occasionally, perhaps once per day or once per week.

In the absence of hard information – let’s resort to a few hypotheticals; looking at the plan itself (and knowing the target 85,000 rows) I am prepared to make a few guesses about the run-time activity.

1. We build an in-memory hash table from the whole of FCTS_TMS_MBRID_XWLK, a step for which the optimizer ought to be able to give a reasonable cost and cardinality – assuming (as I will from now on) that the basic stats are reasonably accurate.
2. We scan the (fairly large) MMR_DTL table, applying a couple of filters; again the optimizer ought to do a reasonable job of estimating the cost of such a tablescan, and we might expect a significant fraction of the time to be spent on multiblock (possibly direct path) reads of the table. The cardinality reported is 304,000, but we note there are two predicates, both on columns which we might guess have a small number of distinct values – one of which we are changing. Perhaps there’s a bad cardinality error there and maybe a couple of single-column histograms would help, but maybe column group stats with a histogram on the pair would be even better. I also wonder when (if) HSPC_IND ever changes from Y to N, and contemplate the possibility of creating a function-based index that records ONLY the rows that match this predicate pair (see the note on indexing further down the page). It’s at this point that we might ask whether the number of rows returned by this scan should be very similar to the number of rows updated, or whether the scan identifies far too many rows and the two existence tests do a lot of work to eliminate the excess – and, if the latter, which test we should apply first and how we should apply it.
3. Having scanned the MMR_DTL we probe the in-memory hash table copy of FCTS_TMS_MBRID_XWLK for the first match, using an equality predicate (which will be the access predicate) and a range-based (filter) predicate which looks as if it is checking that some “start date” is between an “effective date” and a “termination date”. The estimated size of the result set is FAR too small at 49 rows when we know we have to have at least 85,000 rows survive this test; moreover, this tiny estimate comes from inputs of 6,500 and 304,000 rows being joined so we ought to wonder how such a tiny estimate could appear. A possible explanation is that the application has used some extreme dates to represent NULL values. If that’s the case then it’s possible that suitable histograms might help the optimizer recognise the extreme distribution; alternatively virtual columns that change the extreme values back to NULL and a predicate referencing the virtual columns may help.
4. After estimating the cardinality of the intermediate result so badly, the optimizer decides that the second existence test can be performed as a semi-join using a nested loop. The thing to note here is that the optimizer “knows” that this is an expensive step – the cost of each table access operation is 5 (4 + 1) – but it’s a step that the optimizer “thinks” shouldn’t happen very frequently so the cost is considered acceptable. We know, however, that this step has to execute at least 85,000 times, so the optimizer’s prediction of visiting 4 blocks in the table to identify (on average) 8 rows and discard (on average) 7 of them looks nasty. Again we note that one of the predicates is range-based on a pair of dates – and in this case we might wonder whether or not most of the rows we discard are outside the date range, and whether we ought to consider (as a general point, and not just for this query) adding one or both of the ERN_FROM_DT and ERN_THRU_DT columns to the IE1_CRME_FUND_DTL_STG index. It’s at this point in the query that we are most ignorant of time spent with the present data set and how that time will change in the future as the data in the MMR_DTL table grows – on one hand it’s possible that the rows for each MMR_DTL are widely scattered across CRME_FUND_DTL_STG and this step could do a lot of random I/O; on the other hand the (assumed) time-dependent nature of the data may mean that the only MMR_DTL rows we look at are recently entered and the associated CRME_FUND_DTL_STG rows are therefore also recently entered and closely clustered – leading to a beneficial “self-caching” effect at the “high” end of the table as the query runs, which introduces an efficiency that the optimizer won’t notice. There is one numerical highlight in this join – we have a cost of 5 for each probe and 49 rows to test, so we might expect the incremental cost of the query to be around 250 (i.e. 49 * 5), but the difference between operations 3 and 2 is only 56 – suggesting that the optimizer does have some “self-caching” information, possibly based on there being a significant difference between the two tables in the number of distinct values of the join column. (See, for example: http://oracle-randolf.blogspot.co.uk/2012/05/nested-loop-join-costing.html )
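The virtual-column idea from point 3 might be sketched like this – a hedged example, assuming (purely for illustration) that 31-Dec-9999 is the extreme value being used as a “no termination date” marker:

```
-- Map the extreme "null substitute" back to a real NULL; after
-- gathering stats, predicates written against the virtual column
-- let the optimizer see the true date distribution.
alter table facets_custom.fcts_tms_mbrid_xwlk add (
        hspc_evnt_term_dt_v generated always as (
                case
                        when hspc_evnt_term_dt = date '9999-12-31' then null
                        else hspc_evnt_term_dt
                end
        ) virtual
);
```

The query would then need to reference hspc_evnt_term_dt_v (with suitable handling of the NULLs) for the optimizer to benefit.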

### Update 2 – 25th May

Scalability is a question that should always be considered – and there’s a scalability threat in the information we have so far. The plan shows a full tablescan of the MMR_DTL table, and while tablescans are not necessarily a bad thing we’ve been told that: “this table MMR_DTL is an ever growing table“. It’s possible that Oracle can be very quick and efficient when doing the existence tests on the rows it selects from the table – but it is inevitable that the tablescan will take longer to complete as time passes. Whether or not this is likely to matter is something we can’t decide from the information given: we don’t know what fraction of the total time is due to the tablescan, and we don’t know how much larger the table will grow each day (although, taking a wild guess, if we assume that the query runs once per day to manipulate recently arrived data the table would appear to be growing fairly rapidly).

Another scalability detail we ought to ask about is the volume of data that we expect to update each time we run this statement. As time passes do we expect to see the same number of rows waiting to be updated, or are we expecting the business (whatever that may be) to grow steadily each month, with an increase of a few percent in the number of rows to be updated on each execution? Our coding strategy may vary depending on the answer to that question – we might, for example, try to pre-empt a future problem by introducing some partitioning now.

The final scalability issue is one I’ve raised already and comes from the CRME_FUND_DTL_STG table. According to the plan there are about 8 rows (the number of rowids reported by the index range scan at operation 7) in this table for each distinct value of MMR_DTL.MBRSHP_CK; if MMR_DTL is large and growing, is CRME_FUND_DTL_STG very large and growing at 8 times the speed – or worse, as time passes will there be more rows for each distinct value of MMR_DTL.MBRSHP_CK? Answers to these questions will help us decide whether we should use a hash join or a nested loop in the join to this table, and how to index the table to minimise random I/O.

### Update 3 – 1st June

Since we know that the optimizer has come up with some very bad estimates, and given that the statement is short and simple, I’d be happy to pass the statement to the Tuning Advisor to see what suggestions it made. At the least it ought to come up with a suggestion for an SQL profile (i.e. a set of opt_estimate() hints) to address the extremely poor cardinality estimates; it’s also likely to make some suggestions about potentially helpful indexes. I can think of a couple of basic warnings, though:

• Don’t run the advisor right after you’ve just done the big update – wait until the next big batch of data is ready for update
• Be cautious about the indexes – the indexing suggestions may point you at the problem, but you may do better by using function-based indexes and modified SQL

The most obvious “simple” index to create (assuming a lot of time goes on the tablescan) is one of:

```
create index dtl_cpi on mmr_dtl(capitn_prcs_ind) compress;
create index dtl_cpi on mmr_dtl(capitn_prcs_ind, hspc_ind) compress;

```

If almost all the rows with capitn_prcs_ind = 5 also had hspc_ind = ‘Y’ then I’d probably choose the former; if a large number of rows (actually index rowids) could be eliminated before visiting the table I’d choose the latter. I would also make sure I had a frequency – or Top-N frequency in 12c – histogram on capitn_prcs_ind, and may even “fake” it to ensure that an automatic call to gather table stats didn’t do something that made the histogram a threat instead of a benefit. If I created the two-column index I would also create a frequency histogram on hspc_ind. The benefit of the histograms is based on my assumption that both columns have a very small number of values and a highly skewed distribution. I might go one step further and create a column group on the pair of columns – keeping an eye open for an “out of range” anomaly – if there was a strong degree of correlation between the two columns (in particular if there were relatively few rows with hspc_ind = ‘Y’, but most of those rows also had capitn_prcs_ind = 5).
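A hedged sketch of that stats-gathering strategy (the column-group syntax in method_opt needs 11g or later; names as in the original statement):

```
-- Frequency histograms on the two low-cardinality, skewed columns,
-- plus an extended (column group) statistic on the pair.
begin
        dbms_stats.gather_table_stats(
                user, 'mmr_dtl',
                method_opt => 'for columns capitn_prcs_ind size 254 hspc_ind size 254 (capitn_prcs_ind, hspc_ind) size 254'
        );
end;
/
```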

A simple index, though, might get used for other statements where it was not appropriate; there’s also the extra overhead of maintenance as the capitn_prcs_ind value changes from 5 to 2 – to update the index we delete the old entry and insert the new entry. Just to make matters a little worse, when an index has a large number of rows for the same key value it’s fairly common to find that its space utilisation averages about 50% per leaf block. On top of everything else, if 2 is a “final state” value for most of the rows in this very large table then a very large fraction of the index we’re building is probably a total waste of space. So we might look at maximising efficiency and minimising space (and undo and redo wastage) with a function-based index. Again one or two columns as appropriate.

```
create index dtl_cpi_1 on mmr_dtl(case when capitn_prcs_ind = 5 then 0 end) compress;

create index dtl_cpi_2 on mmr_dtl(case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 0 end) compress;

execute dbms_stats.gather_table_stats(user, 'mmr_dtl',method_opt=>'for all hidden columns size 1')

```

Again my choice of index might be affected by the extra benefit of having two columns to eliminate the maximum number of rows from the table, but since the index would be very small either way I’d probably go for the two-column index. Note that once you’ve created a function-based index you have to create stats on the underlying hidden column, which I’ve done a little lazily in this case by simply gathering “for all hidden columns”. I don’t need a histogram since there’s only a single value in the index. Apart from the size benefit, the other efficiency benefit is that when I update the table I only have to worry about deleting a row from the index, I don’t have to insert a replacement row – getting rid of the third undo record and redo vector is likely to save about 30% on the index maintenance background costs.

I now have to change the original SQL to match the index definition, of course, so my driving where clause would become one of:

```WHERE
(case when CAPITN_PRCS_IND = 5 then 0 end) = 0
AND	HSPC_IND ='Y'
AND	EXISTS (...

WHERE
(case when CAPITN_PRCS_IND = 5 AND HSPC_IND ='Y' then 0 end ) = 0
AND	EXISTS (...

```

### Update 4 – 2nd June 2015

At this point we really do need better information to make sensible decisions about what to do next; however, as I indicated earlier on, the hash semi join to the small table FCTS_TMS_MBRID_XWLK does look like a sensible choice: it’s small, so it’s a good target for a build table, and small tables joined to big tables tend to be used to eliminate data rapidly (at least in the case of existence tests).

The final big question is what to do about the table CRME_FUND_DTL_STG which looks as if it might be quite big (several rows for each MMR_DTL row) and therefore be contributing a lot of random I/O. We don’t really have many options here – and we’ve seen what they are in a previous post about “NOT EXISTS” subqueries. We could try to make the nested loop semi-join more efficient by adding columns to the index so that we can do more filtering in the index; we could consider forcing Oracle to stick with a filter subquery and see if we can introduce some indexing that would make the subquery run a very small number of times; or we could consider the impact of switching to a hash semi-join (possibly switching the build and probe data sets to make it a “hash join right semi”).

A full tablescan on the CRME_FUND_DTL_STG to do the hash join might be rather expensive, though, so we might look at the significance of the predicate on STG_CRME.FUND_ID which lists 8 different literal values. Perhaps there’s some scope for creating an index on that column if those fund_ids represent a small fraction of the total data; maybe even consider list-partitioning the table (not necessarily one fund per partition) on the FUND_ID.
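Before creating anything I’d want a quick sanity check on that assumption. The following is only a sketch (the index name is invented) of the sort of thing I’d run to see whether the eight literal fund_id values really do identify a small fraction of the data:

```
select  fund_id, count(*)
from    crme_fund_dtl_stg
group by
        fund_id
order by
        count(*) desc;

-- if the eight listed values turn out to be rare, a simple index may be enough:
create index cfds_fund_id on crme_fund_dtl_stg(fund_id) compress 1;
```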

The only other significant “pre-processing” predicate on the CRME_FUND_DTL_STG table is the one that requires the MMR_DTL.PMT_MSA_STRT_DT to be between STG_CRME.ERN_FROM_DT and STG_CRME.ERN_THRU_DT; so we might look at how many rows in the table could be eliminated by finding the maximum and minimum values of MMR_DTL.PMT_MSA_STRT_DT and eliminating any rows where STG_CRME.ERN_FROM_DT was greater than the maximum PMT_MSA_STRT_DT or STG_CRME.ERN_THRU_DT was less than the minimum PMT_MSA_STRT_DT.

We might play around with CTEs (subquery factoring / common table expressions) to make this happen – I’m not going to work out the exact SQL, but it might start something like this:

```
with starting_view as (
select * from mmr_dtl
where  ...
and exists (
select null from FACETS_CUSTOM.FCTS_TMS_MBRID_XWLK XWLK
where ...
)
),
minmax_view as (
select  /*+ materialize */
min(PMT_MSA_STRT_DT) min_date,
max(PMT_MSA_STRT_DT) max_date
from    starting_view
),
crme_fund_view as (
select *
from
minmax_view,
CRME_FUND_DTL_STG
where   ...
)
select *
from
starting_view   sv,
crme_fund_view  cv
where
...
```

An alternative strategy to get the minimum and maximum dates more efficiently might be to include them in the function-based index, and then include TWO inline views, or maybe scalar subqueries in the where clause, that depend on the min/max range scan to find the values that can be used to eliminate as much data from CRME_FUND_DTL_STG as early as possible.
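As a sketch of that alternative – an idea only, not tested code – the function-based index could carry the date as a second expression so that a pair of min/max scalar subqueries could pick up the date range from the (tiny) index very cheaply:

```
create index dtl_cpi_3 on mmr_dtl(
        case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 0 end,
        case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then pmt_msa_strt_dt end
) compress 1;

-- then, in the main query, add predicates of the form:
-- and stg_crme.ern_thru_dt >= (
--         select min(case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then pmt_msa_strt_dt end)
--         from   mmr_dtl
--         where  (case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 0 end) = 0
--     )
-- and stg_crme.ern_from_dt <= ( ... the corresponding max() subquery ... )
```

Rows that fail the case expression have an entirely null key, so they don’t appear in the index at all and the min/max subqueries only have to scan the interesting entries.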

### Footnote

I think it took me about 10 minutes looking at the original posting to come up with a few thoughts that might be appropriate – but it’s taken five sessions spread over nearly two weeks to write those ideas down. The same difference can appear between theoretical strategy and final action – everything I’ve said revolves around the two simple ideas of “how much data” (absolute and relative) and “how hard do we work to collect it”, so it’s easy to come up with ideas; but it may take time to learn what the data looks like and to find a SAFE way of collecting it efficiently.

## April 13, 2015

### Not Exists

Filed under: CBO,Execution plans,Oracle,Performance — Jonathan Lewis @ 12:51 pm BST Apr 13,2015

The following requirement appeared recently on OTN:

```
=========================================================================================================
I have a following query and want to get rid of the "NOT EXISTS' clause without changing the end results.

SELECT   A.c,
A.d,
A.e,
A.f
FROM   A
WHERE   NOT EXISTS (SELECT   1
FROM   B
WHERE   B.c = A.c AND B.d = A.d AND B.e = A.e);
===========================================================================================================
```

Inevitably this wasn’t the problem query, and almost inevitably the OP was asking us how to implement a solution which wasn’t appropriate for a problem that shouldn’t have existed. Despite this it’s worth spending a little time to take the request at its face value and look at the sort of thing that could be going on.

First, of course, you cannot get rid of the “not exists” clause, although you may be able to make it look different. If you want “all the rows in A that are not referenced in B” then you HAVE to examine all the rows in A, and you have to do some sort of check for each row to see whether or not it exists in B. The only option you’ve got for doing something about the “not exists” clause is to find a way of making it as cheap as possible to implement.

A couple of people came up with suggestions for rewriting the query to make it more efficient. One suggested writing it as a “NOT IN” subquery, but it’s worth remembering that the optimizer may cheerfully transform a “NOT IN” subquery to a “NOT EXISTS” subquery if it’s legal and a manual rewrite may overlook the problem of NULLs; another suggested rewriting the query as an outer join, but again it’s worth remembering that the optimizer may transform a “NOT EXISTS” subquery to an “ANTI-JOIN” – which is a bit like an outer join with filter, only more efficient. So, before suggesting a rewrite, it’s worth looking at the execution plan to see what the optimizer is doing just in case it’s doing something silly. There are two options – anti-join or filter subquery.
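To illustrate the NULL problem mentioned above, here’s a tiny sketch (the table names are invented for the demonstration) showing how a careless “NOT IN” rewrite changes the result the moment the subquery can return a NULL:

```
create table big_t   as select 1 c from dual;
create table small_t as select to_number(null) c from dual;

-- the NOT EXISTS form returns the row from big_t:
select * from big_t a
where  not exists (select null from small_t b where b.c = a.c);

-- the NOT IN form returns no rows at all - the NULL in small_t
-- makes "c not in (...)" evaluate to UNKNOWN for every row:
select * from big_t a
where  a.c not in (select b.c from small_t b);
```

The two forms are equivalent only when the subquery’s column is guaranteed non-null – which is exactly the check the optimizer makes before it will transform one into the other.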

Here, with code I’ve run under 10.2.0.5 to match the OP, is a demonstration data set, with the two plans you might expect to see – first, some of the data:

```
execute dbms_random.seed(0)

create table t1
as
with generator as (
select  --+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
trunc(dbms_random.value(0,4))           c,
trunc(dbms_random.value(0,5))           d,
trunc(dbms_random.value(0,300))         e,
rownum                                  f
from
generator       v1,
generator       v2
where
rownum <= 1e6
;

create table t2
as
with generator as (
select  --+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
trunc(dbms_random.value(0,4))           c,
trunc(dbms_random.value(0,5))           d,
trunc(dbms_random.value(0,300))         e,
rownum                                  f
from
generator       v1,
generator       v2
where
rownum <= 24000
;

create index t1_i1 on t1(c,d,e);
create index t2_i1 on t2(c,d,e);

begin
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'T1',
method_opt       => 'for all columns size 1'
);

dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'T2',
method_opt       => 'for all columns size 1'
);
end;
/

```

The OP had followed up their original query with a claim that “Table A holds 100 million rows and table B holds 24,000” – that’s a lot of checks (if true) and you ought to be asking how quickly the OP expects the query to run and how many of the 100 M rows are going to survive the check. I’ve set up just 1M rows with 6,000 distinct values for the column combination (c,d,e), and a reference table with 24,000 rows which are likely to include most, but not all, of those 6,000 combinations.

Rather than generate a very large output, I’ve written a query that generates the required data set, then counts it:

```
select
max(f), count(*)
from (
SELECT   /*+ no_merge */
A.c,
A.d,
A.e,
A.f
FROM   t1 A
WHERE   NOT EXISTS (SELECT   /* no_unnest */
1
FROM   t2 B
WHERE   B.c = A.c AND B.d = A.d AND B.e = A.e)
)
;

```

This took about 0.35 seconds to run – aggregating roughly 14,500 rows from 1M. The plan was (as I had expected) based on a (right) hash anti join:

```
---------------------------------------------------------------------------------
| Id  | Operation               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |       |     1 |    13 |  2183   (5)| 00:00:11 |
|   1 |  SORT AGGREGATE         |       |     1 |    13 |            |          |
|   2 |   VIEW                  |       |   999K|    12M|  2183   (5)| 00:00:11 |
|*  3 |    HASH JOIN RIGHT ANTI |       |   999K|    23M|  2183   (5)| 00:00:11 |
|   4 |     INDEX FAST FULL SCAN| T2_I1 | 24000 |   234K|    11  (10)| 00:00:01 |
|   5 |     TABLE ACCESS FULL   | T1    |  1000K|    14M|  2151   (4)| 00:00:11 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("B"."C"="A"."C" AND "B"."D"="A"."D" AND "B"."E"="A"."E")

```

Oracle has built an in-memory hash table from the 24,000 rows in t2, then scanned the t1 table, probing the hash table with each row in turn. That’s 1M probes in less than 0.35 seconds. You ought to infer from this that most of the time spent in the original query should have been spent scanning the 100M rows, with only a relatively small increment appearing due to the “not exists” clause.

You’ll notice, though, that there was a comment in my subquery with the /* no_unnest */ hint embedded – if I change this from a comment to a hint (/*+ */) I should get a plan with a filter subquery, and maybe that’s what’s happening to the OP for some odd reason. Here’s the plan:

```
------------------------------------------------------------------------------
| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |     1 |    13 | 15166   (1)| 00:01:16 |
|   1 |  SORT AGGREGATE      |       |     1 |    13 |            |          |
|   2 |   VIEW               |       |   999K|    12M| 15166   (1)| 00:01:16 |
|*  3 |    FILTER            |       |       |       |            |          |
|   4 |     TABLE ACCESS FULL| T1    |  1000K|    14M|  2155   (4)| 00:00:11 |
|*  5 |     INDEX RANGE SCAN | T2_I1 |     4 |    40 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter( NOT EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T2" "B"
WHERE "B"."E"=:B1 AND "B"."D"=:B2 AND "B"."C"=:B3))
5 - access("B"."C"=:B1 AND "B"."D"=:B2 AND "B"."E"=:B3)

```

The query took 1.65 seconds to complete. (And re-running with rowsource execution statistics enabled, I found that the subquery had executed roughly 914,000 times in that 1.65 seconds). Even if the original query had used the filter subquery plan the subquery shouldn’t have made much difference to the overall performance. Of course if T2 didn’t have that index on (c,d,e) then the filter subquery plan would have been much more expensive – but then, we would really have expected to see the hash anti-join.

If you’re wondering why the subquery ran 914,000 times instead of 1M times, you’ve forgotten “scalar subquery caching”.  The session caches a limited number of results from subquery execution as a query runs and may be able to use cached results (or simply a special “previous-execution” result) to minimise the number of executions of the subquery.
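If you want to see the effect for yourself, the traditional demonstration (a sketch only – the package and its names are my own invention) uses a package variable to count how often a function is called with and without a wrapping scalar subquery:

```
create or replace package call_counter as
        calls   number := 0;
        function f(i in number) return number;
end;
/

create or replace package body call_counter as
        function f(i in number) return number is
        begin
                calls := calls + 1;   -- side effect: count every execution
                return i;
        end;
end;
/

-- direct call: f executes once per row scanned
select count(*) from t1 where call_counter.f(t1.c) = 0;

-- wrapped in a scalar subquery: results are cached by input value, so
-- call_counter.calls increases by far less than the number of rows
select count(*) from t1 where (select call_counter.f(t1.c) from dual) = 0;
```

Since column c in my data set has only four distinct values the cached version should execute the function only a handful of times.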

Did you notice the index I created on t1(c,d,e) ? If I drive the query through this index I’ll access all the rows for a given combination of (c,d,e) one after the other and only have to run the subquery once for the set. To make this happen, though, I’ll have to declare one of the columns to be NOT NULL, or add a suitable “column is not null” predicate to the query; and then I’ll probably have to hint the query anyway:

```
select
max(f)
from (
SELECT   /*+ no_merge index(a) */
A.c,
A.d,
A.e,
A.f
FROM   t1 A
WHERE   NOT EXISTS (SELECT   /*+ no_unnest */
1
FROM   t2 B
WHERE   B.c = A.c AND B.d = A.d AND B.e = A.e)
and     c is not null
)
;

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     1 |    13 | 65706   (1)| 00:05:29 |
|   1 |  SORT AGGREGATE               |       |     1 |    13 |            |          |
|   2 |   VIEW                        |       |   999K|    12M| 65706   (1)| 00:05:29 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    | 50000 |   732K| 52694   (1)| 00:04:24 |
|*  4 |     INDEX FULL SCAN           | T1_I1 | 50000 |       |  2869   (2)| 00:00:15 |
|*  5 |      INDEX RANGE SCAN         | T2_I1 |     4 |    40 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - filter("C" IS NOT NULL AND  NOT EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM
"T2" "B" WHERE "B"."E"=:B1 AND "B"."D"=:B2 AND "B"."C"=:B3))
5 - access("B"."C"=:B1 AND "B"."D"=:B2 AND "B"."E"=:B3)

```

Re-running this code with rowsource execution statistics enabled showed that the subquery ran just 6,000 times (as expected) – for a total run time that was slightly faster than the hash anti-join method (0.17 seconds – but I do have a new laptop using SSD only, with a 3.5GHz CPU and lots of memory).

Every which way, if we can get reasonable performance from the underlying table access there’s no way that introducing a “NOT EXISTS” ought to be a disaster. The worst case scenario is that, for some reason, Oracle chooses to run a filter subquery plan when the appropriate index hasn’t been created to support it.

### Footnote:

Of course, table A didn’t really exist, it was a three table join; and it didn’t produce 100M rows, it produced anything between zero and 5 million rows, and the effect of the subquery (which correlated back to two of the joined tables) was to reduce that to anything between 0 and 5 million rows. And (apparently) the query was quick enough in the absence of the subquery (producing, for example, 1 million rows in only 5 minutes), but too slow with the subquery in place.

But that’s okay. Because of our tests we know that once we’ve produced a few million rows it takes fractions of a second more to pass them through a hash table with an anti-join to deal with the “not exists” subquery; and I doubt if we have to play silly games to push the data through a filter subquery plan in the right order to squeeze a few extra hundredths of a second from the query.

If the OP is happy with the basic select statement before the “not exists” subquery, all he has to do is take advantage of a no_merge hint:

```
select  {list of columns}
from
(
select /*+ no_merge */ .... rest of original query
)    v1
where
not exists (
select  null
from    b
where   b.c = v1.c and b.d = v1.d and b.e = v1.e
)
;

```

You’re probably wondering why the OP currently sees a performance problem as the subquery is added. The best guess is that the subquery has introduced a “magic 5% fudge factor” to the arithmetic (did you notice the cardinality of t1 dropping to 50,000 from 1M in the plan above) and made it pick a worse execution plan for the rest of the query. We can’t tell, though, since the OP hasn’t yet given us the information that would allow us to see what’s going wrong.

## April 10, 2015

### Counting

Filed under: Execution plans,Indexing,Oracle,Performance — Jonathan Lewis @ 5:27 pm BST Apr 10,2015

There’s a live example on OTN at the moment of an interesting class of problem that can require some imaginative thinking. It revolves around a design that uses a row in one table to hold the low and high values for a range of values in another table. The problem is then simply to count the number of rows in the second table that fall into the range given by the first table. There’s an obvious query you can write (a join with inequality) but if you have to join each row in the first table to several million rows in the second table, then aggregate to count them, that’s an expensive strategy.  Here’s the query (with numbers of rows involved) that showed up on OTN; it’s an insert statement, and the problem is that it takes 7 hours to insert 37,600 rows:

```
INSERT INTO SA_REPORT_DATA
(REPORT_ID, CUTOFF_DATE, COL_1, COL_2, COL_3)
(
SELECT 'ISRP-734', to_date('&DateTo', 'YYYY-MM-DD'),
SNE.ID AS HLR
,      SNR.FROM_NUMBER||' - '||SNR.TO_NUMBER AS NUMBER_RANGE
,      COUNT(M.MSISDN) AS AVAILABLE_MSISDNS
FROM
SA_NUMBER_RANGES SNR          -- 10,000 rows
,      SA_SERVICE_SYSTEMS SSS        --  1,643 rows
,      SA_NETWORK_ELEMENTS SNE       --    200 rows
,      SA_MSISDNS M                  --    72M rows
WHERE
SSS.SEQ = SNR.SRVSYS_SEQ
AND    SSS.SYSTYP_ID = 'OMC HLR'
AND    SNE.SEQ = SSS.NE_SEQ
AND    SNR.ID_TYPE = 'M'
AND    M.MSISDN  >= SNR.FROM_NUMBER
AND    M.MSISDN  <= SNR.TO_NUMBER
AND    M.STATE  = 'AVL'
GROUP BY
SNE.ID,SNR.FROM_NUMBER||' - '||SNR.TO_NUMBER
)

```

The feature here is that we are counting ranges of msisdn: we take 10,000 number ranges (snr) and join with inequality to a 72M row table. It’s perfectly conceivable that at some point the data set expands (not necessarily all at once) to literally tens of billions of rows that are then aggregated down to the 37,600 that are finally inserted.

The execution plan shows the optimizer joining the first three tables before doing a merge join between that result set and the relevant subset of the MSISDNs table – which means the MSISDNs have to be sorted and buffered (with a probable spill to disc) before they can be used. It would be interesting to see the rowsource execution stats for the query – partly to see how large the generated set became but also to see if the ranges involved were so large that most of the time went in constantly re-reading the sorted MSISDNs from the temporary tablespace.

As far as optimisation is concerned there are a couple of trivial things around the edges we can examine: we have 10,000 number ranges but insert 37,600 results, and the last stages of the plan generated those results so we’ve scanned and aggregated the sorted MSISDNs 37,600 times. Clearly we could look for a better table ordering that eliminated any number ranges early, then did the minimal number of joins to MSISDN, aggregated, then scaled up to 37,600. With the best join order we might reduce the run time by a factor of around 3.76 or more. (But that’s still a couple of hours run time.)
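As a sketch of that “eliminate early, aggregate once, then scale up” idea – untested, and assuming the fan-out from 10,000 ranges to 37,600 results comes entirely from the joins to the two small tables (so the count within each group is not affected by that fan-out) – you could count each qualifying range exactly once before joining back:

```
with range_counts as (
        select  /*+ materialize */
                snr.srvsys_seq, snr.from_number, snr.to_number,
                count(m.msisdn) available_msisdns
        from    sa_number_ranges snr,
                sa_msisdns       m
        where   snr.id_type = 'M'
        -- eliminate non-qualifying ranges before touching the 72M row table
        and     snr.srvsys_seq in (
                        select sss.seq
                        from   sa_service_systems sss
                        where  sss.systyp_id = 'OMC HLR'
                )
        and     m.state  = 'AVL'
        and     m.msisdn between snr.from_number and snr.to_number
        group by
                snr.srvsys_seq, snr.from_number, snr.to_number
)
select  sne.id,
        rc.from_number||' - '||rc.to_number,
        rc.available_msisdns
from    range_counts        rc,
        sa_service_systems  sss,
        sa_network_elements sne
where   sss.seq       = rc.srvsys_seq
and     sss.systyp_id = 'OMC HLR'
and     sne.seq       = sss.ne_seq
;
```

The expensive inequality join now runs once per range rather than once per final result row – the best-case factor of 3.76 mentioned above.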

What we really have to do to make a significant difference is change the infrastructure in some way – preferably invisibly to the rest of the application. There are a number of specific details relating to workload, read-consistency, timing, concurrency, etc. that will need to be considered but, broadly speaking, we need to take advantage of a table that effectively holds the “interesting” MSISDNs in sorted order. When considering the options it’s worth remembering that currently the result is “wrong” because by the time the 7 hour run is complete some (or even many) of the MSISDNs are probably no longer available – so how accurate does the report have to be ?

I’ve kept the approach simple here, and it would need a few modifications for a production system. The important bit of the report is the bit that produces the count so I’m only going to worry about a two-table join – number ranges and msisdns; here’s some model data:

```
execute dbms_random.seed(0)

create table msisdns
as
with generator as (
select  --+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
trunc(dbms_random.value(1e9,1e10))      msisdn
from
generator       v1,
generator       v2
where
rownum <= 1e6
;

create table number_ranges
as
with generator as (
select  --+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
trunc(dbms_random.value(1e9,1e10))      from_number,
trunc(dbms_random.value(1e9,1e10))      to_number
from
generator       v1
where
rownum  <= 1000
;

update number_ranges set
from_number = to_number,
to_number = from_number
where
to_number < from_number
;

commit;

```

I’ve created a table of numbers with values between 1e9 and 1e10 to represent 1 million MSISDNs, and a list of 1,000 number ranges – making sure that the FROM number is not greater than the TO number. Now I need a “summary” table of the MSISDNs, which I’m going to create as an index-organized table:

```
create table tmp_msisdns (
msisdn,
counter,
constraint tmp_pk primary key (msisdn, counter)
)
organization index
as
select
msisdn,
row_number() over(order by msisdn)      counter
from
msisdns
;

```

This is only a demonstration so I haven’t bothered with production-like code to check that the MSISDNs I had generated were unique (they were); and I’ve casually included the row_number() as part of the primary key as a performance fiddle, even though it’s something that could, technically, allow some other program to introduce bad data if I made the table available for public use rather than keeping it task-specific.

Finally we get down to the report. To find out how many MSISDN values there are between the FROM and TO number in a range I just have to find the lowest and highest MSISDNs in that range, find the difference between their counter values, and add 1. And there’s a very fast way to find the lowest or highest value when you have the appropriate index – the index range scan (min/max) – but you have to access the table twice, once for the low, once for the high. Here’s the necessary SQL, with execution plan from 12.1.0.2:

```
select
nr.from_number, nr.to_number,
--      fr1.msisdn, fr1.counter,
--      to1.msisdn, to1.counter,
1 + to1.counter - fr1.counter range_count
from
number_ranges   nr,
tmp_msisdns     fr1,
tmp_msisdns     to1
where
fr1.msisdn = (
select min(msisdn) from tmp_msisdns where tmp_msisdns.msisdn >= nr.from_number
)
and     to1.msisdn = (
select max(msisdn) from tmp_msisdns where tmp_msisdns.msisdn <= nr.to_number
)
;

-------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |               |       |       |  4008 (100)|          |
|   1 |  NESTED LOOPS                   |               |  1000 | 38000 |  4008   (1)| 00:00:01 |
|   2 |   NESTED LOOPS                  |               |  1000 | 26000 |  2005   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL            | NUMBER_RANGES |  1000 | 14000 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN             | TMP_PK        |     1 |    12 |     2   (0)| 00:00:01 |
|   5 |     SORT AGGREGATE              |               |     1 |     7 |            |          |
|   6 |      FIRST ROW                  |               |     1 |     7 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN (MIN/MAX)| TMP_PK        |     1 |     7 |     3   (0)| 00:00:01 |
|*  8 |   INDEX RANGE SCAN              | TMP_PK        |     1 |    12 |     2   (0)| 00:00:01 |
|   9 |    SORT AGGREGATE               |               |     1 |     7 |            |          |
|  10 |     FIRST ROW                   |               |     1 |     7 |     3   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN (MIN/MAX) | TMP_PK        |     1 |     7 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("FR1"."MSISDN"=)
7 - access("TMP_MSISDNS"."MSISDN">=:B1)
8 - access("TO1"."MSISDN"=)
11 - access("TMP_MSISDNS"."MSISDN"<=:B1)

```

Execution time – with 1 million MSISDNs and 1,000 ranges: 0.11 seconds.

For comparative purposes, and to check that the code is producing the right answers, here’s the basic inequality join method:

```
select
nr.from_number, nr.to_number, count(*) range_count
from
number_ranges   nr,
msisdns         ms
where
ms.msisdn >= nr.from_number
and     ms.msisdn <= nr.to_number
group by
nr.from_number, nr.to_number
order by
nr.from_number
;

-----------------------------------------------------------------------------------------------
| Id  | Operation             | Name          | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |               |       |       |       |   472K(100)|          |
|   1 |  HASH GROUP BY        |               |   707K|    14M|  6847M|   472K (17)| 00:00:19 |
|   2 |   MERGE JOIN          |               |   255M|  5107M|       | 13492  (77)| 00:00:01 |
|   3 |    SORT JOIN          |               |  1000 | 14000 |       |     3  (34)| 00:00:01 |
|   4 |     TABLE ACCESS FULL | NUMBER_RANGES |  1000 | 14000 |       |     2   (0)| 00:00:01 |
|*  5 |    FILTER             |               |       |       |       |            |          |
|*  6 |     SORT JOIN         |               |  1000K|  6835K|    30M|  3451   (7)| 00:00:01 |
|   7 |      TABLE ACCESS FULL| MSISDNS       |  1000K|  6835K|       |   245  (14)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("MS"."MSISDN"<="NR"."TO_NUMBER")
6 - access("MS"."MSISDN">="NR"."FROM_NUMBER")
filter("MS"."MSISDN">="NR"."FROM_NUMBER")

```

The two queries produced the same results (apart from ordering); but the second query took 2 minutes 19.4 seconds to complete.

### Update:

In a moment of idle curiosity I recreated the data with 40 Million rows in the MSISDNs table to get some idea of how fast the entire report process could go when re-engineered (remember the OP has 72M rows, but selects the subset flagged as ‘AVL’). It took 1 minute 46 seconds to create the IOT – after which the report for 1,000 number ranges still took less than 0.2 seconds.

### Footnote:

My random generation of data doesn’t attempt to avoid duplicate MSISDNs, but they are unlikely to appear in the 1M row test; the 40M row test, however, will almost certainly produce a small percentage of duplicates. As a consequence the final result may exceed 1,000 rows by one or two, but since this is just a demonstration of the principle and a quick performance check that doesn’t worry me.

### Update 12th Dec 2015

Stew Ashton was in the audience at the UKOUG Tech 15 conference when I described the problem in a presentation on “avoiding work you don’t need to do” and immediately suggested using match_recognize() as an alternative strategy for solving the problem. He has since published his solution for two different scenarios on his blog.

When I tested his “overlapping ranges” solution against my temporary table solution it produced two advantages – first it ran twice as quickly to produce the result, secondly it didn’t need an auxiliary table. Having brought the run-time down from hours to a minute or two the second advantage is rather more desirable, probably, than the first – especially since you then don’t have to mess around producing a point-in-time read-consistent result. Of course, you may still base your choice of method on the ease of comprehension of the code – and in the short term the match_recognize() mechanism is not well-known.

## February 12, 2015

### Parallel rownum

Filed under: Execution plans,Oracle,Parallel Execution — Jonathan Lewis @ 7:27 am BST Feb 12,2015

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code. Here’s a little trap I came across recently that’s probably documented somewhere, which could be spotted easily if you had access to the OEM SQL Monitoring screen, but would be very easy to miss if you didn’t check the execution plan very carefully. I’ll start with a little script to generate some data:

```
create table t1 nologging
as
select * from all_objects where rownum <= 50000 -- > comment to bypass wordpress formatting problem
;

insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;

begin
dbms_stats.gather_table_stats(
ownname		 => user,
tabname		 =>'T1',
method_opt	 => 'for all columns size 1'
);
end;
/

create table t2 as select * from t1;
alter table t2 add id number(10,0);

```

All I’ve done is create some data – 800,000 rows – and then created a table to copy it to. And now, while I copy it, I’m going to add a temporary id to the rows which I’ll do with a call to rownum; and since there’s a lot of data I’ll use parallel execution:

```
alter session enable parallel dml;

insert /*+ parallel(t2 3) */ into t2
select /*+ parallel(t1 4) */ t1.* , rownum from t1;

```

For the purposes of experiment and entertainment I’ve done something a little unusual by supplying two possible degrees of parallelism, mainly because this lets me ask a question: Will this statement run parallel 3, parallel 4, both of the above, or neither ? (You may assume that I have enough parallel execution slaves available when the statement runs.)

The answer is both – because that rownum does something nasty to the execution plan (I didn’t include the 50,000 limit when creating t1 in my original test, which is why the plan reports 993K rows instead of 800,000):

```
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |          |   993K|    92M|  1076   (1)| 00:00:13 |        |      |            |
|   1 |  PX COORDINATOR            |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)      | :TQ20001 |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT          | T2       |       |       |            |          |  Q2,01 | PCWP |            |
|   4 |     PX RECEIVE             |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | PCWP |            |
|   5 |      PX SEND ROUND-ROBIN   | :TQ20000 |   993K|    92M|  1076   (1)| 00:00:13 |        | S->P | RND-ROBIN  |
|   6 |       COUNT                |          |       |       |            |          |        |      |            |
|   7 |        PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   8 |         PX SEND QC (RANDOM)| :TQ10000 |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | P->S | QC (RAND)  |
|   9 |          PX BLOCK ITERATOR |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWC |            |
|  10 |           TABLE ACCESS FULL| T1       |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------

```

See that “P->S” (parallel to serial) at operation 8: the select statement runs in parallel (degree 4) to scan the data, and then sends it all to the query co-ordinator to supply the rownum; the query co-ordinator then re-distributes the data (including the rownum) to another set of slaves (S->P) to do the parallel (degree 3) insert. The P->S at operation 2 shows the parallel execution slaves passing details to the query co-ordinator of the private segments that they have created so that the query co-ordinator can stitch the segments together into a single data segment for the table. (If you watch the run-time activity closely you’ll see the query co-ordinator doing a few local writes as it tidies up the header blocks in those segments.)

There are two threats hidden in this rownum detail. The first, of course, is that the operation essentially serialises through the query co-ordinator, so it’s going to take longer than you might expect; the second is that an accident of this type will typically allocate twice as many parallel execution slaves as you might have expected – the select and the insert are two separate data flow operations (note how the Name column shows :TQ1xxxx and :TQ2xxxx), each gets its own slave set, and both sets of slaves are held for the duration of the statement. If this statement is demanding twice the slaves it should be using then you may find that other statements starting at the same time have their degree of parallelism downgraded because you’ve run out of PX slaves. Although the rownum solution is nice and clean – it requires no further infrastructure – you probably need to introduce a sequence (with a large cache) to get the same effect without losing parallelism.

If you look at v\$pq_tqstat after running this statement the results are a little disappointing – there are a few problems connecting lines from the plan with rows in the view – here’s my original output (and you’ll now see why I chose to have two different degrees of parallelism):

```
DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0

         2          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                                             1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 QC                  993314  113479500        125         65           0

                    1 Consumer               1 QC                       3        393          2          1           0

```

The first problem is that the DFO_Number reported in the view doesn’t match the :TQ1xxxx and :TQ2xxxx reported in the plan – the parallel 4 bit is the select, which is covered by :TQ10000, but it’s listed under DFO_Number = 2 in the view, while the insert is the parallel 3 bit, which is covered by :TQ20000 and :TQ20001, but listed under DFO_Number = 1.

More confusing, potentially, is that all appearances of the query co-ordinator have been assigned to DFO_Number = 2. Ignoring the fact that the DFO_Number column switches the 1 and 2 from the plan, what we should see is as follows:

• The consumer at line 16 is consuming from the 4 producers at lines 11 – 14.
• The producer at line 15 is producing FOR the 3 consumers at lines 3 – 5
• The consumer at line 18 is consuming from the producers at lines 7 – 9

Ideally (including the correction for the DFO_Number) I think the view content should be as follows:

```
DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                      Consumer               1 QC                  993314  113479500        125         65           0

         2          0 Producer               1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0
                      Consumer               1 QC                       3        393          2          1           0

```

Please don’t ask me why the waits and timeouts for the QC as producer seem to be counting backwards from 2^32, I don’t know the answer.
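For what it’s worth, the arithmetic is consistent with small negative values being displayed as unsigned 32-bit integers – though that’s just an observation about the numbers, not an explanation of how the counters could go negative:

```python
# Reinterpret the odd waits/timeouts figures from the QC-as-producer row
# as signed 32-bit integers (an assumption about the display format, not
# a verified explanation of the counter behaviour).
def as_signed_32bit(n):
    """Map an unsigned 32-bit value back to its signed interpretation."""
    return n - 2**32 if n >= 2**31 else n

print(as_signed_32bit(4294967269))   # -27  (the WAITS figure)
print(as_signed_32bit(4294967286))   # -10  (the TIMEOUTS figure)
```

So if the hypothesis is right the counters were sitting at -27 and -10, which suggests something decremented them a few times before they were ever incremented.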

Footnote – there was a change in terminology some time ago when Oracle started talking about DFO trees. The DFO_number in the v\$pq_tqstat output is (now) the “DFO tree number”, and the TQ_id is now a “DFO number” (rather than a table queue id).

### Update (Apr 2017):

I’ve just discovered, while searching for anything I’d written about parallel execution and rownum, that I’d written something about this problem in the past.

## January 14, 2015

### Bind Effects

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 1:24 pm BST Jan 14,2015

A couple of days ago I highlighted an optimizer anomaly caused by the presence of an index with a descending column. This was a minor (unrelated) detail that appeared in a problem on OTN where the optimizer was using an index FULL scan when someone was expecting to see an index RANGE scan. My earlier posting supplies the SQL to create the table and indexes I used to model the problem – and in this posting I’ll explain the problem and answer the central question.

Here’s the query and execution plan (from 11.2.0.x) as supplied by the OP – the odd appearance of the sys_op_descend() function calls is the minor detail that I explained in the previous post, but that’s not really relevant to the question of why Oracle is using an index full scan rather than an index range scan. The /*+ first_rows */ hint isn’t something you should be using but it was in the OP’s query, so I’ve included it in my model:

```
select /*+ FIRST_ROWS gather_plan_statistics scanned */ count(1)      FROM  XXX
where  (((((COL1 = '003' and COL2 >= '20150120') and COL3 >= '00000000') and COL4>= '000000000000' )
or ((COL1 = '003' and COL2 >= '20150120') and COL3> '00000000' )) or (COL1= '003' and COL2> '20150120'))
order by COL1,COL2,COL3,COL4

Plan hash value: 919851669

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |      1 |        |  18533 |00:01:47.04 |    156K |  70286 |
|   1 |  TABLE ACCESS BY INDEX ROWID| XXX    |      1 |  7886K |  18533 |00:01:47.04 |    156K |  70286 |
|*  2 |   INDEX FULL SCAN           | XXXXPK |      1 |  7886K |  18533 |00:01:30.36 |    131K |  61153 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter((("COL2">:B2 AND "COL1"=:B1 AND
SYS_OP_DESCEND("COL2")<SYS_OP_DESCEND(:B2)) OR ("COL1"=:B1 AND "COL2">=:B2
AND "COL3">:B3 AND SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2)) OR
("COL1"=:B1 AND "COL2">=:B2 AND "COL3">=:B3 AND "COL4">=:B4 AND
SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2))))

```

If you look closely you’ll see that the OP has NOT supplied the output from a call to dbms_xplan.display_cursor() – the column and table names are highly suspect (but that’s an allowable cosmetic change for confidentiality reasons); the giveaway is that the SQL statement uses literals while the execution plan is using bind variables (of the form B{number}, suggesting that the real SQL is embedded in PL/SQL with PL/SQL variables supplying the values): the bind variables make a difference.

Let’s go back to my model to demonstrate the problem. Here’s a query with the same predicate structure as the problem query (with several pairs of brackets eliminated to improve readability) showing the actual run-time plan (from 11.2.0.4) when using literals:

```
select
/*+ first_rows */
*
from t1
where
(C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |    21 |  2478 |     4  (25)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |    21 |       |     3  (34)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("C1"='DE')
filter(((SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') AND "C2">'AB') OR
(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C3">'AA' AND "C2">='AB') OR
(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C4">='BB' AND "C2">='AB' AND
"C3">='AA')))

```

As you can see, the optimizer has managed to “factor out” the predicate C1 = ‘DE’ from the three disjuncts and has then used it as an access() predicate for an index range scan. Now let’s see what the code and plan look like if we replace the four values by four bind variables:

```
variable B1 char(2)
variable B2 char(2)
variable B3 char(2)
variable B4 char(2)

begin
:b1 := 'DE';
:b2 := 'AB';
:b3 := 'AA';
:b4 := 'BB';
end;
/

select
/*+ first_rows */
*
from t1
where
(C1 = :B1  and C2 >  :B2 )
or      (C1 = :B1  and C2 >= :B2 and C3 >  :B3 )
or      (C1 = :B1  and C2 >= :B2 and C3 >= :B3 and C4 >= :B4)
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |    31 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   437 | 51566 |    31   (4)| 00:00:01 |
|*  2 |   INDEX FULL SCAN           | T1_IASC |   437 |       |    27   (4)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter((("C1"=:B1 AND "C2">:B2) OR ("C1"=:B1 AND "C3">:B3 AND
"C2">=:B2) OR ("C1"=:B1 AND "C4">=:B4 AND "C2">=:B2 AND "C3">=:B3)))

```

The optimizer hasn’t factored out the common expression C1 = :B1. The reason for this, I think, is that though WE know that :B1 is supposed to be the same thing in all three occurrences the optimizer isn’t able to assume that’s the case; in principle :B1 could be the placeholder for three different values – so the optimizer plays safe and optimizes for that case. This leaves it with three options: full tablescan with filter predicates, index full scan with filter predicates, or three-part concatenation with index range scans in all three parts. The combination of the /*+ first_rows */ hint and the “order by” clause which matches the t1_iasc index has left the optimizer choosing the index full scan path – presumably to avoid the need to collect all the rows and sort them before returning the first row.
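It’s easy to convince yourself that the optimizer’s caution is justified: factoring out C1 = :B1 is a valid transformation only if all three occurrences are guaranteed to hold the same value. Here’s a brute-force check of that claim (a Python sketch of the logic, not anything Oracle does internally):

```python
from itertools import product

def original(c1, c2, c3, c4, b1a, b1b, b1c, b2, b3, b4):
    # The three disjuncts, each with its own :B1 slot (b1a, b1b, b1c).
    return ((c1 == b1a and c2 >  b2)
         or (c1 == b1b and c2 >= b2 and c3 >  b3)
         or (c1 == b1c and c2 >= b2 and c3 >= b3 and c4 >= b4))

def factored(c1, c2, c3, c4, b1, b2, b3, b4):
    # C1 = :B1 pulled out of all three branches.
    return (c1 == b1
        and (c2 > b2
          or (c2 >= b2 and c3 > b3)
          or (c2 >= b2 and c3 >= b3 and c4 >= b4)))

vals = range(3)
# When all three :B1 slots hold the same value the two forms always agree...
same = all(original(c1, c2, c3, c4, b1, b1, b1, 1, 1, 1)
           == factored(c1, c2, c3, c4, b1, 1, 1, 1)
           for c1, c2, c3, c4, b1 in product(vals, repeat=5))
# ...but when each slot can hold a different value they can disagree.
diff = any(original(c1, c2, c3, c4, 0, 1, 2, 1, 1, 1)
           != factored(c1, c2, c3, c4, 0, 1, 1, 1)
           for c1, c2, c3, c4 in product(vals, repeat=4))
print(same, diff)   # True True
```

The factoring is exact when the three slots agree, and demonstrably unsafe when they don’t – which is precisely the case the optimizer has to allow for.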

Given our understanding of the cause of the problem we now have a clue about how we might make the query more efficient – we have to eliminate the repetition of (at least) the :B1 bind variable. In fact we can get some extra mileage by modifying the repetition of the :B2 bind variable. Here’s a rewrite that may help:

```
select
/*+ first_rows */
*
from t1
where
(C1 = :B1 and C2 >= :B2)
and     (
C2 > :B2
or  C3 > :B3
or (C3 >= :B3 and C4 > :B4)
)
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("C1"=:B1 AND "C2">=:B2 AND "C2" IS NOT NULL)
filter(("C2">:B2 OR "C3">:B3 OR ("C4">:B4 AND "C3">=:B3)))

```

I’ve factored out as much of the C1 and C2 predicates as I can – and the optimizer has used the resulting conditions as the access() predicate on the index (adding in a “not null” predicate on C2 that looks redundant to me – in fact the index was on the primary key in the original, but I hadn’t included that constraint in my model). You’ll notice, by the way, that the cardinality is now 148; compare this with the previous cardinality of 437 and you might (without bothering to look closely at the 10053 trace) do some hand-waving around the fact that 437 = (approximately) 148 * 3, which fits the idea that the optimizer was treating the three :B1 appearances as if they were three different possible values accessing three sets of data.

### Miscellaneous.

This isn’t the end of the story; there are always more complications and surprises in store as you look further into the detail. For example, on the upgrade to 12c the execution plan for the query with bind variables was the same (ignoring the sys_op_descend() functions) as the query using literals – the optimizer managed to factor out the C1 predicate: does this mean SQL*Plus got smarter about telling the optimizer about the bind variables, or does it mean the optimizer got smarter about something that SQL*Plus has been doing all along ?

This change might make you think that the optimizer is supposed to assume that bind variables of the same name represent the same thing – but that’s not correct, and it’s easy to show; here’s a trivial example (accessing the same table with a query that, for my data, identifies the first row):

```
declare
m_id number := 1;
m_c1 char(2) := 'BV';
m_c2 char(2) := 'GF';
m_n number := 0;
begin
execute immediate
'SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1'
into m_n
using m_id, m_c1, m_c2
;
end;
/

select sql_id, sql_text from V\$sql where sql_text like 'SELECT%FIND THIS%';

SQL_ID        SQL_TEXT
------------- ----------------------------------------------------------------------------------
9px3nuv54maxp SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1

```

If you were looking at the contents of v\$sql, or a trace file, or an AWR report, you might easily be fooled into thinking that this was a query where the same value had been used three times – when we know that it wasn’t.
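The underlying point is that execute immediate … using binds by position, not by name, so the name :B1 carries no guarantee of a single value. By way of contrast, here’s a small sqlite3 example (an analogy only – a different engine with its own binding conventions) showing positional placeholders receiving separate values while a genuinely named placeholder repeats one value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Positional binding (the convention EXECUTE IMMEDIATE ... USING follows):
# each placeholder is a separate slot, whatever the SQL text suggests.
positional = conn.execute("SELECT ?, ?, ?", (1, "BV", "GF")).fetchone()
print(positional)        # (1, 'BV', 'GF')

# Named binding: every :b in the statement refers to the same supplied value.
named = conn.execute("SELECT :b, :b, :b", {"b": "BV"}).fetchone()
print(named)             # ('BV', 'BV', 'BV')
```

A monitoring view that only shows you the SQL text can’t tell you which convention the client used.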

So, as we upgrade from 11g to 12c my model of the original problem suggests that the problem is going to go away – but, actually, I don’t really know why that’s the case (yet). On the other hand, I have at least recognised a pattern that the 11g optimizer currently has a problem with, and I have a method for helping the optimizer to be a little more efficient.

## January 12, 2015

### FBI Bug reprise

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 12:52 pm BST Jan 12,2015

I’ve just had cause to resurrect a blog note I wrote three years ago. The note says that an anomaly I discovered in 9.2.0.8 was fixed in 10.2.0.3 – and this is true for the simple example in the posting; but a recent question on the OTN database forum has shown that the bug still appears in more complex cases.  Here’s some code to create a table and two indexes:

```
create table t1
nologging
as
with generator as (
select	--+ materialize
rownum id
from dual
connect by
level <= 1e4
)
select
rownum			id,
cast(dbms_random.string('U',2) as char(2))	c1,
cast(dbms_random.string('U',2) as char(2))	c2,
cast(dbms_random.string('U',2) as char(2))	c3,
cast(dbms_random.string('U',2) as char(2))	c4
from
generator	v1,
generator	v2
where
rownum <= 1e5
;

begin
dbms_stats.gather_table_stats(
ownname		 => user,
tabname		 =>'T1',
method_opt	 => 'for all columns size 1'
);
end;
/

create index t1_iasc  on t1(c1, c2,      c3, c4) nologging;
create index t1_idesc on t1(c1, c2 desc, c3, c4) nologging;

```

I’ve designed the table to model the way a problem was presented on OTN; it’s possible that the anomaly would appear in simpler circumstances. Note that I have two indexes on the same four columns, but the second column of one of the indexes is declared as descending. To identify the indexes easily in execution plans the latter index has the text “desc” in its name. So here’s a query, with execution plan, that should use one of these indexes:

```
select
*
from t1
where
(C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |    21 |  2478 |     4  (25)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |    21 |       |     3  (34)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("C1"='DE')
filter(((SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') AND "C2">'AB') OR
(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C3">'AA' AND "C2">='AB') OR
(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C4">='BB' AND "C2">='AB' AND
"C3">='AA')))

```

Sure enough – the query has used the t1_iasc index – but why has the optimizer introduced all those predicates with the sys_op_descend() function calls in them when we’re not using an index with a descending column ? Somewhere in the code path the optimizer has picked up the other index, and decided to use some information from it that is totally redundant. One side effect of this is that the cardinality prediction is 21 – if I drop the index t1_idesc the sys_op_descend() calls disappear and the cardinality changes to 148.

Oracle 12c behaves differently – it uses concatenation to turn the query into three separate index probes unless I add the hint /*+ first_rows */ (which I shouldn’t be using, but it’s the hint supplied by the OP on OTN). With the hint in place we get an example of the actual execution plan differing from the prediction made through explain plan:

```
12c execution plan unhinted (concatenation, and lots of sys_op_descend()):

--------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |       |       |    12 (100)|          |
|   1 |  SORT ORDER BY                        |          |   149 | 17582 |    12   (9)| 00:00:01 |
|   2 |   CONCATENATION                       |          |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |   142 | 16756 |     4   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_IDESC |     1 |       |     2   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     6 |   708 |     4   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | T1_IDESC |     1 |       |     2   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |   118 |     3   (0)| 00:00:01 |
|*  8 |     INDEX RANGE SCAN                  | T1_IASC  |     1 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("C1"='DE' AND "C3">='AA' AND "C4">='BB' AND
SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB'))
filter("C4">='BB' AND SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB' AND "C3">='AA')
6 - access("C1"='DE' AND "C3">'AA' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB'))
filter("C3">'AA' AND SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB' AND
(LNNVL("C4">='BB') OR LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR
LNNVL(SYS_OP_UNDESCEND(SYS_OP_DESCEND("C2"))>='AB') OR LNNVL("C3">='AA')))
8 - access("C1"='DE' AND "C2">'AB' AND "C2" IS NOT NULL)
filter((LNNVL("C3">'AA') OR LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR
LNNVL("C2">='AB')) AND (LNNVL("C4">='BB') OR
LNNVL(SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB')) OR LNNVL("C2">='AB') OR
LNNVL("C3">='AA')))

12c Execution plan with first_rows hint (and the sys_op_descend have gone)
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |   150 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |   150   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("C1"='DE')
filter(("C2">'AB' OR ("C3">'AA' AND "C2">='AB') OR ("C4">='BB' AND
"C2">='AB' AND "C3">='AA')))

12c Execution plan with first_rows according to Explain Plan (and they're back again)
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   148 | 17464 |   150   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |   150   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("C1"='DE')
filter("C2">'AB' AND SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') OR
"C3">'AA' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C2">='AB' OR
"C4">='BB' AND SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C2">='AB' AND
"C3">='AA')

```

On the plus side the upgrade to 12c has removed some of the sys_op_descend() appearances and seems to have fixed the cardinality anomaly when the sys_op_descend() calls do appear – but things are still going wrong in odd ways. (And this looks like another case where you could drop an index that you’re not using and see an execution plan change.)

### Footnote:

When playing around with 12c, I had to be very careful to avoid confusing the issue as SQL Plan Directives were appearing from time to time, and giving me unexpected results as I repeated some of the experiments.

### Footnote 2:

I haven’t actually addressed the question originally posed on OTN, but that can wait for another day.

## January 7, 2015

### Most Recent

Filed under: Execution plans,Oracle,Performance,subqueries — Jonathan Lewis @ 6:21 pm BST Jan 7,2015

There’s a thread on the OTN database forum at present asking for advice on optimising a query that’s trying to find “the most recent price” for a transaction given that each transaction is for a stock item on a given date, and each item has a history of prices where each historic price has an effective start date. This means the price for a transaction is the price as at the most recent date prior to the transaction date.

There is an absolutely standard way of expressing “the most recent occurrence” in SQL. Assume we have a table of (item_code, effective_date, price) with the obvious primary key of (item_code, effective_date), then a requirement to find “the most recent price for item XXXX as at 25th Dec 2014” would give us code like the following (note – all the examples in this note were run against Oracle 11.2.0.4):

```
select  *
from    prices  pri1
where   item_code = 'XXXX'
and     effective_date = (
select  max(effective_date)
from    prices  pri2
where   pri2.item_code = 'XXXX'
and     pri2.effective_date <= date'2014-12-25'
)
/

```

The ideal execution plan that we should expect to see for this query is as follows (with a small variation if you had created the prices table as an index-organized table – which would probably be sensible in many cases):

```
-----------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |     1 |    52 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID   | PRICES |     1 |    52 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN            | PRI_PK |     1 |       |     1   (0)| 00:00:01 |
|   3 |    SORT AGGREGATE              |        |     1 |    32 |            |          |
|   4 |     FIRST ROW                  |        |     1 |    32 |     2   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN (MIN/MAX)| PRI_PK |     1 |    32 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ITEM_CODE"='XXXX' AND "EFFECTIVE_DATE"= (SELECT
MAX("EFFECTIVE_DATE") FROM "PRICES" "PRI2" WHERE
"PRI2"."EFFECTIVE_DATE"<=TO_DATE(' 2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
AND "PRI2"."ITEM_CODE"='XXXX'))

5 - access("PRI2"."ITEM_CODE"='XXXX' AND "PRI2"."EFFECTIVE_DATE"<=
TO_DATE('2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

```

As you can see, this plan is using the “driving subquery” approach – the order of operation is 5, 4, 3, 2, 1, 0: we do an index min/max range scan in line 5 to find the maximum effective date for the item, then pass that up through the (essentially redundant) First Row and Sort Aggregate operations to use as an input to the index unique scan at operation 2 which passes the rowid up to operation 1 to find the specific row. In my case this was 2 consistent gets for the range scan, 2 more for the unique scan, and one for the table access.
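The min/max range scan is, in effect, a probe down one edge of the (item_code, effective_date) index to find the last entry at or before the target date. As an illustration only (hypothetical data, Python rather than SQL), the same “most recent as at” lookup against a sorted price history looks like this:

```python
import bisect
from datetime import date

# Hypothetical price history for one item, kept sorted by effective_date --
# the in-memory analogue of the (item_code, effective_date) index entries.
price_history = {
    "XXXX": [(date(2014, 1, 1), 100.0),
             (date(2014, 6, 1), 110.0),
             (date(2014, 12, 1), 120.0)],
}

def most_recent_price(item_code, as_at):
    """Return the price with the greatest effective_date <= as_at."""
    history = price_history[item_code]
    dates = [d for d, _ in history]
    # Rightmost position whose date does not exceed as_at -- the analogue
    # of the descending probe the MIN/MAX range scan performs.
    i = bisect.bisect_right(dates, as_at)
    if i == 0:
        return None          # no price was effective yet at that date
    return history[i - 1][1]

print(most_recent_price("XXXX", date(2014, 12, 25)))   # 120.0
```

The binary search touches only a handful of entries however long the history is – which is why the min/max scan costs just a couple of consistent gets.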

You might point out that my example uses the item_code ‘XXXX’ twice, once in the main query, once in the subquery; and you might decide that this was in very poor taste since we should clearly be using a correlated subquery – the correlating predicate ought to be: pri2.item_code = pri1.item_code. Here’s the execution plan I got when I made that change:

```
----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |     1 |    78 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |         |     1 |    78 |     3   (0)| 00:00:01 |
|   2 |   NESTED LOOPS               |         |     1 |    78 |     3   (0)| 00:00:01 |
|   3 |    VIEW                      | VW_SQ_1 |     1 |    26 |     2   (0)| 00:00:01 |
|*  4 |     FILTER                   |         |       |       |            |          |
|   5 |      HASH GROUP BY           |         |     1 |    32 |     2   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN       | PRI_PK  |     1 |    32 |     2   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN         | PRI_PK  |     1 |       |     0   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID| PRICES  |     1 |    52 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - filter("PRI2"."ITEM_CODE"='XXXX')
6 - access("PRI2"."ITEM_CODE"='XXXX' AND "PRI2"."EFFECTIVE_DATE"<=
TO_DATE('2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("ITEM_CODE"='XXXX' AND "EFFECTIVE_DATE"="MAX(EFFECTIVE_DATE)")

```

The plan changes dramatically: the optimizer has unnested the subquery. In my case this didn’t make any difference to the overall performance as my data set was small, I only had one or two prices per item code, and the query was very basic; but in most other cases the change could be catastrophic.

### The Problem Query

The requirement on OTN had a stock transactions (xo_stock_trans) table and a prices (xo_prices) table, and the OP had supplied some code to create and populate these tables with 6.4 million and 4.5 million rows respectively. Unfortunately the xo_prices table didn’t have a suitable unique constraint on it and ended up with lots of items having multiple prices for the same date.  The OP had created a function to return a price for an item given a driving date and price_type, and had a query that called that function three times per row (once for each of three price types); but this did not perform very well and the OP wanted to know if there was a way of addressing the requirement efficiently using pure SQL; (s)he had already tried the following:

```
select  tr.item, tr.trans_date, tr.quantity
      , pr.gross_price
      , pr.net_price
      , pr.special_price
from    xo_stock_trans tr
join    xo_prices pr on pr.item = tr.item
and     pr.price_date = (select  max(pr2.price_date)
                         from    xo_prices pr2
                         where   pr2.item = pr.item
                         and     pr2.price_date <= tr.trans_date
                        )
where   tr.trans_date between '01-AUG-2014' and '31-AUG-2014';

```

That was SO close – it’s clearly implementing the right sort of strategy – but it didn’t perform well, so let’s check the execution plan:

```
------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                    |     1 |    70 |       |   168M(100)|234:06:13 |
|   1 |  NESTED LOOPS                 |                    |     1 |    70 |       |   168M(100)|234:06:13 |
|   2 |   NESTED LOOPS                |                    |     9 |    70 |       |   168M(100)|234:06:13 |
|   3 |    NESTED LOOPS               |                    |     9 |   450 |       |   168M(100)|234:06:13 |
|   4 |     VIEW                      | VW_SQ_1            |   286 | 10010 |       |   168M(100)|234:06:11 |
|   5 |      HASH GROUP BY            |                    |   286 |  7722 |       |   168M(100)|234:06:11 |
|   6 |       MERGE JOIN              |                    |   456G|    11T|       |  9153K(100)| 12:42:50 |
|   7 |        SORT JOIN              |                    |   202K|  2960K|       |   548   (2)| 00:00:03 |
|*  8 |         INDEX RANGE SCAN      | XO_STOCK_TRANS_IX2 |   202K|  2960K|       |   548   (2)| 00:00:03 |
|*  9 |        SORT JOIN              |                    |  4045K|    46M|   154M| 19043   (6)| 00:01:36 |
|* 10 |         INDEX FAST FULL SCAN  | XO_PRICES_IX1      |  4045K|    46M|       |  1936  (10)| 00:00:10 |
|* 11 |     TABLE ACCESS BY USER ROWID| XO_STOCK_TRANS     |     1 |    15 |       |     1   (0)| 00:00:01 |
|* 12 |    INDEX RANGE SCAN           | XO_PRICES_IX1      |     1 |       |       |     2   (0)| 00:00:01 |
|  13 |   TABLE ACCESS BY INDEX ROWID | XO_PRICES          |     1 |    20 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
8 - access("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
9 - access(INTERNAL_FUNCTION("PR2"."PRICE_DATE")<=INTERNAL_FUNCTION("TR"."TRANS_DATE"))
filter(INTERNAL_FUNCTION("PR2"."PRICE_DATE")<=INTERNAL_FUNCTION("TR"."TRANS_DATE"))
10 - filter("PR2"."PRICE_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
11 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - access("ITEM_1"="PR"."ITEM" AND "PR"."PRICE_DATE"="MAX(PR2.PRICE_DATE)")
filter("PR"."ITEM"="TR"."ITEM")

```

The query was limited to August 2014, which was about 198,000 rows in my table, so we might expect some signs of a brute-force approach (tablescans and hash joins rather than indexes and nested loops) – but what we get is a high-precision approach with a very bad cardinality estimate following a brute-force unnesting of the “max(price_date)” subquery. The unnested subquery has done a range scan over 200,000 stock_trans rows and an index fast full scan on 4.5 million prices, using a merge join and hash aggregation to find the maximum price_date for each target row in the xo_stock_trans table. (See my earlier posting on table duplication for a variation and explanation of what Oracle has done here.) This step is a lot of work, but the optimizer thinks it’s going to produce only 286 rows in the aggregated result, so the next steps in the plan are indexed nested loops – which actually operate 198,000 times.
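If it helps to see the aggregation in procedural terms, here’s a little Python sketch (using a few rows of made-up data, not the OP’s) of what the unnested VW_SQ_1 view has to do: join every transaction to every earlier-or-equal price for its item, then take the max(price_date) per transaction – which is why the merge join produces such an enormous intermediate row count.

```python
from datetime import date

stock_trans = [                      # (item, trans_date)
    ("A", date(2014, 8, 5)),
    ("A", date(2014, 8, 20)),
    ("B", date(2014, 8, 10)),
]
prices = [                           # (item, price_date)
    ("A", date(2014, 7, 1)), ("A", date(2014, 8, 15)),
    ("B", date(2014, 6, 1)),
]

# the MERGE JOIN: every (transaction, price) pair with price_date <= trans_date,
# followed by the HASH GROUP BY: max(price_date) per transaction
vw_sq_1 = {}
for item, trans_date in stock_trans:
    for p_item, p_date in prices:
        if p_item == item and p_date <= trans_date:
            key = (item, trans_date)
            if key not in vw_sq_1 or p_date > vw_sq_1[key]:
                vw_sq_1[key] = p_date

print(vw_sq_1[("A", date(2014, 8, 20))])   # 2014-08-15
```

With 200,000 transactions and 4.5 million prices the equivalent of the inner loop is a 456G-row estimated join before the aggregation – which is the work the optimizer has signed up for.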

With the clue from my initial description, we need to aim for a strategy where Oracle doesn’t unnest that subquery – so let’s experiment with a basic /*+ no_unnest */ hint in the subquery and see what happens. Here’s the resulting execution plan:

```
--------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                |   527 | 18445 |       |  6602M  (1)|999:59:59 |
|*  1 |  FILTER                       |                |       |       |       |            |          |
|*  2 |   HASH JOIN                   |                |  3423M|   111G|  5336K| 76973  (90)| 00:06:25 |
|*  3 |    TABLE ACCESS FULL          | XO_STOCK_TRANS |   202K|  2960K|       |  2531  (13)| 00:00:13 |
|   4 |    TABLE ACCESS FULL          | XO_PRICES      |  4571K|    87M|       |  2275  (11)| 00:00:12 |
|   5 |   SORT AGGREGATE              |                |     1 |    12 |       |            |          |
|   6 |    FIRST ROW                  |                |     1 |    12 |       |     3   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */ MAX("PR2"."PRICE_DATE") FROM
"XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND "PR2"."ITEM"=:B2))
2 - access("PR"."ITEM"="TR"."ITEM")
3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

```

The subquery now survives, and we can see a min/max range scan in the plan – but the subquery is a filter() subquery, applied to the result of joining the 200,000 transactions to every price that applies to the item in each transaction. The optimizer thinks that this join will produce roughly 3.4 million rows, but in fact with the sample data set (which had many prices per item) the join resulted in 4.4 billion rows. The min/max subquery is as efficient as it can be, but it’s running far too often; ideally we would like it to run at most once per transaction, so why is it running late? We could try adding the /*+ push_subq */ hint to the subquery, but if we do the plan doesn’t change.
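To see why the filter() version does so much work, here’s a toy Python model (made-up data again, and ignoring Oracle’s scalar subquery caching, which may reduce the real execution count) that counts how many times the max() subquery runs when it’s applied as a late filter on the join result:

```python
from datetime import date

stock_trans = [("A", date(2014, 8, 5)), ("A", date(2014, 8, 20))]
prices = {"A": [date(2014, 7, 1), date(2014, 8, 15), date(2014, 8, 18)]}

calls = 0

def max_price_date(item, as_of):
    """Stand-in for the correlated max(price_date) subquery."""
    global calls
    calls += 1
    return max(d for d in prices[item] if d <= as_of)

result = []
for item, trans_date in stock_trans:            # 200,000 rows in the real case
    for p_date in prices[item]:                 # join to EVERY price for the item
        if p_date == max_price_date(item, trans_date):  # filter() runs per joined row
            result.append((item, trans_date, p_date))

print(calls)   # 2 transactions x 3 prices = 6 subquery executions
```

Scale the per-joined-row execution up to a 4.4 billion row join and you can see where the time goes; running the subquery once per transaction would cut the count to two in this toy example.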

Our rapid “most recent occurrence” example revolved around accessing the prices table by index while “pre-querying” for the date using a min/max subquery that already knew the relevant item code. In this case, though, we’re doing a full tablescan of the xo_prices table, so the method doesn’t apply. So let’s manipulate the query to force an indexed access path for the join to the xo_prices table by adding the hints /*+ leading(tr pr) use_nl(pr) index(pr) */ to the main body of the query. This is the resulting plan:

```
--------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                |   527 | 18445 |  6614M  (1)|999:59:59 |
|   1 |  NESTED LOOPS                   |                |  3413K|   113M|    11M  (2)| 16:29:13 |
|   2 |   NESTED LOOPS                  |                |  3413K|   113M|    11M  (2)| 16:29:13 |
|*  3 |    TABLE ACCESS FULL            | XO_STOCK_TRANS |   202K|  2960K|  2531  (13)| 00:00:13 |
|*  4 |    INDEX RANGE SCAN             | XO_PRICES_IX1  |    16 |       |    52   (2)| 00:00:01 |
|   5 |     SORT AGGREGATE              |                |     1 |    12 |            |          |
|   6 |      FIRST ROW                  |                |     1 |    12 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |     3   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | XO_PRICES      |    17 |   340 |    59   (2)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
4 - access("PR"."ITEM"="TR"."ITEM")
filter("PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */ MAX("PR2"."PRICE_DATE") FROM
"XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND "PR2"."ITEM"=:B2))
7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

```

We’re nearly there. The shape of the execution plan – lines 4 to 7, at any rate – matches the shape of the very simple example at the start of this article, and we seem to be driving from the min/max subquery at line 7. Unfortunately, when we look at the predicate section for line 4 of the plan we can see that the subquery is still a filter() subquery, not an access() subquery – it’s (nominally) being performed for every index entry in the range scan of the xo_prices index that we do for each xo_stock_trans row. What we want to see is an access() subquery – and checking the SQL we can see how to get there: the subquery currently correlates the item back to the xo_prices table, not to the xo_stock_trans table, so let’s correct that correlation. Here’s our final query (though not formatted to my preference) with execution plan:

```
select  tr.item, tr.trans_date, tr.quantity
      , pr.gross_price
      , pr.net_price
      , pr.special_price
from    xo_stock_trans tr
join    xo_prices pr on pr.item = tr.item
and     pr.price_date = (select  /*+ no_unnest */  -- hint added
                                 max(pr2.price_date)
                         from    xo_prices pr2
                         where   pr2.item = tr.item  -- correlate to tr, not pr
                         and     pr2.price_date <= tr.trans_date
                        )
where   tr.trans_date between '01-AUG-2014' and '31-AUG-2014'
;

--------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                |  3423M|   111G|  1824K  (1)| 02:32:02 |
|   1 |  NESTED LOOPS                   |                |  3413K|   113M|  1824K  (1)| 02:32:02 |
|   2 |   NESTED LOOPS                  |                |  3413K|   113M|  1824K  (1)| 02:32:02 |
|*  3 |    TABLE ACCESS FULL            | XO_STOCK_TRANS |   202K|  2960K|  2531  (13)| 00:00:13 |
|*  4 |    INDEX RANGE SCAN             | XO_PRICES_IX1  |    16 |       |     2   (0)| 00:00:01 |
|   5 |     SORT AGGREGATE              |                |     1 |    12 |            |          |
|   6 |      FIRST ROW                  |                |     1 |    12 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |     3   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | XO_PRICES      |    17 |   340 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
4 - access("PR"."ITEM"="TR"."ITEM" AND "PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */
MAX("PR2"."PRICE_DATE") FROM "XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND
"PR2"."ITEM"=:B2))
7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

```

Finally we can see (from the predicate for line 4) that we run the subquery at most once for each row from xo_stock_trans, and we use the result of each subquery execution to drive the index range scan that picks up the matching rows from xo_prices with no further filtering. The order of operation is: 3, 7, 6, 5, 4, 2, 8, 1, 0.
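As a procedural analogy (toy data, not the OP’s), the access() version behaves like a single sorted-list probe per transaction – the min/max range scan finds the largest price_date on or before the transaction date, and that value then drives the index access directly:

```python
from bisect import bisect_right
from datetime import date

# sorted price_date list per item -- standing in for the XO_PRICES_IX1 index
price_dates = {"A": [date(2014, 7, 1), date(2014, 8, 15), date(2014, 8, 18)]}

def most_recent(item, as_of):
    """Model of the MIN/MAX index range scan: largest price_date <= as_of."""
    dates = price_dates[item]
    i = bisect_right(dates, as_of)
    return dates[i - 1] if i else None

# one probe per transaction, and the result drives the access() predicate directly
print(most_recent("A", date(2014, 8, 20)))   # 2014-08-18
```

One O(log n) probe per transaction row, rather than one per joined row, is the whole of the saving.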

The only thing we can do now is decide whether the strategy of indexing into the xo_prices table 200,000 times (for our 30-day requirement) is better than a brute-force approach that does a massive join and sort, or a data-duplication approach that puts a “price end date” on each xo_prices row to avoid the need to check all the prices for an item to find the appropriate one. Ultimately the choice may depend on trading off the human development resources against the machine run-time resources, with an eye on the number of times the query runs and the size of the date range typically involved.
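For anyone considering the data-duplication option, here’s a sketch in Python (the column layout and the uniqueness assumption are mine, not the OP’s) of how a “price end date” could be derived from the existing rows – in SQL you would probably use the analytic lead() function to the same effect:

```python
from datetime import date

prices = [  # (item, price_date), assumed unique per (item, price_date)
    ("A", date(2014, 7, 1)), ("A", date(2014, 8, 15)), ("A", date(2014, 8, 18)),
]

# each price row is "closed off" by the next price_date for the same item --
# the procedural equivalent of lead(price_date) over (partition by item order by price_date)
prices.sort()
with_end = []
for i, (item, start) in enumerate(prices):
    nxt = prices[i + 1] if i + 1 < len(prices) else None
    end = nxt[1] if nxt and nxt[0] == item else None   # None = open-ended latest price
    with_end.append((item, start, end))

# a transaction dated Aug 16 now matches exactly one row by a simple range predicate
match = [r for r in with_end
         if r[0] == "A" and r[1] <= date(2014, 8, 16)
         and (r[2] is None or date(2014, 8, 16) < r[2])]
print(match)
```

With the end date in place the max() subquery disappears entirely – at the cost of maintaining the denormalised column every time a price is inserted, updated, or deleted.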

### Footnote:

There’s plenty more I could say about this query and how to handle it – but there are too many questions about the correctness of the data definition and content to make it worth pursuing in detail. You will note, however, that the various execution plans which logically should be returning the same data report dramatically different cardinalities for the final row source; if nothing else this should warn you that the optimizer may have trouble producing a good plan because its model produced a bad cardinality estimate at some point in a series of transformations.

In fact, when I first saw this query I converted it to traditional Oracle syntax (anticipating, incorrectly, a need to do something messy with hints), corrected the subquery correlation to the “obvious” choice, and put in a cardinality hint /*+ cardinality(tr 100) */ for the xo_stock_trans table – and got the execution plan that I’ve shown as the final plan above.

Tactically the correlation column is the really important bit – if that can be set up suitably we just have to work around the optimizer’s arithmetic assumptions.

My Reference: most_recent_2.sql

## January 3, 2015

### Table Duplication

Filed under: Execution plans,Oracle,subqueries — Jonathan Lewis @ 11:54 am BST Jan 3,2015

I’ve probably seen a transformation like the following before, and I may even have written about it (though if I have I can’t find the article); but it surprised me when I was experimenting with a little problem a few days ago, so I thought I’d pass it on as an example of how sophisticated the optimizer can be with query transformation. I’ll be talking about the actual problem I was working on in a later post, so I won’t give you the table and data definitions here; I’ll just show some SQL and its plan:

```
rem
rem     Script:         most_recent.sql
rem     Author:         Jonathan Lewis
rem     Dated:          2nd Jan 2015
rem

select
        tr.item, tr.trans_date, tr.quantity
      , pr.gross_price
      , pr.net_price
      , pr.special_price
from
        xo_stock_trans tr,
        xo_prices pr
where
        tr.trans_date between '01-AUG-2014' and '3-AUG-2014'
and     pr.item = tr.item
and     pr.price_date = (
                select  max(pr2.price_date)
                from    xo_prices pr2
                where   pr2.item = tr.item
                and     pr2.price_date <= tr.trans_date
        )
;

```

The code is a fairly standard expression of “find me the most recent price available for each stock item as at the stock date of that item”. As you can see I’ve referenced the stock table once and the pricing table twice – the second appearance being in a “max()” correlated subquery. Oracle has decided to unnest the subquery – but spot the interesting detail in the plan:

```
-----------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |    14M|   748M|       | 77955  (84)| 00:06:30 |
|*  1 |  HASH JOIN                 |                |    14M|   748M|    37M| 77955  (84)| 00:06:30 |
|*  2 |   HASH JOIN                |                |   829K|    28M|       | 70867  (92)| 00:05:55 |
|   3 |    JOIN FILTER CREATE      | :BF0000        | 25274 |   370K|       |  2530  (13)| 00:00:13 |
|*  4 |     TABLE ACCESS FULL      | XO_STOCK_TRANS | 25274 |   370K|       |  2530  (13)| 00:00:13 |
|   5 |    VIEW                    | VW_SQ_1        |   210M|  4206M|       | 64135  (94)| 00:05:21 |
|   6 |     HASH GROUP BY          |                |   210M|  5408M|       | 64135  (94)| 00:05:21 |
|   7 |      JOIN FILTER USE       | :BF0000        |   210M|  5408M|       | 11807  (67)| 00:01:00 |
|*  8 |       HASH JOIN            |                |   210M|  5408M|       | 11807  (67)| 00:01:00 |
|*  9 |        TABLE ACCESS FULL   | XO_STOCK_TRANS | 25274 |   370K|       |  2530  (13)| 00:00:13 |
|* 10 |        INDEX FAST FULL SCAN| XO_PRICES_IX1  |  3918K|    44M|       |  1936  (10)| 00:00:10 |
|  11 |   TABLE ACCESS FULL        | XO_PRICES      |  4571K|    87M|       |  2275  (11)| 00:00:12 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("PR"."ITEM"="TR"."ITEM" AND "PR"."PRICE_DATE"="MAX(PR2.PRICE_DATE)")
2 - access("ITEM_1"=ROWID)
4 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - access("PR2"."ITEM"="TR"."ITEM")
       filter("PR2"."PRICE_DATE"<="TR"."TRANS_DATE")
   9 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
       AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
10 - filter("PR2"."PRICE_DATE"<=TO_DATE(' 2014-08-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

```

I was running 11.2.0.4 at the time, which is why we can get a serial Bloom filter on the hash join (though, perhaps, only for non-mergeable aggregate views) – and it’s interesting to see that the filter has been pushed inside the view operator; but the really interesting part of the plan is the second appearance of the XO_STOCK_TRANS table.

My correlated subquery returns a value that is used in a comparison with a column in the XO_PRICES table, but the correlation predicates referred back to the XO_STOCK_TRANS table, so the optimizer has added a copy of XO_STOCK_TRANS to the subquery as it unnested it.
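Here’s a toy Python model (made-up rows, not my real data set) of the transformed shape: the second copy of XO_STOCK_TRANS appears inside the aggregate view, keyed by the transaction rowid – which is the `"ITEM_1"=ROWID` predicate you can see at operation 2 of the plan:

```python
from datetime import date

stock_trans = {  # rowid -> (item, trans_date)
    "AAA1": ("A", date(2014, 8, 5)),
    "AAA2": ("A", date(2014, 8, 20)),
}
prices = [("A", date(2014, 7, 1)), ("A", date(2014, 8, 15))]

# VW_SQ_1: the DUPLICATED copy of xo_stock_trans joined to xo_prices,
# grouped by the transaction rowid (since that's the correlation source)
vw_sq_1 = {}
for rid, (item, trans_date) in stock_trans.items():     # second scan of the table
    dates = [d for it, d in prices if it == item and d <= trans_date]
    if dates:
        vw_sq_1[rid] = (item, max(dates))

# join back to the original xo_stock_trans on rowid ("ITEM_1"=ROWID)
result = {rid: vw_sq_1[rid] for rid in stock_trans if rid in vw_sq_1}
print(result["AAA2"])
```

The duplication is what lets the optimizer unnest a subquery whose correlation columns don’t come from the table the subquery’s result is compared against.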

I’ve written several examples of how we can optimise SQL manually by rewriting it to introduce extra copies of some of the tables (typically in a fashion analogous to the optimizer’s mechanism for star transformations), so it’s nice to see another variation on the theme of the optimizer using table duplication to optimise a statement.

### Footnote:

The execution plan in 10.2.0.5 is slightly different, but it still unnests the subquery, introducing a second occurrence of XO_STOCK_TRANS as it does so.
