Oracle Scratchpad

February 16, 2014

Recursive subquery factoring

Filed under: Hints,Ignoring Hints,Oracle,Subquery Factoring,Tuning — Jonathan Lewis @ 6:11 pm BST Feb 16,2014

This is possibly my longest title to date – I try to keep them short enough to fit the right hand column of the blog without wrapping – but I couldn’t think of a good way to shorten it. (Personally I prefer the expression CTE – common table expression – over “factored subquery”, “subquery factoring” or “with subquery”, and that would have achieved my goal, but it might not have meant anything to most people.)

If you haven’t come across them before, recursive CTEs appeared in 11.2, are in the ANSI standard, and are (probably) viewed by Oracle as the strategic replacement for “connect by” queries. Here’s a simple (and silly) example:


with data(p) as (
	select 1 p from dual
	union all
	select p + 1 from data where p < 100
)
select	p
from	data
where	rownum <= 10
;

         P
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 37253879

---------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |      |     2 |    26 |     4   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                             |      |       |       |            |          |
|   2 |   VIEW                                     |      |     2 |    26 |     4   (0)| 00:00:01 |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|      |       |       |            |          |
|   4 |     FAST DUAL                              |      |     1 |       |     2   (0)| 00:00:01 |
|*  5 |     RECURSIVE WITH PUMP                    |      |       |       |            |          |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   5 - filter("P"<100)

A recursive CTE has three features that identify it: first, the query alias (“data” in this case) is followed by a list of column aliases; second, the query includes a UNION ALL; and third, the second subquery in the UNION ALL references the query alias – i.e. it’s the recursive bit. There are other optional bits but I’m not planning to go into those – all I want to talk about is how to control the materialization (or not) of a recursive CTE through hinting.
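
To make that concrete, here’s the example above restated with comments marking each of the three features:

with data(p) as (               -- 1: list of column aliases after the query name
	select 1 p from dual
	union all               -- 2: a UNION ALL
	select p + 1            -- 3: the second branch references "data" itself
	from	data
	where	p < 100
)
select	p
from	data
where	rownum <= 10
;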

The reason I wrote this note is that Jeff Jacobs, in his presentation on “Performance Anti-patterns” at RMOUG last week, raised the question of whether or not the /*+ materialize */ and /*+ inline */ hints work with recursive CTEs, and gave an example of a UNION ALL query where the CTE always materialized, no matter how you applied the /*+ inline */ hint. The CTE seemed to be following the basic guideline for CTEs – if you use it once in the main query it goes inline, if you use it more than once it will (almost invariably) materialize.
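
For a simple non-recursive CTE the hints normally go inside the CTE’s own query block and behave as expected – a minimal sketch (using the emp table from the test case further down the page), relying on the usual behaviour of the materialize and inline hints:

with e as (
	select /*+ materialize */ deptno from emp
)
select deptno, count(*) from e group by deptno
;

with e as (
	select /*+ inline */ deptno from emp
)
select	e1.deptno
from	e e1, e e2
where	e2.deptno = e1.deptno
;

The first query would normally be materialized because of the hint even though the CTE is used only once; the second would normally be written back inline even though the CTE is used twice.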

I’m always interested in examples where “the hint is ignored”, so I exchanged a couple of email messages with Jeff and he sent me an example (which I’ve simplified for this blog) of a query that demonstrated the issue. After spending a little while thinking about it I decided that it simply wasn’t possible to hint the code the way we wanted to, and that it was just one of those cases where it takes a bit of time for new features to catch up and fit into the standard framework. Here’s a simplified version of the query, with its execution plan:

with data(p) as (
	select 1 p from dual
	union all
	select p + 1 from data where p < 100
)
select	p
from	data
where	rownum <= 10
union all
select	p
from	data
where	rownum <= 10
;

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                            |     4 |    52 |     4   (0)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION                 |                            |       |       |            |          |
|   2 |   LOAD AS SELECT                           | SYS_TEMP_0FD9D6608_7391CD7 |       |       |            |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|                            |       |       |            |          |
|   4 |     FAST DUAL                              |                            |     1 |       |     2   (0)| 00:00:01 |
|*  5 |     RECURSIVE WITH PUMP                    |                            |       |       |            |          |
|   6 |   UNION-ALL                                |                            |       |       |            |          |
|*  7 |    COUNT STOPKEY                           |                            |       |       |            |          |
|   8 |     VIEW                                   |                            |     2 |    26 |     2   (0)| 00:00:01 |
|   9 |      TABLE ACCESS FULL                     | SYS_TEMP_0FD9D6608_7391CD7 |     2 |    12 |     2   (0)| 00:00:01 |
|* 10 |    COUNT STOPKEY                           |                            |       |       |            |          |
|  11 |     VIEW                                   |                            |     2 |    26 |     2   (0)| 00:00:01 |
|  12 |      TABLE ACCESS FULL                     | SYS_TEMP_0FD9D6608_7391CD7 |     2 |    12 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("P"<100)
   7 - filter(ROWNUM<=10)
  10 - filter(ROWNUM<=10)

The following morning I woke up with one of those “overnight insights” where you seem to have worked out the answer in your sleep. To make a hint work you have to put it in the right query block, or you have to reference the right query block by name from the main query block: in this case the right query block doesn’t exist in the text, and it’s not possible to work out what its name would be if it did come into existence.
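
For ordinary query blocks the “name the right query block” mechanism works through the qb_name hint and the global (query-block-qualified) hint syntax – a generic sketch, unrelated to the recursive case, assuming a table t1:

select	/*+ full(@sub t1) */
	*
from	(
	select /*+ qb_name(sub) */ * from t1
	)
;

The problem here, as described above, is that the query block that matters doesn’t exist in the text, so there’s nowhere to attach such a name.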

If you try putting the /*+ inline */ hint into the query after the select at line 2 above, you’ve put the hint into the first query block of a UNION ALL, NOT into the query block of the recursive CTE.

Having identified the problem, the solution (or at least, a possible solution) was obvious – create the query block you need. This (with its execution plan from 11.2.0.4) is what worked:

with data(p) as (
	select 1 p from dual
	union all
	select p + 1 from data where p < 100
),
data1 as (
	select /*+ inline */ * from data
)
select	p
from	(
	select * from data1 where rownum <= 10
	union all
	select * from data1 where rownum <= 10
	)
;

-----------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |      |     4 |    52 |     8   (0)| 00:00:01 |
|   1 |  VIEW                                        |      |     4 |    52 |     8   (0)| 00:00:01 |
|   2 |   UNION-ALL                                  |      |       |       |            |          |
|*  3 |    COUNT STOPKEY                             |      |       |       |            |          |
|   4 |     VIEW                                     |      |     2 |    26 |     4   (0)| 00:00:01 |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|      |       |       |            |          |
|   6 |       FAST DUAL                              |      |     1 |       |     2   (0)| 00:00:01 |
|*  7 |       RECURSIVE WITH PUMP                    |      |       |       |            |          |
|*  8 |    COUNT STOPKEY                             |      |       |       |            |          |
|   9 |     VIEW                                     |      |     2 |    26 |     4   (0)| 00:00:01 |
|  10 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|      |       |       |            |          |
|  11 |       FAST DUAL                              |      |     1 |       |     2   (0)| 00:00:01 |
|* 12 |       RECURSIVE WITH PUMP                    |      |       |       |            |          |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(ROWNUM<=10)
   7 - filter("P"<100)
   8 - filter(ROWNUM<=10)
  12 - filter("P"<100)

All I’ve done is create a second CTE that selects from the first CTE. This is now a simple select, so I can add a perfectly placed hint to it – in the hope that this would effectively require the dependent recursive CTE to be inlined inside it. It seems to be sufficient.

I haven’t tested the strategy exhaustively – so I can give you no guarantee that this has to work – and unfortunately I did have another example that I applied the method to, and after several seconds of no response it crashed with an ORA-00600 error :( but that might have been a side effect of the nature of the query (it included a couple of the optional extras) rather than a specific feature of inlining.

[Further reading on "ignoring hints"]

[Further reading on "subquery factoring"]

February 14, 2014

12c Subquery Factoring

Filed under: 12c,Oracle,Subquery Factoring,Tuning — Jonathan Lewis @ 11:44 am BST Feb 14,2014

From time to time I’ve posted a reminder that subquery factoring (“with subquery”) can give you changes in execution plans even if the subquery that you’ve taken out of line is written back inline by Oracle rather than being materialized. This can still happen in 12c – here’s a sample query in the two forms with the result sets and execution plans.  First, the “factored” version:


with e as (
	select
		deptno
	from	emp
	order by
		deptno
)
select
	deptno, count(*)
from
	e
group by deptno
;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30          6

3 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1536940522

-------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     3 |     9 |     6 |
|   1 |  SORT GROUP BY NOSORT|      |     3 |     9 |     6 |
|   2 |   VIEW               |      |    14 |    42 |     6 |
|   3 |    SORT ORDER BY     |      |    14 |    42 |     6 |
|   4 |     TABLE ACCESS FULL| EMP  |    14 |    42 |     2 |
-------------------------------------------------------------

Note that the view has not been materialized (no “TEMP TABLE TRANSFORMATION”). There is a “sort group by” with NOSORT, and the data has been reported in sorted order as a side effect of the “sort order by” in line 3.

Now with the inline view manually written in place:


select   deptno, count(*)
from (
	select
		/*+ gather_plan_statistics */
		deptno
	from	emp
	order by
		deptno
) e
group by deptno
;

    DEPTNO   COUNT(*)
---------- ----------
        30          6
        20          5
        10          3

3 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4067220884

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     3 |     9 |     6 |
|   1 |  HASH GROUP BY     |      |     3 |     9 |     6 |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |    42 |     2 |
-----------------------------------------------------------

In principle, the optimizer should have rewritten the first query into the second form and optimized it – but there’s clearly something out of order in the optimisation. As you can see, the result is no longer in the same order (but we don’t expect any specific order, of course, since we didn’t end with an “order by” clause) and the aggregation step is a hash group by rather than a sort group by.

Footnote:

I know that I got this example off someone else, but all I’ve written into my script is a URL, and it’s the wrong URL – my clipboard buffer seems to lag a little sometimes when I’m copying from the host to a virtual machine. The URL I’ve got is to an article by Steven Pinker in the New Republic, and it had nothing to do with Oracle (though it was comparing science and the humanities).

If you’re the original source of the example, please let me know so that I can add an acknowledgement.

[Further reading on "subquery factoring"]

Test case

If you would like to reproduce the example, here’s the script – it’s a simple hack of the Scott emp/dept schema.

CREATE TABLE EMP (
	EMPNO		NUMBER(4) NOT NULL,
        ENAME		VARCHAR2(10),
        JOB		VARCHAR2(9),
        MGR		NUMBER(4),
        HIREDATE	DATE,
        SAL		NUMBER(7, 2),
        COMM		NUMBER(7, 2),
        DEPTNO		NUMBER(2)
);

insert into emp values
        (7369, 'SMITH',  'CLERK',     7902,
        to_date('17-DEC-1980', 'DD-MON-YYYY'),  800, NULL, 20);

insert into emp values
        (7499, 'ALLEN',  'SALESMAN',  7698,
        to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);

insert into emp values
        (7521, 'WARD',   'SALESMAN',  7698,
        to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);

insert into emp values
        (7566, 'JONES',  'MANAGER',   7839,
        to_date('2-APR-1981', 'DD-MON-YYYY'),  2975, NULL, 20);

insert into emp values
        (7654, 'MARTIN', 'SALESMAN',  7698,
        to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);

insert into emp values
        (7698, 'BLAKE',  'MANAGER',   7839,
        to_date('1-MAY-1981', 'DD-MON-YYYY'),  2850, NULL, 30);

insert into emp values
        (7782, 'CLARK',  'MANAGER',   7839,
        to_date('9-JUN-1981', 'DD-MON-YYYY'),  2450, NULL, 10);

insert into emp values
        (7788, 'SCOTT',  'ANALYST',   7566,
        to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);

insert into emp values
        (7839, 'KING',   'PRESIDENT', NULL,
        to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);

insert into emp values
        (7844, 'TURNER', 'SALESMAN',  7698,
        to_date('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);

insert into emp values
        (7876, 'ADAMS',  'CLERK',     7788,
        to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);

insert into emp values
        (7900, 'JAMES',  'CLERK',     7698,
        to_date('3-DEC-1981', 'DD-MON-YYYY'),   950, NULL, 30);

insert into emp values
        (7902, 'FORD',   'ANALYST',   7566,
        to_date('3-DEC-1981', 'DD-MON-YYYY'),  3000, NULL, 20);

insert into emp values
        (7934, 'MILLER', 'CLERK',     7782,
        to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 10);

CREATE TABLE DEPT(
	DEPTNO	NUMBER(2),
        DNAME	VARCHAR2(14),
        LOC	VARCHAR2(13)
);

insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH',   'DALLAS');
insert into dept values (30, 'SALES',      'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

commit;

alter table emp add constraint emp_pk primary key(empno);
alter table dept add constraint dept_pk primary key(deptno);

February 5, 2014

Minus

Filed under: Execution plans,Oracle,Troubleshooting,Tuning — Jonathan Lewis @ 5:42 pm BST Feb 5,2014

Here’s a little script to demonstrate an interesting observation that appeared in my email this morning (that’s morning Denver time):

create table t1
as
select * from all_objects where rownum = 1;

delete from t1;
commit;

create table t2
as
select * from all_objects where rownum <= 100000;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T2',
		method_opt	 => 'for all columns size 1'
	);
end;
/

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

prompt  ======================
prompt  And now the test query
prompt  ======================

select * from t1
minus
select * from t2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set events '10046 trace name context off';

Clearly the first query block in the test query will return no rows, and since the MINUS operator returns rows from the first result set that do not appear in the second result set there is no need for Oracle to run the second query block. Well, guess what …

The “create where rownum = 1” followed by “delete” is a lazy workaround to avoid side effects of deferred segment creation so that you can run the script on any (recent) version of Oracle. The flush, combined with 10046 trace, allowed me to see waits that showed which objects Oracle scanned and when, and the display_cursor() was just icing on the cake.

I’ve checked 11.2.0.4 and 12.1.0.1, and both of them scan t1 first and then scan t2 unnecessarily.

This surprised me slightly given how smart the optimizer can be, but I guess it’s one of those boundary cases where the optimizer has just one strategy for an entire class of queries. I couldn’t think of any “legal” way to control the effect, but here’s the first dirty trick that came to my mind. If you’re sure that the first subquery is going to be cheap and you’re worried that the second subquery is expensive, you could do the following:

select v2.*
from
	(select * from t1 where rownum = 1)	v1,
	(
		select * from t1
		minus
		select * from t2
	)	v2
;

Introduce a spurious query to return one row from the first subquery and join it to the MINUS query. If the inline view doesn’t return any rows Oracle short-circuits the join, as shown by the following execution path with stats:

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |      1 |        |      0 |00:00:00.01 |       2 |     32 |       |       |          |
|   1 |  MERGE JOIN CARTESIAN  |      |      1 |      1 |      0 |00:00:00.01 |       2 |     32 |       |       |          |
|   2 |   VIEW                 |      |      1 |      1 |      0 |00:00:00.01 |       2 |     32 |       |       |          |
|*  3 |    COUNT STOPKEY       |      |      1 |        |      0 |00:00:00.01 |       2 |     32 |       |       |          |
|   4 |     TABLE ACCESS FULL  | T1   |      1 |      1 |      0 |00:00:00.01 |       2 |     32 |       |       |          |
|   5 |   BUFFER SORT          |      |      0 |      1 |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|   6 |    VIEW                |      |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   7 |     MINUS              |      |      0 |        |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   8 |      SORT UNIQUE       |      |      0 |      1 |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|   9 |       TABLE ACCESS FULL| T1   |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  10 |      SORT UNIQUE       |      |      0 |  70096 |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|  11 |       TABLE ACCESS FULL| T2   |      0 |  70096 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(ROWNUM=1)

The only thing to watch out for is that the “rownum = 1” doesn’t make the optimizer switch to an unsuitable “first_rows(1)” execution plan.
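
One way to guard against that (untested here, but the hint itself is standard) would be to add an explicit all_rows hint to the query, e.g.:

select	/*+ all_rows */
	v2.*
from
	(select * from t1 where rownum = 1)	v1,
	(
		select * from t1
		minus
		select * from t2
	)	v2
;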

January 26, 2014

Pagination

Filed under: Bugs,Indexing,Oracle,Troubleshooting,Tuning — Jonathan Lewis @ 12:08 pm BST Jan 26,2014

I was involved in a thread on Oracle-L recently that started with the question: “How many LIOs is too many LIOs?”. Rather than rewrite the whole story, I’ve supplied a list of links to the contributions I made, in order – the final “answer” is actually the answer to a different question, but it travels an interesting path to get there.

I’ve got a script to emulate the requirement so that people can see for themselves the bug that I mention in post 15; I’ll try to add a couple of notes to it and publish it some time, but for the moment I’ll just remind myself that it’s called (slightly counter-intuitively) no_sort_problem.sql.

December 11, 2013

Null Quiz

Filed under: Oracle,Performance,Troubleshooting,Tuning — Jonathan Lewis @ 6:42 pm BST Dec 11,2013

Here’s an example I saw a few months ago of the confusion caused by NULL. As the owner of the problem put it: the first query, run from SQL*Plus for testing purposes, takes no time to complete; but when “put into a pl/sql cursor” (as shown in the second query) it takes ages to complete.

(more…)

December 10, 2013

Subquery

Filed under: Oracle,subqueries,Tuning — Jonathan Lewis @ 6:26 pm BST Dec 10,2013

How not to write subqueries:

(more…)

September 7, 2013

Hash Joins

Filed under: CBO,Execution plans,Hints,Oracle,Tuning — Jonathan Lewis @ 12:53 pm BST Sep 7,2013

I’ve written notes about the different joins in the past – but such things are always worth revisiting, so here’s an accumulated bundle of comments about hash joins.

A hash join takes two inputs that (in most of the Oracle literature) are referred to as the “build table” and the “probe table”. These rowsources may be extracts from real tables, or indexes, or might be result sets from previous joins. Oracle uses the “build table” to build a hash table in memory, consuming and using the rowsource in a single call; it then consumes the “probe table” one row at a time, probing the in-memory hash table to find a match. Access to the hash table is made efficient by a hashing function applied to the join columns – rows with the same value on the join columns end up hashing to the same place in the hash table. It is possible for different input values to produce the same hash value (a hash collision), so Oracle still has to check the actual values once it has identified “probable” joins in the hash table. Because the comparison is based on a hash value, hash joins cannot be used for range-based comparisons.
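
As a minimal illustration (using the emp and dept tables from the test case earlier on this page), a query like the following will typically show up as a HASH JOIN with dept as the build table and emp as the probe table – the leading and use_hash hints are there only to make the join method and build table explicit:

select	/*+ leading(d) use_hash(e) */
	d.dname, e.ename
from	dept d, emp e
where	e.deptno = d.deptno
;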
(more…)

August 9, 2013

12c Join Views

Filed under: 12c,Oracle,Performance,Tuning — Jonathan Lewis @ 6:36 pm BST Aug 9,2013

There are a couple of posts on the blog describing problems with updateable join views or, to be more precise, join views which were key-preserved but which the optimizer did not recognize as key-preserved. Both scenarios are addressed in 12c:

August 6, 2013

12c subquery factoring

Filed under: 12c,Bugs,Oracle,Subquery Factoring,Tuning — Jonathan Lewis @ 8:08 am BST Aug 6,2013

I’ve written a few notes about anomalies in subquery factoring (with subquery) in the past, principally making a fuss about the fact that moving an inline view into a “with subquery” can cause a plan to change even when the internal code moves the subquery back inline. With the arrival of 12c one of my first sets of tests was to rerun all the examples to see how many of them had been addressed. I hadn’t written about as many examples as I had thought, and some of them had been fixed before 12c, but here are a few references to a couple of outstanding items that I thought worth a mention:

[Further reading on "subquery factoring"]

July 10, 2013

12c First N

Filed under: 12c,Oracle,trace files,Tuning — Jonathan Lewis @ 5:59 pm BST Jul 10,2013

There have been a couple of nice posts about the “Top N” (or First N / Next N) syntax that has appeared in 12c, here and here, for example. I particularly like the first set of examples because they include some execution plans that give you a good idea of what’s going on under the covers. “Under the covers” is important, because if you don’t actually have a large data set to test on you might not realise what impact a “Top N” query might have on a production data set.
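
For reference, the 12c row-limiting clause looks like this – a minimal sketch against the emp table from the test case above, showing a “First N” and a “Next N” form:

select	ename, sal
from	emp
order by
	sal desc
fetch first 5 rows only
;

select	ename, sal
from	emp
order by
	sal desc
offset 5 rows fetch next 5 rows only
;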

(more…)

June 7, 2013

Same Plan

Filed under: CBO,Execution plans,Oracle,Tuning — Jonathan Lewis @ 5:11 pm BST Jun 7,2013

An interesting little problem appeared on the Oracle-L mailing list earlier this week – a query ran fairly quickly when statistics hadn’t been collected on the tables, but then ran rather slowly after stats collection even though the plan hadn’t changed, and the tkprof results were there to prove the point. Here are the two outputs (edited slightly for width – the original showed three sets of row stats, the 1st, avg and max, but since the query had only been run once the three columns showed the same results in each case):


 Rows (max)  Row Source Operation
 ----------  ---------------------------------------------------
          0  UPDATE  CXT_FAKT_PROVISIONSBUCHUNG (cr=2039813 pr=3010 pw=0 time=47745718 us)
      15456   TABLE ACCESS FULL CXT_FAKT_PROVISIONSBUCHUNG (cr=1328 pr=1325 pw=0 time=40734 us cost=370 size=880992 card=15456)
      11225   VIEW  CXV_HAUPT_VU_SPARTE (cr=2038477 pr=1684 pw=0 time=47297497 us cost=10 size=4293 card=1)
      11225    SORT UNIQUE (cr=2038477 pr=1684 pw=0 time=47284436 us cost=10 size=824 card=1)
     126457     VIEW  (cr=2038167 pr=1669 pw=0 time=27753402 us cost=9 size=824 card=1)
     126457      WINDOW SORT (cr=2038167 pr=1669 pw=0 time=27667835 us cost=9 size=853 card=1)
     126457       WINDOW SORT (cr=2038167 pr=1669 pw=0 time=26884699 us cost=9 size=853 card=1)
     126457        NESTED LOOPS  (cr=2038167 pr=1669 pw=0 time=26342292 us)
     126457         NESTED LOOPS  (cr=1995241 pr=1615 pw=0 time=26192173 us cost=7 size=853 card=1)
     141581          NESTED LOOPS  (cr=642683 pr=1066 pw=0 time=3039331 us cost=5 size=499 card=1)
      11225           TABLE ACCESS BY INDEX ROWID POP_INFO (cr=22473 pr=32 pw=0 time=109767 us cost=2 size=16 card=1)
      11225            INDEX UNIQUE SCAN PK_POP_INFO (cr=11248 pr=4 pw=0 time=51796 us cost=1 size=0 card=1)(object id 1790009)
     141581           TABLE ACCESS BY INDEX ROWID TMP_VU_SPARTE (cr=620210 pr=1034 pw=0 time=2889850 us cost=3 size=483 card=1)
    1732978            INDEX RANGE SCAN IDX_TMP_VU_SPARTE (cr=140952 pr=204 pw=0 time=2094982 us cost=2 size=0 card=1)(object id 1795724)
     126457          INDEX RANGE SCAN IDX_TMP_VU_SPARTE (cr=1352558 pr=549 pw=0 time=23078816 us cost=1 size=0 card=1)(object id 1795724)
     126457         TABLE ACCESS BY INDEX ROWID TMP_VU_SPARTE (cr=42926 pr=54 pw=0 time=94791 us cost=2 size=354 card=1)

 Rows (max)  Row Source Operation
 ----------  ---------------------------------------------------
          0  UPDATE  CXT_FAKT_PROVISIONSBUCHUNG (cr=89894995 pr=1701 pw=0 time=318766975 us)
      15456   TABLE ACCESS FULL CXT_FAKT_PROVISIONSBUCHUNG (cr=1328 pr=1031 pw=0 time=46975 us cost=370 size=880992 card=15456)
      11225   VIEW  CXV_HAUPT_VU_SPARTE (cr=89893656 pr=670 pw=0 time=1553653734 us cost=11 size=4293 card=1)
      11225    SORT UNIQUE (cr=89893656 pr=670 pw=0 time=1553640071 us cost=11 size=419 card=1)
     126457     VIEW  (cr=89893656 pr=670 pw=0 time=1533733864 us cost=10 size=419 card=1)
     126457      WINDOW SORT (cr=89893656 pr=670 pw=0 time=1533646166 us cost=10 size=155 card=1)
     126457       WINDOW SORT (cr=89893656 pr=670 pw=0 time=1532847028 us cost=10 size=155 card=1)
     126457        NESTED LOOPS  (cr=89893656 pr=670 pw=0 time=1532238656 us)
    3501658         NESTED LOOPS  (cr=89167767 pr=665 pw=0 time=1529652013 us cost=8 size=155 card=1)
    5300707          NESTED LOOPS  (cr=1339312 pr=480 pw=0 time=9657987 us cost=5 size=81 card=1)
      11225           TABLE ACCESS BY INDEX ROWID POP_INFO (cr=22473 pr=32 pw=0 time=119070 us cost=2 size=16 card=1)
      11225            INDEX UNIQUE SCAN PK_POP_INFO (cr=11248 pr=3 pw=0 time=54707 us cost=1 size=0 card=1)(object id 1790009)
    5300707           TABLE ACCESS BY INDEX ROWID TMP_VU_SPARTE (cr=1316839 pr=448 pw=0 time=8359987 us cost=3 size=65 card=1)
    5300707            INDEX RANGE SCAN IDX_TMP_VU_SPARTE (cr=140971 pr=87 pw=0 time=3603882 us cost=2 size=0 card=1)(object id 1795724)
    3501658          INDEX RANGE SCAN IDX_TMP_VU_SPARTE (cr=87828455 pr=185 pw=0 time=1518016475 us cost=2 size=0 card=1)(object id 1795724)
     126457         TABLE ACCESS BY INDEX ROWID TMP_VU_SPARTE (cr=725889 pr=5 pw=0 time=1829196 us cost=3 size=74 card=1)

As you can see, the first run took 48 seconds (time=47,745,718 us in the first line) while the second execution took 319 seconds (time=318,766,975 us). If you check the execution plans carefully they appear to be the same plan and, in fact, the trace file showed that the two plans had the same plan hash value. Clearly, though, they do vastly different amounts of work – the most eye-catching detail, perhaps, is the way the bad plan blows the row count up to 5 million before collapsing it back to 126,000. What do you have to do to get the change in performance (and it’s a totally reproducible change)? Create or drop stats: if you have stats on the tables you get the slow execution, if you delete the stats you get the fast execution.

So what’s the problem? Look carefully at the plan(s) – they’re not actually the same plan; you can’t see the difference, you can only see clues that they must be different. Notice in the last four lines that you access the same table (TMP_VU_SPARTE) twice using the same index (IDX_TMP_VU_SPARTE) – when you collect stats you access the two copies of the table in the opposite order, and that makes a difference to the work you do.

To demonstrate the point I’ve created a simplified model of the problem, based on some extra information supplied in the mail thread. The model requires a correlated update, based on a view which joins a table to itself, and range-based predicates. Here’s the data generation:


execute dbms_random.seed(0)

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1e4)) contract,
	sysdate - 300 + trunc(dbms_random.value(0,300))	date_from,
	sysdate - 300 + trunc(dbms_random.value(0,300))	date_to,
	sysdate - 300 + trunc(dbms_random.value(0,300))	created,
	sysdate - 300 + trunc(dbms_random.value(0,300))	replaced
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5
;

create index t1_i1 on t1(contract, date_from, date_to);

create or replace view v1
as
select
	t1a.*
from
	t1	t1a,
	t1	t1b
where
	t1b.contract    = t1a.contract
and	t1b.date_from  <= t1a.date_from 
and	t1b.date_to    >  t1a.date_from
and	t1b.created    <  t1a.replaced 
and	t1b.replaced   >  t1a.created
;

rem
rem	We are going to update t2 from v1 using
rem	equality on all columns in the index
rem

create table t2
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1e4)) 		contract,
	sysdate - 300 + trunc(dbms_random.value(0,300))	date_from,
	sysdate - 300 + trunc(dbms_random.value(0,300))	date_to,
	sysdate - 300 + trunc(dbms_random.value(0,300))	replaced
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5 ; 

begin 	dbms_stats.gather_table_stats(
 		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 =>'for all columns size 1'
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T2',
		method_opt	 =>'for all columns size 1'
	);

end;
/

I haven’t crafted my data particularly carefully and, from the point of view of realism, the content is a little bizarre (I’ve got “to” dates earlier than “from” dates, for example) – but all I’m interested in is getting the right sort of shape to demonstrate a point about the plan; I’m not trying to model the actual variation in activity.

I’ve created a view with a self-join that starts with equality on the first column of the index I’ve created, but uses range-based predicates on the other columns in the table – that’s probably quite important as far as the real-world performance was concerned, partly because of the nature of the data (interesting skews when comparing “valid to/from” dates) and partly because Oracle is going to have to use its 5% guess for the selectivity of range-based join predicates.

And now the update – two versions with their execution plans; starting with the plan where the correlated subquery visits t1 aliased as t1a first:

explain plan for
update t2 set
	replaced = (
		select
			/*+ leading(v1.t1a v1.t1b) */
			max(replaced)
		from	v1
		where
			v1.contract = t2.contract
		and	v1.date_from = t2.date_from
		and	v1.date_to = t2.date_to
	)
;

select * from table(dbms_xplan.display(null,null,'alias'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------
Plan hash value: 2328412027

-----------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                |       |   100K|  2734K|   500K (20)| 00:41:41 |
|   1 |  UPDATE                         | T2    |       |       |            |          |
|   2 |   TABLE ACCESS FULL             | T2    |   100K|  2734K|    64   (8)| 00:00:01 |
|   3 |   SORT AGGREGATE                |       |     1 |    72 |            |          |
|   4 |    NESTED LOOPS                 |       |       |       |            |          |
|   5 |     NESTED LOOPS                |       |     1 |    72 |     4   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID| T1    |     1 |    36 |     2   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN          | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN           | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
|*  9 |     TABLE ACCESS BY INDEX ROWID | T1    |     1 |    36 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - UPD$1
   2 - UPD$1        / T2@UPD$1
   3 - SEL$F5BB74E1
   6 - SEL$F5BB74E1 / T1A@SEL$2
   7 - SEL$F5BB74E1 / T1A@SEL$2
   8 - SEL$F5BB74E1 / T1B@SEL$2
   9 - SEL$F5BB74E1 / T1B@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("T1A"."CONTRACT"=:B1 AND "T1A"."DATE_FROM"=:B2 AND
              "T1A"."DATE_TO"=:B3)
   8 - access("T1B"."CONTRACT"=:B1 AND "T1B"."DATE_TO">:B2 AND
              "T1B"."DATE_FROM"<=:B3)
       filter("T1B"."DATE_TO">:B1 AND "T1B"."CONTRACT"="T1A"."CONTRACT" AND
              "T1B"."DATE_FROM"<="T1A"."DATE_FROM" AND "T1B"."DATE_TO">"T1A"."DATE_FROM")
   9 - filter("T1B"."CREATED"<"T1A"."REPLACED" AND 
              "T1B"."REPLACED">"T1A"."CREATED")

Notice that there’s nothing in the body of the plan that tells you which copy of t1 is visited first – you can’t tell unless you look carefully at the predicate information, or unless you’ve requested the alias section, which lets you see very easily that the t1 at line 6 (the first one accessed) is aliased as t1a and the t1 at line 9 is aliased as t1b. While you’re at it, check the plan hash value from the top of the plan output.

Now the plan when we hint the two tables into the reverse order:

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------
Plan hash value: 2328412027

-----------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                |       |   100K|  2734K|   600K (17)| 00:50:01 |
|   1 |  UPDATE                         | T2    |       |       |            |          |
|   2 |   TABLE ACCESS FULL             | T2    |   100K|  2734K|    64   (8)| 00:00:01 |
|   3 |   SORT AGGREGATE                |       |     1 |    72 |            |          |
|   4 |    NESTED LOOPS                 |       |       |       |            |          |
|   5 |     NESTED LOOPS                |       |     1 |    72 |     5   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID| T1    |     1 |    36 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN          | T1_I1 |     1 |       |     2   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN           | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
|*  9 |     TABLE ACCESS BY INDEX ROWID | T1    |     1 |    36 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - UPD$1
   2 - UPD$1        / T2@UPD$1
   3 - SEL$F5BB74E1
   6 - SEL$F5BB74E1 / T1B@SEL$2
   7 - SEL$F5BB74E1 / T1B@SEL$2
   8 - SEL$F5BB74E1 / T1A@SEL$2
   9 - SEL$F5BB74E1 / T1A@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("T1B"."CONTRACT"=:B1 AND "T1B"."DATE_TO">:B2 AND
              "T1B"."DATE_FROM"<=:B3) 
       filter("T1B"."DATE_TO">:B1)
   8 - access("T1B"."CONTRACT"="T1A"."CONTRACT" AND "T1A"."DATE_FROM"=:B1 AND
              "T1A"."DATE_TO"=:B2)
       filter("T1A"."CONTRACT"=:B1 AND "T1B"."DATE_FROM"<="T1A"."DATE_FROM" AND 
              "T1B"."DATE_TO">"T1A"."DATE_FROM")
   9 - filter("T1B"."CREATED"<"T1A"."REPLACED" AND 
              "T1B"."REPLACED">"T1A"."CREATED")

The plan hash value is the same – but if you look at the alias section you can see that the t1 we access first is the one with alias t1b.
Now compare the predicate sections, just for line 7 (the initial index access line):

First Plan
   7 - access("T1A"."CONTRACT"=:B1 AND "T1A"."DATE_FROM"=:B2 AND "T1A"."DATE_TO"=:B3)

Second Plan
   7 - access("T1B"."CONTRACT"=:B1 AND "T1B"."DATE_TO">:B2 AND "T1B"."DATE_FROM"<=:B3) 
       filter("T1B"."DATE_TO">:B1)

Although my model data is random garbage, it’s easy to imagine that with a large data set the selectivities of these two predicates could be dramatically different, and there is clearly some “real-world” meaning to the date_to/date_from columns for a given row that the optimizer is unlikely to recognise when looking at the individual column stats for the table. It’s not surprising that optimisation with no stats (which results in dynamic sampling) could find a better plan than optimisation with stats, which leaves the optimizer using its default “call it 5% and hope for the best” strategy for range-based joins.

Conclusion

When you reference a table more than once in an execution plan, make sure you look very carefully at the predicate section – or even call for the alias section – so that you know exactly which copy of the table appears at which point in the plan.
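
For reference, both sections can be requested explicitly through the format parameter of dbms_xplan – for example (matching the calls used elsewhere in this post):

rem	after an "explain plan for ..."
select * from table(dbms_xplan.display(null, null, 'typical +alias +predicate'));

rem	for the last statement executed in the session
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last +alias'));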

Footnote

Although the results don’t mean much for my example, here’s my output from tracing my queries – rather than report the whole query in each case, I’ve just given the hint to show the table order:


/*+ leading(v1.t1a v1.t1b) */

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      6.01       0.00          0     200722     204119      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      6.01       0.00          0     200722     204119      100000

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  T2 (cr=200773 pr=0 pw=0 time=0 us)
    100000     100000     100000   TABLE ACCESS FULL T2 (cr=459 pr=0 pw=0 time=0 us cost=64 size=2800000 card=100000)
    100000     100000     100000   SORT AGGREGATE (cr=200248 pr=0 pw=0 time=0 us)
         0          0          0    NESTED LOOPS  (cr=200248 pr=0 pw=0 time=0 us)
         0          0          0     NESTED LOOPS  (cr=200248 pr=0 pw=0 time=0 us cost=4 size=72 card=1)
         0          0          0      TABLE ACCESS BY INDEX ROWID T1 (cr=200248 pr=0 pw=0 time=0 us cost=2 size=36 card=1)
         0          0          0       INDEX RANGE SCAN T1_I1 (cr=200248 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 81082)
         0          0          0      INDEX RANGE SCAN T1_I1 (cr=0 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 81082)
         0          0          0     TABLE ACCESS BY INDEX ROWID T1 (cr=0 pr=0 pw=0 time=0 us cost=2 size=36 card=1)

/*+ leading(v1.t1b v1.t1a) */

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      6.09       0.00          0     613372     200000      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      6.09       0.00          0     613372     200000      100000

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  T2 (cr=613372 pr=0 pw=0 time=1 us)
    100000     100000     100000   TABLE ACCESS FULL T2 (cr=459 pr=0 pw=0 time=0 us cost=64 size=2800000 card=100000)
    100000     100000     100000   SORT AGGREGATE (cr=612913 pr=0 pw=0 time=1 us)
         0          0          0    NESTED LOOPS  (cr=612913 pr=0 pw=0 time=1 us)
         0          0          0     NESTED LOOPS  (cr=612913 pr=0 pw=0 time=1 us cost=5 size=72 card=1)
    166078     166078     166078      TABLE ACCESS BY INDEX ROWID T1 (cr=368051 pr=0 pw=0 time=0 us cost=3 size=36 card=1)
    166078     166078     166078       INDEX RANGE SCAN T1_I1 (cr=202108 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 81082)
         0          0          0      INDEX RANGE SCAN T1_I1 (cr=244862 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 81082)
         0          0          0     TABLE ACCESS BY INDEX ROWID T1 (cr=0 pr=0 pw=0 time=0 us cost=2 size=36 card=1)

There isn’t a lot of difference in the run-time (which is mostly due to the tablescan) – but there’s an obvious difference in the number of buffer visits and the amount of data found for the first t1 access.
Just like the OP, my plan varied with stats – though in my case I got the better plan when I had stats, and the worse plan when I deleted stats and the optimizer used dynamic sampling.

November 6, 2012

Learning

Filed under: Oracle,Troubleshooting,Tuning — Jonathan Lewis @ 5:47 pm BST Nov 6,2012

Possibly an item to file under “philosophy”, but a question came up in the seminar I was presenting today that prompted me to blog (very briefly) about why I manage to be so good at avoiding errors and inventing workarounds to problems. You probably know that you may see an execution plan change when you add a foreign key constraint to a table – but today someone in the class asked me if this would still work if the constraint were to be disabled. The immediate response that sprang to my mind was “surely not” – but the second thought was that I didn’t really know the answer and would have to check; and the third thought was that maybe it wouldn’t if disabled, but what about novalidate; and the fourth thought was whether the setting for query_rewrite_integrity would make a difference; and the fifth thought was to wonder if there were other conditions that mattered.

So hey-ho for the weekend, when I have to set up a test case for a query that changes plan when I add a foreign key constraint – and then try (at least) eight different combinations of circumstances to check what it really takes to make the change happen.
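
For anyone who wants to build the same sort of test, the combinations in question can be set up with statements along these lines – a sketch only, assuming hypothetical parent and child tables with child.p_id referencing parent.id:

alter table child add constraint child_fk
	foreign key (p_id) references parent(id);

alter table child modify constraint child_fk disable;
alter table child modify constraint child_fk enable novalidate;
alter table child modify constraint child_fk rely disable novalidate;

alter session set query_rewrite_integrity = trusted;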

June 4, 2012

Subpartition stats

Filed under: CBO,Execution plans,Oracle,Performance,Tuning — Jonathan Lewis @ 7:07 am BST Jun 4,2012

You might have expected the following query to run reasonably efficiently – after all, it seems to be targeted very accurately at precisely the few rows of information I’m interested in:

select
	column_name,
	avg_col_len
from
	dba_subpart_col_statistics
where
	owner             = 'TEST_USER'
and 	table_name        = 'TEST_COMP'
and	subpartition_name = 'P_MAX_D'

(more…)

May 24, 2012

Subquery Factoring

Filed under: Execution plans,Hints,Oracle,Subquery Factoring,Tuning — Jonathan Lewis @ 6:37 pm BST May 24,2012

I have a small collection of postings where I’ve described anomalies or limitations in subquery factoring (the “with subquery”, or Common Table Expression (CTE) to give it the official ANSI name). Here’s another example of Oracle’s code not behaving consistently. You may recognise the basic query from yesterday’s example of logical tuning – so I won’t reprint the code to generate the data sets. The examples in this note were created on 11.2.0.2 – we start with a simple query and its execution plan:
(more…)

May 23, 2012

Logical tuning

Filed under: Hints,Ignoring Hints,Oracle,Performance,Tuning — Jonathan Lewis @ 6:22 pm BST May 23,2012

Here’s a model of a problem I solved quite recently at a client site. The client’s query was much more complex and the volume of data much larger, but this tiny, two table, example is sufficient to demonstrate the key principle. (Originally I thought I’d have to use three tables to model the problem, which is why you may find my choice of table names a little odd.) I ran this example on 11.2.0.2 – which was the client version:
(more…)
