Oracle Scratchpad

December 5, 2014

Closure

Filed under: CBO,Oracle,Upgrades — Jonathan Lewis @ 8:11 am GMT Dec 5,2014

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics to worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
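
To make that concrete, here’s a minimal sketch of my own (the tables ta and tb and the column colx are hypothetical, not part of the test case below): given the two predicates in the query, the optimizer can generate the extra predicate ta.colx = 10 because one side of the chain is a literal constant.

select
	ta.*, tb.*
from
	ta, tb
where
	ta.colx = tb.colx
and	tb.colx = 10		-- closure allows the optimizer to add: ta.colx = 10
;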

I have three identical tables, one million rows, no indexes. The SQL to create the first table is one I supplied a couple of days ago to demonstrate changes in join cardinality dependent on Oracle version:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;
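
The post doesn’t show the rest of the setup, so here’s a sketch of one way to finish it off; treat the straight copy of t1 and the stats calls (no histograms) as my assumptions rather than the original script:

create table t2 nologging as select * from t1;
create table t3 nologging as select * from t1;

begin
	dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'T2', method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'T3', method_opt => 'for all columns size 1');
end;
/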

Here’s a simple SQL statement that joins the three tables:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t2.n_90
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

Given the various n_400 = {constant} predicates we should expect to see close to 2,500 rows from each table participating in the join – and that is exactly what Oracle predicts in the execution plan. The question is: what is the cardinality of the final join? Before showing you the execution plan and its prediction I’m going to bring transitivity into the picture. Note the two n_90 join predicates (lines 6 and 7 of the query). If t2.n_90 = t1.n_90 and t3.n_90 = t2.n_90 then t3.n_90 = t1.n_90; so I might have written my query slightly differently – note the small change to the second n_90 predicate (line 7) below:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- changed
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;
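
If you want to reproduce the predictions, the plans can be pulled in the usual way; this is a generic sketch, since the post doesn’t show which options were actually used:

explain plan for
select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- or t3.n_90 = t2.n_90 for the first version
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

select * from table(dbms_xplan.display);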

So here’s the exciting bit. My two queries are logically equivalent, and MUST return exactly the same row set. Check the final cardinality predictions in these two execution plans (from 12.1.0.2, but you get the same results in 11.2.0.4, older versions have other differences):


First Version - note the predicate for operation 3
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      |  2554 |   139K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_90"="T2"."N_90" AND "T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

Second Version - note the predicate for operation 1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10575 |   578K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90" AND "T3"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

A small change in the way the predicates are presented gives me a factor of 22 in the cardinality estimate – oops!
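
For a rough idea of where the two headline figures come from, here’s a back-of-envelope sketch of my own using the usual “divide by the larger number of distinct values” join arithmetic; it’s a reconstruction, not taken from the post:

Single-table estimate:   1,000,000 rows / 400 distinct values of n_400   ~= 2,500
First version (only the n_90 predicate at operation 1):
                         2,500 * 2,554 / 90                              ~= 70,900
Second version (both n_90 predicates at operation 1):
                         2,500 * 10,575 / (90 * 90)                      ~= 3,264

The raw arithmetic for the two-column join of t2 and t3 in the first version (2,500 * 2,500 / (90 * 600), roughly 116 rows) is nowhere near the reported 2,554; that gap is where the “multi-column sanity check” mentioned below comes into play.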

The actual result with my data was close to 3,000 rows – so one of the estimates in the second version was pretty good; but the point of the blog isn’t that you can “tune” the optimizer by carefully picking your way through transitive closure, the point is that a small “cosmetic” change you might make to a query could result in a significant change in the cardinality calculations which could then make a dramatic difference to the final execution plan. This example, by the way, depends on the same “multi-column sanity check” that showed up in the previous posting.

I will be expanding on this posting some time in the next couple of weeks but, again, the example should come up in my session on calculating selectivity at “Super Sunday” at UKOUG Tech 14.

3 Comments »

  1. Moral of the story seems to be: as an SQL programmer I should give the optimizer every condition, even though some conditions may be deducible from others, meaning we should write:

    select
        t1.*, t2.*, t3.*
    from
        t1, t2, t3
    where
        t2.n_90  = t1.n_90
    and t3.n_90  = t2.n_90
    and t3.n_90  = t1.n_90
    and t3.n_600 = t2.n_600
    and t1.n_400 = 1
    and t2.n_400 = 2
    and t3.n_400 = 3
    

    Interestingly, there are cases where it is better not to write down every deducible condition but to hide some of them from the optimizer.

    Comment by Matthias Rogel — December 5, 2014 @ 9:50 am GMT Dec 5,2014 | Reply

    • Matthias,

      Your last comment sums up the reason why it’s so difficult to produce any such rule of thumb. Depending on your version of Oracle the optimizer might take your duplicated predicates and:

      a) double count the arithmetic – to produce extremely bad estimates
      b) manage to play some transitive closure tricks (though not in this specific case) to rewrite your predicates, producing a duplicate which it then eliminates anyway.
      c) apply several “multi-column” sanity checks to produce extremely bad estimates
      d) other

      And whatever it does it might change on the next patch release.

      I think Oracle 9 may have been the version where it was a really bad idea to include all the transitive predicates.

      Comment by Jonathan Lewis — December 6, 2014 @ 10:26 am GMT Dec 6,2014 | Reply

  2. […] The English term for this is transitive closure, and Wikipedia has an introduction to it. The same phenomenon exists in the Oracle optimizer. In Chapter 6 of Cost Based Oracle Fundamentals, Jonathan Lewis gives a detailed introduction to transitive closure, with experiments, and describes how the optimizer handles it in various versions and what problems exist. Jonathan has also written blog posts on this topic: transitive-closure in 2007 and closure in 2014. […]

    Pingback by Transitive Closure and Constraint-Generated Predicates | LEO Notes — August 18, 2016 @ 8:45 am GMT Aug 18,2016 | Reply

