Oracle Scratchpad

October 14, 2013

Parallel Execution – 2

Filed under: Execution plans, Oracle, Parallel Execution — Jonathan Lewis @ 6:29 pm BST Oct 14, 2013

Since I’m going to write a couple of articles dissecting parallel execution plans, I thought I’d put up a reference post describing the set of tables I used to generate the plan from the previous post, and the query (with serial execution plan) that produced it. The setup is a simple star schema arrangement – which I’ve generated by creating three identical tables and then doing a Cartesian join across the three of them.


rem
rem     Script:         c_treblehash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem
create table t1
as
select
	rownum		id,
	to_char(rownum)	small_vc,
	rpad('x',100)	padding
from
	all_objects
where
	rownum <= 70 -- > comment to bypass WordPress format issue
;

alter table t1
	add constraint t1_pk primary key(id)
;

create table t2
as
select
	rownum		id,
	to_char(rownum)	small_vc,
	rpad('x',100)	padding
from
	all_objects
where
	rownum <= 70 -- > comment to bypass WordPress format issue
;

alter table t2
	add constraint t2_pk primary key(id)
;

create table t3
as
select
	rownum		id,
	to_char(rownum)	small_vc,
	rpad('x',100)	padding
from
	all_objects
where
	rownum <= 70 -- > comment to bypass WordPress format issue
; 

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt => 'for all columns size 1'
        );

	dbms_stats.gather_table_stats(
		user,
		't2',
		method_opt => 'for all columns size 1'
	);

	dbms_stats.gather_table_stats(
		user,
		't3',
		method_opt => 'for all columns size 1'
	);

	dbms_stats.gather_table_stats(
		user,
		't4',
		method_opt => 'for all columns size 1'
	);
end;
/

Since there are 70 rows in each of the “dimension” tables, there are 343,000 rows in the “fact” table.
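
(A quick sanity check on that arithmetic – this isn’t part of the original script, just an easy way to confirm the volume:)

select count(*) from t4;        -- 70 * 70 * 70 = 343,000
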
Here’s the query, with serial execution plan – you may find that you have to apply a few hints to reproduce this plan, so I’ve included the minimum necessary set.

select
	/*+
		gather_plan_statistics
		leading(t4 t1 t2 t3)
		full(t4)
		use_hash(t1) full(t1) swap_join_inputs(t1)
		use_hash(t2) full(t2) swap_join_inputs(t2)
		use_hash(t3) full(t3) swap_join_inputs(t3)
	*/
	count(t1.small_vc),
	count(t2.small_vc),
	count(t3.small_vc),
	count(t4.small_vc)
from
	t4,
	t1,
	t2,
	t3
where
	t1.id = t4.id1
and	t2.id = t4.id2
and	t3.id = t4.id3
and	t1.small_vc in (1,2,3)
and	t2.small_vc in (1,2,3)
and	t3.small_vc in (1,2,3)
;

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:00.19 |   10731 |  10729 |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:00.19 |   10731 |  10729 |       |       |          |
|*  2 |   HASH JOIN           |      |      1 |     26 |     27 |00:00:00.01 |   10731 |  10729 |  2061K|  2061K|  832K (0)|
|*  3 |    TABLE ACCESS FULL  | T3   |      1 |      3 |      3 |00:00:00.01 |       3 |      3 |       |       |          |
|*  4 |    HASH JOIN          |      |      1 |    612 |    630 |00:00:00.01 |   10728 |  10726 |  2061K|  2061K|  959K (0)|
|*  5 |     TABLE ACCESS FULL | T2   |      1 |      3 |      3 |00:00:00.01 |       3 |      3 |       |       |          |
|*  6 |     HASH JOIN         |      |      1 |  14491 |  14700 |00:00:00.03 |   10725 |  10723 |  2061K|  2061K|  980K (0)|
|*  7 |      TABLE ACCESS FULL| T1   |      1 |      3 |      3 |00:00:00.01 |       3 |      3 |       |       |          |
|   8 |      TABLE ACCESS FULL| T4   |      1 |    343K|    343K|00:00:00.32 |   10722 |  10720 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   3 - filter((TO_NUMBER("T3"."SMALL_VC")=1 OR TO_NUMBER("T3"."SMALL_VC")=2 OR TO_NUMBER("T3"."SMALL_VC")=3))
   4 - access("T2"."ID"="T4"."ID2")
   5 - filter((TO_NUMBER("T2"."SMALL_VC")=1 OR TO_NUMBER("T2"."SMALL_VC")=2 OR TO_NUMBER("T2"."SMALL_VC")=3))
   6 - access("T1"."ID"="T4"."ID1")
   7 - filter((TO_NUMBER("T1"."SMALL_VC")=1 OR TO_NUMBER("T1"."SMALL_VC")=2 OR TO_NUMBER("T1"."SMALL_VC")=3))
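
(For reference: output like the above can be pulled from memory immediately after running the hinted query – the call below is the one I’d normally reach for, though the exact format options are a matter of taste:)

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));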

As you can see I’ve pulled this plan from memory after enabling rowsource execution statistics – and the optimizer’s estimates are very good. (Given the way I generated the data and the predicates involved, each hash join retains 3/70ths of the incoming intermediate row set.)
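
(Just to show how neatly the numbers line up – a quick bit of arithmetic, not part of the original post, comparing the 3/70ths retention at each join with the A-Rows column:)

select
        343000 * 3/70                   after_join_to_t1,       -- 14,700  (operation 6)
        343000 * 3/70 * 3/70            after_join_to_t2,       --    630  (operation 4)
        343000 * 3/70 * 3/70 * 3/70     after_join_to_t3        --     27  (operation 2)
from
        dual
;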

The shape of the plan tells me that Oracle hashed the required rows from t3 into memory, then did the same with t2 and t1 before scanning t4 and probing t1, t2, and t3 in that order to see if a row should be allowed to survive and be counted.

There are various ways I could add weight to this claim, of course – for example flushing the buffer cache and enabling extended trace so that I could see the order of physical disk access, or enabling event 10104 so that I could see the hash join traces and the order in which they took place. I hope you’ll take my word for it, though, that this is what happens in this serial path.
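
(If you did want to collect that evidence, the session setup I have in mind would look something like this – the trace levels shown are typical choices rather than a definitive prescription, and flushing the buffer cache is only sensible on a private test system:)

alter system flush buffer_cache;

alter session set events '10046 trace name context forever, level 8';
alter session set events '10104 trace name context forever, level 10';

-- run the query, then:

alter session set events '10104 trace name context off';
alter session set events '10046 trace name context off';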

As I’ve pointed out a few times in the past, when you read the plan carefully you realise that it is following the specified join order (t4, t1, t2, t3) – even though at a casual glance the plan might suggest a join order of (t3, t2, t1, t4). This is an example of “what happens first” and “the join order” being very different from each other.

10 Comments »

  1. So, it created three distinct “in memory hash tables” for T3, T2, T1 which “persisted” (i.e. the T3 hash table was present when the T2 hash table was being created and so on….) ? But the actual joins were T4 to T1, take the result set and then join to T2, take the result set and then join to T3 ?

    Comment by Hemant K Chitale — October 15, 2013 @ 9:09 am BST Oct 15,2013 | Reply

    • Herman, You’ve got it ;)

      Comment by Pavol Babel — October 15, 2013 @ 7:21 pm BST Oct 15,2013 | Reply

    • sorry for typos in your name (writing from smartphone), Hemant

      Comment by Pavol Babel — October 15, 2013 @ 7:23 pm BST Oct 15,2013 | Reply

    • Hemant,

      As Pavol says, this is correct. See also https://jonathanlewis.wordpress.com/2010/12/10/quiz-night-10/

      If you want to check this, change the query to select the columns rather than count(columns), set pause on, set arraysize 1, and then check v$sql_workarea_active once the query has returned its first row and is waiting for you to press return.

      Update – just realised this won’t work because the memory allocated for hash tables in this case is too small to be recorded as a workarea.

      Update 2 – just realised that I shouldn’t have said “correct” about your reply without clarifying an important detail. The “result set” at each step does NOT have to be completely built; as each row from t4 survives the join to a hash table it is passed on upwards to the next join; in an example like this the hash joins are “non-blocking” operations.

      Comment by Jonathan Lewis — October 15, 2013 @ 7:36 pm BST Oct 15,2013 | Reply

  2. […] take a close look at the parallel versions of the execution plan I produced a little while ago for a four-table hash join. In this note I’ll examine the broadcast parallel distribution. First, here’s a list of […]

    Pingback by Parallel Execution – 3 | Oracle Scratchpad — December 8, 2013 @ 10:09 pm GMT Dec 8,2013 | Reply

  3. […] three dimensions to the fact on their primary key, and filtered on each dimension (SQL for the test supplied here). Stripping out the eighteen hints that I inserted to get the plans I wanted the queries both […]

    Pingback by Quiz Night | Oracle Scratchpad — March 1, 2014 @ 3:35 pm GMT Mar 1,2014 | Reply

  4. […] you can review the table definitions and plan (with execution stats) for the serial join in this posting (also opens in a separate […]

    Pingback by Parallel Execution – 5 | Oracle Scratchpad — March 10, 2014 @ 1:30 pm GMT Mar 10,2014 | Reply

  5. […] Part 2 – The table definitions for the demonstration, with serial plan […]

    Pingback by Parallel Execution | Oracle Scratchpad — May 11, 2015 @ 10:16 am BST May 11,2015 | Reply

  6. […] Parallel Execution – 2 (Oct 2013): The data creation script for the sample query and plan […]

    Pingback by Parallel Execution Catalogue | Oracle Scratchpad — November 17, 2022 @ 11:27 am GMT Nov 17,2022 | Reply

  7. […] Parallel Execution – 2 (Oct 2013): The data creation script for the sample query and plan […]

    Pingback by Execution Plans Catalogue | Oracle Scratchpad — November 17, 2022 @ 11:31 am GMT Nov 17,2022 | Reply

