Oracle Scratchpad

March 4, 2019

Cartesian Join

Filed under: Execution plans,Oracle — Jonathan Lewis @ 1:37 pm GMT Mar 4,2019

I wrote this note a little over 4 years ago (Jan 2015) but failed to publish it for some reason. I’ve just rediscovered it and it’s got a couple of details that are worth mentioning, so I’ve decided to go ahead and publish it now.

A recent [ed: 4 year old] question on the OTN SQL forum asked for help in “multiplying up” data – producing multiple rows from a single row source. This is something I’ve done fairly often when modelling a problem, for example by generating an orders table and then generating an order_lines table from the orders table, and there are a couple of traps to consider.

The problem the OP had was that their base data was the result set from a complex query – which ran “fine” but took 10 minutes to complete when a Cartesian join to a table holding just three rows was included. Unfortunately the OP didn’t supply, or even comment on, the execution plans. The obvious guess, of course, is that the extra table resulted in a completely different execution plan rather than the expected “do the original query then multiply by 3” plan, in which case the solution to the problem is (probably) simple – stick the original query into a non-mergeable view before doing the join.

Assume we have the following tables: t1 has 100,000 rows (generated from the SQL in this article), t2 has 4 rows where column id2 holds the values 1 to 4, and t3 is empty. We can model the basic requirement with the query shown below:


SQL> desc t1
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER
 C1                               CHAR(2)
 C2                               CHAR(2)
 C3                               CHAR(2)
 C4                               CHAR(2)
 PADDING                          VARCHAR2(100)

SQL> desc t2
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID2                              NUMBER

SQL> desc t3
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER
 ID2                              NUMBER
 C1                               CHAR(2)
 C2                               CHAR(2)
 C3                               CHAR(2)
 C4                               CHAR(2)
 PADDING                          VARCHAR2(100)
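
For completeness, t2 and t3 might have been created along these lines. This is only a sketch – the real t1 definition comes from the SQL in the article linked above, so only its shape is assumed here:

```sql
rem  Sketch only: t1 comes from the linked article; just t2 and t3 here.

create table t2
as
select rownum id2 from dual connect by level <= 4;

create table t3
as
select  t1.id, t2.id2, t1.c1, t1.c2, t1.c3, t1.c4, t1.padding
from    t1, t2
where   1 = 0           -- copy the combined structure, load no rows
;
```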


insert into t3
select
        t1.id, t2.id2, t1.c1, t1.c2, c3, t1.c4, t1.padding
from
        (select * from t1) t1,
        t2
;

If we “lose” the plan for the original “select * from t1” (assuming t1 was really a complicated view) when we extend it to the Cartesian join, all we need to do is the following:


insert into t3
select
        /*+ leading(t1 t2) */
        t1.id, t2.id2, t1.c1, t1.c2, c3, t1.c4, t1.padding
from
        (select /*+ no_merge */ * from t1) t1,
        t2
;

This is where the problem starts to get a little interesting. The /*+ no_merge */ hint is (usually) a winner in situations like this – but why have I included a /*+ leading() */ hint choosing to put t2 (the small table) second in the join order? It’s because of the way that Cartesian Merge Joins work, combined with an eye to where my most important resource bottleneck is likely to be. Here’s the execution plan, taken from memory after executing this statement on 11.2.0.4 with statistics_level set to all:
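
The plan itself was pulled from memory with dbms_xplan – roughly like this (the format option is the usual one for reporting rowsource execution statistics):

```sql
set serveroutput off
alter session set statistics_level = all;

-- run the insert, then report the plan for the last statement executed
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
```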


SQL_ID  azu8ntfjg9pwj, child number 0
-------------------------------------
insert into t3 select   /*+ leading(t1 t2) */  t1.id, t2.id2, t1.c1,
t1.c2, c3, t1.c4, t1.padding from  (select /*+ no_merge */ * from t1)
t1,   t2

Plan hash value: 1055157753

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |        |      0 |00:00:10.28 |   48255 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |        |      0 |00:00:10.28 |   48255 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN   |      |      1 |    400K|    400K|00:00:06.30 |    1727 |       |       |          |
|   3 |    VIEW                  |      |      1 |    100K|    100K|00:00:00.94 |    1725 |       |       |          |
|   4 |     TABLE ACCESS FULL    | T1   |      1 |    100K|    100K|00:00:00.38 |    1725 |       |       |          |
|   5 |    BUFFER SORT           |      |    100K|      4 |    400K|00:00:01.78 |       2 |  3072 |  3072 | 2048  (0)|
|   6 |     TABLE ACCESS FULL    | T2   |      1 |      4 |      4 |00:00:00.01 |       2 |       |       |          |
----------------------------------------------------------------------------------------------------------------------

Let’s try that again (building from scratch, of course) with the table order reversed in the leading() hint:


SQL_ID  52qaydutssvn5, child number 0
-------------------------------------
insert into t3 select   /*+ leading(t2 t1) */  t1.id, t2.id2, t1.c1,
t1.c2, c3, t1.c4, t1.padding from  (select /*+ no_merge */ * from t1)
t1,   t2

Plan hash value: 2126214450

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |        |      0 |00:00:12.29 |   48311 |   6352 |   1588 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |        |      0 |00:00:12.29 |   48311 |   6352 |   1588 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN   |      |      1 |    400K|    400K|00:00:06.64 |    1729 |   6352 |   1588 |       |       |          |
|   3 |    TABLE ACCESS FULL     | T2   |      1 |      4 |      4 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|   4 |    BUFFER SORT           |      |      4 |    100K|    400K|00:00:04.45 |    1727 |   6352 |   1588 |    13M|  1416K| 9244K (0)|
|   5 |     VIEW                 |      |      1 |    100K|    100K|00:00:00.80 |    1725 |      0 |      0 |       |       |          |
|   6 |      TABLE ACCESS FULL   | T1   |      1 |    100K|    100K|00:00:00.28 |    1725 |      0 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

There’s one important detail that’s not explicit in the execution plans – I’ve set the workarea_size_policy to manual and the sort_area_size to 10MB to demonstrate the impact of having a dataset that is too large for the session’s workarea limit.
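
The session settings behind that detail look like this (10MB expressed in bytes – a sketch of my setup rather than a recommendation):

```sql
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 10485760;    -- 10MB limit for the buffer sort
```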

The results, in terms of timing, are borderline. With the “correct” choice of order the completion time is 10.28 seconds compared to 12.29 seconds, though if you look at the time for the Merge Join Cartesian operation the difference is much less significant. The critical point, though, appears at operation 4 – the Buffer Sort. I set my sort_area_size to something that I knew was smaller than the data set I needed to buffer – so the operation had to spill to disc. Ignoring overheads and rounding errors, the data from the 1,727 blocks I read from the table at pctfree = 10 was dumped to the temporary space in 1,588 packed blocks (sanity check: 1,727 * 0.9 = 1,554); those blocks were then read back once for each row from the driving t2 table (sanity check: 1,588 * 4 = 6,352).

With my setup I had a choice of bottlenecks:  scan a very small data set in memory 100,000 times to burn CPU, or scan a large data set from disc 4 times. There wasn’t much difference in my case: but the difference could be significant on a full-scale production system.  By default the optimizer happened to pick the “wrong” path with my data sets.

But there’s something even more important than this difference in resource usage to generate the data: what does the data look like after it’s been generated? Here’s a simple query to show the first few rows of the stored result sets in the two different tests:


SQL> select id, id2, c1, c2, c3, c4 from t3 where rownum <= 8;

Data from leading (t1 t2)
=========================
        ID        ID2 C1 C2 C3 C4
---------- ---------- -- -- -- --
         1          1 BV GF JB LY
         1          2 BV GF JB LY
         1          3 BV GF JB LY
         1          4 BV GF JB LY
         2          1 YV LH MT VM
         2          2 YV LH MT VM
         2          3 YV LH MT VM
         2          4 YV LH MT VM


Data from leading (t2 t1)
=========================
        ID        ID2 C1 C2 C3 C4
---------- ---------- -- -- -- --
         1          1 BV GF JB LY
         2          1 YV LH MT VM
         3          1 IE YE TS DP
         4          1 DA JY GS AW
         5          1 ZA DC KD CF
         6          1 VJ JI TC RI
         7          1 DN RY KC BE
         8          1 WP EQ UM VY

If we had been using code like this to generate an order_lines table from an orders table, then with leading(orders t2) we would have “order lines” clustered around the “order number” – which is a realistic model; with leading(t2 orders) the clustering disappears (technically the order numbers are clustered around the order lines). It’s this aspect of the data that might have a much more important impact on the suitability (and timing) of any testing you may be doing than a little time lost or gained in generating the raw data.
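
As a sketch of that orders / order_lines case (the table and column names here are hypothetical), keeping the big rowsource first in the join order keeps the generated lines clustered by order number:

```sql
rem  Hypothetical names: orders, order_id, customer_id, order_lines.
rem  Three lines per order, clustered by order_id thanks to the join order.

insert into order_lines
select
        /*+ leading(ord ln) */
        ord.order_id,
        ln.line_number,
        ord.customer_id         -- plus whatever line-level columns you generate
from
        (select /*+ no_merge */ * from orders) ord,
        (select rownum line_number from dual connect by level <= 3) ln
;
```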

Footnote

If you try to repeat this test on your own system don’t expect my timing to match yours. Bear in mind, also, that with statistics_level set to all there’s going to be a CPU overhead that differs between the two options for the leading() hint – the cost of collecting rowsource execution stats could be much higher for the plan where one of the operations starts 100,000 times.

 
