Oracle Scratchpad

December 8, 2020

Hash Joins

Filed under: Hash Join,Joins,Oracle — Jonathan Lewis @ 11:55 am GMT Dec 8,2020

This is a follow-up to a note that’s 10 years old, prompted by a thread on the Oracle Developer Community forum asking about the order of operation in an execution plan, and following up with a question about PGA memory use that I had left open in the original note.

The original note pointed out that the serial execution plan for a 4 table join that used hash joins for every join and had a leading() hint dictating a single join order could still have 8 distinct execution plans (which I then corrected to 4, because half of them were eliminated by an odd little inconsistency of the leading() hint).

The source of the number 8 was the fact that when you hint a hash join with /*+ use_hash(next_alias) */ the optimizer will consider a /*+ swap_join_inputs(next_alias) */ – in other words, it will choose which of the two inputs should be the “build” table and which the “probe” table, even though the human eye will probably assume that next_alias was clearly intended as the probe table. (The “little inconsistency” in the leading() hint is that it blocks the option for a swap for the first – and only the first – join unless it’s explicitly hinted.)

In the article I showed all 8 possible plans, and described how the two plans at the extreme ends of my list would operate at run-time. I’ve reproduced these two plans below, preceded by the hints that produced them:

leading(t1 t2 t3 t4)
use_hash(t2) no_swap_join_inputs(t2)
use_hash(t3) no_swap_join_inputs(t3)
use_hash(t4) no_swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|*  3 |    HASH JOIN          |      |    70 |  1260 |     8 |
|*  4 |     HASH JOIN         |      |    70 |   840 |     5 |
|   5 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
|   6 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   7 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|   8 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
--------------------------------------------------------------

leading(t1 t2 t3 t4)
use_hash(t2)    swap_join_inputs(t2)
use_hash(t3)    swap_join_inputs(t3)
use_hash(t4)    swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|   3 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
|*  4 |    HASH JOIN          |      |    70 |  1260 |     8 |
|   5 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|*  6 |     HASH JOIN         |      |    70 |   840 |     5 |
|   7 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   8 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
--------------------------------------------------------------

Side note: one of the comments on the original post raised the point that (e.g.) the second of the plans above could have been generated by the join order represented by the hint leading(t2 t1 t3 t4), and that’s absolutely correct. In fact, if you see a plan for a 4 table join consisting of nothing but three hash joins you cannot be certain what join order the optimizer was examining when it produced that plan unless you look at the CBO trace file (or check to see whether it’s been explicitly and completely hinted in the code).
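
For example – a sketch I haven’t re-run here – the following hint set should reproduce the second plan from that alternative join order: no swap on the first join, swaps on the other two:

leading(t2 t1 t3 t4)
use_hash(t1) no_swap_join_inputs(t1)
use_hash(t3)    swap_join_inputs(t3)
use_hash(t4)    swap_join_inputs(t4)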

Having set the scene, we come to the question that prompted this note. The question related to the second plan above, and said:

“However, this one is the plan that takes the most memory in the PGA because it builds 3 hash tables before joining any table, right?”

The quick answer to the question is: “Not necessarily.”

In the original article I had pointed out that plans following the pattern of the first plan above with N tables and N – 1 joins would have at most two build tables in memory at any one moment while the bottom plan would create N – 1 build tables in memory before any join results could be created. This does rather invite the inference that the bottom plan has to be the one that is going to use most PGA memory, but what I had said in the original article was (emphasis added):

“Notice: the number of in-memory hash (build) tables we have in the first of the 8 plans at any instant after the first join starts is two and (no matter how many tables are involved in this pattern) the number of in-memory hash tables will always be two. The actual size of the two hash tables is a little unpredictable and, as a very crude guideline, you might expect the size to grow as more tables are joined into the result set.”

As a thought experiment, consider 3 small dimension tables and one big fact table. If Oracle were to create in-memory hash tables from the three dimension tables and then start scanning the fact table (following the pattern of the second plan above with t1 in the role of the fact table) probing each of the dimension tables in turn, it could deliver the first result row very quickly without requiring more memory to store intermediate results.

Conversely, if Oracle were to create a tiny in-memory hash table from the first dimension and probe it with the fact table (following the pattern of the first plan above with t2 in the role of the fact table) Oracle would then have to build a very large in-memory hash table before probing it with the second dimension table, and as that second join takes place it would be generating a new result set that would become the next big in-memory hash table.

In this thought experiment we would probably find that the optimizer did the right thing without prompting and constructed three tiny in-memory hash tables – but it’s not always so clear-cut, and even the “right” decision can result in very large intermediate build tables (and if those intermediate build tables spill to disc in an Exadata environment the change in performance can be huge).

To finish off, here’s a script to turn the thought experiment into a concrete example (by careful, but deliberately silly, hinting).

rem
rem     Script:         c_treblehash_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem 

create table t1 
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) swap_join_inputs(t1)
                use_hash(t2) full(t2) swap_join_inputs(t2)
                use_hash(t3) full(t3) swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) no_swap_join_inputs(t1)
                use_hash(t2) full(t2) no_swap_join_inputs(t2)
                use_hash(t3) full(t3) no_swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

All I’ve done is create three small “dimension” tables of 70 rows each, then create a table which is their Cartesian join – a “fact” table of 343,000 rows. Then I’ve written a simple query to join the three dimension tables to the fact table.

I’ve used swap_join_inputs() for all the joins in one version of the query, and no_swap_join_inputs() in the other; and I’ve reconnected to the database before running each query to make it easier to see the different impact on the PGA of the two plans.

I’ve used dbms_xplan.display_cursor() to pull the execution plans from memory after running the queries, and since I’ve not set statistics_level to all, or added the hint /*+ gather_plan_statistics */ to the queries, the only rowsource execution statistics I’ll get are the PGA usage figures: estimates and actuals.
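
For reference, if you did want the full set of rowsource execution statistics (Starts, A-Rows, A-Time, Buffers and so on) the usual options are a session setting or a per-statement hint – a minimal sketch:

alter session set statistics_level = all;

-- or, for an individual statement:
select /*+ gather_plan_statistics */ count(*) from t1;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));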

The final step of each test reports the maximum PGA memory usage the session reached in the course of the test.

Here are the two plans, each followed by the PGA maximum memory size.

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|  2171K|  2171K| 1684K (0)|
|   3 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
|*  4 |    HASH JOIN          |      |    343K|  2171K|  2171K| 1681K (0)|
|   5 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|*  6 |     HASH JOIN         |      |    343K|  2171K|  2171K| 1711K (0)|
|   7 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   8 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   4 - access("T2"."ID"="T4"."ID2")
   6 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 13,859,925

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|    23M|  3727K|   29M (0)|
|*  3 |    HASH JOIN          |      |    343K|    23M|  3727K|   29M (0)|
|*  4 |     HASH JOIN         |      |    343K|    23M|  3667K|   29M (0)|
|   5 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
|   6 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   7 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|   8 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   3 - access("T2"."ID"="T4"."ID2")
   4 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 52,984,917

As you can see, each of the hash joins in the first plan required roughly 1.7MB of memory. All three hash tables would have been in existence at the same time, giving a total of about 5.1MB of memory for the query. The session’s maximum PGA usage shows up as 13MB, of which 5MB was my basic “startup” PGA, leaving 3MB “unaccounted”.

In comparison, each of the hash joins in the second plan required roughly 29MB, although only two of the hash tables would have been in existence at any one moment. That’s still an allocation of 58MB for the same basic 4 table join. In fact things aren’t quite as bad as they seem in this case since the maximum PGA allocated was only about 52MB (again with 5MB of startup PGA). The apparent contradiction may be due to the way that Oracle allocates PGA in increasing chunks – the 29MB reported may have been the result of the session doing something like: “I’ve reached 23MB, my next allocation will be 4MB – oops, I only needed another 128KB.”
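
If you want to watch the session-level allocation growing from inside the session itself, a sketch of a query against your own session statistics (which include both the current and the high-water PGA figures):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in ('session pga memory', 'session pga memory max')
;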

As a final check of activity, I’ve also run a couple of queries against v$sql_workarea – a convenient memory structure you can examine to get some “recent history” of queries that have been using large amounts of memory, or spilling to disc. In this case I’ve queried the structure by sql_id for the two queries, reporting just a little detail about the last execution and memory usage.

SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'dcc01q28gcbmy';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                 1724416 OPTIMAL
           4 HASH-JOIN                                 1721344 OPTIMAL
           6 HASH-JOIN                                 1752064 OPTIMAL


SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'b52uwjz07fwhk';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                30930944 OPTIMAL
           3 HASH-JOIN                                30945280 OPTIMAL
           4 HASH-JOIN                                30650368 OPTIMAL

As you can see, the view reports each plan operation (with id and type) that needed memory for an SQL workarea – and the numbers confirm the 1.7MB and 29MB reported by the execution plans. (Which is not surprising since it’s exactly these figures that are used to supply the details in the plans.)
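
If you want to see those workarea figures lined up directly against the plan operations, a sketch of an outer join from v$sql_plan to v$sql_workarea (outer, so that operations without workareas still appear; the &m_sql_id substitution variable is just a placeholder for the sql_id you’re interested in):

select
        pl.id, pl.operation, pl.options,
        wa.last_memory_used, wa.last_execution
from
        v$sql_plan      pl,
        v$sql_workarea  wa
where
        pl.sql_id          = '&m_sql_id'
and     wa.sql_id(+)       = pl.sql_id
and     wa.child_number(+) = pl.child_number
and     wa.operation_id(+) = pl.id
-- add a predicate on pl.child_number if the statement has multiple children
order by
        pl.id
;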

You need to be a little cautious with this view in older versions of Oracle – it used to do a “full tablescan” of the library cache, which resulted in a lot of library cache latch activity and could cause some contention if you ran it frequently. Nevertheless it’s a very useful view for finding queries that are doing unexpectedly large sorts or hash joins, and one you might query occasionally if you see any space management threats in the temporary tablespace.
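
For live monitoring there’s also the companion view v$sql_workarea_active, which shows workareas currently in use, including any temporary space they have grabbed – a sketch:

select
        sql_id, operation_id, operation_type,
        actual_mem_used, number_passes,
        tempseg_size, tablespace
from
        v$sql_workarea_active
order by
        tempseg_size desc nulls last
;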

October 31, 2019

IOT Hash

Filed under: Execution plans,Hash Join,Infrastructure,IOT,Joins,Oracle,Troubleshooting — Jonathan Lewis @ 2:59 pm GMT Oct 31,2019

It’s another of my double-entendre titles. The optimizer can turn a hash join involving an index-organized table into a real performance disaster (though you may have to help it along the way by using a silly definition for your primary key columns). This post was inspired by a question posted on the Oracle Developer Community forum recently so the table and column names I’ve used in my model reflect (almost, with a few corrections) the names used in the post.

We start with a simple requirement expressed through the following SQL:


rem
rem     Script:         iot_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

insert
        /*+
                qb_name(insert)
        */
into t_iot(
        id, inst_id, nr_time,
        o_time, st, null_col, svname
)
select
        /*+
                qb_name(main)
                unnest(@subq)
                leading(@sel$a93afaed apar@main ob@subq)
                use_hash(@sel$a93afaed ob@subq)
                swap_join_inputs(@sel$a93afaed ob@subq)
                index_ss_asc(@sel$a93afaed ob@subq (t_iot.st t_iot.inst_id t_iot.svname))
        */
        apar.id,
        'UP',
        to_date('2019-10-24','yyyy-mm-dd'),
        to_date('1969-12-31','yyyy-mm-dd'),
        'IDLE',
        null,
        'tkt007.jj.bb.com'
from
        t_base apar
where
        apar.id not in (
                select
                        /*+
                                qb_name(subq)
                        */
                        id
                from
                        t_iot ob
                where
                        inst_id = 'UP'
        )
and     nvl(apar.gp_nm,'UA') = 'UA'
and     rownum <= 5000
/

The requirement is simple – insert into table t_iot a set of values dictated by a subset of the rows in table t_base if they do not already exist in t_iot. To model the issue that appeared I’ve had to hint the SQL to get the following plan (which I pulled from memory after enabling rowsource execution stats):


---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |             |      1 |        |   296 (100)|      0 |00:00:00.03 |     788 |    148 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL | T_IOT       |      1 |        |            |      0 |00:00:00.03 |     788 |    148 |       |       |          |
|*  2 |   COUNT STOPKEY          |             |      1 |        |            |    100 |00:00:00.03 |      99 |     91 |       |       |          |
|*  3 |    HASH JOIN RIGHT ANTI  |             |      1 |    100 |   296   (2)|    100 |00:00:00.03 |      99 |     91 |    14M|  1843K|   15M (0)|
|*  4 |     INDEX SKIP SCAN      | T_IOT_STATE |      1 |  12614 |   102   (0)|  10000 |00:00:00.01 |      92 |     91 |       |       |          |
|*  5 |     TABLE ACCESS FULL    | T_BASE      |      1 |    100 |     2   (0)|    100 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<=5000)
   3 - access("APAR"."ID"="ID")
   4 - access("INST_ID"='UP')
       filter("INST_ID"='UP')
   5 - filter(NVL("APAR"."GP_NM",'UA')='UA')

The optimizer has unnested (as hinted) the subquery and converted it to an anti-join using a right hash anti-join. Take a look at the Used-Mem for the hash join – would it surprise you to learn that the IOT (not compressed in any way), all its indexes, and the t_base table total less than 4MB between them? Something dramatically awful has happened in the hash join to generate a requirement of 14MB. (In the case of the OP this appeared as an unexpected 5GB written to the temporary tablespace.)
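
As a quick sanity check of that size claim you could run something like the following against the model (a sketch – note that the IOT itself shows up as the index segment t_iot_pk rather than as a table segment):

select
        segment_name, segment_type, bytes/1024/1024 mb
from
        user_segments
where
        segment_name in ('T_IOT_PK', 'T_IOT_STATE', 'IDX2', 'T_BASE')
order by
        segment_name
;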

Before I address the source of the high memory usage, take a close look at the Predicate Information, particularly operation 3, and ask yourself what the definition of index t_iot_state might be. The predicate joins t_base.id to t_iot.id, and here’s the code to create both tables and all the indexes.

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(200),
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/

insert into t_iot
select
        sysdate,
        dbms_random.string('l',10),
        'UP',
        sysdate,
        'IDLE',
        null,
        rpad('x',25,'x')
from
        all_objects
where
        rownum <= 1e4 -- > hint to avoid wordpress format issue
/

create index t_iot_state on t_iot(st, inst_id, svname); 
create index idx2        on t_iot(id, inst_id, svname);

create table t_base(
        id              varchar2(400) not null,
        gp_nm           varchar2(200)
)
/

insert into t_base
select
        dbms_random.string('l',10),
        'UA'
from
        all_objects
where
        rownum <= 100 -- > hint to avoid wordpress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_iot',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 't_base',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/


The index t_iot_state that Oracle has used in the hash join is defined on the columns (st, inst_id, svname) – so the predicate is doing a comparison with a column that’s not in the index! At least, it’s not visibly declared in the index; but this is a secondary index on an IOT, and IOTs don’t have “normal” rowids: the rowid in a secondary index is the value of the primary key (plus a “block guess”). So the columns in the index (even though not declared in the index) are: (st, inst_id, svname, {nr_time, id, inst_id, blockguess}). So this index does supply the required id column.
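
A quick way to test that claim (a sketch – I haven’t reproduced the output here) is to select just the id column with predicates on the leading index columns and check that the plan visits only the secondary index, with no access to the IOT itself:

explain plan for
select  id
from    t_iot
where   st      = 'IDLE'
and     inst_id = 'UP'
;

select * from table(dbms_xplan.display);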

Side note: you’ll see in the list of columns above that inst_id appears twice. In fact (since Oracle 9, I think) the code to handle secondary indexes has been smart enough to avoid this duplication. If the secondary index contains columns from the primary key then the “rowid” doesn’t store those columns; the code knows how to construct the primary key value by combining the stored PK columns with the relevant columns from the index entry. This can make IOTs a very nice choice of implementation for “intersection” tables that are used to represent many-to-many joins between two other tables, as the sketch below illustrates.
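
An illustration of that point (hypothetical table and index names, a sketch only): in the following intersection table the secondary index entries only need to carry order_id (plus the block guess) as their “rowid”, because product_id – the other primary key column – is already present in the index entry:

create table order_items (
        order_id        number,
        product_id      number,
        quantity        number,
        constraint oi_pk primary key(order_id, product_id)
)
organization index
/

-- the "rowid" stored in each entry is just (order_id, blockguess)
create index oi_prod_ix on order_items(product_id);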

Unfortunately this “rowid” is the explanation for the massive memory demand. Take a look at the “Column Projection Information” for my execution plan:


Column Projection Information (identified by operation id):
-----------------------------------------------------------
   2 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200], ROWNUM[8]
   3 - (#keys=1) "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]
   4 - "OB".ROWID[ROWID,1249], "NR_TIME"[TIMESTAMP,11], "ID"[VARCHAR2,1024], "INST_ID"[VARCHAR2,200], "OB".ROWID[ROWID,1249]
   5 - "APAR"."ID"[VARCHAR2,400], "APAR"."GP_NM"[VARCHAR2,200]

The interesting line is operation 4. A hash join takes the rowsource from its first child (the build table) and creates an in-memory hash table (which may spill to disc, of course), so if I see an unreasonable memory allocation (or unexpected spill to disc) a good starting point is to look at what the first child is supplying. In this case the first child seems to be saying that it’s supplying (or allowing for) nearly 3,700 bytes to be passed up to the hash join.

On closer inspection we can see it’s reporting the “rowid” twice, and also reporting the three columns that make up that rowid. I think it’s reasonable to assume that it’s only supplying the rowid once, and maybe it’s not even supplying the other three columns because they are embedded in the rowid. Doing a quick arithmetic check, let’s multiply the size of the rowid by the value of A-rows: 1,249 * 10,000 = 12,490,000. That’s pretty close to the 14MB reported by the hash join in operation 3.

Hypothesis – to get at the id column, Oracle has used this index (actually a very bad choice of those available) to extract the rowid and then passed the rowid up to the parent in a (length-padded) fixed format. Oracle has then created a hash table by extracting the id column from the rowid and building the hash table on it, but also carrying the length-padded rowid into the hash table. Possible variants on this theme are that some or all of the other columns in the Column Projection Information are also passed upwards so that the id doesn’t have to be extracted but, if they are, they are not padded to their maximum length.

A simple test that this is broadly the right assumption is to re-run the model making the declared length of the rowid much larger to see what happens to the memory allocation. Changing the inst_id declaration from 200 bytes to 1000 bytes (note the stored value is only the 2 bytes needed for the value ‘UP’) the Used-mem jumps to 23 MB (which is an increment very similar to 800 * 10,000).  You’ll note that I chose to experiment with a column that wasn’t the column used in the join. It was a column in the secondary index definition, though, so another test would be to change the nr_time column from a timestamp (11 bytes) to a large varchar2, so I re-ran the test declaring the nr_time as a varchar2(1000) – reverting the inst_id to varchar2(200) – and the Used-mem increased to 25MB.
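
For reference, the only change needed for the first of those two tests is the column declaration in the original DDL – a sketch:

create table t_iot (
        nr_time         timestamp,
        id              varchar2(1024),
        inst_id         varchar2(1000),         -- was varchar2(200); the stored value is still just 'UP'
        o_time          timestamp,
        st              varchar2(200),
        null_col        varchar2(100),
        svname          varchar2(200),
        constraint t_iot_pk primary key(nr_time, id, inst_id)
)
organization index
/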

Preliminary Conclusion

If Oracle uses the contents of the rowid of a secondary index on an IOT in a join then it constructs a fixed format version for the rowid by padding every column in the primary key to its maximum length and concatenating the results. This can have catastrophic side effects on performance if you’ve declared some very long columns “just in case”. Any time you use index organized tables you should remember to check the Column Projection Information in any execution plans that use secondary indexes in case they are passing a large, padded, primary key through the plan to a point where a blocking operation (such as a hash join or merge join) has to accumulate a large number of rows.
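
If you’re using dbms_xplan.display_cursor() note that the projection information doesn’t appear with the bare 'allstats last' format; adding the projection format keyword should report it – a sketch:

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last projection'));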

Footnote

In my test case I had to hint the query heavily to force Oracle into the path I wanted to demonstrate.

It’s surprising that the optimizer should have chosen this path in the OP’s system, given that there’s another secondary index that contains the necessary columns in its definition. (So one thought is that there’s a statistics problem to address, or possibly the “good” index is subject to updates that make it become very inefficient (large) very quickly.)

Another oddity of the OP’s system was that Oracle should have chosen to do a right hash anti-join when it looked as if joining the tables in the opposite order would produce a much smaller memory demand and lower cost – there was an explicit swap_join_inputs() hint in the Outline Information (so copying the outline into the query and changing that to no_swap_join_inputs() might have been another viable workaround). In the end the OP hinted the query to use a nested loop (anti-)join from t_base to t_iot – which is another way to avoid the hash table threat with padded rowids.
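
A sketch of that final workaround (untested, reusing the query block and alias names from the example above): swap the hash-join hints for a nested loop hint and point Oracle at the index that carries the id column in its declared definition:

leading(@sel$a93afaed apar@main ob@subq)
use_nl(@sel$a93afaed ob@subq)
index(@sel$a93afaed ob@subq (t_iot.id t_iot.inst_id t_iot.svname))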


December 10, 2010

Quiz Night

Filed under: Execution plans,Hash Join,Hints,Joins,Oracle — Jonathan Lewis @ 6:19 pm GMT Dec 10,2010

I have four simple (non-partitioned, non-clustered, not views, not object type – really I’m not trying to be cunning or devious here) heap tables, and write a query that joins them:

rem
rem     Script:         c_treblehash_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2010
rem

select
	/*+
		leading(t1 t2 t3 t4)
		use_hash(t2) use_hash(t3) use_hash(t4)
	*/
	count(t1.small_vc),
	count(t2.small_vc),
	count(t3.small_vc),
	count(t4.small_vc)
from
	t1,
	t2,
	t3,
	t4
where
	t2.id2 = t1.id1
and	t3.id3 = t2.id2
and	t4.id4 = t3.id3
;

