Oracle Scratchpad

August 9, 2010

Joins – NLJ

Filed under: Execution plans,Infrastructure — Jonathan Lewis @ 11:57 am BST Aug 9,2010

This is the first of three articles on my thesis that “all joins are nested loop joins – it’s just the startup overheads that vary”; there will be further notes with the titles “Joins – HJ” and “Joins – MJ” to follow. (For a quick reference list of URLs to all three articles in turn, see: Joins.)

In some ways, the claim is trivially obvious – a join simply takes two row sources and compares rows from one row source with rows from the other, reporting cases where some test of column values is true. Until the age of quantum computing, when everything will happen everywhere at once, we are stuck with Turing machines, and hidden somewhere in the process of making the comparisons we are bound to see a sequence of steps similar to:

for each interesting row in rowsource X loop
    for each related row in rowsource Y loop
        report required columns from X and required columns from Y
    end loop
end loop
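As a concrete (if deliberately naive) illustration of those steps, here is a sketch of the loop in Python; the rowsources, column names and join predicate are all invented for the example:

```python
# A minimal sketch of the nested-loop idea above: for each row of the
# outer rowsource, scan the inner rowsource for matching rows.
# Rowsources are lists of dicts; all names here are made up.

def nested_loop_join(outer, inner, match):
    """Yield a merged row for every (x, y) pair where match(x, y) is true."""
    for x in outer:                  # outer loop: each interesting row in X
        for y in inner:              # inner loop: each row in Y
            if match(x, y):          # the join predicate
                yield {**x, **y}     # report columns from X and columns from Y

tabx = [{"x_id": 1, "x_val": "a"}, {"x_id": 2, "x_val": "b"}]
taby = [{"y_id": 1, "y_val": "p"}, {"y_id": 1, "y_val": "q"}, {"y_id": 3, "y_val": "r"}]

rows = list(nested_loop_join(tabx, taby, lambda x, y: x["x_id"] == y["y_id"]))
```

In this raw form every pair of rows is compared, so the work done is proportional to the product of the two rowsource sizes unless we can make the inner loop cheaper.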

This, of course, is the basic theme of the nested loop join – we have two loop constructs, one inside the other (hence nested), and we can understand intuitively what we mean by the outer loop and the inner loop and therefore extend the concept to the “outer table” and “inner table” of the traditional nested loop join.

Looking at this from an Oracle perspective we typically think of a nested loop join as a mechanism for examining a small volume of data using high-precision access methods, so the loop logic above might turn into an execution plan such as:

| Id  | Operation                    | Name  | Rows  |
|   0 | SELECT STATEMENT             |       |     6 |
|   1 |  NESTED LOOPS                |       |     6 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TABX  |     3 |
|*  3 |    INDEX RANGE SCAN          | TX_I1 |     3 |
|   4 |   TABLE ACCESS BY INDEX ROWID| TABY  |     2 |
|*  5 |    INDEX RANGE SCAN          | TY_I1 |     2 |

In this case we use an accurate index to pick up just a few rows from table TABX, and for each row use an accurate index to pick up the matching rows from table TABY. When thinking about the suitability of this (or any) join method we need to look at the startup costs and the potential for wasted efforts.
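The effect of that accurate index on the inner table can be sketched by replacing the full inner scan with a probe into a lookup structure built over the join column – a hypothetical Python illustration of the idea, not a description of Oracle's internals:

```python
# Hedged sketch: an "indexed" inner loop. Instead of scanning every row of
# the inner rowsource for each outer row, we probe a lookup structure keyed
# on the join column - the moral equivalent of the INDEX RANGE SCAN on TY_I1.
# Table contents and column names are invented for illustration.

def indexed_nested_loop_join(outer, inner, outer_key, inner_key):
    index = {}                                  # lookup structure over the join column
    for y in inner:
        index.setdefault(y[inner_key], []).append(y)
    for x in outer:                             # outer loop, as before
        for y in index.get(x[outer_key], []):   # probe instead of a full scan
            yield {**x, **y}

tabx = [{"x_id": 1}, {"x_id": 2}]
taby = [{"y_id": 1, "v": "p"}, {"y_id": 1, "v": "q"}, {"y_id": 3, "v": "r"}]

rows = list(indexed_nested_loop_join(tabx, taby, "x_id", "y_id"))
```

Note that in Oracle the index already exists before the query starts, so (unlike this sketch) building the lookup structure is not part of the work the query has to do at run time.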

By “startup costs” I mean the work we have to do before we can produce the first item in the result rowsource – and in this case the startup costs are effectively non-existent: there is no preparatory work we do before we start generating results. We fetch a row from TABX and we are immediately ready to fetch a row from TABY, combine, and deliver.

What about wasted efforts? This example has been engineered to be very efficient, but in a more general case we might have multi-column indexes and predicates involving several (but not all) columns in those indexes; we might have predicates involving columns in the tables that are not in the indexes; and, of course, we might have other users accessing the database at the same time. So we should consider the possibility that we visit some blocks that don’t hold data we’re interested in, visit some blocks many times rather than just once, and have to compete with other processes to acquire latches and to pin and unpin (some of) the blocks we examine. Given sufficiently poor precision in our indexing we may also have to think about the number of blocks we will have to read from disk, and how many times we might have to re-read them if we don’t have a large enough cache to keep them in memory between visits. It is considerations like these that can make us look for alternative strategies for acquiring the data we need: can we find a way to invest resources to “prime” the nested loop join before we actually run the loops?

I’ll answer that question in the next two notes – but before then I’d like to leave you with a concrete example of a nested loop join. This example was run with an 8KB blocksize in a tablespace using freelist management and 1MB uniform extents.

rem     Script:         nl_hash_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2010

rem     Note: lines lost in transcription have been reconstructed
rem     to make the script complete.

create cluster hash_cluster(
        hash_col number(6)      -- the hash key is numeric
)
single table                    -- promise to hold just one table
hashkeys 1000                   -- we want 1000 different hash values
size 150                        -- each key, with its data, needs 150 bytes
hash is hash_col                -- the table column will supply the hash value
;

create table hashed_table(
        id              number(6)       not null,
        small_vc        varchar2(10),
        padding         varchar2(100) default(rpad('X',100,'X'))
)
cluster hash_cluster(id)
;

alter table hashed_table
add constraint ht_pk primary key(id)
;

begin
        for r1 in 1..1000 loop
                insert into hashed_table values(
                        r1, lpad(r1,10), rpad('x',100)  -- values reconstructed to fit the table definition
                );
        end loop;
        commit;
end;
/

create table t1
as
select
        rownum                          id,
        dbms_random.value(1,1000)       n1,
        rpad('x',50)                    padding
from
        all_objects                     -- rowsource reconstructed
where
        rownum <= 10000 -- > comment to avoid wordpress format issue
;

alter table t1 add constraint t1_pk primary key(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1',
                cascade          => true
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'HASHED_TABLE',
                method_opt       => 'for all columns size 1',
                cascade          => true
        );
end;
/

set autotrace traceonly explain

select
        t1.*, ht.*              -- select list reconstructed
from
        t1,
        hashed_table    ht
where
        t1.id between 1 and 2
and     ht.id = t1.n1
and     ht.small_vc = 'a'
;

set autotrace off

I’ve created one of my two tables (hashed_table) in a single-table hash cluster, given it a primary key which is also the hash key, and ensured that I get no hash collisions between rows in the table (i.e. no two rows in the table hash to the same hash value). With my particular setup the optimizer has decided to access this table by hash key rather than by primary key index. Here’s the execution path.

| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |              |     1 |   191 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |              |     1 |   191 |     3   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |     2 |   152 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_PK        |     2 |       |     2   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS HASH          | HASHED_TABLE |     1 |   115 |            |          |

Predicate Information (identified by operation id):
   3 - access("T1"."ID">=1 AND "T1"."ID"<=2)
   4 - access("HT"."ID"="T1"."N1")
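The TABLE ACCESS HASH at operation 4 is possible because the incoming join value itself identifies the block to visit: Oracle applies its hash function to the key and goes straight to the corresponding bucket, with no index to traverse first. A hedged Python sketch of the idea – the bucket count echoes "hashkeys 1000", but the hash function and everything else here is invented for illustration:

```python
# Hedged sketch of hash-key access: the key value is hashed to pick a
# bucket directly, so no index structure has to be walked before the
# table block is visited. Collisions are resolved by checking the key.

HASHKEYS = 1000                       # echoes "hashkeys 1000" in the script

def bucket_for(key):
    return hash(key) % HASHKEYS       # stand-in for Oracle's internal hash

class HashCluster:
    def __init__(self):
        self.buckets = [[] for _ in range(HASHKEYS)]

    def insert(self, key, row):
        self.buckets[bucket_for(key)].append((key, row))

    def lookup(self, key):
        # one computed step to the right bucket, then filter out collisions
        return [row for k, row in self.buckets[bucket_for(key)] if k == key]

hc = HashCluster()
for r1 in range(1, 1001):             # 1,000 rows, as in the demo script
    hc.insert(r1, {"id": r1, "small_vc": str(r1).rjust(10)})

match = hc.lookup(42)                 # direct bucket access, no index probe
```

Because the script was built with no hash collisions, each lookup lands on exactly the data it wants – which is why the plan shows no separate index operation for HASHED_TABLE.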

This is an important lead-in to hash joins and a different way of thinking about WHY we might want to use a hash join rather than a nested loop join.


  1. Hi Jonathan,

    I have a question regarding NULL in logical operations; although it’s off topic, it’s driving me crazy, so I posted it here.

    My problem is why the two queries below return different results:

    SELECT * FROM test WHERE sal > 100 AND sal > NULL;
    SELECT * FROM test WHERE sal > NULL AND sal > 100;

    Supporting Code:
    CREATE TABLE test (sal NUMBER);
    INSERT INTO test VALUES( 100);
    INSERT INTO test VALUES( 200);
    INSERT INTO test VALUES( 300);

    Comment by Manish — August 10, 2010 @ 6:40 am BST Aug 10,2010 | Reply

  2. Jonathan,

    As you have written specifically about Nested Loop Joins, would it be possible for you to mention how oracle processes NL join, especially in following releases?
    a) 8i (plan as mentioned above)
    b) 9i and 10g (introduced table prefetch)
    c) 11g (don’t know what it is called but plan changes a bit)
    I have not managed to find these details in a single document/post anywhere else.

    Comment by Narendra — August 10, 2010 @ 8:25 am BST Aug 10,2010 | Reply

  3. Narendra,

    I may find some time to comment on this one day. In the meantime, I thought Christian Antognini and Tanel Poder had made various comments on the changes. Search for “nlj_batching” on their web sites, or on the “Oak Table Safe Search”.

    Comment by Jonathan Lewis — August 10, 2010 @ 9:19 am BST Aug 10,2010 | Reply

  4. […] Joins – HJ Filed under: CBO,Execution plans,Performance — Jonathan Lewis @ 6:43 pm UTC Aug 10,2010 In the second note on my thesis that “all joins are nested loop joins with different startup costs” I want to look at hash joins, and I’ll start by going back to the execution plan I posted on “Joins – NLJ”. […]

    Pingback by Joins – HJ « Oracle Scratchpad — August 10, 2010 @ 6:46 pm BST Aug 10,2010 | Reply

  5. […] Lewis started a series about joins. Jonathan is the master of building clear and excellent test cases, and this post is a good example […]

    Pingback by Log Buffer #199, A Carnival of the Vanities for DBAs | The Pythian Blog — August 14, 2010 @ 8:53 pm BST Aug 14,2010 | Reply

  6. […] this post i am writing about Nested loop joins based on the blog article from Jonathan Lewis – NLJ Typical psedocode of a nested loop is similar to […]

    Pingback by Nested Loop Joins | jagdeepsangwan — June 4, 2014 @ 10:33 am BST Jun 4,2014 | Reply

  7. […] There are only three join mechanisms used by Oracle: merge join, hash join and nested loop join. […]

    Pingback by Joins | Oracle Scratchpad — June 5, 2014 @ 8:02 am BST Jun 5,2014 | Reply


Comments and related questions are welcome.
