Oracle Scratchpad

October 7, 2010

Distributed Pipelines

Filed under: distributed,Oracle,Performance — Jonathan Lewis @ 6:06 pm BST Oct 7,2010

In an article that I wrote about the driving_site() hint a few months ago I pointed out that the hint was not supposed to work with “create table as select” (CTAS) and “insert as select”. One of the people commenting on the note mentioned pipelined functions as a workaround to this limitation – and I’ve finally got around to writing a note about the method.

The idea is simple. If you can write a distributed select statement that takes advantage of the /*+ driving_site(alias) */ hint to work efficiently, you can wrap the statement in a PL/SQL cursor loop and stick that loop into a pipelined function to maximise the efficiency of create/insert as select. Here’s some sample code (tested on 11.1.0.6) to demonstrate the principle:

rem
rem     Script:         dist_pipe.sql
rem     Author:         Jonathan Lewis 
rem     Purpose:        
rem     Dated:          Sept 2010
rem
rem     Last tested 
rem             11.1.0.6
rem

-- create public database link d11g@loopback using 'd11g';

create table t1 as
select
        rownum                  id,
        lpad(rownum,15,'x')     val
from
        all_objects
where
        rownum <= 1000
;

create table t2 as
select
        rownum                  id,
        lpad(rownum,15,'y')     val
from
        all_objects
where
        rownum <= 1000
;

create table t3(
        t1_val  varchar2(15),
        t2_val  varchar2(15)
)
;

--      Collect stats on the tables at this point.

create or replace type myScalarType as object (
        t1_val  varchar2(15),
        t2_val  varchar2(15)
)
/

create or replace type myArrayType as table of myScalarType
/

create or replace function pipe_fun
return myArrayType
pipelined
as
begin
        for r1 in (
                select
                        /*+ driving_site(t2) */
                        t1.val  t1_val,
                        t2.val  t2_val
                from
                        t1                      t1,
                        t2@d11g@loopback        t2
                where
                        t2.id = t1.id
        ) loop
                pipe row (myScalarType(r1.t1_val, r1.t2_val));
        end loop;
        return;
end;
/

I’ve set up my tnsnames.ora and created a database link that I can use to query the “remote” database. If you’re familiar with database links you’ll probably guess that I’ve actually created a loopback link, so the same database is acting as both the local and remote databases.

After creating and populating a couple of source tables and creating an empty target table for the results of a query, I’ve created the infrastructure to support a pipelined function matching the target table.

I need two things:

  • a scalar object type that looks like a row from the table
  • a table type which is a table of that scalar object type.

Once I’ve created these two types I can create a function that returns the table type, which it does by “piping” items of the scalar type. Note, particularly, that a pipelined function doesn’t “return” anything; instead it pipes rows of the supporting scalar type. (It would be legal, by the way, to use “pipe row” more than once in the definition of the function – which you might do if, for example, you wanted to use a pipelined function to normalise data that you were reading from a flat file declared as an external table.)
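
As a minimal sketch of that normalising idea – the external table ext_orders and its three value columns are hypothetical, and I’ve reused myScalarType for convenience – each input row becomes up to three piped rows:

create or replace function normalise_fun
return myArrayType
pipelined
as
begin
        for r1 in (
                select id, val1, val2, val3
                from   ext_orders               -- hypothetical external table
        ) loop
                --      one flat-file row becomes up to three piped rows
                if r1.val1 is not null then
                        pipe row (myScalarType(to_char(r1.id), r1.val1));
                end if;
                if r1.val2 is not null then
                        pipe row (myScalarType(to_char(r1.id), r1.val2));
                end if;
                if r1.val3 is not null then
                        pipe row (myScalarType(to_char(r1.id), r1.val3));
                end if;
        end loop;
        return;
end;
/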

At this point you might be wondering why I’ve created the pipelined function at all, and why I haven’t simply created a procedure that inserts the data into the target table as it selects it. The reason becomes clear when I demonstrate how I can use the pipelined function:


alter table t3 nologging;

insert /*+ append */ into t3
select
        *
from
        table(pipe_fun)
;

commit;

alter table t3 logging;

The function can behave like a table and I can get array processing speeds (plus the benefits of direct path and nologging actions) by selecting from that “table”. So not only have I bypassed the limitation of the /*+ driving_site(alias) */ hint, I’ve done it in a way that gives me a number of options for making the process as efficient as possible.

I’ve been a little lazy here, of course. My function uses a simple “row by row” cursor loop to supply data for the insert so, arguably, I ought to enhance the function to use the PL/SQL “bulk collect” option, perhaps with a “limit” clause. I haven’t done so because my client at the time was using 11g, and (from 10g onwards, at the default plsql_optimize_level) the PL/SQL compiler will automatically turn my cursor loop code into the “bulk collect” equivalent with a limit of 100 rows. As the first draft of this code reduced the client’s run time from an estimated 8 days to 288 seconds (and that was without using the nologging/append options on the target table) we didn’t feel there was any need to squeeze out the extra bit of performance.
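
For reference, here’s a sketch of what that enhancement might look like – the same function rewritten with an explicit (and arbitrary) limit of 500 rows per fetch:

create or replace function pipe_fun
return myArrayType
pipelined
as
        cursor c1 is
                select
                        /*+ driving_site(t2) */
                        t1.val  t1_val,
                        t2.val  t2_val
                from
                        t1                      t1,
                        t2@d11g@loopback        t2
                where
                        t2.id = t1.id
        ;

        type c1_array is table of c1%rowtype;
        m_rows          c1_array;

begin
        open c1;
        loop
                --      fetch in batches rather than row by row
                fetch c1 bulk collect into m_rows limit 500;
                for i in 1 .. m_rows.count loop
                        pipe row (myScalarType(m_rows(i).t1_val, m_rows(i).t2_val));
                end loop;
                exit when c1%notfound;
        end loop;
        close c1;
        return;
end;
/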

If you’re wondering why the performance benefit could be so large – it’s the sort of thing that happens when you’ve got 60,000 rows in the local table and 250,000,000 in the remote table and need to join and aggregate. The last thing you want to do is a nested loop with 60,000 indexed probes across the network, and the second-to-last thing is to pull the entire 250,000,000 rows across the network to join and aggregate locally. The pipelined function gave us a method of pushing the 60,000 rows to the remote instance to join and aggregate there, then pulling a small (60,000 row) result set back.
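
To make the shape of that production query concrete, here’s a hypothetical sketch (the table names, column names and link name are all invented) of the sort of statement that would sit inside the pipelined function:

select
        /*+ driving_site(big) */
        drv.id,
        sum(big.amount)         total_amount
from
        driver_table            drv,
        big_table@remote_db     big
where
        big.id = drv.id
group by
        drv.id
;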

If you want to see the effects on execution plans – here are the two plans I pulled from memory. The first is the SQL run by the function before I added the /*+ driving_site(t2) */ hint to the query; the second is the SQL that arrived at the remote database when the hint was in place.


SQL_ID  3v0qf8uajpsc8, child number 0
-------------------------------------
SELECT T1.VAL T1_VAL, T2.VAL T2_VAL FROM T1, T2@D11G@LOOPBACK WHERE
T2.ID = T1.ID

Plan hash value: 1770389500

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     6 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |  1000 | 38000 |     6  (17)| 00:00:01 |        |      |
|   2 |   TABLE ACCESS FULL| T1   |  1000 | 19000 |     3   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |  1000 | 19000 |     2   (0)| 00:00:01 | D11G@~ | R->S |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."ID"="T1"."ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "ID","VAL" FROM "T2" "T2" (accessing 'D11G@LOOPBACK' )

SQL_ID  csgj6635jzj0u, child number 0
-------------------------------------
SELECT "A2"."VAL","A1"."VAL" FROM "T1"@! "A2","T2" "A1" WHERE
"A1"."ID"="A2"."ID"

Plan hash value: 1827050725

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     6 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |  1000 | 38000 |     6  (17)| 00:00:01 |        |      |
|   2 |   REMOTE           | T1   |  1000 | 19000 |     2   (0)| 00:00:01 |      ! | R->S |
|   3 |   TABLE ACCESS FULL| T2   |  1000 | 19000 |     3   (0)| 00:00:01 |        |      |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("A1"."ID"="A2"."ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------
   2 - SELECT "ID","VAL" FROM "T1" "A2" (accessing '!' )

You’ll notice that the distributed SQL is in capitals, with lots of double-quote marks and table aliases like A1 and A2 – if you ever see SQL matching that pattern there’s a reasonable chance that you’re looking at incoming distributed SQL or (on older versions of Oracle) the decomposed text of parallel execution.

Footnote: I was quite amused by the fact that Peter Scott was working in another department at the same client at the time, and the team that needed to solve this particular performance problem had sent members to both of us for help – and got the same suggested solution from both of us.

 

14 Comments

  1. It is not easy to figure out what is behind this “t2@d11g@loopback t2”.
    Can you print the relevant part of the tnsnames.ora for ‘d11g’ and ‘loopback’?

    Comment by bernard polarski — October 8, 2010 @ 2:01 pm BST Oct 8,2010 | Reply

    • Bernard,

      The key expression is: “connection qualifier”.

      As a matter of routine I have the parameter “global_names” set to true, which has the description: “enforce that database links have same name as remote database”

      But to allow more than one database link to point to the same database, the links can be given a qualifier – which is the extra “@loopback” part of the link name.

      tnsnames.ora entry:

      D11G = 
      	(DESCRIPTION = 
      		(ADDRESS = 
      			(PROTOCOL = TCP)
      			(HOST = HP-LaptopV1)
      			(PORT = 1521)
      		)
      		(CONNECT_DATA =
      			(SERVER = DEDICATED)
      			(SERVICE_NAME = d11g)
      		)
      	)
      
      

      Code to define the database link:

      create public database link d11g@loopback using 'd11g';
      

      Comment by Jonathan Lewis — October 9, 2010 @ 8:35 am BST Oct 9,2010 | Reply

  2. If we use a direct path insert with the append hint, this results in:
    ORA-12840: cannot access a remote table after parallel/insert direct load txn.

    Can we avoid this?

    Comment by msosar — October 8, 2010 @ 10:06 pm BST Oct 8,2010 | Reply

  3. I’m probably missing something, but wouldn’t it have been just as easy to copy the small table to the remote node and run the query there?

    Comment by John Seaman — October 13, 2010 @ 3:37 am BST Oct 13,2010 | Reply

    • John,

      That would work – but sometimes you’re not allowed to push data into “someone else’s” database, even if you all work for the same company.

      A similar, but less “invasive” strategy would have been to create a distributed join view at the remote database and insert the contents of the view.

      Sometimes the choice isn’t about technology, it’s about perceived risks (or politics, or standards).

      Comment by Jonathan Lewis — October 13, 2010 @ 7:47 am BST Oct 13,2010 | Reply

      • I fully appreciate the role office politics can play! I just wanted to make sure I hadn’t missed something from the technology point of view. I hadn’t thought of a distributed view on the remote database either so I did miss something :-)

        Comment by John Seaman — October 14, 2010 @ 3:50 am BST Oct 14,2010 | Reply
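
    A sketch of the “distributed join view” alternative Jonathan mentions, assuming a hypothetical reverse link (back_link) from the remote database to the originating one:

    -- At the remote database: a view joining its local t2 to t1,
    -- reached across the (hypothetical) reverse link.
    create or replace view v_dist_join as
    select
            t1.val  t1_val,
            t2.val  t2_val
    from
            t1@back_link    t1,
            t2              t2
    where
            t2.id = t1.id
    ;

    -- At the originating database: the whole join now runs remotely.
    insert into t3
    select * from v_dist_join@d11g@loopback;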

  4. […] it would be pretty easy to write some sort of solution using pl/sql and pipelined functions – perhaps a function that takes a table_name loops through each partition of the table in turn […]

    Pingback by Partitioned Bitmaps « Oracle Scratchpad — July 1, 2011 @ 5:21 pm BST Jul 1,2011 | Reply

  5. It is possible to avoid ORA-12840: cannot access a remote table after parallel/insert direct load txn.

    But we have to include a pragma for autonomous_transaction and commit inside the pipelined function.

    Comment by sky_lt — April 17, 2012 @ 1:58 pm BST Apr 17,2012 | Reply
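
    A minimal, untested sketch of that workaround, reusing the query and types from the article (the name pipe_fun_auto is just for illustration): the pragma gives the remote query its own transaction, so it is no longer in the direct-load transaction, and the commit ends that transaction before the function returns:

    create or replace function pipe_fun_auto
    return myArrayType
    pipelined
    as
            pragma autonomous_transaction;
    begin
            for r1 in (
                    select
                            /*+ driving_site(t2) */
                            t1.val  t1_val,
                            t2.val  t2_val
                    from
                            t1                      t1,
                            t2@d11g@loopback        t2
                    where
                            t2.id = t1.id
            ) loop
                    pipe row (myScalarType(r1.t1_val, r1.t2_val));
            end loop;
            commit;         -- end the autonomous transaction before returning
            return;
    end;
    /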

  6. […] site hints are ignored on insert statements. We end up a horrible execution plan as mentioned here by Jonathan Lewis. Yes there are solutions, such as the pipelined table approach that I developed […]

    Pingback by A thought on ODI variables | DW_PETE Talks Data — February 18, 2016 @ 10:38 am GMT Feb 18,2016 | Reply

  7. This “quick select, slow insert” issue has been annoying us for a long time. We are dealing with data quality issues, and in most cases we don’t have lots of bad data to be inserted into the local table from the remote database. Thanks Jonathan for this good solution!

    Comment by Anonymous — May 15, 2019 @ 11:51 pm BST May 15,2019 | Reply

  8. […] complexity of the subquery structure. The answer is yes, and for a long time we could have used a “pipelined function” to do this – though we have to create a simple object table and an object table type to do […]

    Pingback by SQL Macro | Oracle Scratchpad — July 22, 2021 @ 6:21 pm BST Jul 22,2021 | Reply

