Oracle Scratchpad

November 22, 2016

Delete/Insert

Filed under: Oracle,Performance,Tuning — Jonathan Lewis @ 12:59 pm GMT Nov 22,2016

Many of the questions that appear on OTN are deceptively simple until you start thinking carefully about the implications; one such showed up a little while ago:

What i want to do is to delete rows from table where it matches condition upper(CATEGORY_DESCRIPTION) like ‘%BOOK%’.

At the same time i want these rows to be inserted into other table.

The first problem is this: how carefully does the requirement need to be stated before you can decide how to address it? Trying to imagine awkward scenarios, or boundary conditions, can help to clarify the issue.

If you delete before you insert, how do you find the data to insert?

If you insert before you delete, what happens if someone updates a row you've copied so that it no longer matches the condition? Would it matter if the update changes the row in a way that leaves it matching the condition (so that what you've inserted is not totally consistent with what you've deleted)?

If you insert before you delete, and someone executes some DML that makes another row match the requirement, should you delete it (how do you avoid deleting it?) or leave it in place?

Once you start kicking the problem about you’ll probably come to the conclusion that the requirement is for the delete and insert to be self-consistent – in other words what you delete has to be an exact match for what you insert as at the time you inserted it. You’ll ignore rows that come into scope in mid-process due to other activity, and you’ll have to stop people changing rows that are being transferred (in case there’s an audit trail that subsequently says that there was, at some point in time, a row that matched the condition but never arrived – and a row that has arrived that didn’t match the final condition of the rows that disappeared).

Somehow your code needs to lock the set of rows to be transferred and then transfer those rows and eliminate them. There are two “obvious” and simple strategies – readers are invited to propose others (or criticise the two I – or any of the comments – suggest). I’ll start with a simple data setup for testing:


create table t1
as
select  object_id, object_name, owner
from    all_objects
;

alter table t1 add constraint t1_pk primary key(object_id);

create table t2
as
select  * from t1
where   rownum = 0
;

execute dbms_stats.gather_table_stats(user,'t1')
execute dbms_stats.gather_table_stats(user,'t2')

Option 1:

The simplest approach is often the best – until, perhaps, you spot the drawbacks – do a basic delete of the data to be transferred (which handles the locking) but wrap the statement in a PL/SQL block that captures the data (using the returning clause) and then inserts it into the target table as efficiently as possible. With thanks to Andrew Sayer who prompted this blog post:

declare
        type t1_rows is table of t1%rowtype;
        t1_deleted t1_rows;

begin
        delete from t1 where owner = 'SYSTEM'
        returning object_id, object_name, owner bulk collect into t1_deleted;

        forall i in 1..t1_deleted.count
                insert into t2 values t1_deleted(i);

        commit;
end;
/

The drawback to this, of course, is that if the volume to be transferred is large (where “large” is probably a fairly subjective measure) then you might not want to risk the volume of memory (PGA) it takes to gather all the data with the bulk collect.

Option 2:

For large volumes of data we could reduce the threat to the PGA by gathering only the rowids of the rows to be transferred (locking the rows as we do so) then do the insert and delete based on the rowids:

declare
        type rid_type is table of rowid;
        r rid_type;

        cursor c1 is select rowid from t1 where owner = 'SYSTEM' for update;

begin
        open c1;
        fetch c1 bulk collect into r;
        close c1;

        forall i in 1..r.count
                insert into t2 select * from t1 where rowid = r(i);

        forall i in 1..r.count
                delete from t1 where rowid = r(i);

        commit;
end;
/

Note, particularly, the “for update” in the driving select.

Inevitably there is a drawback to this strategy as well (on top of the threat that the memory requirement might still be very large even when the return set is restricted to just rowids). We visit the source data (possibly through a convenient index, [silly error deleted – see comment 8]) to collect rowids; then we visit the data again by rowid (which is usually quite efficient) to copy it; then we visit it again (by rowid) to delete it. That's potentially a significant increase in buffer cache activity (especially latching) over the simple "delete returning" strategy; moreover the first strategy gives Oracle the option to use the index-driven optimisation for maintaining indexes and this method doesn't. You might note, by the way, that you could include an "order by rowid" clause on the select; depending on your data distribution and indexes this might reduce the volume of random I/O you have to do as Oracle re-visits the table for the inserts and deletes.
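For reference, a sketch of the driving cursor with the ordering added (everything else in the block stays the same):

cursor c1 is
        select  rowid
        from    t1
        where   owner = 'SYSTEM'
        order by rowid
        for update
;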

We can address the PGA threat, of course, by fetching the rowids with a limit:


declare
        type rid_type is table of rowid;
        r rid_type;

        cursor c1 is select rowid from t1 where owner = 'SYSTEM' for update;

begin
        open c1;

--      dbms_lock.sleep(60);

        loop
                fetch c1 bulk collect into r limit 5;

                forall i in 1..r.count
                        insert into t2 select * from t1 where rowid = r(i);

                forall i in 1..r.count
                        delete from t1 where rowid = r(i);

                exit when r.count != 5;
        end loop;

        close c1; 

        commit;
end;
/

One thing to be aware of is that even though we fetch the rowids in small batches we lock all the relevant rows when we open the cursor, so we don't run into the problem of inserting thousands of rows into t2 and then finding that the next batch we select from t1 has been changed or deleted by another session. (The commented-out call to dbms_lock.sleep() was something I included as a way of checking that this claim was true.) This doesn't stop us running into a locking (or deadlocking) problem, of course; if it takes us 10 seconds to lock 1M rows in our select for update another user might manage to lock our millionth row before we get there; if, a few seconds later, that session then gets stuck in a TX/6 wait trying to lock one of our locked rows while we are sitting in a TX/6 wait for our millionth row, our session will time out after a further 3 seconds with an ORA-00060 deadlock error.
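As a quick way of checking that claim: uncomment the dbms_lock.sleep(60) call, run the block in one session and, while it sleeps, try something like the following from a second session; it should sit in a TX (row lock contention) wait until the first session commits:

update  t1
set     object_name = object_name
where   owner = 'SYSTEM'
and     rownum = 1
;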

The limit of 5 is just for demonstration purposes, of course – there were 9 rows in all_objects that matched the select predicate; in a production system I’d probably raise the limit as high as 255 (which seems to be the limit of Oracle’s internal array-processing).

You’ll notice, of course, that we can’t use this limited fetch approach with the delete command – the entire delete would take place as we opened the equivalent cursor and, though we can use the bulk collect with the returning clause, there is no syntax that allows something like the fetch with limit to take place.

Discarded option

My first thought was to play around with the AS OF SCN clause. Select the current SCN from v$database and then do things like delete "as of scn", or "select for update as of scn" – there were ways of getting close, but invariably I ended up running into the Oracle error "ORA-08187: snapshot expression not allowed here". But maybe someone else can come up with a way of doing this that doesn't add significant overheads and doesn't allow for inconsistent results.
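For illustration, a sketch of the "select for update as of scn" variant (the bind variable is just a placeholder for an SCN captured from v$database a moment earlier); it's this combination of a flashback query with a "for update" that runs into the ORA-08187:

select  rowid
from    t1 as of scn :captured_scn
where   owner = 'SYSTEM'
for update
;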

28 Comments »

  1. What about a Delete trigger on the source table writing to the other table?

    Comment by Jeff — November 22, 2016 @ 1:11 pm GMT Nov 22,2016 | Reply
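    A minimal sketch of the kind of trigger Jeff describes (the trigger name is made up; as written it copies every deleted row, which is one of the drawbacks discussed in the reply below):

    create or replace trigger t1_del_copy
    after delete on t1
    for each row
    begin
            insert into t2 (object_id, object_name, owner)
            values (:old.object_id, :old.object_name, :old.owner);
    end;
    /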

    • Jeff,

      Excellent.
      Clean, simple, and obvious.

      Thinking about possible drawbacks:

      If the requirement is to move certain batches of data only, the trigger may copy some rows that are deleted for other reasons – a temporary trigger might, or might not, be safe.

      If the data volume is large the “row by row” processing effect might have a significant impact on the time required to delete.

      Comment by Jonathan Lewis — November 22, 2016 @ 1:16 pm GMT Nov 22,2016 | Reply

      • We have control of the session and trigger so we can build the trigger to check client_identifier or other session attributes. Not __MIGRATE__? Return. That will address the concern about other sessions doing the insert.

        Large data volume could present problems but that’s true for most (all?) solutions. Delete/insert 10% of 100 billion rows will require some compromises.

        Comment by Steven Patenaude — November 22, 2016 @ 4:45 pm GMT Nov 22,2016 | Reply
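        A sketch of that hand-shake: the migrating session calls dbms_session.set_identifier('__MIGRATE__') before the delete, and the trigger copies rows only when it sees that identifier (the identifier value and trigger name are illustrative only):

        create or replace trigger t1_del_copy
        after delete on t1
        for each row
        begin
                if sys_context('userenv', 'client_identifier') = '__MIGRATE__' then
                        insert into t2 (object_id, object_name, owner)
                        values (:old.object_id, :old.object_name, :old.owner);
                end if;
        end;
        /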

        • Steven,

          Thanks for the comment. The requirement for some sort of hand-shake between the procedure that does the delete and the trigger that does the insert is a sound strategy (you can see something like that in Oracle’s materialized view refreshes where certain actions can only work because the call to refresh has set the “I am a refresh” flag).

          The comment about the time to delete was simply highlighting a potential problem that would need particular testing against the specific requirements. I don't think there's a thought experiment one could do to compare the differences between (say) the "select for update / limit" approach that gets the benefit of array processing and the trigger approach, but I would be a little more cautious about the relative impact of the trigger approach than the others as volumes got larger.

          Comment by Jonathan Lewis — November 22, 2016 @ 4:58 pm GMT Nov 22,2016

  2. A variant on option 2 might be to bulk collect a DELETE … INTO into a %rowtype array, and then insert from the array (I have to admit I haven’t tried it). That would avoid the FOR UPDATE.
    Also, the problem statement doesn't indicate how many rows you might expect to delete. Should the bulk collects be limited to a maximum number of rows, and then commit between bulk fetches? Should the driving query be limited by a ROWNUM <= condition?

    Comment by davidkurtz — November 22, 2016 @ 1:42 pm GMT Nov 22,2016 | Reply
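    A sketch of the ROWNUM-limited variant David mentions (the batch size of 1,000 is arbitrary, and committing between batches gives up the single-transaction consistency discussed in the post):

    declare
            type t1_rows is table of t1%rowtype;
            t1_deleted t1_rows;
    begin
            loop
                    delete from t1
                    where   owner = 'SYSTEM'
                    and     rownum <= 1000
                    returning object_id, object_name, owner bulk collect into t1_deleted;

                    exit when t1_deleted.count = 0;

                    forall i in 1..t1_deleted.count
                            insert into t2 values t1_deleted(i);

                    commit;
            end loop;
    end;
    /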

  3. Another option would be to add a 'delete' column to T1 which, in 12c, can be invisible:

    ALTER TABLE t1 ADD ( delete_ind VARCHAR2(1) INVISIBLE ) ; 
    CREATE INDEX t1_delete_ix ON t1 ( delete_ind ) ; 
    

    The DELETE / INSERT becomes:

    UPDATE t1 SET delete_ind = 'Y' WHERE owner = 'SYSTEM' ;
    INSERT INTO t2 SELECT * FROM t1 WHERE delete_ind = 'Y' ;
    DELETE t1 WHERE delete_ind = 'Y' ;
    

    It’s fast and there’s row locking too.

    Comment by Kevan Gelling — November 22, 2016 @ 3:19 pm GMT Nov 22,2016 | Reply

    • Kevan,

      Yes, nice one, and potentially very efficient – especially since it gives you the option of having a (relatively small) index on the flag.

      As ever, it’s possible to find the case with the potential for causing a problem. (The implications that one overlooks until it’s too late.)

      What if the table has been carefully designed with the commonly null columns at the ends of the rows? Worse still, what if the table has been defined with more than 255 columns but the columns past 190 (say) are "mostly null"? Adding the column and then populating it will add N-190 bytes to the length of a row, split it into two pieces of (probably) N-255 and 255 columns, and (probably) migrate the second of those two pieces into another block. The overheads could be dramatic.

      In this case the problem will be self-correcting on the delete, of course – but it’s possible that the temporary disruption could be more expensive than you might expect (and have other side effects if you’re unlucky).

      Comment by Jonathan Lewis — November 22, 2016 @ 5:08 pm GMT Nov 22,2016 | Reply

  4. I would like to see this PostgreSQL solution available in Oracle:

    with ret as (delete
                 from t1
                 returning *)
    insert into t2
    select *
    from ret;
    

    As for Oracle and triggers, what about compound triggers?
    We can use a nested procedure and flush the data with a bulk FORALL.

    create or replace trigger t1_ct 
    for delete on t1
    compound trigger
      
      type t_table_t1 is table of t1%rowtype index by binary_integer;
      
      t_t1  t_table_t1;
      
      procedure flush_data as
      begin
        forall i in 1 .. t_t1.count
          insert into t2
          values t_t1(i);
        
        t_t1.delete;
      end flush_data;
      
      after each row is
        v_count pls_integer;
      begin
        v_count := t_t1.count + 1;
        t_t1(v_count).object_id   := :old.object_id;
        t_t1(v_count).object_name := :old.object_name;
        t_t1(v_count).owner       := :old.owner;
        
        if v_count = 1000 then
          flush_data();
        end if;
      end after each row;
      
      after statement is
      begin
        if t_t1.count != 0 then
          flush_data();
        end if;
      end after statement;
      
    end;
    /
    

    PS. Why can't '*' be used in RETURNING clauses and triggers (:new.* and :old.*)? It would simplify a lot of PL/SQL code and make it more resistant to table DDL changes.

    Comment by Paweł — November 22, 2016 @ 6:40 pm GMT Nov 22,2016 | Reply

    • Pawel,

      That Postgres syntax is wonderfully compact – much more convenient than the PL/SQL of option 1 which is probably the closest you could get to it in Oracle.
      Nice example of using compound triggers – I don’t see those very much in the wild, I wonder how much they are used.

      It would be interesting to experiment some time to see how much difference the array processing mechanism of the flush_data() made in a test of around the 100K rows scale when compared to the simple “row by row” delete trigger.

      As for the P.S. I’ll have to give my standard answer: “I don’t know, I didn’t have a hand in writing the requirements, the spec or the code.”

      Comment by Jonathan Lewis — November 22, 2016 @ 6:52 pm GMT Nov 22,2016 | Reply

      • Jonathan,
        Yes I know. PostgreSQL has many nice features in SQL but this is the only one that I know.
        Compound triggers work very fast (in my example the number of additional insert executions is reduced by a factor of 1,000) but there are also issues – there is a bug (at least in 11.2.0.4) involving delete cascade constraints.
        I use them often when I see a reason to. It's simpler than the pre-11g solution of several triggers with package variables to share data, and for me it was easier to manage one trigger than a few separate before/after, each-row/statement triggers.

        PPS. I know that you don't know the answer. It was just a scream of 'WHY' ;) Especially when other RDBMSs have such functionality – but that's probably a topic for another discussion.

        Comment by Paweł — November 22, 2016 @ 7:41 pm GMT Nov 22,2016 | Reply

        • Pawel,

          The comment about not writing the spec etc. wasn’t to be taken seriously (though I didn’t write the spec).

          I had guessed your compound trigger reduced the insert (execution) count by a factor of 1,000 (though I hadn’t tested and traced it) but that’s not necessarily the same as reducing the time of the process by a factor of 1,000 – hence the comment about comparing. It’s always nice to know WHERE the time goes as well as how much time gets used.

          Comment by Jonathan Lewis — November 22, 2016 @ 9:10 pm GMT Nov 22,2016

  5. All you need….

    BEGIN
      LOCK TABLE T1 IN EXCLUSIVE MODE; 
      --
      INSERT INTO T2
      SELECT *
      FROM   T1
      WHERE  OWNER = 'SYSTEM';
      --
      DELETE FROM T1
      WHERE  OWNER = 'SYSTEM';
      --
      COMMIT;
    END;
    /
    

    Comment by Martin Rose — November 22, 2016 @ 7:25 pm GMT Nov 22,2016 | Reply

    • Martin,

      Excellent solution – potentially – though with the drawback that it limits concurrency dramatically.
      But it’s probably the most efficient solution and the original specification said nothing about concurrency, so the option should be raised (and the consequences explained to the OP).

      Update: The lock should probably be taken in a tight PL/SQL loop with a "lock … wait N seconds" option, and a bail-out that abandons the insert and delete if the lock can't be acquired (see the sketch below). There's (almost) always some drawback to any proposed solution that requires a little extra refinement.

      Comment by Jonathan Lewis — November 22, 2016 @ 9:14 pm GMT Nov 22,2016 | Reply
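      A sketch of that refinement (the 5 second wait, and what to do when the lock can't be acquired, are choices to make for the specific system):

      declare
              lock_timeout exception;
              pragma exception_init(lock_timeout, -30006);   -- ORA-30006: resource busy; acquire with WAIT timeout expired
      begin
              lock table t1 in exclusive mode wait 5;

              insert into t2 select * from t1 where owner = 'SYSTEM';
              delete from t1 where owner = 'SYSTEM';

              commit;
      exception
              when lock_timeout then
                      rollback;
                      -- bail out (or schedule a retry) rather than blocking indefinitely
      end;
      /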

  6. Without knowing the details of the nature or volume of data to be deleted, would it be possible to define a partitioning strategy that puts the deleted rows into one partition? If the volume and other considerations allow this, using dbms_redefinition to create the partition and then swapping the partition with another table might be feasible. dbms_redefinition did a pretty good job of managing updates on the fly when I last used it.

    Comment by Joseph Cornell — November 22, 2016 @ 9:30 pm GMT Nov 22,2016 | Reply

    • Joseph,

      That’s another strategy to consider – especially when heading back towards Steven’s “10% of 100 Billion rows”. At that sort of scale drop or truncate or something involving exchange of partitions is more or less a necessity. And there might even have to be some preparatory messing about setting up a partition view first in order to make it possible for the “finish redef” to be able to complete without causing a long wait if we want the whole process to run online.

      Comment by Jonathan Lewis — November 23, 2016 @ 12:05 am GMT Nov 23,2016 | Reply
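      For illustration only: if the table were (or had been redefined to be) partitioned so that the rows to be transferred sat in their own partition, the whole transfer could collapse into a single data dictionary operation, subject to the usual exchange restrictions (the partition name here is hypothetical):

      alter table t1
              exchange partition p_to_move with table t2
              including indexes
              without validation
      ;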

  7. Two options.

    Firstly, riffing off your discarded option but using flashback versions between:

    select current_scn from v$database;
    delete from t1 where object_name like 'V%';
    commit;
    select current_scn from v$database;
    
    insert into t2
    select r.*
    from t1  versions between scn 616645129171 and 616645460414 r
    where VERSIONS_OPERATION = 'D'
    and object_name like 'V%';
    

    Secondly, how about SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    If you do the insert then the delete, the delete should only pick up rows that were included in the insert.
    It may miss some if they were altered between the insert and delete, but if the counts match then it would be fine.

    Worst case is the statement/transaction fails with a cannot serialize and needs to be redone from the start.

    Comment by sydoracle1 — November 22, 2016 @ 9:56 pm GMT Nov 22,2016 | Reply

    • Gary,

      Nice idea. I didn’t think of using flashback versions – I stopped at “as of SCN”.

      I can’t think of any unintended consequences that would break it (apart from snapshot too old – but that’s predictable).
      I think you'd have to check your own transaction id before the commit and include a test of versions_xid to make sure you didn't insert rows that another session had deleted in the interval.

      I ran into problems with serializable transactions – the ORA-08177 “cannot serialize” error kept appearing – possibly due to recursive transactions or side effects of delayed block cleanout. I didn’t check it out in detail after a couple of quick tests.

      Comment by Jonathan Lewis — November 23, 2016 @ 12:13 am GMT Nov 23,2016 | Reply

  8. Hello Jonathan,
    Could you please clarify this statement: "We visit the source data (possibly through a convenient index and avoid visiting the table, of course) to collect rowids". Since it is about "select for update", it has to go to the table block and set the lock flag in it, doesn't it?

    Thank you.

    Comment by Yuriy Pedan — November 23, 2016 @ 7:47 am GMT Nov 23,2016 | Reply

    • Yuriy,

      Thanks for that – you're absolutely right, the comment about not visiting the table was a silly error. I've now crossed it out.

      Comment by Jonathan Lewis — November 23, 2016 @ 12:59 pm GMT Nov 23,2016 | Reply

  9. Another strategy is to use Oracle Streams to capture delete operations on t1 using synchronous capture and propagate them to t2 .

    Comment by Pavel Trishin — November 26, 2016 @ 8:42 am GMT Nov 26,2016 | Reply

    • Pavel,

      Interesting variation. Without trying to go into all the detail I guess you'd be intercepting the redo records for the delete and generating an insert from them.

      Comment by Jonathan Lewis — November 29, 2016 @ 2:43 pm GMT Nov 29,2016 | Reply

  10. Hello Jonathan,

    taking this to extremes – in high user-volume and transactional DBs, yet another strategy is to deploy a producer-consumer methodology utilizing DBMS_PIPE. One session (the producer) executes the select to fetch the rows to be deleted and feeds them into a pipe (a message in the pipe may contain only the rowids, or it may have more info depending on the case at hand). Other sessions (the total number of them may depend on the underlying HW) read the messages from the pipe and do the actual work (delete the row in the original table, update some other table, insert the message content into the log table, etc.). In this way the heavy processing is parallelized amongst various sessions, but the select to identify the rows to be processed is executed from just one session, thus limiting the possibility of ORA-1555 and avoiding various other problems (performance, contention, …).

    Stano.

    Comment by Stano Vnuk — November 28, 2016 @ 8:56 pm GMT Nov 28,2016 | Reply

    • Stano,

      I think the problem with that approach is consistency. The processes at the opposite end of the pipe will be operating at a different SCN from the original process, so it looks as if it would be quite easy to slip some DML in between the driving process selecting a row and a slave process deleting it.

      Comment by Jonathan Lewis — November 29, 2016 @ 2:46 pm GMT Nov 29,2016 | Reply

  11. […] A comment by SydOracle1 picked up on my failure to get Oracle working with the “as of SCN” syntax because it kept reporting ORA-08187 and suggested a straightforward use of the VERSIONS strategy. I thought it was quite surprising that this could work given that “as of SCN” didn’t, so I whipped up a quick test to check it – adding a couple of little refinements to the supplied sample – and it worked. […]

    Pingback by Delete/Insert #2 | Oracle Scratchpad — November 29, 2016 @ 4:33 pm GMT Nov 29,2016 | Reply

  12. Could you please clarify this part?

    The limit of 5 is just for demonstration purposes, of course – there were 9 rows in all_objects that matched the select predicate; in a production system I’d probably raise the limit as high as 255 (which seems to be the limit of Oracle’s internal array-processing).

    I’ve used values much higher than 255 in a LIMIT clause. What internal array-processing limit are you referring to?

    Regards
    Steve

    Comment by Steve — December 5, 2016 @ 9:56 pm GMT Dec 5,2016 | Reply

    • Steve,

      There are indications that when Oracle operates something like an “insert as select” there is a limit of 255 on the number of rows it processes in any batch.

      I don't think there are any great reasons to condemn values larger than 255, though (or even "much larger"), but I tend to find that there's not a great deal of extra benefit to be gained once "end-user" code has got away from single-row processing to something more like tens of rows or the odd couple of hundred rows, so sticking to a limit that Oracle themselves use seems like an (arbitrarily) reasonable strategy.

      I’ll do a little post to show where I picked up the clue about 255 in a couple of days time.

      Comment by Jonathan Lewis — December 7, 2016 @ 6:57 pm GMT Dec 7,2016 | Reply

    • Steve,

      Just discovered that I don’t need to write a new post; I wrote about it a couple of years ago: https://jonathanlewis.wordpress.com/2014/11/07/quiz-night-27/

      Comment by Jonathan Lewis — December 7, 2016 @ 8:47 pm GMT Dec 7,2016 | Reply

  13. Here’s an approach that I managed to cobble together that locks the rows via a pipelined table function, visits the source data twice by using a multitable insert to collect the rowids for the delete and avoids PGA issues by using a global temporary table. I have to confess that I’ve rarely if ever used a pipelined table function or a multitable insert in production so I suspect there are drawbacks I haven’t thought of.

    create type t1_obj as object (
            ridc            char(18),
            object_id       number,
            object_name     varchar2(128),
            owner           varchar2(128)
    );
    /

    create type t1_tab is table of t1_obj;
    /

    create or replace function get_data (p_owner in varchar)
    return t1_tab pipelined as
    begin
            for r in (
                    select  rowidtochar(rowid) rid, object_id, object_name, owner
                    from    t1
                    where   owner = p_owner
                    for update
            ) loop
                    pipe row(t1_obj(r.rid, r.object_id, r.object_name, r.owner));
            end loop;
            return;
    end;
    /

    create global temporary table gtt
    (rid rowid)
    /

    insert all
            into t2 (object_id, object_name, owner)
            values (object_id, object_name, owner)
            into gtt (rid)
            values (chartorowid(ridc))
    select  ridc, object_id, object_name, owner
    from    table(get_data('SYSTEM'))
    /

    delete
    from    t1
    where   rowid in (select rid from gtt)
    /

    Comment by Mike Madland — December 10, 2016 @ 9:57 pm GMT Dec 10,2016 | Reply

