Oracle Scratchpad

November 22, 2006

Tuning Updates

Filed under: Indexing,Infrastructure,Oracle,redo,Tuning,undo — Jonathan Lewis @ 8:45 pm BST Nov 22,2006

Consider the following update statement:

        update  /*+ index(t1 t1_n1) */ t1
        set     n1 = 0,
                n2 = n2 + 1,
                status = 'X'
        where   n1 > 100
        and     status = 'A'
        ;
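
The tablescan-driven alternative differs only in the hint – a sketch using the same hypothetical table t1 and its index t1_n1:

        update  /*+ full(t1) */ t1
        set     n1 = 0,
                n2 = n2 + 1,
                status = 'X'
        where   n1 > 100
        and     status = 'A'
        ;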

Did you realise that the access method that Oracle uses for an update can make a difference to the type of work done by your system? And if the type of work is different, this could make a big difference to the amount of work done by your system?

In recent tests on 8i, 9i, and 10g, the work done by an update depends on whether the update operates through a tablescan or an index. [Edit: the same is also true of a delete – see note 15 below]

If Oracle uses a (B-tree) index in ascending order (index_desc() doesn’t co-operate) to find the data to be updated, it postpones any (B-tree) index updates needed until the end of the update, then sorts the index keys (with their rowids) for the before and after values before applying bulk updates to the indexes. This bulk update strategy may reduce undo and redo generation – along with logical I/O and latch activity – but will expend extra resources on sorting.  For example, if your update affects three indexes, and you update 100 rows, you will see sorts = 3, rows sorted = 600 (3 * 2 * 100) in your session statistics (v$mystat).
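
One convenient way to watch those figures is to query v$mystat joined to v$statname before and after the update – a sketch, using the statistic names as they appear in recent versions:

        select  sn.name, ms.value
        from    v$mystat ms, v$statname sn
        where   ms.statistic# = sn.statistic#
        and     sn.name in ('sorts (memory)', 'sorts (disk)', 'sorts (rows)')
        ;

Take a snapshot before and after the update; the difference between the two snapshots gives the work done by the statement.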

If Oracle uses a tablescan to acquire the data to be updated, then it updates each index entry as it goes. This eliminates the need for sorting but doesn’t get any benefits of the undo and redo optimisations that pack multiple changes into a single record. I’ve also noticed from 10g onwards (and possibly it’s a bug since it doesn’t seem to happen in 8i or 9i) that a large tablescan update can generate a surprising number of redundant block cleanout records for the table blocks.

In general it seems that an indexed mechanism is likely to be more efficient than the tablescan – even when a tablescan would be more efficient for the equivalent query.

As always, though, you ought to test the impact a change in access method might have on your more interesting updates. Depending on your bottleneck, either option might be the more appropriate for your particular circumstances, notwithstanding any abstract argument about which is theoretically “more efficient”.

Footnote: I’ve included specific references to B-tree indexes in the above. I haven’t yet checked what might happen with bitmap indexes – but normally you wouldn’t expect to be doing this type of update with bitmap indexes enabled anyway.

Footnote 2: See comments 12 – 15; the same index-driven optimisation applies to delete statements.

Footnote 3: If you have any B-tree indexes supporting foreign key constraints the delayed maintenance optimisation will not apply to them (though it can still be used on any other B-tree indexes on the table).


  1. I am curious – what triggered your research on this?

    Comment by Christo Kutrovsky — November 22, 2006 @ 9:30 pm BST Nov 22,2006 | Reply

  2. Christo, the usual – a client complaining about a performance problem.

    Comment by Jonathan Lewis — November 22, 2006 @ 9:41 pm BST Nov 22,2006 | Reply

  3. it postpones any (B-tree) index updates needed until the end of the update

    Clarification: are all index updates postponed, or is the index used to find the data (t1_n1 in your hinted example) updated immediately, then the table rows, and then the remaining indexes in bulk?

    Could you also kindly sketch briefly how you investigate the internal workings in these scenarios – do you use logminer, or some events?

    Thanks in advance, of course – as always, even when I don’t follow up with “Thanks” (I skip that only to avoid cluttering your blog with thank-you messages).

    Comment by Alberto Dell'Era — November 22, 2006 @ 9:41 pm BST Nov 22,2006 | Reply

  4. Alberto, all indexes. In this case I started by creating a couple of test cases like the above, with index() hints, full() hints, and varying numbers of rows, checking session stats for the different cases.

    The stats suggested that it was every index, and that the number of “rows” sorted was twice the number of rows updated. I think I then dumped a couple of undo blocks to see what was in them; and looked at the 10032 trace to check whether the sort stats reported by the session stats matched the sorting reported by the 10032.

    What I did not get around to in my test cases, was to force the sorting to spill to the temp tablespace so I have assumed – but not yet proved – that the items sorted are of the form:
    (delete/insert flag, old/new key value, rowid)
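
    For reference, the 10032 sort trace mentioned above can be switched on and off at session level along these lines:

    alter session set events '10032 trace name context forever, level 10';
    -- run the hinted update here, then:
    alter session set events '10032 trace name context off';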

    Comment by Jonathan Lewis — November 22, 2006 @ 9:51 pm BST Nov 22,2006 | Reply

  5. Curious, by the way, that the index updates are optimized for an index-driven access, since if the CBO chooses an index it means that only “a few” rows are estimated to be inspected, and so “less than a few” updated, whereas for a FTS the converse is true. Why optimize for the least resource-intensive operation?

    Unless this is done, mainly, to avoid mixing read (consistent gets) and write (current gets) operations on t1_n1, which may be cumbersome if block splits are needed, and could mean (just thinking out loud) a lot of visits to the rollback segment to (re)build the CR version of the index blocks. Just an idea.

    Comment by Alberto Dell'Era — November 22, 2006 @ 10:51 pm BST Nov 22,2006 | Reply

  6. Jonathan – great!

    About a month ago we came across a similar issue. Including a “dummy” update of an indexed column “automagically” resulted in fewer latches. Unfortunately we didn’t have much time for tracking down this issue. You did that for us – thank you.

    Comment by Alexander Fatkulin — November 23, 2006 @ 12:07 am BST Nov 23,2006 | Reply

  7. Jonathan,

    With reference to your observation that sorts=3 when the optimizer chooses the index update access path – I also noticed that sorts=2 for the full tablescan path when I tried to repeat your experiment. At first I thought this might be due to resorting of the index (which needs to happen irrespective of the access path used) but when I removed the index altogether and just ran a table update using your query I still got sorts=2 by diffing the v$mystat before and after. So my question is: would you be able to enlighten me on what the 2 sorts are doing and why they appear for a full tablescan when no index is present? Thanks and Regards.

    Comment by Jeroen — November 23, 2006 @ 3:17 am BST Nov 23,2006 | Reply

  8. Very nice!

    I’m wondering if this may apply to merge.
    This is the situation:
    A table with no indexes is joined with an external table; in the explain plan for the merge there are 2 sort operations (they have the highest numbers in the plan).

    One for the insert and one for the update? Is it possible?

    Comment by Antonio — November 23, 2006 @ 9:22 am BST Nov 23,2006 | Reply

  9. Antonio – from your description of seeing two sorts in the execution plan for a merge command, I suspect you are seeing a merge join (outer). The merge command always requires an outer join between the new data and the old data – and that join can be a hash, merge, or nested loop join.
    Jeroen, I can’t tell you why you are seeing those two sorts; but I would guess it is either some (version-dependent) detail of recursive SQL or something related to the SQL you are using to “diff” the results from v$mystat. Check the number of rows sorted – does it seem to vary with the number of rows updated, or is it basically constant?

    Comment by Jonathan Lewis — November 23, 2006 @ 7:01 pm BST Nov 23,2006 | Reply

  10. Hmm that’s very interesting. What about something like that:

    UPDATE table SET col1=val1
    WHERE colX in (SELECT rowid FROM tmptable)

    I think I will try that out. No FTS, no Index access, just access by rowid.
    I’ll post my results here.

    Comment by Wolfgang — November 24, 2006 @ 10:26 pm BST Nov 24,2006 | Reply

  11. […] to find the data, the bitmap example uses the indexed access path – and I’ve written a little note about the different update costs you get using different access mechanisms. This explains the […]

    Pingback by Mything in action « Oracle Scratchpad — December 19, 2006 @ 11:05 pm BST Dec 19,2006 | Reply

  12. Hello Mr.Lewis

    Is this true when we do a delete? I have a batch program which deletes an average of two million rows every day, but it does so through an index scan, so I am not worried. But I am curious.


    Comment by CT — January 24, 2007 @ 2:27 pm BST Jan 24,2007 | Reply

  13. CT – if you are curious, build a test case and try it. Write a script to create a table with a lot of data and delete with a full() hint, then repeat with an index() hint. Check the execution plan to make sure you’ve got the hint working. Disconnect and reconnect before the delete, and check v$mystat after the delete.
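
    A minimal sketch of such a test (all object names are illustrative; scale the row count to suit your system):

    create table t1 as
    select  rownum id, mod(rownum,100) n1, rpad('x',100) padding
    from    all_objects
    where   rownum <= 10000;

    create index t1_n1 on t1(n1);

    delete /*+ full(t1) */ from t1 where n1 = 0;
    -- rebuild the data, reconnect, then repeat with:
    -- delete /*+ index(t1 t1_n1) */ from t1 where n1 = 0;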

    Comment by Jonathan Lewis — January 25, 2007 @ 12:13 am BST Jan 25,2007 | Reply

  14. A recent question on OTN prompted me to check what would happen if the indexes that needed to be updated were bitmap indexes (and the driving index was still a b-tree).

    The answer was that in both cases delayed index maintenance took place. The index entries were sorted and updated after the table update was complete – whether the driving access path was a tablescan or index.

    Comment by Jonathan Lewis — October 17, 2008 @ 9:42 pm BST Oct 17,2008 | Reply

  15. CT,
    You never told us if you had followed my suggestions for a test – but the answer to your original question is yes.

    I did a quick test just before supplying the suggestions in a thread on the Oracle Forum.

    The results seemed to be quite beneficial.

    Update: That thread has since been removed from OTN thanks to the comments made by one of the posters – unfortunately I don’t have a backup, but the point was that the strategy works for deletes, not just updates.

    Comment by Jonathan Lewis — January 3, 2009 @ 9:18 am BST Jan 3,2009 | Reply

  16. Can you suggest something for an update statement without a where clause? Since I need to update all rows in the table, how can I get better performance?

    UPDATE test_table
    SET col1 = newvalue1,
    col2 = newvalue2;

    Any help is appreciated.


    Comment by Chris C. — March 23, 2009 @ 3:22 pm BST Mar 23,2009 | Reply

  17. Chris,

    I think there are several questions we could ask before coming up with the best solution. For example – since you’re updating every row would you be allowed to do something like:

    create table new_table nologging as
    select newvalue1, newvalue2, {rest of columns} from old_table;
    create new_indexes
    rename old_table ...
    rename new_table ...
    rename new_indexes
    drop old_table

    If you can’t play games with dropping and recreating, have you tried something like:

    update /*+ index(pk index) */ table set ... where pk is not null;

    Could you do an update in parallel ?
    Can you disable and recreate (nologging) any indexes that would otherwise be affected by the update ?
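
    The last idea might look something like the following sketch (index name hypothetical – note that the index is unusable until the rebuild completes, and this approach doesn’t apply to unique indexes or indexes enforcing constraints):

    alter index test_table_i1 unusable;
    alter session set skip_unusable_indexes = true;

    update test_table set col1 = newvalue1, col2 = newvalue2;

    alter index test_table_i1 rebuild nologging;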

    Comment by Jonathan Lewis — March 23, 2009 @ 5:18 pm BST Mar 23,2009 | Reply

  18. Hi Jonathan,

    I have tested an update on similar test tables in 10g and 11g. Buffer gets were alarmingly high on 11g compared to 10g.

    Here is my test.

    Oracle 10g. (8K Block size, buffer cached flushed before test executed, no other operation running while this testing).

    SQL> update pateld.table_on_network_storage set segmentation='TEST';
    1119998 rows updated.
    Elapsed: 00:01:58.25
    system@WFLDEV> SELECT
      2  executions,
      3 disk_reads*8/1024/executions                avg_disk_read_MB,
      4  buffer_gets*8/1024/executions                   avg_buffer_MB,
      5  direct_writes*8/1024                                      avg_write_MB,
      6  (disk_reads*8/1024 +direct_writes*8/1024)/ (elapsed_time/1000000) /executions AVG_IO_MB_SECOND,
      7  (elapsed_time/1000000)/executions                         avg_elapsed,
      8  cpu_time/1000000/executions                                      avg_cpu
      9  FROM
     10   v$sql a
     11  WHERE
     12              sql_id= '01s561a8yytpu';
    EXECUTIONS AVG_DISK_READ_MB AVG_BUFFER_MB AVG_WRITE_MB AVG_IO_MB_SECOND AVG_ELAPSED    AVG_CPU
    ---------- ---------------- ------------- ------------ ---------------- ----------- ----------
             1       1003.35156    10021.0234            0       8.69134954  115.442551       77.5

    Here I can see 10021 MB Buffer gets.

    Oracle 11g: (Block size 16K, Buffer cache flushed before running this query, no other job/sql running while this test)

    SQL> update pateld.table_on_network_storage set segmentation='TEST';
    1119998 rows updated.
    Elapsed: 00:05:24.88
    system@DWODSU01> SELECT
      2  executions,
      3  disk_reads*16/1024/executions                avg_disk_read_MB,
      4  buffer_gets*16/1024/executions               avg_buffer_MB,
      5  direct_writes*16/1024                             avg_write_MB,
      6  (disk_reads*16/1024 +direct_writes*16/1024)/ (elapsed_time/1000000) /executions AVG_IO_MB_SECOND,
      7  (elapsed_time/1000000)/executions                 avg_elapsed,
      8  cpu_time/1000000/executions                        avg_cpu
      9  FROM
     10   v$sql a
     11  WHERE
     12              sql_id= 'b8qv4rxv2c30d';
    EXECUTIONS AVG_DISK_READ_MB AVG_BUFFER_MB AVG_WRITE_MB AVG_IO_MB_SECOND AVG_ELAPSED    AVG_CPU
    ---------- ---------------- ------------- ------------ ---------------- ----------- ----------
             1          1997.25    39049.8125            0       6.29627302  317.211467     105.21

    Here I can see very high buffer gets – 39,049 MB. Execution time is also 5 times higher than on the 10g database.

    Both 10g and 11g servers have similar CPU and memory (also SGA) configuration.


    Dilip Patel.

    Comment by Dilip Patel — April 7, 2010 @ 12:32 pm BST Apr 7,2010 | Reply

    • Dilip,
      There’s not enough information there to comment – it may simply be that the execution plans have changed. But I note you’ve changed from 8KB blocks to 16KB blocks and you’re doing an update. If that update is causing a lot of row migration and you’re using ASSM, you might be seeing an example of this bug.

      In passing – if you’re trying to track down a problem, then it’s best to start tailoring your queries to give yourself the information you need to see. For example, it makes things harder to grasp if you report “AVG_BUFFER_MB” when (a) you’re looking at one query, and (b) you’re interested in the number of buffer visits.

      Comment by Jonathan Lewis — April 12, 2010 @ 1:53 pm BST Apr 12,2010 | Reply

  19. […] pointed out before now that choosing a different execution plan for an update can have a big impact on performance – this is just another example demonstrating the […]

    Pingback by More CR « Oracle Scratchpad — April 19, 2011 @ 6:35 pm BST Apr 19,2011 | Reply

  20. […] may also recall, by the way, that when you have B-tree indexes on a table you can choose the optimal update or delete strategy by selecting a tablescan or index range scan as the execution path.  If you update or delete […]

    Pingback by Bitmap loading | Oracle Scratchpad — April 22, 2014 @ 2:14 pm BST Apr 22,2014 | Reply

  21. […] That 20M rows sorted is 5M rows x 4 indexes. With my memory sizing the sorting spilled to disc (as 4 separate sort operations, though I think I used to see the entire operation handled as a single combined sort in earlier versions of Oracle). So you have a choice of mechanism for doing large deletes, and a choice of HOW you use resources that may impact significantly on where the work is done and how long it takes. (Note: The same variation in strategy is available on an update.)  […]

    Pingback by Massive Deletes – Part 3 – All Things Oracle — April 5, 2016 @ 8:58 am BST Apr 5,2016 | Reply
