Oracle Scratchpad

May 27, 2013

CUR Gets

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 6:16 pm BST May 27,2013

Here’s an interesting anomaly from the OTN database forum – PL/SQL track. It’s a single row delete from a table that does a huge number of db block gets and (in the first example shown) physical reads. Here’s the supplied output with autotrace enabled:

SQL> delete from messages2 where id = 11004240718;

1 row deleted.

Elapsed: 00:00:03.79

Execution Plan
----------------------------------------------------------
Plan hash value: 140582024

-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |              |     1 |    17 |     2   (0)| 00:00:01 |
|   1 |  DELETE            | MESSAGES2    |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| SYS_C0016189 |     1 |    17 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=11004240718)

Statistics
----------------------------------------------------------
         61  recursive calls
       7381  db block gets
         92  consistent gets
       5802  physical reads
     303368  redo size
        839  bytes sent via SQL*Net to client
        801  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
          1  rows processed

After a little chit-chat, the obvious guess turns out to be relevant – though the OP doesn’t quite tell us what we need to know – there is a constraint on another table which, when enabled, results in a large amount of work, and when disabled reduces the workload to the expected level. The excess work is (probably) Oracle checking a referential integrity (foreign key) constraint to ensure that there are no child rows blocking the delete (or, possibly, operating an ON DELETE CASCADE operation – although the OP tells us that there is no such constraint).
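If you want to check for this kind of constraint yourself, a data-dictionary query along the following lines lists the foreign key constraints that reference a given parent table. (This is just a sketch – the table name PARENT is a placeholder for your own parent table, and I’ve restricted it to the current schema’s views.)

```sql
rem
rem	Sketch: list the FK constraints that reference a given parent table.
rem	'PARENT' is a placeholder - substitute your own table name.
rem

select	c.table_name		child_table,
	c.constraint_name,
	c.delete_rule,		-- NO ACTION / CASCADE / SET NULL
	c.status
from	user_constraints	c,
	user_constraints	p
where	p.table_name		= 'PARENT'
and	p.constraint_type	in ('P','U')
and	c.r_constraint_name	= p.constraint_name
and	c.constraint_type	= 'R'
;
```

The delete_rule column tells you immediately whether any of the constraints is an ON DELETE CASCADE.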

Here’s a quick and dirty, approximate, demo on 11.2.0.3 with an 8KB block size:

create table parent (
	id		number(8),
	description	varchar2(10),
	constraint par_pk primary key (id)
)
;

create table child(
	id_p	number(8)
		constraint chi_fk_par
		references parent,
--		on delete cascade
	id	number(8),
	description	varchar2(10),
	constraint chi_pk primary key (id_p, id)
)
;

insert into parent values (10000001,'one');
insert into child values(10000001, 1,'one');

insert into parent values (10000002,'two');
begin
	for i in reverse 1..20000 loop
		insert into child values(10000002,i,'two');
	end loop;
end;
/

insert into parent values (10000003,'three');
insert into child values(10000003, 1,'three');

commit;

begin
	dbms_stats.gather_table_stats(user,'child',cascade=>true);
	dbms_stats.gather_table_stats(user,'parent');
end;
/

delete from child;
commit;

select	index_name, leaf_blocks
from	user_indexes
where	table_name = 'CHILD'
;

The output from the query on the child index reported 103 leaf blocks for the index.
So here’s the code to prime the test:

variable b1 number
exec :b1 := 10000001
delete from parent where id = :b1;

set autotrace on

exec :b1 := 10000002
delete from parent where id = :b1;

set autotrace off

Note how I set up my data so that I could use a bind variable from SQL*Plus to do a quick and cheap delete to get the optimisation sorted out on the first run. This doesn’t really help in the version of the run that I’m going to show because the autotrace with the implicit explain re-optimises anyway, but here’s the plan and stats:

Execution Plan
----------------------------------------------------------
Plan hash value: 3366423708

-------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | DELETE STATEMENT   |        |     1 |     6 |     1 |
|   1 |  DELETE            | PARENT |       |       |       |
|*  2 |   INDEX UNIQUE SCAN| PAR_PK |     1 |     6 |       |
-------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=TO_NUMBER(:B1))

Statistics
----------------------------------------------------------
          0  recursive calls
        107  db block gets
          1  consistent gets
          0  physical reads
       7416  redo size
        918  bytes sent via SQL*Net to client
       1038  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

Note the db block gets statistic – 107, compared to the 103 leaf blocks in the index. That’s a pretty good indication by itself that the workload the OP is seeing is related to the prior massive delete from the child table: the foreign key check has to do a large scan (possibly including a high volume of delayed block cleanout in his case) of the now-empty section of the foreign key index.

If one quick test isn’t enough to convince you that the explanation is probably relevant then you can run the test a few times with different numbers of rows in the child table, and with the referential integrity constraint changed to on delete cascade to see what happens if you don’t do the child delete but leave it up to Oracle to handle the cascade. You might also want to check the complete set of session stats, and generate a trace file – as I did, but there’s already too much Oracle output in the posting, so I haven’t included the results.
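Checking the full set of session stats is straightforward – a sketch of the sort of query I mean (take a snapshot before and after the parent delete and compare the values) would be:

```sql
rem
rem	Sketch: snapshot the cleanout-related statistics for the
rem	current session.  Run before and after the parent delete
rem	and diff the values.
rem

select	sn.name, ms.value
from	v$mystat	ms,
	v$statname	sn
where	sn.statistic#	= ms.statistic#
and	(   sn.name like '%cleanout%'
	 or sn.name in ('db block gets', 'consistent gets', 'redo size')
	)
order by
	sn.name
;
```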

17 Comments »

  1. Hi Jonathan,
    Nice post as usual, btw don’t you think high volume of delayed block cleanout is a bug ?

    Is there any parameter which can be set at instance level/session level so that delayed block cleanout does not occur , if parameter is set to yes ?

    Is it wise to collect statistics after huge volume of delete ?

    Comment by Atul Gupta — May 28, 2013 @ 7:26 am BST May 28,2013 | Reply

    • Atul,
      The direct driver is the requirement for current block gets on the entire range of the index that might hold child rows. The fact that there were a lot of index entries that had been deleted is the thing that requires some action – so you shouldn’t view any cleanout effects as a bug, just a side effect.

      The whole “cleanout” thing is very complicated, and Oracle keeps changing the code to try to minimise the work done – for example, introducing cached SCNs so that a session that acquires the commit SCN for a given transaction id when cleaning out a block may avoid doing cleanout on a block with an ITL showing the same transaction in other blocks that it reads. In a simple run of my example the cleanout effects relate to “delayed logging block cleanout” related to “commit cleanouts” and there is actually a hidden parameter that will cause the commit cleanout changes to be logged – but the last time I experimented with it the delayed logging still occurred anyway. (Parameter: _log_committime_block_cleanout )

Any large volume of data change may require you to think carefully about the state of the statistics – but collection isn’t automatically a requirement. Bulk deletes (or inserts, or updates) might require you to think about other aspects of the processing anyway – for example, strategies for dropping and recreating some indexes.

      Comment by Jonathan Lewis — May 28, 2013 @ 9:38 am BST May 28,2013 | Reply

  2. Jonathan,

    you write “possibly including a high volume of delayed block cleanout in his case”. When I repeat your test I get 102 “immediate (CURRENT) block cleanout applications” – even if I add an additional “alter system flush buffer_cache;” before the delete of id 10000002. So I assume the statistic also represents a delayed block cleanout – even though it is called “immediate”. And if that’s correct I think we have here a delayed block cleanout in every case – or am I missing the point?

    Martin

    Comment by Martin Preiss — May 28, 2013 @ 7:28 am BST May 28,2013 | Reply

    • Martin,

The results vary with the specific actions, which is why I hedged my bets a little when describing what the OP might be seeing.
      In the example I showed above, I got commit cleanouts on the commit after the large delete, with “delayed logging block cleanout” effects appearing on the foreign key check. With your “flush buffer cache” you would have seen the same.

      If you want a different result, do the “flush buffer cache” before committing the child delete. This will result in “commit cleanout failures” on the commit, and the FK check will do “delayed block cleanout” when reading the blocks back. If you then do a test which flushes the cache between the delete and the commit, then does an index fast full scan of the child index before the parent delete, the fast full scan will do the “delayed block cleanout” on the index leaf blocks, and you won’t see any stats about cleanout during the FK check.

      For some reference notes on the variations on cleanouts: http://jonathanlewis.wordpress.com/2009/06/16/clean-it-up/

      Comment by Jonathan Lewis — May 28, 2013 @ 9:48 am BST May 28,2013 | Reply

  3. Martin,
    I think you have to collect statistics manually after such an operation: unless there is a human-created cron job/task, statistics are not collected automatically by Oracle.

    The job of statistics collection is to update the metadata of Oracle objects correctly, which cannot be achieved by flushing the buffer cache, as that just clears the objects stored in memory.

    br
    atul

    Comment by Atul Gupta — May 28, 2013 @ 7:42 am BST May 28,2013 | Reply

    • Atul,

      I think the purpose of Martin’s “flush buffer cache” was to investigate the variation in the nature of the block cleanout, not to do anything about metadata.

      Comment by Jonathan Lewis — May 28, 2013 @ 9:40 am BST May 28,2013 | Reply

  4. Hi Jonathan,

    I hope things are going well with you. Excellent example as always.

    Does this mean if I have this situation where I do have several FK constraints and several FK constraints with CASCADE DELETE, then I’ll just have to pay the cost and expect to see high LIOs?

    I have a situation where for each DELETE of the parent, I’m incurring 59K LIOs.

    Is there anyway to reduce this cost?

    Regards,
    -Peter

    Comment by Peter Tran — December 19, 2013 @ 6:36 am BST Dec 19,2013 | Reply

    • Would removing the FK cascade delete constraint and implementing a DELETE trigger work to reduce the CU-Get cost?

      Comment by Peter Tran — December 19, 2013 @ 6:51 am BST Dec 19,2013 | Reply

    • Peter,

      First you have to find out where it’s coming from. You can expect increased work because of the FK constraints, especially if the parent delete has to delete child rows beforehand. This volume, though, seems excessive, so perhaps you have some FKs which don’t have indexes protecting them; perhaps some index has become very inefficient.

      I’d be doing tests like: check session stats before and after a single delete to see if the stats showed excessive cleanout; check segment stats before and after a delete to see if they showed which object was subject to most LIOs; flush the buffer cache and trace a delete.
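      A sketch of the segment-level check (take a before and after snapshot around a single delete and compare – I’ve limited it to the current schema for brevity):

```sql
rem
rem	Sketch: logical I/O per segment for the current schema.
rem	Snapshot before and after a single parent delete and
rem	compare to see which object takes most of the LIOs.
rem

select	owner, object_name, object_type, value
from	v$segment_statistics
where	statistic_name	= 'logical reads'
and	owner		= user
order by
	value desc
;
```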

      Changing to a trigger won’t help if you have to do the same amount of work that Oracle is doing internally anyway.

      Comment by Jonathan Lewis — December 19, 2013 @ 7:46 am BST Dec 19,2013 | Reply

      • Hi Jonathan,

        Thank you for the prompt reply.

        I didn’t think the trigger would help either after sleeping on it.

        The reason why the LIOs are excessive is we have hundreds of threads doing DELETE/INSERT into this parent table. The FKs (with cascade deletes) compound the issue when you have such high concurrency activity. I’ve validated that all FKs have indexes to support them. Otherwise, I’d see worse performance due to table scans and higher db file scattered reads.

        Can you enlighten me why Oracle is paying the cost due to the delayed block cleanout? I’m not clear on this part “…the workload that the OP is seeing is related to the prior massive delete from the child table with a large scan [(possibly including a high volume of delayed block cleanout in his case) of the now empty section of the foreign key index]”.

        I’m going to have to bring this back with the development team and show them your example and explain to them why the way they designed this solution will not scale.

        Have you tried your test above on Oracle 12c to see if the large # of CU-Gets still manifests itself?

        Thanks,
        -Peter

        Comment by Peter Tran — December 19, 2013 @ 6:31 pm BST Dec 19,2013 | Reply

        • Ps. Here’s a snapshot from an AWR report for the DELETE. Note, the second lower case delete is coming from Oracle due to the FK cascade. https://drive.google.com/file/d/0B2rXfnXSYG1VX3poNVI4RmFoZ2s/edit?usp=sharing

          Comment by Peter Tran — December 19, 2013 @ 7:02 pm BST Dec 19,2013

        • One detail to consider – if the delete has been captured then the execution plan is available so you could check if it’s showing the “right” plan. You can also see the number of rows deleted per execution, and do various other cross-checks for “reasonable” behaviour.

          Is the first delete statement the parent delete? If so I note it must be deleting a lot of rows per execution, given that the child delete executes 1.8M times and the other statement executes only a few thousand times.

          [Image: excessive_lios_from_delete]

          I’ve embedded the image for easy reference – but it may make a mess of some screens.

          Comment by Jonathan Lewis — December 22, 2013 @ 10:24 am BST Dec 22,2013

        • Peter,

          The answer to the simple question is that I get the same results in 12c.

          I don’t think the delayed block cleanout is going to result in any extra “db block gets” – but it’s likely to be responsible for some of the “db block changes” and “redo entries”. When doing the FK check, I think Oracle has to see the child index leaf blocks in consistent read mode as at the start of the delete, as well as in current mode to check for uncommitted transactions that could cause an integrity error – cleanout is just another complication that requires a little extra work to be thrown in at the same time.

          Comment by Jonathan Lewis — December 22, 2013 @ 9:42 am BST Dec 22,2013

  5. Hi Jonathan,

    Yes, the first DELETE against the PM_WK_PRICE_RECORD is the parent – DELETE FROM PM_WK_PRICE_RECORD WHERE PRICE_RECORD_ID = :1 . PRICE_RECORD_ID is the only column in the PK.

    I did a quick litmus test where I dropped all the FK constraints and the Buffer Gets dropped considerably.

    -Peter

    Comment by Peter Tran — December 23, 2013 @ 7:28 pm BST Dec 23,2013 | Reply

    • Peter,

      One last thought – given the volume of data you’re deleting in a short space of time, it’s possible that the nature of your code leaves lots of index leaf blocks nearly empty (or empty, but still in the index structure), forcing Oracle to read through an unreasonably large number of leaf blocks to find a relatively small number of rowids. You might want to use one of my scripts to check the index space efficiency, and if it’s bad, think about ways of minimising the damage. (That might mean regular use of “shrink space compact” – which might introduce a different contention problem – or maybe hash partitioning the foreign key indexes many ways; there may be other alternatives.)
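      A quick sketch of the space-efficiency check, using the demo index chi_pk as an example (note that “analyze … validate structure” takes a lock for the duration, so treat it with care on a busy production system – there are less intrusive ways of doing this):

```sql
rem
rem	Sketch: check how much of the index is dead space.
rem	Warning - "validate structure" locks out DML while it runs.
rem

analyze index chi_pk validate structure;

select	lf_rows, del_lf_rows, lf_blks,
	round(100 * del_lf_rows / nullif(lf_rows,0), 2)	pct_deleted
from	index_stats
;
```

A high del_lf_rows fraction, or far more lf_blks than the live row count justifies, suggests the index is carrying a lot of empty or nearly empty leaf blocks.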

      Comment by Jonathan Lewis — December 31, 2013 @ 1:00 pm BST Dec 31,2013 | Reply

