Oracle Scratchpad

November 2, 2006

But it’s in the manual!

Filed under: Oracle,Performance,Tuning — Jonathan Lewis @ 9:27 am GMT Nov 2,2006

I’ve just seen a note on the newsgroup advising someone to check the 9iR2 Database Performance Tuning Guide and Reference manual for a piece of code to report which objects are using how much space in the buffer cache. This is the specific reference, and the suggested code is as follows:

SELECT o.object_name, COUNT(1) number_of_blocks 
  FROM DBA_OBJECTS o, V$BH bh 
 WHERE o.object_id  = bh.objd 
   AND o.owner     != 'SYS' 
 GROUP BY o.object_name 
 ORDER BY count(1); 

There are two flaws with this code – it gets the wrong results, and it’s inefficient.

The column objd in v$bh is the data_object_id from dba_objects, not the object_id.  The data object id caters for the effects of clusters, rebuilding indexes, moving tables, etc.  Look at the sample below where I’ve created a partitioned table with a primary key index, then executed various bits of DDL, including a partition exchange and one index partition rebuild.
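The sort of script that could produce the listing below looks something like this – a sketch only, with partition names, bounds, and column definitions assumed rather than taken from the script actually used:

```sql
rem  Sketch of a possible setup (names, bounds and columns are assumptions)

create table pt_range (id number, v1 varchar2(10))
partition by range (id) (
        partition p400 values less than (400),
        partition p600 values less than (600)
);

create unique index pt_pk on pt_range(id) local;
alter table pt_range add constraint pt_pk primary key (id);

create table t1 (id number, v1 varchar2(10));
create unique index t1_pk on t1(id);
alter table t1 add constraint t1_pk primary key (id);

create or replace view v_pt as select * from pt_range;

rem  The exchange swaps the data segments (hence the data_object_ids)
rem  of t1 and partition p600, and of t1_pk and the p600 index partition

alter table pt_range
        exchange partition p600 with table t1
        including indexes;

rem  A rebuild gives the index partition a new data_object_id

alter index pt_pk rebuild partition p400;
```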

SQL> l 
  1  select 
  2     object_id, data_object_id, 
  3     object_name, subobject_name 
  4  from 
  5*    user_objects 
SQL> /              

 OBJECT_ID DATA_OBJECT_ID OBJECT_NAME          SUBOBJECT_NAME
---------- -------------- -------------------- -------------------- 
     44797          44802 PT_PK                P400 
     44801          44791 PT_PK                P600 
     44795                PT_PK 
     44794          44794 PT_RANGE             P400 
     44800          44790 PT_RANGE             P600 
     44792                PT_RANGE 
     44790          44800 T1 
     44791          44801 T1_PK 
     44798                V_PT 
SQL> spool off              

Note particularly how the partitioned table has a ‘logical’ object_id, but it is only the physical data segments of the underlying partitions that have a ‘physical’ data_object_id.

Note also that the ‘logical’ object id of the table t1 matches the ‘physical’ data object id of partition p600 and vice versa (with a matching cross-over on the primary keys). This is because I exchanged that table with that partition.  And while we’re talking about partitioning, shouldn’t the suggested query also aggregate by subobject_name to separate the partitions out? So this code is going to get funny results, and miss objects, for all sorts of reasons.


Here are a couple of standard guidelines for writing efficient SQL:

  • eliminate early
  • aggregate before extending

So let’s apply them here.

The join to dba_objects is not going to work for rollback segments or global temporary tables, so eliminate them – the predicate “objd < power(2,22)” will do this. (It’s an interesting point that we can now create partitioned tables with millions of partitions – but once you get above roughly 4,000,000 the data_object_id is going to overlap with the objd values used for temporary table segments).

Then we might want to eliminate the buffers which are free – they may still have the object information in them but (after a truncate, for example) the information is no longer valid. For v$bh an appropriate predicate would be status != ‘free’.

We then see that the code joins to dba_objects before aggregating by object_name. So if big_table_X has 20,000 blocks in the buffer we join 20,000 times to extend 20,000 times to aggregate by object_name – when we could have aggregated by the (short) objd and joined just once.

So take out the reference to v$bh, and replace it with a view that does the aggregation; something like the following, for example:

SELECT  o.owner, o.object_name, o.subobject_name, ct number_of_blocks 
  FROM 
        DBA_OBJECTS o, 
        ( 
        select  /*+ no_merge */ 
                objd, count(*) ct 
        from    v$bh 
        where   objd < power(2,22) 
        and     status != 'free' 
        group by 
                objd 
        )       bh 
 WHERE  o.data_object_id  = bh.objd 
   AND  o.owner          != 'SYS' 
 ORDER BY ct desc, o.owner, o.object_name, o.subobject_name 

Of course, you could then argue that if you’re going to play with v$bh, you should be brave enough to use obj$ and user$, rather than joining to an over-burdened dba_objects – eliminating ‘SYS’ on the understanding that ‘SYS’ has a user_id of zero.

But that’s left as an exercise to the reader (and while you’re at it, you might as well use x$bh so that you don’t have to waste resources doing the decode() that gets you the status; and I wonder if there’s any efficient point where you can adjust the code to avoid reporting objects which have been dropped but are still in the recyclebin if you’re on 10g).
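For anyone tempted by that exercise, here is one possible shape for the query – a sketch only, not a tested implementation. It assumes that x$bh.obj is the data object id and that state = 0 is what decodes to ‘free’ (internal structures that can change between versions), and it has to be run from an account that can see x$bh (normally SYS):

```sql
select
        u.name          owner,
        o.name          object_name,
        o.subname       subobject_name,
        bh.ct           number_of_blocks
from
        sys.obj$        o,
        sys.user$       u,
        (
        select  /*+ no_merge */
                obj, count(*) ct
        from    x$bh
        where   obj < power(2,22)
        and     state != 0              -- state 0 decodes to 'free'
        group by
                obj
        )               bh
where   o.dataobj#      = bh.obj
and     u.user#         = o.owner#
and     o.owner#       != 0             -- 'SYS' has user_id zero
order by
        bh.ct desc, u.name, o.name, o.subname
;
```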

Just as an afterthought: even if you do make this query produce the correct results efficiently, what is it telling you? After all, there may be multiple copies of each data block in the buffer**, so do you want to know about distinct blocks, or all copies of blocks, and is there any significance in the two possible approaches ?

** In principle the hidden parameter _db_block_max_cr_dba suggests that there should be a limit of 6 copies per block in the buffer cache. I believe the parameter first appeared in version 6 of Oracle but there still seem to be cases where it doesn’t quite work, even in the very latest versions of Oracle.
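If you want to see how much difference the choice makes on your system, you could aggregate both ways in a single pass and compare – again just a sketch, reusing the predicates discussed above:

```sql
select
        objd,
        count(*)                                 copies_of_blocks,
        count(distinct file# || '.' || block#)   distinct_blocks
from
        v$bh
where   objd < power(2,22)
and     status != 'free'
group by
        objd
having  count(*) > count(distinct file# || '.' || block#)
order by
        count(*) - count(distinct file# || '.' || block#) desc
;
```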



  1. But… But… But.. gurgle… choke… :-)

    If you are referring to a couple of posts I made, fine, you know I appreciate your level of detail and exposition, especially regarding documented, um, misguidance.

    What I was really intending was to point the original posters towards some more general knowledge about what they need to do to decide if multiple buffer pools might help them, fairly explicitly assuming they hadn’t thought of RTFM that particular subject in reference to their issues. If they happen to be on some huge system that gets brought to its knees by the v$bh queries and they aren’t yet qualified to be brave – well, they are in over their heads anyways and hopefully will see this blog. If I’ve made bad assumptions about their experience and knowledge – that’s ok, the assumption may be not so bad for lurkers. If I’ve given bad advice – correct me (as you did, in your usual polite manner that I wish everyone could adopt, including myself)! It’s usenet, a major attribute is iterative and repetitive correction of information in threads.

    So, I don’t feel comfortable giving RTFM advice that’s qualified to the point of unadvice. Writing a big exposition which might be considered tangential doesn’t work for me either in the usenet context. That particular thread worked out just fine, but this could easily be considered a FAQ, and I would like somewhere to point people for it. Pointing at old threads just isn’t the same. Pointing at this blog entry out of context doesn’t do it. Gee, if someone had a cooperative faq… :-)

    Comment by joel garry — November 2, 2006 @ 3:32 pm GMT Nov 2,2006 | Reply

  2. Joel, Thank you for the smiley.
    The note wasn’t intended as criticism of your advice – which was apt and to the point, and only unlucky in that you didn’t realise that the manuals were letting you down.
    Come back tomorrow and I’m afraid you’ll find a note about another piece of Metalink SQL you’ve just posted – again, not because of the fact that you posted it, but because it’s a good case study in how to do things better.

    Comment by Jonathan Lewis — November 2, 2006 @ 4:17 pm GMT Nov 2,2006 | Reply

  3. Joel,

    You should see the speakers at Hotsos, they are usually nervous to have Jonathan in the audience. You are in good company, be of good cheer!

    – Ethan

    Comment by Ethan — November 2, 2006 @ 5:52 pm GMT Nov 2,2006 | Reply

  4. I just want to say.. I am THRILLED you have this new blog. I will read it every day.

    Comment by Doug C — November 3, 2006 @ 12:52 am GMT Nov 3,2006 | Reply

  5. Jonathan, thanks for adding a new page to my everyday reading routine. I am looking for articles about the general principles of writing SQL. Do you have any site or article that might help me with that?

    Comment by Mladen Gogala — November 3, 2006 @ 3:46 am GMT Nov 3,2006 | Reply

  6. Mladen,
    I think the book by Dan Tow is worth reading. I don’t agree 100% with some of his side-comments, but the analytical method he describes is very useful.

    Comment by Jonathan Lewis — November 3, 2006 @ 8:18 am GMT Nov 3,2006 | Reply

  7. Jonathan,

    Your post reminded me of something I developed a while ago. So I made a post about it.

    Basically the same idea, but enabled for RAC, plus some cache-fusion efficiency numbers that you can extract:

    Comment by Christo Kutrovsky — November 3, 2006 @ 9:11 pm GMT Nov 3,2006 | Reply

  8. Following up a comment to the Clarity, Clarity, Clarity post – I might also point out that:
    (a) I would normally swap the order of the objects in the where clause to indicate the expected order of execution, and
    (b) I don’t normally write keywords in capitals and right-align them.

    Comment by Jonathan Lewis — November 4, 2006 @ 4:24 pm GMT Nov 4,2006 | Reply

  9. […] But it’s in the manual! […]

    Pingback by Cache Buffers Chains and Latch Spelunking : Ardent Performance Computing — September 14, 2007 @ 2:59 pm BST Sep 14,2007 | Reply

  10. Jonathan,

    _db_block_max_cr_dba is literally killing our database performance; too much CPU is being burned identifying the right buffer, waiting on the cache buffers chains latch. We are on Oracle 8i. Even though the default value of _db_block_max_cr_dba is 6, the number of copies does not go above 8 in 10g, but in Oracle 8i it only limits at around 30. How do we get rid of the cloned buffers? Setting the SGA to a lower value would help, but it causes physical I/O to go up. I’d appreciate your thoughts on this.


    Comment by Saibabu Devabhaktuni — October 16, 2007 @ 8:17 am BST Oct 16,2007 | Reply

  11. By taking a snapshot of v$bh, you get what is being read into memory at this moment. So v$bh and dba_objects can be used to identify what is in memory, and what is doing disk reads. This can be very handy to see what is running at this moment. In the following example the system is almost idle in I/O, reading in only 68 blocks.

    SQL > @v$bh2

    BUFFERS_READ_INTO_MEMORY OBJECT_IN_MEMORY
    ------------------------ ----------------------------------------
                           2 IDX_LINKAGES_RELATION_NAME
                          66 ALAN9

    COLUMN OBJECT_in_memory FORMAT A40
    column buffers_read_into_memory format 9999999999

    break on report
    compute sum of buffers_read_into_memory on report

    set termout off
    drop table alan9;
    drop table alan10;

    create table alan9 nologging as
    select OBJD,FILE#,BLOCK# from v$bh;

    create table alan10 nologging as
    select OBJD,FILE#,BLOCK# from v$bh
    minus
    select OBJD,FILE#,BLOCK# from alan9;

    set termout on

    SELECT COUNT(*) buffers_read_into_memory,
    o.OBJECT_NAME Object_in_Memory
        FROM DBA_OBJECTS o, alan10 bh
        WHERE o.DATA_OBJECT_ID = bh.OBJD
          AND o.OWNER        != 'SYS'
        group by o.OBJECT_NAME
        having count(*)>0
        ORDER BY COUNT(*);

    set termout off
    drop table alan9;
    drop table alan10;
    set termout on

    Comment by Alan Kendall — March 20, 2008 @ 11:23 pm GMT Mar 20,2008 | Reply

  12. Alan,
    From 9i onwards, you might want to look at v$segstat rather than v$bh, it’s a more direct representation of the information you want.

    You might want to look at what I’ve done in this post to avoid using tables for holding intermediate results – you may be able to use a /*+ no_merge */ union all view with a delay built in to do the job of your two scratch tables.

    Comment by Jonathan Lewis — March 21, 2008 @ 6:41 am GMT Mar 21,2008 | Reply

  13. […] Tuning Manual (with a small, but notable correction), but that SQL statement potentially produces misleading information. (pages […]

    Pingback by Book Review: Oracle Database 11g Performance Tuning Recipes « Charles Hooper's Oracle Notes — September 10, 2011 @ 9:13 pm BST Sep 10,2011 | Reply

    clear breaks
    break on report
    Compute sum of mb on report
    compute sum of pct on report
    prompt Blocks > 100K
    SELECT  o.object_name, ct number_of_blocks, ct*vp.value/1024/1024 MB,
       (ct*vp.value/1024/1024)/(vs.value/1024/1024)*100 pct
    FROM
        v$parameter vp,
        v$sga vs,
        DBA_OBJECTS o,
        (select
            /*+ no_merge */
            obj, count(*) ct
        from    x$bh
        group by obj
        having count(*) > 100000)   bh
    WHERE 1=1
    and vp.name = 'db_block_size'
    and vs.name = 'Database Buffers'
    and o.data_object_id  = bh.obj
    AND o.owner = 'MY_USER'
    ORDER BY ct desc;

    Comment by brian somers — November 27, 2018 @ 4:31 pm GMT Nov 27,2018 | Reply


Comments and related questions are welcome.
