Oracle Scratchpad

November 2, 2015


Filed under: Indexing,Oracle,RAC,Statistics — Jonathan Lewis @ 10:27 am GMT Nov 2,2015

I had a recent conversation at Oracle OpenWorld 2015 about a locking anomaly in a 3-node RAC system which was causing unexpected deadlocks. Coincidentally, this conversation came about shortly after I had been listening to Martin Widlake talking about using the procedure dbms_stats.set_table_prefs() to adjust the way that Oracle calculates the clustering_factor for indexes. The juxtaposition of these two topics made me realise that the advice I had given in “Cost Based Oracle – Fundamentals” 10 years ago was (probably) incomplete, and needed some verification. The sticking point was RAC.

In my original comments about setting the table_cached_blocks preference (as it is now known) I had pointed out that the effect of ASSM (with its bitmap space management blocks) was to introduce a small amount of random scattering as rows were inserted by concurrent sessions, and that this would adversely affect the clustering_factor of any indexes on the table, so a reasonable default value for the table_cached_blocks preference would be 16.

I had overlooked the fact that in RAC each instance tries to acquire ownership of its own level 1 (L1) bitmap block in an attempt to minimise the amount of global cache contention. If each instance uses a different L1 bitmap block to allocate data blocks then (for tables and their partitions) they won’t be using the same data blocks for inserts and they won’t even have to pass the bitmap blocks between instances when searching for free space. The consequence of this, though, is that if N separate instances are inserting data into a single table there are typically 16 * N different blocks into which sessions could be inserting concurrently, so the “most recent” data could be scattered across 16N blocks, which means the appropriate value of table_cached_blocks is 16N.
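The arithmetic behind that window can be modelled with a short Python sketch (my simplification, not Oracle’s actual code): walking the index in key order, the clustering_factor counter increments only when an entry’s table block is not among the most recently seen N distinct blocks, where N plays the role of table_cached_blocks. The block numbers reuse the three that appear in the demonstration output below.

```python
# Simplified model of a table_cached_blocks-style window (not Oracle's code).
def clustering_factor(block_sequence, cached_blocks=1):
    """Walk index entries in key order; count an increment only when the
    entry's table block is not among the `cached_blocks` most recently
    seen distinct blocks."""
    recent = []                      # recently seen block numbers, newest last
    cf = 0
    for block in block_sequence:
        if block in recent:
            recent.remove(block)     # refresh its position in the window
        else:
            cf += 1                  # a "new" block: clustering_factor goes up
            if len(recent) == cached_blocks:
                recent.pop(0)        # evict the oldest block from the window
        recent.append(block)
    return cf

# Three instances inserting round-robin into their own groups of 16 blocks:
# consecutive key values land in blocks that are far apart.
interleaved = [518, 745, 2157] * 16

print(clustering_factor(interleaved, cached_blocks=1))    # prints 48
print(clustering_factor(interleaved, cached_blocks=48))   # prints 3
```

With a window of 1 (the default) every one of the 48 entries increments the counter; with a window of 48 (16 per instance for 3 instances) the interleaved inserts cost only 3 increments, one per distinct block.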

To demonstrate the effect of RAC and multiple L1 blocks here’s a little demonstration code from a 12c RAC database with 3 active instances.

create tablespace test_8k_assm_auto
datafile size 67108864
logging online permanent
blocksize 8192
extent management local autoallocate
default nocompress
segment space management auto
;

create table t1 (n1 number, c1 char(1000)) storage (initial 8M next 8M);

The code above simply creates a tablespace using locally managed extents with system allocated extent sizes then creates a table in that tablespace with a starting requirement of 8MB. Without this specification of initial the first few extents for the table would have been 64KB thanks to the system allocation algorithm and that would have spoiled the demonstration because the table would have started by allocating a single extent of 64KB with just one L1 bitmap block. Slightly different effects would also have appeared with an extent size of 1MB – with 2 L1 bitmap blocks – which is the second possible extent size for system allocation.

Having created the table I connected one session to each of the three instances and inserted one row, with commit, from each instance. Then I ran a simple SQL statement to show me the file and block numbers of the rows inserted:

select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_in_block
from
        t1
group by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
order by
        file_no, block_no
;

   FILE_NO   BLOCK_NO ROWS_IN_BLOCK
---------- ---------- -------------
        19        518             1
        19        745             1
        19       2157             1
As you can see, each row has gone into a separate block. More significantly, though, those blocks are a long way apart from each other: they are in completely different groups of 16 blocks. Each instance is working with its own L1 bitmap block (there are 16 of them to choose from in an 8MB extent) and has formatted the 16 data blocks associated with that L1 block for its own use.

In fact this simple test highlighted an anomaly that I need to investigate further. In my first test, after inserting just 3 rows into the table I found that Oracle had formatted 288 blocks (18 groups of 16) across 2 extents, far more than seems reasonable. The effect looks hugely wasteful, but that’s mainly because I’ve implied that I have a “large” table into which I’ve then inserted very little data; nevertheless something a little odd has happened. In my second test it got worse: Oracle formatted 16 blocks on the first insert, took that up to 288 blocks on the second insert, then went up to 816 blocks (using a third extent) on the third insert. In my third test Oracle behaved as I had assumed it ought to, formatting 3 chunks of 16 blocks each in a single extent; but that might have been because I did a truncate rather than a drop and recreate.


Whatever else is going on, the key point of this note is that if you’re trying to get Oracle to give you a better estimate for the clustering_factor in a RAC system then “16 * instance-count” is probably a good starting point for setting the table preference known as table_cached_blocks.
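As a concrete sketch of that starting point (the schema default of `user` and the index name t1_i1 are hypothetical, and the value 48 assumes the 3-node system discussed above), setting the preference and re-gathering the index stats would look something like this:

```sql
begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'table_cached_blocks',
                pvalue  => '48'                 -- 16 * 3 instances
        );
        dbms_stats.gather_index_stats(user, 'T1_I1');
end;
/
```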

The anomaly of data being scattered extremely widely with more extents being allocated than you might expect is probably a boundary condition that you don’t have to worry about – but I may change my mind about that when I’ve had time to look at it a little more closely.



  1. So the good old trick is gone when we are inserting ordered data (the table data is ordered to favour one particular index …)
    I’ll try with bulk processing (huge inserts) and find out the best ratio between “number of rows in a block” and “number of rows in one bulk insert”.


    Comment by Andjelko Miovcic — November 6, 2015 @ 5:15 pm GMT Nov 6,2015 | Reply

    • Andjelko,

      That’s more of a trick to get run-time efficiency for one particular access path.

      The equivalent “trick” to this new preference was anything that would change the clustering_factor (and avg_data_blocks_per_key) column on user_indexes to override the values produced by gathering index stats. Using dbms_stats.get_index_stats / dbms_stats.set_index_stats was the easy way.

      Comment by Jonathan Lewis — November 9, 2015 @ 9:18 am GMT Nov 9,2015 | Reply

  2. Hi Jonathan,
    Statistics (the CF of an index) are very important for the CBO, thanks.
    But to improve the CF of a particular index we have to rebuild the table (data ordering, partitioning, storage parameters); as far as I know no action on the index itself will help.
    And every time I try to improve the CF there are new bits to worry about (multiple indexes on the table, locally managed tablespaces, …); after reading this post, even more to worry about.
    Thanks for sharing this post, very simple and useful explanation.


    Comment by Andjelko Miovcic — November 12, 2015 @ 5:56 pm GMT Nov 12,2015 | Reply

    • Andjelko,

      I think it’s always important to distinguish very clearly between “the clustering_factor” and the way the data is clustered.

      If we really need rows that are related in some specific way to be physically stored in the same small number of blocks then we may choose to re-arrange the data in a particular order to achieve that physical arrangement. We would only do this if we had some particularly important queries that would be able to run faster because of this arrangement – but we would recognise that as we arranged the data to make one set of queries go faster there might be other queries that go slower because we have scattered the data more widely with respect to other indexes that we may have been using.

      We can tell Oracle that an index is more effective than Oracle thinks by adjusting the clustering_factor of that index without doing anything to re-arrange the data. Historically we would do this by calling dbms_stats.set_index_stats; now we can do it by calling dbms_stats.set_table_prefs() before gathering index stats.
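      A sketch of that historic approach (the index name t1_i1 is hypothetical, and the parameter list follows the base overload of the two procedures; the division by 4 is an arbitrary example of claiming the index is “better” than the gathered stats suggest):

```sql
declare
        m_numrows   number;
        m_numlblks  number;
        m_numdist   number;
        m_avglblk   number;
        m_avgdblk   number;
        m_clstfct   number;
        m_indlevel  number;
begin
        dbms_stats.get_index_stats(
                ownname  => user,       indname  => 'T1_I1',
                numrows  => m_numrows,  numlblks => m_numlblks,
                numdist  => m_numdist,  avglblk  => m_avglblk,
                avgdblk  => m_avgdblk,  clstfct  => m_clstfct,
                indlevel => m_indlevel
        );

        m_clstfct := round(m_clstfct / 4);      -- claim a much better clustering_factor
        m_avgdblk := round(m_avgdblk / 4);      -- and a matching avg_data_blocks_per_key

        dbms_stats.set_index_stats(
                ownname  => user,       indname  => 'T1_I1',
                numrows  => m_numrows,  numlblks => m_numlblks,
                numdist  => m_numdist,  avglblk  => m_avglblk,
                avgdblk  => m_avgdblk,  clstfct  => m_clstfct,
                indlevel => m_indlevel
        );
end;
/
```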

      Comment by Jonathan Lewis — November 12, 2015 @ 7:41 pm GMT Nov 12,2015 | Reply

  3. thanks,

    Comment by Andjelko Miovcic — November 13, 2015 @ 5:13 pm GMT Nov 13,2015 | Reply

  4. […] walks an index to generate the clustering_factor. Although there is no official word, I think that 16 is a good default value for this setting in single instance Oracle, and 16 * {number of instances} might be appropriate for […]

    Pingback by Upgrades | Oracle Scratchpad — December 11, 2015 @ 9:16 pm GMT Dec 11,2015 | Reply

  5. […] well ordered fashion then setting the table preference table_cached_blocks to a value around 16 (or 16 x N for an N-node RAC cluster) and re-gathering stats on the indexes would probably produce a much more realistic […]

    Pingback by Merge Precision | Oracle Scratchpad — June 6, 2016 @ 12:40 pm BST Jun 6,2016 | Reply

  6. […] for calculating the clustering_factor of an index. The default is 1, which often means the clustering_factor is much higher than it ought to be from a humanly visible perspective and leads to Oracle not using an index that could be a very […]

    Pingback by Clustering_Factor | Oracle Scratchpad — July 2, 2018 @ 1:24 pm BST Jul 2,2018 | Reply

  7. […] An additional thought for RAC systems. […]

    Pingback by CBO Oddities – 1 | Oracle Scratchpad — October 18, 2019 @ 6:10 pm BST Oct 18,2019 | Reply

  8. […] Update 3 – table_cache_history = 16. This suggestion doesn’t allow for systems running RAC. […]

    Pingback by Optimizer Tip | Oracle Scratchpad — September 20, 2021 @ 9:04 am BST Sep 20,2021 | Reply

  9. […] table_cached_blocks and RAC (Nov 2015): if you set the table preference to adjust the index clustering_factor, should you allow for the effects of RAC? […]

    Pingback by Statistics catalogue | Oracle Scratchpad — August 17, 2022 @ 11:03 am BST Aug 17,2022 | Reply


Comments and related questions are welcome.
