Oracle Scratchpad

September 26, 2008

Index analysis

Filed under: Infrastructure,Performance,Troubleshooting,Tuning — Jonathan Lewis @ 7:23 pm BST Sep 26,2008

Have you ever created an index on a column with a name like “last_update_date” – or maybe even a function-based index on “trunc(last_update_date)” ?

You can probably guess the purpose of the column from its name – but could you also guess what state that index is going to be in a few weeks after you’ve created it ?

Most B-tree indexes will tend to operate at about 70% efficiency – by which I mean that on average an index leaf block will be about 30% free space. Of course, if you did a statistical analysis of the usage of all the blocks in the index, you’d see a pattern (roughly a Normal distribution) peaking around the 70% usage mark, trailing out to 50% at one extreme and 100% at the other.

That pattern is dependent on the general use of B-tree indexes which assumes that key values are inserted and deleted in a fairly random order with the values distributed fairly evenly across the range covered by the index, and that the number of entries inserted or deleted in a single transaction is “very small” compared to the total number of entries in the index (i.e. the basic pattern of OLTP usage).

But if the arrival isn’t random, or the distribution isn’t even, then you may find that a few interesting indexes start to behave very badly – an index on “trunc(last_update_date)” is a classic example. (In fact, any index that is based on a timestamp or sequence number, and any index based on a small number of “status” values, may display some interesting anomalies – though perhaps not as extreme as the one I’m about to show you).


The pattern of usage for this column is, of course, both business dependent and application dependent, and the pattern of usage affects the storage efficiency. Imagine a fairly typical scenario:

  • When data is first created (and we create 30,000 rows per day, say) the “last_update_date” is set to the date of creation.
  • Over the next few weeks, each row is updated four or five times (and, at steady state, we will be updating about 135,000 rows per day).
  • We don’t delete any data (until 2012, when we start doing annual archiving).

What’s going to go on inside the index ?

Every day, we insert 165,000 entries at the right hand end of the index (30,000 from the new data, 135,000 from the updates) all for the same date. Further down the index – spread over a date range of four or five weeks – we are deleting 135,000 entries.
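As a sanity check on those volumes, here’s the arithmetic as a minimal sketch (taking 4.5 as the midpoint of “four or five” updates per row):

```python
# Steady-state daily volumes implied by the scenario above.
new_rows_per_day = 30_000            # rows created each day
updates_per_row = 4.5                # "four or five" updates over a row's life
# Once the system reaches steady state, updates arrive at
# rows/day * updates per row:
updated_rows_per_day = int(new_rows_per_day * updates_per_row)
# Every new row and every update inserts an entry at the high end of the index:
inserts_per_day = new_rows_per_day + updated_rows_per_day
# Every update also deletes the row's previous entry further down the index:
deletes_per_day = updated_rows_per_day
print(inserts_per_day, deletes_per_day)   # 165000 135000
```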

The new data is fairly likely to cause lots of 50/50 “leaf node splits”  at the high-value end of the index (especially if we are using ASSM for the tablespaces). This is assuming the new data will generally have “higher” rowids than the old data – which is often the case – but also assumes you have multiple concurrent processes so that the newly inserted rows are spread randomly across several table blocks. (Note – you will get similar effects if you have defined multiple freelists or freelist groups on a table that isn’t in an ASSM tablespace).

The effect on the old data that gets updated to today’s date can’t be predicted quite so easily. We can be confident that, in general, it will be inserted “to the left of” or “lower down” the index than the new data because the rowids are likely to be “lower”.

We could be unlucky and find that the order in which the updates are done (oldest data first, for example) means we end up with 50/50 leaf node splits happening all the way through that section of the index with no back-fill. However, rather than take the worst case scenario, let’s take the best (and least likely) scenario – assume that by the end of the day there have been lots of 50/50 block splits, but a lot of back-fill has taken place (because of a random order of updates) and every block just happens to get very close to 100% full.
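The back-fill mechanism is worth a small illustration. In a non-unique index each entry is effectively (key_value, rowid), held in sorted order, so a toy sketch (with hypothetical rowid values – a real rowid is an 18-character encoding) shows how a random update order can fill in gaps:

```python
# In a non-unique index each entry is effectively (key_value, rowid),
# held in sorted order. All of today's updates share one key value
# (trunc(last_update_date)), so the entries sort purely by rowid. If a
# row with a lower rowid happens to be updated after a row with a higher
# rowid, its entry is inserted to the left of existing entries --
# back-filling space in leaf blocks created by earlier 50/50 splits.
key = "2008-09-26"
arrival_order = ["AAAD", "AAAB", "AAAC", "AAAA"]   # hypothetical rowids
entries = []
for rowid in arrival_order:
    entries.append((key, rowid))
    entries.sort()            # the index maintains (key, rowid) order
print(entries[0])             # ('2008-09-26', 'AAAA') arrived last, sorts first
```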

So at the end of the day we’ve got some index blocks with ‘old’ data that are packing their blocks very well, and we’ve got some index blocks with ‘new’ data that are running at 50% empty space. Because sysdate has moved on, no further data will get into those blocks. So let’s think about the future of those leaf blocks.

Over the next few weeks, the blocks that were the “new” data are slowly going to empty out – because every row gets updated four or five times – and those blocks will eventually end up completely empty (hence reusable elsewhere in the index).

At the same time, some of the blocks which were the “old” data are going to start emptying out as well – but they’re not going to end up empty. In fact, with our work pattern, they’ll end up with an average utilisation of 20% to 25%.

Remember that we said we would be updating each row four or five times – let’s say it’s four, for convenience. When we updated 135,000 rows in one day that would, at steady state, have been the first update for 25% of the rows, the second update for another 25%, the third update for another 25%, and the last update for the final 25%.

Over the next few weeks, 75% of those rows are going to be updated again and move out of those leaf blocks.
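The 20% to 25% figure falls straight out of that arithmetic; a minimal sketch (using four updates per row, as above):

```python
# A leaf block that ended "its" day close to 100% full holds entries that
# are, in roughly equal parts, at their 1st, 2nd, 3rd or 4th (final)
# update. Only entries for rows at their final update never move again.
updates_per_row = 4
# Fraction of the block's entries that will be updated again -- and so
# deleted from this block -- over the next few weeks:
fraction_leaving = (updates_per_row - 1) / updates_per_row
# Long-term utilisation once those deletions have happened:
long_term_utilisation = 1 - fraction_leaving
print(long_term_utilisation)   # 0.25 -- the "25%" end of the 20-25% range
```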

So what’s the index going to look like after a few months ? There are three main patterns to super-impose:

  • The oldest data after all updates: A huge number of blocks running at 25% utilisation – with a Normal distribution spreading from 0% to 50% utilisation.
  • The most recent 4 weeks of data: A relatively small cluster of blocks running at about 50% – with a long tail running all the way down to 0% utilisation.
  • Data between its first and last update: A little cluster of blocks running at 70% – 100% (probably biased towards 70%) – with a long tail running down to 25% utilisation.


So our index has a huge amount of empty space in many of its blocks. Is this going to make any noticeable difference to performance ? In our case, it probably will. We do actually visit the poorly packed blocks that cover the last few weeks fairly regularly, using random I/Os; remember, we update 135,000 “older” rows per day, which means finding and deleting entries from those sparsely packed index leaf blocks.

It’s a case where better packing of the index probably would lead to better caching, hence fewer I/O requests for the index leaf blocks, and a reduced risk of other, more useful, data being kicked out of the cache. An index rebuild could lead to fewer disk I/O wait events.

I came across an example of this type of index quite recently and used a script I wrote a long time ago (and have now posted on the blog) to analyze the content of the index leaf blocks. This is what the results look like after a quick charting job in Excel:

The largest number of rows in any leaf block was 384 (slightly over 90% of the maximum possible – the index had been created with the default pctfree of 10) and the size of the index was 14,700 blocks.
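For what it’s worth, those figures are close to what a back-of-envelope entry-size estimate predicts – the byte counts in this sketch are my assumptions about typical leaf-entry overheads, not exact internals:

```python
# Rough estimate of entries per 8 KB leaf block for a non-unique index
# on a single DATE column. Byte counts are assumptions, not exact
# internals: ~2 bytes of entry header, 1 + 7 bytes for the DATE value,
# 1 + 6 bytes for the rowid, and 2 bytes of row-directory space.
entry_bytes = 2 + (1 + 7) + (1 + 6) + 2    # ~19 bytes per index entry
block_bytes = 8_192
block_overhead = 190                        # assumed block header / ITL space
max_entries = (block_bytes - block_overhead) // entry_bytes
print(max_entries)                          # ~421 entries at 100% packing
print(int(max_entries * 0.9))               # ~378 with the default pctfree 10
```

With a ceiling of roughly 420 entries, 384 is indeed a little over 90% of the maximum possible.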

After a call to ‘alter index XXX coalesce;’ – which took about 2 minutes to complete – the index dropped to 4,100 blocks. Of these, 3,250 held 377 index entries, and the remaining 750 blocks had the following distribution pattern:

You might note that in 10g, the coalesce command is able to reduce the number of rows in a leaf block that has filled past the pctfree limit – this is an enhancement over the 9i implementation.
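Incidentally, those figures are consistent with the 20% to 25% utilisation predicted earlier; a quick check (ignoring the entries in the 750 partly-filled blocks, so the “before” utilisation is slightly understated):

```python
# A quick check on the reported figures (the 750 partly-filled blocks
# are ignored, so the "before" utilisation is slightly understated):
blocks_before = 14_700                     # index size before the coalesce
blocks_after = 4_100                       # index size after the coalesce
packed_blocks, entries_per_block = 3_250, 377
entries = packed_blocks * entries_per_block          # at least 1,225,250 entries
avg_before = entries / blocks_before                 # ~83 entries per block
print(round(avg_before / entries_per_block, 2))      # ~0.22 -> ~22% utilisation
print(round(1 - blocks_after / blocks_before, 2))    # ~0.72 -> 72% fewer blocks
```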

So this looks like a case where a coalesce command, run every Saturday night perhaps, could have a small beneficial effect on a couple of key areas of the system, and may reduce a little of the general I/O load of the system. Unfortunately there are a number of other gains that could be made on the system at the same time – so it probably won’t be realistic to test this one in isolation to see exactly how much benefit it offers. It’s going to be a case of putting it in the basket, then keeping an eye out in case we pay for our reduced disk I/O with an increase in buffer busy waits (or the global cache equivalents) on that particular object.


Although there aren’t many cases where a regular index rebuild is likely to be beneficial, it is possible to recognise patterns of use that are likely to cause some indexes to degenerate quite dramatically.

Even when you’ve spotted such a case, you could choose to ignore it, especially if the inefficient parts of the index aren’t used regularly by the business; but if there’s a lot of activity in the areas where the index has degenerated a regular coalesce may be all it takes to address the issue cleanly, tidily, and safely.

A few other items I’ve posted about coalescing or rebuilding indexes


  1. Spooky. I just wrote a blog entry last week about a very similar situation:

    My situation was about INSERT contention on an index on last_update_time, caused by multiple concurrent processes loading data at the same time. But the underlying database details seem to be similar, if not identical – even down to the number of index entries per block: 377.

    Although of course you could argue that any index on a single DATE column would end up with that number of entries per block – 377 – given an 8 KB database block size. So there is a reasonable possibility that we were not looking at the same application.

    But I came to the same conclusion – the results of the INSERTs would end up with a 50% utilisation of the index leaf blocks. It would be weird if the application was the same though. Do all the tables start with 2 or 3 letter prefixes? Such as MS or PR or PT?


    Comment by John — September 29, 2008 @ 1:57 pm BST Sep 29,2008 | Reply

  2. John,

    Different application, I think – even so two and three letter prefixes are relatively common; some design tools do that sort of thing automatically. Then you get columns like (PR_ID) as the primary key, then (PR_PR_ID) as related foreign keys.

    Good article, by the way. There is some variation between our examples because of different usage and implementation – yet they both demonstrate the same threat.

    Comment by Jonathan Lewis — September 29, 2008 @ 2:49 pm BST Sep 29,2008 | Reply

  3. […] Troubleshooting — Jonathan Lewis @ 5:33 pm UTC Sep 29,2008 Shortly after I published the previous post, I received an email from an old client who reminded me of a problem he had had with empty index […]

    Pingback by Index Analysis 2 « Oracle Scratchpad — September 29, 2008 @ 5:35 pm BST Sep 29,2008 | Reply

  4. Jonathan,

    this is interesting reading! However, since Richard inoculated me against index rebuilds I wonder how long the weekend job will keep the index in a state which is better for performance. I am assuming that the larger the table has grown, the longer the effect of the rebuild lasts, because the daily change amounts to a smaller fraction of the overall size…

    You should probably put your conclusion in bold: this is not an automatic win, and even if such an index is spotted the effort might not be worthwhile.

    Thanks again!

    Comment by Robert Klemme — October 8, 2008 @ 3:25 pm BST Oct 8,2008 | Reply

  5. Robert,

    Inoculation is good – an overdose isn’t. My example suggests coalescing, but there are still cases (as Richard points out) where rebuilding can be a good idea.

    Your comment about the size of the index is important. In this (very specific) case, there is a rolling wedge of the index covering a four to six week period that is constantly getting deletes at one end and inserts at the other – the total size of the index is irrelevant to the process of deleting from the earlier weeks and inserting into “today”. After seven years, we will still be looking at just the wedge for the most recent six weeks. Any resource wastage in this process will be due to the emptiness of the “older” leaf blocks in that wedge.

    I think the image you have in mind is the index rebuild that has a packing effect on the entire index – bringing it back to its original pctfree – followed by random inserts all over the place that end up seeing an increased rate of block splits because the blocks are (temporarily) well packed. There’s a demonstration of this in an article I wrote a few years ago:

    In the example we don’t have to worry about that phenomenon. At the start of each new day, our inserts go past the high end of the index and (apart from the current high-value block in the index) the timing of block splits will have nothing to do with the state of the rest of the index.

    Comment by Jonathan Lewis — October 9, 2008 @ 9:16 am BST Oct 9,2008 | Reply

  6. Jonathan,

    thanks for the additional explanation! I am afraid I was not as clear as I would have liked to be. First, I don’t think I got an overdose from Richard – it’s just a reinforcement of my natural scepticism. :-)

    Then, about the size issue – my reasoning was this: the coalesce will reduce the wasted space of the index. During the week a certain amount of wasted space is added in the way you described. This volume of “waste” is roughly the same per week because you assumed a steady rate of inserts and updates. The _percentage_ of waste thus decreases over time because the total space increases (deletion will only start in 2012). The effect (i.e. achieving minimal wasted %) of a coalesce at the beginning will be eroded by insert / update activity much faster than after a year, when the table and index have grown significantly. This could be translated to: with increasing size of the index you can increase the interval between coalesces without wasting too high a _percentage_ of the space.

    A few additional thoughts: if the table is expected to grow large, maybe partitioning with local indexes can help limit the effort needed for the coalesce, because “older” partitions do not need to be coalesced any more. Another idea I had was to use an FBI to reduce the number of indexed rows, but unfortunately you can’t use SYSDATE in there – which you’d need to do to “expire” entries automatically. After automatic partitioning in 11g, maybe a rolling window for DATE and TIMESTAMP indexes would be a nice idea for Oracle 12. :-)


    Comment by Robert Klemme — October 9, 2008 @ 11:04 am BST Oct 9,2008 | Reply

  7. Hi Jonathan:
    Would a reverse key index help in this scenario?

    Comment by Kumar — October 18, 2008 @ 3:51 pm BST Oct 18,2008 | Reply

  8. Kumar,

    It’s always a good idea to consider reverse key indexes when you see an odd pattern of filling and emptying on an index – but I don’t think it would really have much effect in this case.

    Part of the problem with this index is that there are a lot of rows for each value – around 100 leaf blocks per value. If you reverse the key value, I think you might get some benefit at the high end block for a previously loaded date (in terms of re-filling the block “earlier”) but that wouldn’t have much impact overall.

    If this were a really big table, though, it would be an idea that could be modelled fairly easily – maybe 3 or 4 hours of work – if you wanted to check the idea.

    Comment by Jonathan Lewis — October 18, 2008 @ 5:02 pm BST Oct 18,2008 | Reply

  9. […] two birds with one stone without realising it. If you want an example of this, take another look at this article on index analysis and ask yourself how much of the space wastage the article describes was due to deleting from the […]

    Pingback by Index Explosion – 4 « Oracle Scratchpad — September 19, 2009 @ 3:42 pm BST Sep 19,2009 | Reply

  10. Respected Sir,

    Nice presentation

    “However, rather than take the worst case scenario, let’s take the best (and least likely) scenario – assume that by the end of the
    day there have been lots of 50/50 block splits, but a lot of back-fill has taken place (because of a random order of updates) and
    every block just happens to get very close to 100% full.”

    since the column last_update_date always moves on to a later date when an update happens,
    how will the index back-fill?

    could you please explain?


    Comment by henish — November 10, 2009 @ 10:25 pm BST Nov 10,2009 | Reply

    • Henish,

      Remember that the index is on trunc(last_update_date) – so the virtual column value is a constant for 24 hours. Since each index entry (non-unique index, remember) is then (column_value, rowid), the order in which we update the old value could affect the rows in an order that allowed earlier rowids to be updated after later rowids, filling in gaps in the index leaf blocks for the current date.

      Comment by Jonathan Lewis — November 10, 2009 @ 10:45 pm BST Nov 10,2009 | Reply

    […] The last two cases, in particular, need a little more thought than just the simple “it’s two / four times the size” – and this is what the script index_efficiency_3.sql is about. Given the schema, table, and index name, it produces a report which tells you about the distribution of rows across index leaf blocks. The script is quite expensive to run – it reads every index leaf block, calls an undocumented function, and then runs a large aggregation process. Make sure you read the notes before you use the script. (For an example of use, showing some Excel charts from the output, see the note at this URL.) […]

    Pingback by Index Efficiency 3 « Oracle Scratchpad — March 3, 2010 @ 9:21 pm BST Mar 3,2010 | Reply

    […] a specific schema. By the way – DO READ the notes in the script. There is also a link to an index analysis on FIFO columns. This article uses a LAST_UPDATE_DATE as an example. Such indexes might be candidates for […]

    Pingback by Oracle Blog - When to rebuild an Oracle index? — January 5, 2012 @ 10:28 am BST Jan 5,2012 | Reply

  13. […] As we update data, we are going to delete entries from blocks in the left hand end of the index and insert them at the right hand end of the index. Depending on the rate and pattern of updates it is possible that a large number of blocks at the left hand end of the index will become close to empty –  this could have a significant impact on the effectiveness of the buffer cache and might encourage us to use the coalesce command on the index every few days. (If this sounds familiar, I have written about this particular index in the past.) […]

    Pingback by Over-indexing « Oracle Scratchpad — January 10, 2013 @ 6:43 pm BST Jan 10,2013 | Reply

  14. Jonathan,

    I have last_update indexes which are only needed for looking at recent data, e.g. reconciling the day’s changes with the data warehouse. There are lots of updates on the tables, so some of the indexes are quite big – bigger than their tables in some cases. As no-one should be using the index to look at data that is a few months old, I am thinking of creating a partitioned index on the tables (which are not partitioned) and periodically dropping the old partitions to save space.

    Does this sound sensible (version

    Thank you for your time,


    Comment by Ben Collier — October 9, 2014 @ 8:59 am BST Oct 9,2014 | Reply


Comments and related questions are welcome.
