Have you ever created an index on a column with a name like “last_update_date” – or maybe even a function-based index on “trunc(last_update_date)” ?
You can probably guess the purpose of the column from its name – but could you also guess what state that index is going to be in a few weeks after you’ve created it ?
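For concreteness, the kind of index I have in mind would look something like this (the table and index names here are invented for illustration):

```sql
-- A plain index on the audit column ...
create index ord_lud_idx on orders(last_update_date);

-- ... or the function-based variant, indexing just the date component
create index ord_lud_trunc_idx on orders(trunc(last_update_date));
```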
Most B-tree indexes will tend to operate at about 70% efficiency – by which I mean that on average an index leaf block will be about 30% free space. Of course, if you did a statistical analysis of the usage of all the blocks in the index, you’d see a pattern (roughly a Normal distribution) peaking around the 70% usage mark, trailing out to 50% at one extreme and 100% at the other.
That pattern depends on the typical use of a B-tree index, which assumes that key values are inserted and deleted in a fairly random order, that the values are distributed fairly evenly across the range covered by the index, and that the number of entries inserted or deleted in a single transaction is “very small” compared to the total number of entries in the index (i.e. the basic pattern of OLTP usage).
But if the arrival isn’t random, or the distribution isn’t even, then you may find that a few interesting indexes start to behave very badly – an index on “trunc(last_update_date)” is a classic example. (In fact, any index that is based on a timestamp or sequence number, and any index based on a small number of “status” values, may display some interesting anomalies – though perhaps not as extreme as the one I’m about to show you).
The pattern of usage for this column is, of course, both business dependent and application dependent, and the pattern of usage affects the storage efficiency. Imagine a fairly typical scenario:
- When data is first created (and we create 30,000 rows per day, say) the “last_update_date” is set to the date of creation.
- Over the next few weeks, each row is updated four or five times (and, at steady state, we will be updating about 135,000 rows per day).
- We don’t delete any data (until 2012, when we start doing annual archiving).
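A sketch of that life-cycle in SQL might look like this – names and the driving set for the update are purely illustrative, not a real workload script:

```sql
-- Each day: 30,000 new rows arrive, stamped with today's date
insert into orders (order_id, status, last_update_date)
select orders_seq.nextval, 'NEW', trunc(sysdate)
from   (select level from dual connect by level <= 30000);

-- Each day: roughly 135,000 existing rows - spread over the last
-- few weeks - are touched, moving their index entries to today
update orders
set    last_update_date = trunc(sysdate)
where  order_id in (select order_id from todays_workload);  -- hypothetical driving set

commit;
```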
What’s going to go on inside the index ?
Every day, we insert 165,000 entries at the right hand end of the index (30,000 from the new data, 135,000 from the updates) all for the same date. Further down the index – spread over a date range of four or five weeks – we are deleting 135,000 entries.
The new data is fairly likely to cause lots of 50/50 “leaf node splits” at the high-value end of the index (especially if we are using ASSM for the tablespaces). This assumes the new data will generally have “higher” rowids than the old data – which is often the case – but also that you have multiple concurrent processes, so that the newly inserted rows are spread randomly across several table blocks. (Note – you will get similar effects if you have defined multiple freelists or freelist groups on a table that isn’t in an ASSM tablespace).
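If you want to get a feel for how many of the splits are 50/50 rather than 90/10, the instance activity statistics keep the two counts separately – bearing in mind that these are instance-wide figures, not attributable to a single index:

```sql
-- 90/10 splits are the "tidy" right-hand splits; the difference
-- between the two figures is the number of 50/50 splits
select name, value
from   v$sysstat
where  name in ('leaf node splits', 'leaf node 90-10 splits');
```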
The effect on the old data that gets updated to today’s date can’t be predicted quite so easily. We can be confident that, in general, it will be inserted “to the left of” or “lower down” the index than the new data because the rowids are likely to be “lower”.
We could be unlucky and find that the order in which the updates are done (oldest data first, for example) means we end up with 50/50 leaf node splits happening all the way through that section of the index with no back-fill. However, rather than take the worst case scenario, let’s take the best (and least likely) scenario – assume that by the end of the day there have been lots of 50/50 block splits, but a lot of back-fill has taken place (because of a random order of updates) and every block just happens to get very close to 100% full.
So at the end of the day we’ve got some index blocks with ‘old’ data that are packing their blocks very well, and we’ve got some index blocks with ‘new’ data that are running at 50% empty space. Because sysdate has moved on, no further data will get into those blocks. So let’s think about the future of those leaf blocks.
Over the next few weeks, the blocks that were the “new” data are slowly going to empty out – because every row gets updated four or five times – and those blocks will eventually end up completely empty (hence reusable elsewhere in the index).
At the same time, some of the blocks which were the “old” data are going to start emptying out as well – but they’re not going to end up empty. In fact, with our work pattern, they’ll end up with an average utilisation of 20% to 25%.
Remember that we said we would be updating each row four or five times – let’s say it’s four, for convenience. When we updated 135,000 rows in one day that would, at steady state, have been the first update for 25% of the rows, the second update for another 25%, the third update for another 25%, and the last update for the final 25%.
Over the next few weeks, 75% of those rows are going to be updated again and move out of those leaf blocks.
So what’s the index going to look like after a few months ? There are three main patterns to superimpose:
- The oldest data after all updates: A huge number of blocks running at 25% utilisation – with a Normal distribution spreading from 0% to 50% utilisation.
- The most recent 4 weeks of data: A relatively small cluster of blocks running at about 50% – with a long tail running all the way down to 0% utilisation.
- Data between its first and last update: A little cluster of blocks running at 70% – 100% utilisation (probably biased towards 70%) – with a long tail running down to 25% utilisation.
So our index has a huge amount of empty space in many of its blocks. Is this going to make any noticeable difference to performance ? In our case, it probably will. We do actually visit the poorly packed blocks that cover the last few weeks fairly regularly, using random I/Os; remember, we update 135,000 “older” rows per day, which means finding and deleting entries from those sparsely packed index leaf blocks.
It’s a case where better packing of the index probably would lead to better caching, hence fewer I/O requests for the index leaf blocks, and a reduced risk of other, more useful, data being kicked out of the cache. An index rebuild could lead to fewer disk I/O wait events.
I came across an example of this type of index quite recently and used a script I wrote some time ago to do a block level analysis (an example of the script is available on my website). This is what the results look like after a quick charting job in Excel:
The largest number of rows in any leaf block was 384 (slightly over 90% of the maximum possible – the index had been created with the default pctfree of 10) and the size of the index was 14,700 blocks.
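The script itself is on the website, but the core of the technique is the well-known sys_op_lbid() trick – counting index entries per leaf block. (The function is undocumented, so treat this as an illustrative sketch; 12345 stands in for the object_id of the index, and the names match my earlier examples.)

```sql
-- Distribution of index entries per leaf block
select rows_per_leaf, count(*) leaf_blocks
from  (
        select sys_op_lbid(12345, 'L', t.rowid) leaf_block,
               count(*)                         rows_per_leaf
        from   orders t
        where  trunc(last_update_date) is not null  -- visit only indexed rows
        group by sys_op_lbid(12345, 'L', t.rowid)
      )
group by rows_per_leaf
order by rows_per_leaf;
```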
After a call to ‘alter index XXX coalesce;’ – which took about 2 minutes to complete – the index dropped to 4,100 blocks. Of these, 3,250 held 377 index entries, and the remaining 750 blocks had the following distribution pattern:
You might note that in 10g, the coalesce command is able to reduce the number of rows in a leaf block that has filled past the pctfree limit – this is an enhancement over the 9i implementation.
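If you want to check the state of an index like this before and after a coalesce, a quick option is validate structure – remembering that it takes a lock on the underlying table, so it’s not something to run casually on a busy system:

```sql
analyze index ord_lud_trunc_idx validate structure;

-- index_stats holds one row, visible to the current session only
select lf_blks, lf_rows, del_lf_rows, pct_used
from   index_stats;
```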
So this looks like a case where a coalesce command, run every Saturday night perhaps, could have a small beneficial effect on a couple of key areas of the system, and may reduce a little of the general I/O load of the system. Unfortunately there are a number of other gains that could be made on the system at the same time – so it probably won’t be realistic to test this one in isolation to see exactly how much benefit it offers. It’s going to be a case of putting it in the basket, then keeping an eye out in case we pay for our reduced disk I/O with an increase in buffer busy waits (or the global cache equivalents) on that particular object.
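If the coalesce does go into the basket, scheduling it for Saturday night is straightforward – something along these lines with the 10g scheduler (the job name and index name are invented, and you’d pick your own timing):

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'weekly_lud_coalesce',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin execute immediate 'alter index ord_lud_trunc_idx coalesce'; end;]',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SAT; BYHOUR=23',
    enabled         => true
  );
end;
/
```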
Although there aren’t many cases where a regular index rebuild is likely to be beneficial, it is possible to recognise patterns of use that are likely to cause some indexes to degenerate quite dramatically.
Even when you’ve spotted such a case, you could choose to ignore it, especially if the inefficient parts of the index aren’t used regularly by the business; but if there’s a lot of activity in the areas where the index has degenerated a regular coalesce may be all it takes to address the issue cleanly, tidily, and safely.
A few other items I’ve posted about coalescing or rebuilding indexes