There are many suggestions floating around the internet about how to identify which Oracle indexes to rebuild. One of these involves running the validate (or analyze index validate) command on an index and checking the resulting figures in the view index_stats to see whether del_lf_rows exceeds some percentage (usually 20%) of lf_rows, rebuilding the index if it does. Since this suggestion came up, yet again, in a recent OTN thread about Oracle index rebuilding I thought I’d write up a quick note highlighting one of the obvious flaws in the argument.
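In outline the suggested check looks something like the following sketch (the index name is purely illustrative, and you’d want to guard against lf_rows being zero for an empty index if you ever scripted it):

	-- Sketch of the suggested check; "my_index" is an illustrative name.
	validate index my_index;

	select
	        100 * del_lf_rows/lf_rows       deletion_percent
	from
	        index_stats
	;

	-- The advice then says: rebuild my_index if deletion_percent exceeds 20.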
I’m not going to bother pointing out the threat inherent in the table-locking when you use the “validate index” command (or the “analyze index validate” equivalent) but I think it’s worth making some comments about the misunderstanding built into this policy. So let’s start by building some data, creating an index on it, then deleting 90% of the data:
create table t1
as
with generator as (
	select	--+ materialize
		rownum	id
	from	dual
	connect by rownum <= 10000
)
select
	1000000 + rownum	id,
	lpad(rownum,10,'0')	small_vc
from
	generator	v1,
	generator	v2
where
	rownum <= 40000
;

create index t1_i1 on t1(id);

validate index t1_i1;

select
	100 * del_lf_rows/lf_rows	deletion_percent
from
	index_stats
;

delete from t1 where mod(id,100) >= 10;
commit;

validate index t1_i1;

select
	100 * del_lf_rows/lf_rows	deletion_percent
from
	index_stats
;
I haven’t bothered to collect statistics in this code since I’m not interested in execution plans, only in the amount of data deleted and what this does to the physical structure of the index. Here’s the output of my script, starting from the moment just after I’ve created the index:
Index analyzed.

DELETION_PERCENT
----------------
               0

1 row selected.

36000 rows deleted.

Commit complete.

Index analyzed.

DELETION_PERCENT
----------------
              90

1 row selected.
According to the general advice, this index is now in need of a rebuild since del_lf_rows is far more than 20% of lf_rows – but before we rebuild the index let’s delete a little more data.
delete from t1 where mod(id,100) = 0;
commit;

validate index t1_i1;

select
	100 * del_lf_rows/lf_rows	deletion_percent
from
	index_stats
;
My first delete statement got rid of 90% of the data leaving the 4,000 rows where mod(id,100) was between zero and nine. So my second delete has eliminated 10% of the remaining 4,000 rows. Let’s see what we get when we validate the index:
400 rows deleted.

Commit complete.

Index analyzed.

DELETION_PERCENT
----------------
              10

1 row selected.
How wonderful – by deleting a few more rows we’ve got to a state where we don’t need to rebuild the index after all!
This note is not intended to tell you when you should, or should not, rebuild an index. It is simply trying to highlight the fact that anyone who thinks that a value exceeding 20% for del_lf_rows / lf_rows is a sensible indicator does not have a proper understanding of how indexes work, and is basing their assumption on an over-simplistic model. In this case the obvious error in this model is that it allows you to miss indexes which might actually benefit from some action; but there’s also the possibility that a little random slippage in timing may persuade you to rebuild an index because you happened to check it in the window between deletion and re-use.
The problem is based on a fundamental misunderstanding, which is this: if you believe that an index entry reserves a spot in an index and, when deleted, leaves behind a little hole that can never be reused (unless you re-insert exactly the same value), then inevitably you will think that the del_lf_rows figure is some sort of measure of the actual space that could be reclaimed.
But, as the Oracle myth busters like Richard Foote have been saying for years, that’s not how Oracle’s B-tree indexes work. When you delete an index entry, Oracle marks it as deleted but leaves it in place. When you commit your transaction Oracle does nothing to the index entry – but other processes now know that the entry can be wiped from the block allowing the space to be re-used.
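If you want to watch this happening, a treedump is one way to see the deleted entries at the leaf-block level – the following is just a sketch using the standard treedump event, with a placeholder for the object_id:

	-- Find the object_id of the index (T1_I1 from the test above).
	select object_id from user_objects where object_name = 'T1_I1';

	-- Substitute the object_id reported above for the literal 99999.
	alter session set events 'immediate trace name treedump level 99999';

	-- In the resulting trace file each leaf block reports nrow (total
	-- entries, including those marked as deleted) and rrow (entries not
	-- marked as deleted), so a leaf block full of committed deletes
	-- shows up with rrow = 0.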
This is what del_lf_rows is about – it’s the number of rows that are marked as deleted by transactions that have committed; and since the validate command can only run if it has locked the table in exclusive mode, any index entries marked for deletion will inevitably be committed deletes. So after I had deleted (and committed) 36,000 rows there were 36,000 entries in the index marked as deleted and committed; when I deleted a few more entries my second transaction wiped the deleted entries from any leaf blocks it visited, tidying the blocks (with a “heap block compress”) before marking a few more rows as deleted.
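If you want to see the raw counters rather than just the ratio you could extend the validation query – a small variation on the script above, with the expected figures (derived from the arithmetic of the test, not reproduced from its output) noted in the comment:

	validate index t1_i1;

	select
	        lf_rows, del_lf_rows, lf_blks
	from
	        index_stats
	;

	-- After the second delete, lf_rows should have dropped from 40,000
	-- to (roughly) 4,000 while del_lf_rows is 400: the 36,000 entries
	-- marked deleted by the first transaction have been wiped from the
	-- leaf blocks the second delete visited, leaving only the 400 newly
	-- marked entries.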
The upshot of this is that many systems (especially OLTP systems) will see del_lf_rows as a fairly small fraction of lf_rows because most systems tend to do little updates scattered randomly across lots of leaf blocks – constantly wiping out the rows marked as deleted by earlier transactions.
It’s really only in the case where a single large delete has taken place in the recent past that you’re likely to see a lot of deleted rows still in the index when you validate it; and – as most people are now aware – a process that is supposed to do a very large delete is a process that should be considered at the design phase as a candidate for special treatment, such as dropping or disabling some indexes before the delete and rebuilding them afterwards. It won’t be a process where you have to validate the index afterwards to decide roughly how much data you’ve deleted and where in the index that data was; it’s a process where you’ll know what’s going to happen before it happens.
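For the record, the “special treatment” might look something like the following sketch – the DDL is standard, but the details (unusable vs. drop, parallelism, logging) would depend on the system:

	-- Sketch of one strategy for a large, pre-planned delete.
	-- Note: DML can only skip an unusable index if skip_unusable_indexes
	-- is true (the default from 10g onwards) and the index is not
	-- enforcing a unique constraint.

	alter index t1_i1 unusable;               -- stop maintaining the index

	delete from t1 where mod(id,100) >= 10;   -- the large delete
	commit;

	alter index t1_i1 rebuild;                -- rebuild from the remaining data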