Oracle Scratchpad

March 3, 2010

Index Efficiency 3

Filed under: Indexing,Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 9:20 pm GMT Mar 3,2010

My last posting about indexes was an introduction to a pl/sql script that estimated the size each of your indexes would be if you rebuilt it at a given percentage of leaf-block utilisation. By comparing these estimates with the actual sizes of the indexes you can get a quick report of indexes that are probably significantly larger than they really need to be. If you find that you have some indexes that look suspect, the next step is to work out why, and whether or not it matters.

I can think of four classes of reasons why an index is larger than the sum of its data:

There is a bug somewhere in Oracle’s code that handles leaf node splits which can appear even at fairly low levels of concurrency. This results in excess ITL entries being allocated in the new leaf nodes. Once the ITL entries have been allocated they are always propagated across future leaf block splits – so you can easily end up with an index where half the space in most leaf blocks is wasted on unnecessary ITL entries.

The table has been subject to a massive delete (e.g. purging a lot of history) and the index hasn’t yet had time to refill the resulting empty space.

The index is a “FIFO” or “queue” index (such as the index organized tables (IOTs) that support Advanced Queueing). Data is inserted with a constantly increasing (time-based or sequence-based) key and is subsequently deleted; but if the deletion is delayed or is not 100% perfect, the index can end up with a long near-empty “tail”.

Indexes with a large number of rows per key (by which I mean enough to fill at least one or two leaf blocks per key) can regularly cause 50/50 block splits as new rows are inserted “out of sequence” into an existing key range. Because of the typical nature of extent allocation such indexes are likely to be slower to refill the empty space than a “high-precision” B-tree index where you can expect a fairly small number of rows for each key and a random arrival pattern for the key values.

If you can get some idea of how the data is spread across an index, and combine this with your knowledge of the application, you should be able to understand why an index has become larger than it should be and have a better idea of how to address the situation sensibly (ideally with a one-off fix and minimal ongoing maintenance).

To start with you can try to answer a few simple questions, and the first question is always – can I see that the space wastage is having an effect on performance? Assuming you think it probably is, you can go on to the following:

Has someone deleted a large fraction of the table in the recent past? If so then any apparent space wastage may be the natural side effect, and it may be a good idea to rebuild the index. (In fact it may be a good idea to rebuild the table using the dbms_redefinition package first. Rebuilding tables is easy to do – sometimes too easy – but can have undesirable side effects in certain circumstances – which I’ll have to describe in another posting some time.)
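
If you do take the table-first route, here’s a minimal sketch of an online rebuild using dbms_redefinition – the schema, table, and interim table names are invented for the example, and the interim table has to be created empty (with matching column definitions) before you start:

declare
        l_errors        pls_integer;
begin
        -- check the table is a candidate (by default, requires a primary key)
        dbms_redefinition.can_redef_table('TEST_USER', 'T1');

        -- start copying the data into the (pre-created, empty) interim table
        dbms_redefinition.start_redef_table('TEST_USER', 'T1', 'T1_INTERIM');

        -- recreate indexes, triggers, constraints, and grants on the interim table
        dbms_redefinition.copy_table_dependents(
                'TEST_USER', 'T1', 'T1_INTERIM',
                copy_indexes => dbms_redefinition.cons_orig_params,
                num_errors   => l_errors
        );

        -- swap the two tables over
        dbms_redefinition.finish_redef_table('TEST_USER', 'T1', 'T1_INTERIM');
end;
/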

Does your knowledge of the application tell you that this is a FIFO/Queue index? If so you probably need to include a call to coalesce the index regularly somewhere in the application code. It’s probably a good idea to rebuild the index once before you implement the regular coalesce, to release any excess space back to the tablespace.
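
Mechanically this can be as simple as a one-off rebuild followed by a scheduler job that issues the coalesce – the index name, job name, and schedule here are all invented for the example:

rem  one-off, to release the accumulated empty space
alter index q1_time_idx rebuild online;

rem  then keep the index tidy on a regular schedule
begin
        dbms_scheduler.create_job(
                job_name        => 'COALESCE_Q1_TIME_IDX',
                job_type        => 'PLSQL_BLOCK',
                job_action      => q'{begin execute immediate 'alter index q1_time_idx coalesce'; end;}',
                repeat_interval => 'freq=daily; byhour=2',
                enabled         => true
        );
end;
/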

If the index appears to be about twice the size you expect it might be the concurrency bug or the unlucky 50/50 split problem (are you likely to have hot spots in the index, or a few very popular values?). Dump a few blocks (see the sketch below) from one of the higher numbered extents in the index to see if they have a lot of ITL entries, or if they have roughly 50% free space. If they have a lot of ITL entries (significantly more than your expected level of real concurrent change) then rebuild the index and set maxtrans (possibly getting approval from Oracle support first for using the workaround described in one of my earlier articles). If you see half-empty blocks, then you may have to work out what it is about your application that makes that happen. Having thought about it, you may decide that the index shouldn’t even exist, or should be changed to a small collection of function-based indexes, or turned into a partitioned index. There are many variations on a theme – but the symptoms suggest an opportunity for improving the design and efficiency of your database.
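
If you’ve not done it before, picking and dumping a block can be done from SQL*Plus – the owner, index name, file and block numbers below are just examples; the symbolic block dump ends up in a trace file in user_dump_dest:

rem  find the most recently added extents of the index
select  extent_id, file_id, block_id, blocks
from    dba_extents
where   owner = 'TEST_USER'
and     segment_name = 'T1_IDX'
order by
        extent_id desc
;

rem  then dump one block from a high-numbered extent
alter system dump datafile 5 block 60042;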

If the index appears to be about four times the size you expect it’s possible that you’ve run into a variant of the 50/50 split problem combined with the concurrency bug. This is most likely to happen with a time-based or sequence-based index where (a) all inserts are taking place at the right hand block of the index and (b) the new data doesn’t arrive in absolutely perfect sequence order (after all, concurrent processes can get randomly pre-empted by the operating system). In this case you probably need to think about your options for recreating the index as a reverse-key index, as this generally spreads the inserts randomly across the index so you end up with a lower degree of concurrent activity on any one block, and a more typical 70% utilisation because of the random arrival effect. (This is likely to increase the amount of random I/O suffered by that index, though.)
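
You don’t need to drop and recreate the index to do this – a rebuild can make the change directly (again, the index name is invented):

rem  convert to a reverse-key index in a single rebuild
alter index t1_seq_idx rebuild reverse;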

The last two cases, in particular, need a little more thought than just the simple “it’s two / four times the size” – and this is what the script index_efficiency_3.sql is about. Given the schema, table, and index name, it produces a report which tells you about the distribution of rows across index leaf blocks.

The script is quite expensive to run – it reads every index leaf block, calls an undocumented function for every non-deleted entry it finds, and then performs a large aggregation. Make sure you read the notes before you use the script. (For an example of use, showing some Excel charts built from the output, see the note at this URL.)
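
For a flavour of what such a script has to do, here’s a minimal sketch of the type of query involved – this is not the script itself, and the table name, index name, column name and bind variable are all invented. The undocumented function in question is sys_op_lbid(), which takes an index object_id and a table rowid and returns an address for the index leaf block holding the corresponding index entry; the hint and the “not null” predicate are there to persuade Oracle to walk the target index:

select
        rows_per_block,
        count(*)                                        blocks,
        rows_per_block * count(*)                       total_rows,
        sum(count(*)) over (order by rows_per_block)    running_total
from    (
        select
                /*+ index_ffs(t1 t1_idx) */
                count(*)        rows_per_block
        from
                t1
        where
                indexed_col is not null
        group by
                sys_op_lbid(:index_object_id, 'L', t1.rowid)
        )
group by
        rows_per_block
order by
        rows_per_block
;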

Update:

As an example of the output from the code, here’s an index that I was looking at quite recently. The index is a classic example of the FIFO problem. Before a call to coalesce it totalled about 33,000 used leaf blocks, and the report – which took just over two minutes to produce – looked like this (with a couple of hundred lines removed, of course):

ROWS_PER_BLOCK     BLOCKS TOTAL_ROWS RUNNING_TOTAL
-------------- ---------- ---------- -------------
             1       8706       8706          8706
             2       4830       9660         13536
             3       2953       8859         16489
             4       1648       6592         18137
             5        997       4985         19134
             6        628       3768         19762
             7        300       2100         20062
             8        162       1296         20224
             9         87        783         20311
            10         52        520         20363
...
           227        100      22700         31871
           228        111      25308         31982
           229         84      19236         32066
           230         97      22310         32163
           231         77      17787         32240
           232         61      14152         32301
           233         54      12582         32355
           234        529     123786         32884
               ---------- ----------
sum                 32884    1887595

Note, particularly, that about 20,000 of the blocks have seven entries or fewer, while the most heavily used blocks manage to hold 234 entries.

After the coalesce (which also took just over two minutes), the report took 34 seconds to run, and looked like this:


ROWS_PER_BLOCK     BLOCKS TOTAL_ROWS RUNNING_TOTAL
-------------- ---------- ---------- -------------
             2          1          2             1
             7          1          7             2
            10          1         10             3
            11          1         11             4
            13          1         13             5
            14          1         14             6
            23          1         23             7
            29          1         29             8
            33          1         33             9
            34          1         34            10
...
           418         47      19646           730
           419         96      40224           826
           420        154      64680           980
           421        193      81253          1173
           422       3731    1574482          4904
               ---------- ----------
sum                  4904    1910921

We’ve dropped to fewer than 5,000 blocks, with only a scattering of blocks holding a small number of index entries – probably a result of the fact that coalesce doesn’t work across branch block boundaries.

The result shows us something else, though. Each leaf block is capable of holding 422 index entries rather than the limit of 234 that we saw before; since 234/422 is only about 55%, this tells us that the index was probably also suffering from the concurrency bug, with nearly half of each leaf block lost to ITL entries (a problem that isn’t a great surprise when a sequence is used for a key and the levels of concurrency are quite high).

After the coalesce we still have 28,000 blocks in the index segment on its freelist (or marked as free in its space management bitmap – I can’t remember whether or not this index was in an ASSM tablespace), so at some stage we should rebuild the index, then change the initrans and fake in a sensible maxtrans, and introduce a regular coalesce.
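
In SQL terms the one-off step is just a rebuild that also sets initrans (the index name and value are purely illustrative); maxtrans can no longer be set directly in recent versions, which is why it has to be “faked in” using the workaround mentioned earlier:

alter index fifo_idx rebuild initrans 2;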

There’s one more thing we could do to understand the state of a problem index – but I’ll have to leave that for another blog post.

14 Comments

  1. Hi Jonathan, I am a regular reader of all of your forums on your website as well as on Asktom and others. I really appreciate the help you are providing to the Oracle community.

    We are an OLTP shop and use sequences for primary keys, so most of our indexes are right-hand. Our environment is 2-node RAC on Solaris and we use ASM/ASSM. Recently we deleted some data from one of our OLTP tables and started seeing a huge drop in website response time. I see lots of node splits, ITL waits and free buffer waits on the indexes for this table. The initrans is set to 100 and maxtrans is set to 255, with pctfree 10%. We did rebuild the indexes after the deletion. One of the transactions having a problem includes an insert into this table followed by an update. The avg row size is around 25 bytes for most of the indexes on this table. So my question is – what else can we do to reduce the number of node splits? Would rebuilding the index with a higher pctfree help?

    Thanks
    Parvesh

    Comment by Parvesh Kumar — March 4, 2010 @ 12:29 pm GMT Mar 4,2010 | Reply

    • Parvesh.

      There’s not really enough detail in your comments to make a good diagnosis, and it would be easy to mis-interpret what you’ve said.

      What version of Oracle ?

      Is the drop in response time intermittent, completely random, or consistent ?

      Are the free buffer waits only on the primary key indexes, or on other indexes on the table ?

      Are you not seeing “gc buffer busy waits”, or even “buffer busy waits” ?

      Were the deletes for old (i.e. low key value) data, or were they across the entire range of sequence values ?

      Are any of the indexes reverse key (which some people do with sequence-based indexes)?

      Are the sequences cache/nocache, order/noorder. If they are cached is it the default cache or a large cache ?

      Can you confirm you really did mean all your indexes are set with INITRANS = 100 ? (Just in case you meant the tables had initrans = 100, or only the sequence-based indexes)?

      Did you do the index rebuilds immediately after the deletes, or did you rebuild the indexes because performance got worse and you thought a rebuild would help ?

      Do you have any idea which indexes are subject to most (or longest) FB waits and ITL waits ?

      Have you checked v$segstat for object level statistics ?

      Are you using the keep and recycle caches ?

      Comment by Jonathan Lewis — March 5, 2010 @ 10:37 am GMT Mar 5,2010 | Reply

      • Here are the details :-

        What version of Oracle ? 10.2.0.4 on Solaris 10.

        Is the drop in response time intermittent, completely random, or consistent ?
        It is consistent since we deleted data from the table.

        Are the free buffer waits only on the primary key indexes, or on other indexes on the table ? The primary key is sequence-based but we don’t see any waits on it; rather, the waits are on the other big indexes.

        Are you not seeing “gc buffer busy waits”, or even “buffer busy waits” ?
        we do see both of these waits on the indexes.

        Were the deletes for old (i.e. low key value) data, or were they across the entire range of sequence values ?
        Deletes were to purge old data. That data won’t be inserted any time soon, or maybe never at all.

        Are any of the indexes reverse key (which some people do with sequence-based indexes)?
        No we don’t use reverse key indexes at all.

        Are the sequences cache/nocache, order/noorder. If they are cached is it the default cache or a large cache ?
        Sequences are cached with a cache value of more than 10000.

        Can you confirm you really did mean all your indexes are set with INITRANS = 100 ? (Just in case you meant the tables had initrans = 100, or only the sequence-based indexes)?
        Please see below:-

        stats after index rebuild:-

        SQL> select INDEX_NAME,INI_TRANS,MAX_TRANS,NUM_ROWS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY from dba_indexes where table_name='CENRESPONSEDETAIL_1';
        
        INDEX_NAME                      INI_TRANS  MAX_TRANS   NUM_ROWS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY
        ------------------------------ ---------- ---------- ---------- ----------------------- -----------------------
        PK_CENRESPONSEDETAIL_1                100        255 3451932737                       1                       1
        IDX_CRD1_1                            100        255 3506713096                      88                   19759
        IDX_CRD1_2                            100        255 2157529757                      19                    2736
        IDX_CRD1_3                            100        255 3471412663                       1                       1
        IDX_CRD1_4                            100        255 3503166835                       1                       1
        IDX_CRD1_5                            100        255 3498456992                       1                       1
        IDX_CRD1_6                            100        255 3504149465                       1                       1
        
        7 rows selected.
        
        SQL> select TABLE_NAME,PCT_FREE,INI_TRANS,MAX_TRANS,NUM_ROWS,BLOCKS ,EMPTY_BLOCKS,AVG_ROW_LEN from dba_tables where table_name='CENRESPONSEDETAIL_1';
        
        TABLE_NAME                       PCT_FREE  INI_TRANS  MAX_TRANS   NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
        ------------------------------ ---------- ---------- ---------- ---------- ---------- ------------ -----------
        CENRESPONSEDETAIL_1                    10        100        255 3475661637   36810214        42559          51
        
        SQL> select segment_name,bytes/(1048576*1024) from dba_segments where segment_name='CENRESPONSEDETAIL_1';
        
        SEGMENT_NAME                                                                      BYTES/(1048576*1024)
        --------------------------------------------------------------------------------- --------------------
        CENRESPONSEDETAIL_1                                                                          285.15625
        
        SQL> select segment_name,bytes/(1048576*1024) from dba_segments where segment_name in (select INDEX_NAME from dba_indexes where table_name='CENRESPONSEDETAIL_1');
        
        SEGMENT_NAME                                                                      BYTES/(1048576*1024)
        --------------------------------------------------------------------------------- --------------------
        PK_CENRESPONSEDETAIL_1                                                                      113.769531
        IDX_CRD1_1                                                                                  90.8203125
        IDX_CRD1_2                                                                                  102.539063
        IDX_CRD1_3                                                                                  167.480469
        IDX_CRD1_4                                                                                  170.410156
        IDX_CRD1_5                                                                                  167.480469
        IDX_CRD1_6                                                                                  181.152344
        
        7 rows selected.
        =======================================
        Same stats before the index rebuild
        ========================================
        SQL> select segment_name,bytes/(1048576*1024) from dba_segments where segment_name in (select INDEX_NAME from dba_indexes where table_name='CENRESPONSEDETAIL_1');
        SEGMENT_NAME                    BYTES/(1048576*1024)
        ------------------------        --------------------
        PK_CENRESPONSEDETAIL_1                    112.792969
        IDX_CRD1_1                                163.085938
        IDX_CRD1_2                                  101.5625
        IDX_CRD1_3                                165.527344
        IDX_CRD1_4                                  257.8125
        IDX_CRD1_5                                166.992188
        IDX_CRD1_6                                186.035156
        
        7 rows selected.
        
        SQL> select index_name, LEAF_BLOCKS,DISTINCT_KEYS,AVG_LEAF_BLOCKS_PER_KEY,AVG_DATA_BLOCKS_PER_KEY,CLUSTERING_FACTOR,NUM_ROWS from dba_indexes where table_name='CENRESPONSEDETAIL_1';
        
        INDEX_NAME                LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR   NUM_ROWS
        ------------------------- ----------- ------------- ----------------------- ----------------------- ----------------- ----------
        PK_CENRESPONSEDETAIL_1       14575087    3451932737                       1                       1         505733620 3451932737
        IDX_CRD1_1                   21283897        130738                     162                   20111        2629293623 3482095807
        IDX_CRD1_2                   13245500        685024                      19                    2736        1874251820 2157529757
        IDX_CRD1_3                   21507130     282013977                       1                       1         480053723 3471412663
        IDX_CRD1_4                   33518250    1833751750                       1                       1         661135787 3468067940
        IDX_CRD1_5                   21789717    2745574118                       1                       1        1419234050 3481621950
        IDX_CRD1_6                   24245970    1857577218                       1                       1        1372502833 3482156097
        
        7 rows selected.   
        

        Did you do the index rebuilds immediately after the deletes, or did you rebuild the indexes because performance got worse and you thought a rebuild would help ?
        We did the index rebuild two weeks after deleting the data, when performance didn’t improve.

        Do you have any idea which indexes are subject to most (or longest) FB waits and ITL waits ?
        Yes, IDX_CRD1_4 ,IDX_CRD1_5 and IDX_CRD1_6

        Have you checked v$segstat for object level statistics ?

        I guess you mean v$segment_statistics. Yes that’s where we found ITL and other waits.

        Are you using the keep and recycle caches ?

        No, we are not using the keep or recycle pools.

        Comment by Parvesh — March 5, 2010 @ 4:04 pm GMT Mar 5,2010 | Reply

  2. […] Troubleshooting — Jonathan Lewis @ 6:32 pm UTC Mar 7,2010 In an earlier article about investigating the state of an index in detail I supplied a piece of SQL that would analyse an index (no, not using the Analyze command) […]

    Pingback by Treedump – 2 « Oracle Scratchpad — March 7, 2010 @ 6:34 pm GMT Mar 7,2010 | Reply

  3. […] Estimating the correct index siz […]

    Pingback by Index analysis « Oracle Scratchpad — March 10, 2010 @ 10:44 pm GMT Mar 10,2010 | Reply

  4. […] 13-How to identify index efficiency? Jonathan Lewis-Index Efficiency-3 […]

    Pingback by Blogroll Report 26/02/2010 – 05/03/2010 « Coskan’s Approach to Oracle — March 25, 2010 @ 1:49 pm GMT Mar 25,2010 | Reply

  5. […] queue tables are, of course, a perfect example of what I call the “FIFO” index so it’s not a surprise that they might need special consideration. Rather than rewrite the […]

    Pingback by Index Rebuilds « Oracle Scratchpad — March 3, 2011 @ 6:47 pm GMT Mar 3,2011 | Reply

  6. Hi Jonathan,

    Excellent article. We have a DSS application where we run a few selects and a huge number of inserts. Deletes are few and far between, or none at all. We keep seeing huge “enq: TX – index contention” waits on our global (non-partitioned) indexes.
    I am not sure if the indexes are right-side heavy, as the application inserts random request_ids into the index. In AWR reports we see branch and leaf node splits happening on the index. Do you think coalescing the index could be one of the approaches we can take?

    Vik

    Comment by vik — January 31, 2012 @ 8:39 pm GMT Jan 31,2012 | Reply

  7. Jonathan,

    A single-row insert on the table is taking ~90 secs at times, with a lot of “db file sequential read” waits on the table’s indexes. I don’t see leaf block splits, and the indexes are well compacted as shown below.
    I don’t see “ITL” wait events during that time frame either.

    unique index with 2 columns
    ===============================
    
    ROWS_PER_BLOCK     BLOCKS     ROW_CT CUMULATIVE_BLOCKS
    -------------- ---------- ---------- -----------------
                24          1         24                 1
                94     159396   14983224            159397
                95        484      45980            159881
                   ---------- ----------
    sum                159881   15029228
    
    PK index 1 column
    =================
    
    ROWS_PER_BLOCK     BLOCKS     ROW_CT CUMULATIVE_BLOCKS
    -------------- ---------- ---------- -----------------
               100          1        100                 1
               166      90050   14948300             90051
               167        484      80828             90535
                   ---------- ----------
    sum                 90535   15029228
    
    

    Comment by dbabibleantony — June 25, 2014 @ 5:19 pm BST Jun 25,2014 | Reply

    • You didn’t mention the version – which might be relevant; and it would be nice to know whether the indexes have been created by nothing more than small OLTP-style inserts, whether they’ve been rebuilt recently, or have been subject to large direct path inserts, etc. There may be something about the previous growth of the indexes that is significant.

      I’ll take a guess that you’re using ASSM (automatic segment space management) and something is going wrong after bulk inserts that leave a lot of space management blocks claiming that a lot of leaf blocks are empty and ready for use when actually they’re full. This would leave Oracle reading many bitmap blocks and then 64 or 128 leaf blocks at a time trying to find a block that really was formatted and empty.

      Check the trace file – if you have one – or the ASH data – if you captured it in time and are licensed – to see if you can spot intermittent bitmap block addresses in a long chain of leaf block addresses.
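
      As a sketch only (the bind variable is just a placeholder, and the columns you can use depend on what you’ve captured), you might scan the session’s ASH rows with something like this:

      select
              sample_time, event,
              p1 file#, p2 block#,        -- file and block of each single-block read
              current_obj#
      from    v$active_session_history
      where   session_id = :sid
      and     event = 'db file sequential read'
      order by
              sample_time
      ;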

      Comment by Jonathan Lewis — June 25, 2014 @ 6:19 pm BST Jun 25,2014 | Reply

        RDBMS version is 10.2.0.5. The unique keys are generated by Java code; I pasted some sample values of the columns below. We use ASSM. The info after the rebuild was taken from another database which is refreshed every week using import. There have been no bulk inserts done till now.

        Col1                                                                  col2
        -------------------------------------------------- ---------------------------------
        00000269b8f3458fa78315cca29aea5b                   97d40bb3b8ec4ff086706c1a9a8c388c
        000004ee94434688831e710dfdefa13c                   920809b70f114632aa7e2ef6ef83be97
        
        

        Comment by dbabibleantony — June 25, 2014 @ 6:45 pm BST Jun 25,2014 | Reply

    • Jonathan, you are spot on. I got the information from the development team that they are doing bulk inserts on this table. From the ASH data, I could see only wait times on the event “db file sequential read” on the indexes (listed in current_obj#) of the table.

      Comment by dbabibleantony — June 25, 2014 @ 7:51 pm BST Jun 25,2014 | Reply

  8. I am confused by this report. Could you explain the difference in the efficiency of the index before and after the rebuild?

    After rebuild
    =========
    
    ROWS_PER_BLOCK     BLOCKS     ROW_CT CUMULATIVE_BLOCKS
    -------------- ---------- ---------- -----------------
                24          1         24                 1
                94     159396   14983224            159397
                95        484      45980            159881
                   ---------- ----------
    sum                159881   15029228
    
    Before Rebuild
    ===========
    
    ROWS_PER_BLOCK     BLOCKS     ROW_CT CUMULATIVE_BLOCKS
    -------------- ---------- ---------- -----------------
                42          1         42                 1
                43          1         43                 2
                46          6        276                 8
                47         63       2961                71
                48       1411      67728              1482
                49       2430     119070              3912
                50       2998     149900              6910
                51       3523     179673             10433
                52       3966     206232             14399
                53       4508     238924             18907
                54       4928     266112             23835
                55       5346     294030             29181
                56       6140     343840             35321
                57       6976     397632             42297
                58       7267     421486             49564
                59       6990     412410             56554
                60       6698     401880             63252
                61       6525     398025             69777
                62       6117     379254             75894
                63       5853     368739             81747
                64       5459     349376             87206
                65       5242     340730             92448
                66       4920     324720             97368
                67       4833     323811            102201
                68       4495     305660            106696
                69       4448     306912            111144
                70       4108     287560            115252
                71       4016     285136            119268
                72       3891     280152            123159
                73       3800     277400            126959
                74       3617     267658            130576
                75       3564     267300            134140
                76       3511     266836            137651
                77       3420     263340            141071
                78       3202     249756            144273
                79       3158     249482            147431
                80       3046     243680            150477
                81       3150     255150            153627
                82       2922     239604            156549
                83       2902     240866            159451
                84       2750     231000            162201
                85       2824     240040            165025
                86       2736     235296            167761
                87       2614     227418            170375
                88       2608     229504            172983
                89       2610     232290            175593
                90       2522     226980            178115
                91       2379     216489            180494
                92       2447     225124            182941
                93       2416     224688            185357
                94       2512     236128            187869
                95       2346     222870            190215
                96       2229     213984            192444
                97       2233     216601            194677
                98       2251     220598            196928
                99       2294     227106            199222
               100       2160     216000            201382
               101       2137     215837            203519
               102       2208     225216            205727
               103       2109     217227            207836
               104       2128     221312            209964
               105        938      98490            210902
                   ---------- ----------
    sum                210902   15093584
    

    Comment by dbabibleantony — June 25, 2014 @ 6:40 pm BST Jun 25,2014 | Reply


