Oracle Scratchpad

September 15, 2009

Index Explosion – 4

Filed under: Index Explosion,Indexing,Infrastructure,Performance,Troubleshooting — Jonathan Lewis @ 7:30 pm BST Sep 15,2009

After describing how to deal most effectively – but only after approval from Oracle Support – with the problem of indexes wasting space on unnecessary ITL entries, I left you with a short list of “supportable” options for addressing the problem. In this note I’m going to outline a few pros and cons of each of those options. The list was as follows:

  • Use a coalesce when necessary – possibly on a very regular basis
  • Create the index as a reverse-key index
  • Create the index as a globally hash-partitioned index
  • Get rid of (or change) the index

Coalesce

The command “alter index XXX coalesce” makes Oracle walk through the index leaf blocks in index order, attempting to pack the data from two or more adjacent leaf blocks into a smaller number of leaf blocks, using the current settings of initrans and pctfree to work out the space to use in each block. It won’t coalesce leaf blocks under different branch blocks, though, so it can’t reduce the height of an index. The blocks that are emptied out by the coalesce are removed from the index structure and put on the freelist.

The coalesce command operates as a series of small isolated transactions (so could, in theory, increase the risk of Oracle error ORA-01555: snapshot too old – although in practice this is a little unlikely). The coalesce command also skips blocks holding uncommitted transactions.

On the plus side, the coalesce command may not have to do much work, and it is an online command. On the minus side, it does have to read the entire index and can generate quite a lot of undo and redo if the concurrency issue has affected blocks scattered all the way through the index (in this case, though, you may decide not to do anything about the wasted space – or perhaps only fix it once every few weeks or months).

An alternative to the coalesce command is to rebuild the index – and you may want to do this anyway the first time you realise that you have an index suffering from this concurrency problem. But the rebuild command will lock the table for the duration of the rebuild unless you use the online option – in which case the rebuild has to do a tablescan and sort (as well as maintain, then apply, a journal of changes, and lock the table briefly at the start and end of processing). There are ways to minimise the resources used in a rebuild – but in general you probably don’t want to schedule a regular rebuild for this problem.
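In syntax terms the two options are simple; here is a sketch using a hypothetical index name:

```sql
-- Pack adjacent leaf blocks in place; an online operation:
alter index t1_ts_idx coalesce;

-- The heavier alternative: a plain rebuild locks the table for the duration ...
alter index t1_ts_idx rebuild;

-- ... unless you use the online option, which trades the long lock for a
-- tablescan, a sort, and a journal of concurrent changes:
alter index t1_ts_idx rebuild online;
```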

Update 27th April 2010: Timur Akhmadeev has written a nice article discussing the limitations of using the coalesce to address the problem of an exploding ITL.

Reverse Key

In a reverse key index each column in the index has its bytes reversed, so key values that were originally adjacent (tend to) become randomly scattered throughout the index. As a side effect, you are likely to spread the pattern of inserts more randomly through the index, which means each block probably sees less concurrent action, and the ITL issue “accidentally” disappears as a side effect.

There are several undesirable side effects to reverse key indexes, though.

You’ve effectively randomised the key insertion point, so an index which used to show a nice time-dependent clustering pattern suddenly loses that pattern and may appear undesirable to the optimizer unless you use calls to the dbms_stats package to adjust the index statistics by supplying a nice clustering_factor.

If you were previously inserting, and subsequently using, data at the “top” (high-values) end of the index you could have been getting a very good buffering effect on that part of the index. By recreating the index with reversed keys you are now inserting keys randomly throughout the index, so you may have to allow more space in your db cache to keep the recently used index blocks buffered. Reversing an index could cause an increase in disc I/O, even if the data usage and execution plans don’t change.

Finally, the optimizer cannot apply range-based predicates to reverse key indexes. (Note – this is not the same as using index range scans on the index: if you have a multi-column index that has been reversed the optimizer can still do a range scan for an equality predicate such as “COL1 = constant” on the leading columns). As a result of this, some of your execution plans might change dramatically.
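As a sketch (the table, index, and clustering_factor value are all hypothetical), the recreation and the statistics adjustment mentioned above might look like this:

```sql
-- Recreate the index with its key bytes reversed:
create index t1_seq_idx on t1(seq_no) reverse;

-- The randomised insertion pattern leaves a very high clustering_factor,
-- so you may choose to supply a friendlier value; parameters left null
-- are left unchanged by dbms_stats.set_index_stats:
begin
  dbms_stats.set_index_stats(
    ownname => user,
    indname => 'T1_SEQ_IDX',
    clstfct => 10000   -- hypothetical value reflecting the original pattern
  );
end;
/
```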

Global Hash Partitioned

This often looks like an easy winner. Create the index as a globally hash partitioned index, partitioned on the first column of the index. The number of partitions should take account of the degree of concurrency you have to deal with, and could easily be somewhere between 16 and 128 in extreme cases. (The number should always be a power of two because of the strategy that Oracle uses for hash partitioning).

This works because you introduce far more insertion points into the index for data items that look similar. Where a single block of the index was being hit by (say) 20 concurrent inserts, those inserts are now likely to be spread over many different blocks scattered across the N partitions.

There are downsides, of course. For a start, the partitioning option is only available for Enterprise Edition, it’s a licensed extra, and it’s not cheap. On a technical note, you’ve also introduced a problem similar to one of the problems with reverse key indexes. The hash-partitioned index is fine for predicates like “COLX = constant”, where the optimizer can pick a single hash partition and the run-time task can be efficient; but if you start using range-based predicates on the hash column then your query will have to work through every single index partition as it runs, and if you’ve created the index with a large number of partitions the overhead of the extra index probes may be significant for what is supposed to be a light-weight (small number of rows) query.
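The creation statement itself is straightforward; a sketch with hypothetical names and 16 partitions (a power of two, as discussed above):

```sql
create index t1_id_idx on t1(id)
global partition by hash (id)
partitions 16;
```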

Get rid of the index

This may not be the first thing that crosses your mind when you see the problem – but it is a possibility to be considered. The indexes that are most likely to run into the ITL explosion are “meaningless key” indexes (which you may need to keep and reverse) and the “timestamp” index, and I have seen cases where the latter type of index is simply a waste of space. When examining problem indexes always remember Cary Millsap’s comment: “the fastest way to do something is to not do it at all”. Sometimes the best way to fix a troublesome index is to drop it.

And finally:

The other question I left unanswered in the previous note in this series was: “Why haven’t I noticed the space-wasting phenomenon more often, despite seeing indexes where it could have been happening?” There are several answers to this question, for example:

  • I’ve been called in to address more significant problems, and the side effects of an index that was two or three times larger than it should have been simply didn’t show up in comparison to the other performance issue.
  • The DBAs may have been doing regular (and usually pointless) index rebuilds, so the issue is never visible when I’m on site.
  • (This is one I really like). I’ve seen the problem but attributed the symptoms to something that is often the underlying cause of the ITL issue – the time-based index with a slowly deleted tail (also known as the FIFO index or, in Tom Kyte’s terminology, the “sweeper”). The solution for one problem also fixes the other – so there have probably been times when I’ve killed two birds with one stone without realising it. If you want an example of this, take another look at this article on index analysis and ask yourself how much of the space wastage the article describes was due to deleting from the tail of the index, and how much of it was due to the ITL concurrency problem as users were inserting the data. I knew that the index was going to be a disaster area before I looked at it, so I didn’t look at it closely enough or I would probably have seen the ITL problem as well as the FIFO problem.

Indexes are fundamentally different from (heap) tables, and there are several interesting things that can happen to them because of the difference. Sometimes you need to look at them very closely before you understand how to make best use of them and how to avoid the performance problems that they can introduce.

Footnote: Just in case you haven’t come to this article by way of the earlier articles in the series, I should mention for completeness that in earlier versions of Oracle you could avoid the ITL problem by setting maxtrans. Workarounds only became necessary in 10g, where Oracle started to ignore maxtrans.

[Further reading on Index ITL Explosion]

15 Comments

  1. Thanks Jonathan, a very good summary of options to consider there. And not just for those with “space problems”, but IMHO, for everyone with index problems of any nature (buffer-busy-waits, locking, skew..). A lot of your relatively simple alternatives are often overlooked or not used because of fear-of-unknown. And the need for range-scan is often over-estimated: most accesses are likely to be “equal”, opening the options for reverse- and hash-partition.

    As for the get-rid-of-the-index option, there are nowadays too many tables with two unique keys: a meaningless sequence and an often slightly more meaningful Actual Key. Can we please get rid of at least one of those? And I mean: consider removing the column, please.

    As for a timestamp-index, if it is only used for purging or ETL/archiving, there are alternatives. Firstly, there is purging via the parent-id (which will likely also have a timestamp). Or purging per partition (hash- or otherwise), either with or without a local index. You can loop over every partition, and do 128 small purges instead of one index-based large purge that takes forever.
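    A sketch of that partition-by-partition purge (the table name, column, and retention period are all hypothetical):

```sql
-- Loop over the partitions, doing one small delete (and commit) per
-- partition rather than one huge index-driven delete:
begin
  for p in (
    select partition_name
    from   user_tab_partitions
    where  table_name = 'T1'
  ) loop
    execute immediate
      'delete from t1 partition (' || p.partition_name || ')' ||
      ' where last_modified < sysdate - 90';
    commit;
  end loop;
end;
/
```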

    Comment by PdV — September 15, 2009 @ 10:04 pm BST Sep 15,2009 | Reply

  2. Nice as always.

    I like the reverse-key part most because now I’m working on a project that works with telephone numbers, and I suggested indexing them with a reverse key index to mitigate the right-hand index problem.

    The project is also very insert intensive, so, maybe, the ITL problem is also addressed.

    Now an eye open on the clustering_factor.

    Thank you. :)
    Antonio

    Comment by lascoltodelvenerdi — September 16, 2009 @ 7:13 am BST Sep 16,2009 | Reply

  3. Hi Jonathan,

    Could you elaborate on “(The number should always be a power of two because of the strategy that Oracle uses for hash partitioning).” ?

    Also do you know what function Oracle uses to distribute data in hash table partitions? For example, I would like to see the would-be hash distribution of a table without actually hash-ing it. Looking for something like part_hash([any_data_type], [number_of_hash]) and returning the partition number.

    Comment by Christo Kutrovsky — September 17, 2009 @ 8:26 pm BST Sep 17,2009 | Reply

    • Christo,

      This was a topic I covered in “Practical Oracle 8i” Chapter 12. The “power of 2” implementation makes it relatively painless to increase or decrease the number of partitions in a hash-partitioned table as you don’t have to redistribute all the data in the table, only the data of one (to split) or two (to merge) partitions.

      When I wrote the book, 8i had a packaged function dbms_utility.get_hash_value() that could be used to return the hash partition position for a string value; but things may have changed since then. The ora_hash() function returns the internal partition if used correctly.

      Example: ora_hash(38454,7) will return the internal partition number (0 to 7, rather than 1 to 8) of the hash partition that should hold the value 38454 if you have created a table with eight hash partitions.
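      In SQL, that check looks like this:

```sql
-- For a table with eight hash partitions, pass max_bucket = 7 (i.e. N - 1):
select ora_hash(38454, 7) as internal_partition from dual;
```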

      Comment by Jonathan Lewis — September 19, 2009 @ 4:19 pm BST Sep 19,2009 | Reply

      • Thanks Jonathan !

        So I had the usage wrong, I was using ora_hash(id,8) for 8 partitions instead of ora_hash(id,7). Just tested it out, seems to work for both strings and numbers!

        I’ll review that chapter again. In the case of an index however, wouldn’t it be easier to rebuild it, rather than re-partition? Not that I don’t like power of 2, just trying to understand the logic behind it. To avoid having rules of thumb without knowing where the “thumb” comes from :)

        Comment by Christo Kutrovsky — September 21, 2009 @ 1:55 pm BST Sep 21,2009 | Reply

        • Christo,

          The reason for considering the partition strategy is two-fold – first you hope that after partitioning the index once you don’t have to rebuild; and second even if you never see the ITL problem, there are indexes of this type that see buffer busy waits and other contention and partitioning can deal with those waits. But as I said, it is important to consider how you’re going to use the index in queries or you may lose more than you gain.

          Comment by Jonathan Lewis — September 21, 2009 @ 7:00 pm BST Sep 21,2009

  4. Good summary of all the options to ponder. Thanks Jonathan.

    We have a timestamp-based index (on a column called, as its name says, lastmodified_col). How effective would the index be if I turned it from a B-tree into a reverse key index, given that it was created only to be used for sorting purposes, in queries like “order by lastmodified_col desc”?

    Comment by Steven Andrew — September 17, 2009 @ 9:52 pm BST Sep 17,2009 | Reply

  5. “How effective would the index be … ?”

    The only way to address that question is to work out exactly what it’s used for – but from your comments about using it for the “order by” clause I don’t think it would work, as walking through the index in leaf block order would not return the data in timestamp order.

    Comment by Jonathan Lewis — September 19, 2009 @ 3:56 pm BST Sep 19,2009 | Reply

  6. [...] posting about using powers of two for the number of partitions when using hash partitioning. The article in question was talking about globally partitioned indexes, but the principle was first associated with [...]

    Pingback by Hash Partitions « Oracle Scratchpad — September 21, 2009 @ 5:58 pm BST Sep 21,2009 | Reply

  7. I have, for the first time, run coalesce against 406Gb of global indexes on partitioned tables with a date partition key, and can confirm that it is wonderful, with absolutely no effect on the application. The space used by the indexes is no longer growing. It took about 20 hours and created an awful lot of redo (c. 200Gb), so one needs to get plenty of archive log jobs running. Previously we did periodic index rebuilds, which was always painful.

    Comment by kevin jones — October 1, 2009 @ 9:25 am BST Oct 1,2009 | Reply

  8. Kevin,

    It sounds as if you’ve been dropping partitions with the “update (global) indexes” option.

    Nice to hear that the coalesce is a more appropriate solution.

    You may still need to do one more rebuild, though, if you’ve allowed a history of drops to accumulate, as you may now have N partitions worth of empty space on your index freelists when perhaps you want to get that down to a steady state of just one partition’s worth of space.

    It’s typical of sorting out indexing problems – once you’ve figured out the correct strategy, you sometimes have to do just one more rebuild (with the correct configuration) before you can tick all the boxes.

    Comment by Jonathan Lewis — October 1, 2009 @ 6:17 pm BST Oct 1,2009 | Reply

  9. Luckily we are moving to a 4-way cluster and will be running in parallel for a while, and we have some time booked to rebuild all the global indexes and reclaim about 200Gb of space, which will cheer up our accountants.

    Comment by kevin jones — October 2, 2009 @ 9:43 am BST Oct 2,2009 | Reply

  10. [...] to several other articles. As it turned out in a recent thread on SQL.ru forum, one of possible solutions to the issue – calling ALTER index COALESCE – appears to be very limited in it’s [...]

    Pingback by Index ITL « Timur Akhmadeev's blog — April 27, 2010 @ 7:57 pm BST Apr 27,2010 | Reply

  11. Hello Jonathan, you said:
    “Create the index as a globally hash partitioned index, partitioned on the first column of the index.”

    1. Can you please explain why you have recommended to hash on the first column only?

    2. Is this hashing only helpful during concurrent inserts, “by spreading across blocks”? What is the impact during retrieval?

    3. Consider this, I have a column with low cardinality. Will not adding a column (primary-key) in the hash function reduce hash-collisions and spreading?

    Comment by subrata saha — March 22, 2011 @ 8:47 pm BST Mar 22,2011 | Reply

    • Subrata Saha,

      1.Can you please explain why you have recommended to hash on the first column only.
      I wasn’t thinking in a sufficiently general way – my thoughts were focused too much on the case of the sequence based key with a high rate of insert. The hashing has to be on the leading columns of a globally partitioned index (i.e. global partitioned indexes have to be prefixed), but doesn’t have to be restricted to just the first column.

      2.Is this hashing only helpful during concurrent inserts “by spreading across blocks”? What is the impact during retrieval?
      I think that you’ll find that that point is addressed in the third paragraph of the section. Depending on the nature of the predicates, it can introduce extra work at retrieval time.

      3. Consider this, I have a column with low cardinality. Will not adding a column (primary-key) in the hash function reduce hash-collisions and spreading?

      That’s a point worth raising, but there are a couple of different scenarios to consider, for example:

      If you have periods of time where the high concurrency is all about just one distinct value, then hashing on that column is clearly NOT going to spread the load – so you have to have at least a second column in the index with a large number of distinct values and a two-column hash to spread the load; but then your queries have to be “where col1 = constant and col2 = constant” if you want them to be efficient. Of course, you might consider creating the index with the less repetitive column first, hashing on just that column because then you could use the index efficiently for queries that involved both columns or just the less repetitive column.

      If your periods of high concurrency use several of your “low cardinality” values then it’s possible that that is, of itself, sufficient to spread the load across a number of different insertion points, making the partitioning unnecessary.

      In passing, if your “low cardinality” column is the leading column of an index it’s worth looking at compressing the index on that column.
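      That suggestion translates into syntax like this (names hypothetical; “compress 1” de-duplicates the first column in each leaf block):

```sql
create index t1_low_card_idx on t1(low_card_col, id) compress 1;
```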

      Comment by Jonathan Lewis — March 26, 2011 @ 8:08 pm BST Mar 26,2011 | Reply

