Oracle Scratchpad

October 21, 2009

Bitmap Updates

Filed under: bitmaps,Indexing,Infrastructure,Oracle,Performance,redo,undo — Jonathan Lewis @ 5:33 pm BST Oct 21,2009

It is fairly well-known that bitmap indexes are very dense structures that can behave badly if their underlying tables are subject to even fairly low levels of insert, update or delete activity. Problems include contention, space management and performance, and these problems have spawned a couple of well-known guidelines relating to bitmap indexes:

  • Avoid concurrent modification of data by multiple processes – otherwise you can end up with processes dead-locking
  • Drop/disable bitmap indexes before data loads and rebuild them afterwards.

Of course, with a little care and experimentation, you may find that you don’t need to apply the second guideline in all cases – especially for bulk inserts.

But what if you absolutely do have to cope with a continuous (serialised) stream of small modifications to the data and can’t drop the indexes while you’re doing it? Are there any damage-limitation steps you can take? The answer, of course, is yes – there’s always a “least worst” strategy, no matter how badly you attempt to misuse an Oracle feature. Here’s a demonstration of one interesting aspect of bitmap index behaviour that helps to reduce the problems of “lots of small updates”.

All I’m going to do is create a table with a bitmap index in a tablespace using 8KB blocks and freelist management (rather than ASSM – automatic segment space management – but that’s not a specific requirement of the demonstration), then run a pl/sql loop that updates 5,000 randomly selected rows one at a time. When the update loop is complete I’m going to check the session stats for any interesting figures.

rem     Script:         bitmap_undo.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2009

execute dbms_random.seed(0)

create table t1
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects
	where	rownum <= 3000  -- > comment to avoid wordpress format issue
)
select
	rownum				id,
	trunc(dbms_random.value(1,100))	bit1,
	trunc(dbms_random.value(1,100))	bit2,
	lpad(rownum,10,'0')		small_vc
from
	generator	v1,
	generator	v2
where
	rownum <= 400000  -- > comment to avoid wordpress format issue
;

create index t1_i1 on t1(id);
create bitmap index t1_b2 on t1(bit2);

execute snap_my_stats.start_snap

declare
	m_key	number(6);
begin
	for i in 1..5000 loop
		m_key := trunc(dbms_random.value(1,250000));

		update t1 set
			bit2 = trunc(dbms_random.value(1,100))
		where id = m_key;

		commit;
	end loop;
end;
/

execute snap_my_stats.end_snap

The snap_my_stats procedure calls come from one of my simple snapshot packages – querying v$mystat joined to v$statname in this case. So what do you think will be the interesting numbers generated by this little piece of code when running on 10g ?

Take a look at the figures for the session’s undo and redo:

redo entries                      31,786
redo size                     41,933,324
undo change vector size       38,038,180

The number of redo entries is a little high – in a perfect case we might expect about 15,000 redo entries because (in principle) we update one row, delete one index entry and insert another 5,000 times; clearly we’ve had quite a lot of extra block cleanout activity happening at the same time. In fact, the number of redo entries isn’t particularly interesting, but I’ve included it so that you can see that the average size of a redo entry is, at first approximation, nearly 1.3KB – which is quite big.

Since half the redo entries are fairly small entries relating to block cleanout we could ignore their contribution to the count, and since 5,000 of the entries are fairly small entries describing changes to the table blocks we could ignore their contribution as well – the most significant entries are the 10,000 entries for modifying index blocks and (after allowing for small redo sizes for the other entries) our second approximation is that the average size of the redo entries for the index changes is about 4KB (which you could confirm by doing some log file dumps). Think about that for a moment – every time we update a row in the table, the two index changes between them seem to generate the equivalent of a data block of redo.
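The two approximations above can be checked with a little arithmetic. This is only a sanity-check sketch: the ~100 bytes I’ve used for each “small” cleanout or table-change entry is an illustrative assumption, not a measured figure.

```python
# Figures reported by the session snapshot.
redo_entries = 31_786
redo_size    = 41_933_324

# First approximation: overall average redo entry size (~1.3KB).
avg_entry = redo_size / redo_entries          # ~1,319 bytes

# Second approximation: treat everything except the 10,000 index-change
# entries as "small" (assumed ~100 bytes each) and see what the index
# changes must average to account for the rest of the redo.
SMALL_ENTRY = 100                             # assumed, not measured
small_total = (redo_entries - 10_000) * SMALL_ENTRY
per_index_change = (redo_size - small_total) / 10_000   # ~3,975 bytes

print(f"average redo entry:       {avg_entry:,.0f} bytes")
print(f"average per index change: {per_index_change:,.0f} bytes")
```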

Given that undo also has to be described by redo ( see Nutshell – 1 ) you can see that most of the redo is actually from the undo – 38MB out of 42MB. So what information is being created where, why is there so much in total, and what can we do to reduce it?

Each of my update statements updated one row of data, usually changing the indexed column from one value to another. To effect this change Oracle has to change a bit in one bitmap index entry (the one for the current value of the column) from a one to a zero, and change a bit in another bitmap index entry (the one for the new value of the column) from a zero to a one. But you don’t update index entries – you “delete” (mark as deleted) the old index entry and insert a new index entry … at least, that’s what you do with B-tree indexes in Oracle, and it’s what you used to do with bitmap indexes until 10g.

In 10g when you change a bit in a bitmap index entry you copy the old version of the index entry into an undo record and modify the index entry “in situ” to its new value. (This may not be possible in every case, there are a few variations on the theme.) The redo change vectors you generate are:

  • a very large vector describing what you’ve done to the undo block,
  • a small change vector describing how you’ve modified the index entry.

Now remember that you do this for two index entries. So how do we manage to generate so much undo and redo ? Because a single bitmap index entry can be as large as half a data block – which is why the total redo generated for a single row change can be roughly the same size as a whole data block.
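A rough model of that arithmetic, assuming an 8KB block with the default pctfree 10 and ignoring block overhead (both simplifications):

```python
# With default pctfree 10, roughly 90% of an 8KB block is usable, and a
# single bitmap index entry can grow to about half of that usable space.
block_size = 8192
usable     = block_size * 0.90        # ignoring block overhead
max_entry  = usable / 2               # ~3.7KB per bitmap entry

# An update touches two entries (old value and new value), and each old
# entry image is copied into an undo record - so the undo alone for one
# row change can approach a full data block.
undo_per_update = 2 * max_entry       # ~7.4KB

print(f"max bitmap entry ~{max_entry:,.0f} bytes")
print(f"undo per updated row ~{undo_per_update:,.0f} bytes")
```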

So the underlying problem is a consequence of the size of the block – or rather, to be precise, the size of the free space available in the block based on the index definition. So how can you reduce the redo generation if you really need to cope with this type of drip-feed into a table with bitmap indexes ? There are two options really – create the index in a tablespace with the smallest block size possible, or define the index with a large value for pctfree (e.g. 75%). Both strategies have their costs but they can make a big difference to the undo and redo generated.
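For concreteness, the two options might look like this. This is a sketch only: the tablespace name ts_2k is hypothetical, and a 2KB-blocksize tablespace can only exist if a matching db_2k_cache_size has been configured first.

```sql
rem	Option 1: bitmap index in a 2KB-blocksize tablespace
create bitmap index t1_b2 on t1(bit2) tablespace ts_2k;

rem	Option 2: keep the 8KB blocksize, limit entry growth with pctfree
create bitmap index t1_b2 on t1(bit2) pctfree 75;
```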

Here, for example, are the figures from re-running my test first using a 2KB blocksize for the index, and then using an 8KB blocksize with the index pctfree set to 75. (You can expect the figures to be similar but not identical because the settings don’t result in exactly the same limiting size on an index entry.) In both cases the typical index entry has been reduced by a factor of approximately four, so the undo change vector size is roughly a quarter of the size – and that fact is reflected in the reduction in the redo size:

Blocksize 2KB – pctfree 10 (default)

redo entries                       29,718
redo size                      14,784,852
undo change vector size        10,293,344

Blocksize 8KB – pctfree 75

redo entries                       29,944
redo size                      15,913,584
undo change vector size        11,413,304
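You can check the reduction factor directly from the figures:

```python
# Undo change vector sizes from the three runs.
baseline   = 38_038_180   # 8KB blocksize, pctfree 10
run_2k     = 10_293_344   # 2KB blocksize, pctfree 10
run_8k_p75 = 11_413_304   # 8KB blocksize, pctfree 75

for label, undo in (("2KB / pctfree 10", run_2k),
                    ("8KB / pctfree 75", run_8k_p75)):
    print(f"{label}: undo reduced by a factor of {baseline / undo:.2f}")
```

Both factors come out a little under the nominal four, consistent with the limiting entry sizes not being exactly the same in the two configurations.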

If disk space and buffer memory are not a problem you may prefer to “waste” space in the 8KB blocks so that Oracle can make best use of the buffer cache through its LRU as the pattern of activity changes throughout the day. If space (and memory particularly) are at a premium you may choose to go for the smaller block size and deal with the hassle of manual cache management rather than losing memory to “empty” space.

If you think this is bad, by the way, take a look at the equivalent figures for the test when run against 9i using an 8KB block size and the default pctfree 10 (there is no “undo change vector size” statistic on 9i).

redo entries                      136,209
redo size                     289,892,104

Apart from the excess redo, the index also exploded from 99 blocks to about 7,300 blocks and the test run took 81 seconds instead of just nine seconds.
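Putting the 9i and 10g figures side by side makes the scale of the change clear:

```python
# Same test, 8KB blocks, default pctfree 10, on the two versions.
redo_9i,  entries_9i  = 289_892_104, 136_209
redo_10g, entries_10g = 41_933_324,  31_786

print(f"redo volume:  {redo_9i / redo_10g:.1f}x more on 9i")    # ~6.9x
print(f"redo entries: {entries_9i / entries_10g:.1f}x more on 9i")  # ~4.3x
```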

The problem is that in 9i (and earlier), updates to bitmap index entries are treated the same way as updates to B-tree entries – which means a long “history” of very similar bitmap index entries can build up in the index, and these entries result in leaf block splits and large entries propagating upwards into branch blocks; and both the old and new entries are copied (one into the undo vector, the other into the forward vector) into the redo log.

Bonus Performance Threat:

If you enable flashback database, then Oracle will read old undo blocks from the undo tablespace and copy them into the flashback log before “new”ing them and using them (watch out for statistic “physical reads for flashback new”). So when you’re hammering the redo logs because of your bitmap updates, you’re also hammering your undo tablespace and flashback log – so it really is worth thinking carefully about the best way to do damage limitation.

Update Sept 2020

Thanks to a pingback on another of my posts on bitmap indexes I re-visited this article today and re-ran the tests to see if any further significant changes had appeared. The results for the 8KB block size were very similar to the original 10g results.


  1. Jonathan,

    Very nicely written blog on bitmap indexes. Gives everyone a nice presentation on what to look for when creating bitmap indexes.

    Aswath Rao

    Comment by ASWATH RAO — October 22, 2009 @ 1:28 am BST Oct 22,2009 | Reply

  2. Very nice post, indeed. But I have a completely off topic question: is this a new mobile version of your page? It’s very nice!

    Comment by Daniel Stolf — October 22, 2009 @ 4:06 am BST Oct 22,2009 | Reply

    • Daniel,
      If anything has changed about the appearance of the page, it has nothing to do with me. The people at WordPress are always tweaking things, so maybe they’ve done something clever to make things balance better on mobile devices.

      Comment by Jonathan Lewis — October 23, 2009 @ 4:02 pm BST Oct 23,2009 | Reply

  3. If disk space and buffer memory are not a problem you may prefer to “waste” space in the 8KB blocks so that Oracle can make best use of the buffer cache through its LRU as the pattern of activity changes throughout the day. If space (and memory particularly) are at a premium you may choose to go for the smaller block size and deal with the hassle of manual cache management rather than losing memory to “empty” space.

    I think this is very very important, especially the start of the paragraph.
    Won’t any gain achieved here be negated by the downside of
    a) the index consuming a larger number of blocks and hence more consistent gets (and possibly physical reads) for queries. This is based on the assumption that the bitmap index is absolutely essential, which probably means there are other columns in the same table that have bitmap indexes on them and queries generally access data by doing ANDs & ORs on these columns.
    b) the buffer pool having to manage a larger number of blocks. Not sure if having to manage more small blocks (2KB) has any overhead but having to manage more 8KB blocks surely will. Right?
    c) more occurrences of block-splitting (especially with the 8KB block size and high PCTFREE approach) during updates (or is that not possible?)

    Comment by Narendra — October 22, 2009 @ 8:47 am BST Oct 22,2009 | Reply

    • Narendra,

      The ideas you’re working through here are exactly the type of cost/benefit analysis everyone should do when they’re thinking about strategic design decisions.

      If your most significant problem is the volume of redo generated – and remember that the rate at which you can write the redo is the limit on the speed the database can operate – then you may be prepared to “lose” other resources to address that problem.

      I have pointed out that in this scenario you reduce the volume of redo dramatically by using a smaller block size – but also suggested that you might “emulate” this by trying to limit the amount of space used by a larger block size.

      Now, setting pctfree to 75 for an 8KB block size is trying to make the blocks behave rather like 2KB blocks – but it probably isn’t going to work all that well in many cases.

      Roughly speaking, in the cases where the 75% trick works, the number of blocks you manage for the 2KB and the 8KB approach will be very similar – and the block “management” is largely about latching and list management, not about what goes on inside the blocks – so the resources used for management will be similar.

      The 2KB approach is (to my mind) the nicer one because it will always give you small bitmap chunks and will therefore keep the redo as low as possible. The 8KB approach may degenerate to full block usage (and that’s pretty certain to happen in some cases) – so it’s only a temporary solution unless you keep an eye on the index and rebuild it when necessary. On the other hand, having just the 8KB block size does mean you don’t have to manage the 2KB cache – even if you appear to “waste” a lot of memory some of the time.

      The block splitting for the 2KB block approach really shouldn’t be a problem – in fact, if you’re going to get block splitting for the 2KB approach it’s almost certainly the case that you shouldn’t use the 8KB with 75% free. When the 2KB blocks split, you only generate just over 4KB of redo … when an 8KB block splits you generate a little over 16KB of redo.

      Comment by Jonathan Lewis — October 23, 2009 @ 4:25 pm BST Oct 23,2009 | Reply

  4. […] Jonathan Lewis- Bitmap Updates […]

    Pingback by Blogroll Report 16/10/2009-23/10/2009 « Coskan’s Approach to Oracle — October 26, 2009 @ 11:55 pm GMT Oct 26,2009 | Reply

  5. Respected Sir,

    Very good example.

    I want to clarify my understanding as per your demo below.

    In 10g the bitmap index does not keep a history of updated bitmap entries (i.e. old values) like a B-tree?

    Does an update to a bitmap entry happen much like an update to a table row?

    Please correct if i am wrong

    Comment by Henish — October 27, 2009 @ 5:50 pm GMT Oct 27,2009 | Reply

  6. Henish,

    That’s a pretty good way of summarising the change. (I wish I’d thought of making the comparison with column updates in tables.)

    Comment by Jonathan Lewis — October 28, 2009 @ 10:02 am GMT Oct 28,2009 | Reply


Comments and related questions are welcome.
