In my last post on larger block sizes, I pointed out a potential issue relating to space wastage and increased I/O with undo tablespaces based on a 32KB block size. Here are a couple more thoughts (one pro, one slightly con) relating to the same “large block size” option.
It is not commonly realised that a single undo block can hold the undo records for multiple transactions – although it is fairly well known that when a transaction acquires an undo block that block becomes the exclusive property of the transaction until the transaction commits.
When the transaction commits it may have written just a couple of undo records to the undo block that it has been using, so there could be a large volume of free space left in the undo block. To avoid wasting that space the update to the undo segment header that takes place as the transaction commits will assign the block to the “free block pool” for that undo segment – provided there is space for it in the pool. The next transaction to acquire a transaction table entry in that undo segment can then check the segment’s free block pool to see if there is a part-used undo block that it can acquire rather than immediately demanding a whole new undo block.
The free block pool is just a list of up to five undo blocks that can be assigned to the next transaction that goes active on the segment, and it can be seen in the symbolic block dump of the undo segment header block, for example:
FREE BLOCK POOL::
  uba: 0x0080000b.1681.2c ext: 0x0  spc: 0x86e
  uba: 0x0080000a.1681.0a ext: 0x0  spc: 0x1b06
  uba: 0x00000000.167e.13 ext: 0x1  spc: 0xf64
  uba: 0x00000000.1654.01 ext: 0x2  spc: 0x1f88
  uba: 0x00000000.10c0.01 ext: 0x2  spc: 0x1f88
As you can see, there were five undo blocks listed in the free block pool for this undo segment at some time in the past. The last three came from extents 0x1 and 0x2, but their block addresses have been cleared to zeros, so it looks as if they were eliminated from the pool before they were re-used – presumably because of the nature and concurrency of the activity going on at the time. These three blocks had 0xf64 (3,940), 0x1f88 (8,072), and 0x1f88 (8,072) bytes of free space when they were “lost”. Comparing their block sequence numbers (0x10c0, 0x1654 and 0x167e) with the latest block sequence number (0x1681), it looks as if the losses happened some time in the past – which suggests that the five-block limit hasn’t been reached very often.
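If you want to check the arithmetic, the spc figures in the dump are hexadecimal byte counts – a throwaway Python sketch (nothing Oracle-specific, just base conversion) reproduces the decimal values quoted above:

```python
# Decode the "spc" (free space) values exactly as they appear
# in the free block pool dump above.
spc_values = ["0x86e", "0x1b06", "0xf64", "0x1f88", "0x1f88"]

for spc in spc_values:
    print(f"spc {spc} = {int(spc, 16):,} bytes free")
# The last three entries come out as 3,940, 8,072 and 8,072 bytes,
# matching the figures for the "lost" blocks.
```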
In fact, because the mechanisms of automatic undo management (AUM, a.k.a. SMU – system managed undo) work quite hard to have as many active undo segments as there are live transactions, you may find that your undo segments won’t often show more than the first two or three lines of the free block pool as used.
Given the existence of the free block pool, it’s useful to know a little about how it is used and why it can matter.
The first point, as shown above, is that the free block pool is strictly limited to five entries – so if your configuration is (a) highly concurrent and (b) restricts the number of undo segments you can create, you may waste a lot of space in the undo segments and end up with excess writes to the undo tablespace. To understand how this can happen, imagine you typically have six transactions active per undo segment at any one time – if they all complete at about the same time then only five of them will be able to register the remaining space in their current undo block in the free block pool, and the sixth block, with its free space, will be “lost”.
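The six-transactions-into-five-slots effect can be modelled in a few lines – this is a deliberately simplified Python sketch (the block names and the 6,000-byte figure are illustrative, not Oracle internals):

```python
# Six transactions commit at about the same time, each trying to
# register its part-used undo block, but the pool holds only five.
POOL_LIMIT = 5

pool = []          # (block_id, free_bytes) entries registered in the pool
lost_bytes = 0     # free space that never made it into the pool

committing = [("blk%d" % i, 6000) for i in range(6)]  # 6 concurrent commits

for block_id, free_bytes in committing:
    if len(pool) < POOL_LIMIT:
        pool.append((block_id, free_bytes))
    else:
        lost_bytes += free_bytes   # the sixth block's free space is "lost"

print(len(pool), "blocks pooled,", lost_bytes, "bytes of free space lost")
# → 5 blocks pooled, 6000 bytes of free space lost
```

Scale that up across many undo segments and many commit cycles and the waste – and the extra undo blocks that have to be allocated and written as a consequence – starts to add up.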
The second point is that a block gets into the free block pool only if it has more than 400 bytes of free space left in it. Put that the other way round – an undo block does NOT get into the free block pool if it doesn’t have at least 400 bytes of free space left. If your undo blocks are 8KB, you can lose up to 400 bytes from each 8KB block – roughly 5%; if your undo blocks are 16KB that’s only 2.5%; if they’re 32KB it’s 1.25%. There is an argument for highly tuned systems that an undo tablespace using the largest possible block size COULD offer some space efficiency – provided you don’t lose that efficiency through excessive concurrency, or through the side effects of the hard limit of 255 records per undo block that I described in my previous post on large undo blocks.
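Those percentages are easy to verify – a quick arithmetic sketch in Python:

```python
# Worst case: a block with just under 400 bytes free is not pooled,
# so up to ~400 bytes per undo block can be wasted.
THRESHOLD = 400  # bytes of free space needed to enter the free block pool

for block_size in (8192, 16384, 32768):
    pct = 100 * THRESHOLD / block_size
    print(f"{block_size // 1024}KB block: up to {pct:.2f}% wasted")
# → 8KB block: up to 4.88% wasted
# → 16KB block: up to 2.44% wasted
# → 32KB block: up to 1.22% wasted
```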
I tend to advise people to stick with 8KB block sizes unless they have a very good reason for being confident that they will get a noticeable performance benefit from a different block size; but I think it’s nice to be aware of some of the mechanics that might make it possible to justify a special case – especially if you know what to test and understand why things don’t necessarily turn out the way you expect when you switch block sizes. It’s a bit of a long shot, but there are some interesting oddities that might occasionally be worth a second glance when you’re looking at undo tablespaces.