Oracle Scratchpad

March 16, 2015

Tablespace HWM

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 10:22 am GMT Mar 16,2015

The following question appeared on the Oracle-L list-server recently:

In order to resize a datafile to release space at the end, we need to find the last block_id that is at the start of that free contiguous space.
Problem is that we have a very large database such that querying dba_extents to find the last block is probably not an option. The standard query(ies) that make use of dba_extents run for hours at a stretch and also sometimes fail with a ‘snapshot too old’ (just give up).
Is there an alternative to using dba_extents?

I was surprised to hear that a suitable query against dba_extents could last for hours, although for locally managed tablespaces Oracle does have to read the segment header block for every single segment in the tablespace to get the segment extent map and that might make things a little slow. (A follow-up post explained that part of the problem was that the tablespaces were locally managed, so maybe it wasn’t just a case of an unlucky execution plan.)

If you look hard enough there’s probably an alternative strategy for dealing with any problem – and it might even be a good one. In the case of tablespace highwater marks, how about looking at dba_free_space instead of dba_extents ? If there’s space that can be released from a file it starts at the block after the last used block, e.g.:

select
        tablespace_name, file_id, block_id, blocks, block_id + blocks - 1 last_block
from
        dba_free_space
where
        tablespace_name = 'TEST_8K_ASSM_AUTO'
order by
        file_id, block_id
;

TABLESPACE_NAME                   FILE_ID   BLOCK_ID     BLOCKS LAST_BLOCK
------------------------------ ---------- ---------- ---------- ----------
TEST_8K_ASSM_AUTO                       6        128        256        383
TEST_8K_ASSM_AUTO                       6       8576      12200      20775

2 rows selected.

alter database datafile '{file name}' resize {block size * 8,575};

Database altered.

If you do try this then one of two things happens – either you manage to resize the file to the current minimum possible, or you fail with Oracle error “ORA-03297: file contains used data beyond requested RESIZE value” and the file can’t be resized until you move some objects that are above the highest chunk of free space – at which point you’re back to dba_extents to find out which segment is causing the problem.

If you want to try the optimistic approach but don’t want to run some SQL that might fail with an Oracle error you could always compare the details from dba_free_space with the details from dba_data_files to see whether any space has been used AFTER the last free chunk – but there’s a little trap in making that check. You’ll notice that the last block of the free space is 20,775; but look what dba_data_files says about the last block in the data file(s):

SQL> select file_id, blocks, blocks - 1 last_block, user_blocks, file_name from dba_data_files order by file_id;

   FILE_ID     BLOCKS LAST_BLOCK USER_BLOCKS FILE_NAME
---------- ---------- ---------- ----------- ------------------------------------------------------------------
         1     129280     129279      129152 /u01/app/oracle/oradata/TEST/datafile/o1_mf_system_938s4mr3_.dbf
         2     267520     267519      267392 /u01/app/oracle/oradata/TEST/datafile/o1_mf_sysaux_938s551h_.dbf
         3     131200     131199      131072 /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
         4      25600      25599       25472 /u01/app/oracle/oradata/TEST/datafile/o1_mf_users_938s6bhn_.dbf
         5     131200     131199      131072 /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
         6      20782      20781       20648 /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

6 rows selected.

There are 20,782 blocks in the data file (though the numbering starts at zero, so the last block is 20,781), so there seem to be blocks in the data file beyond the last blocks of free space. You’ll have to trust me when I say that there’s no data beyond the free space – I’ve dropped all the (other) segments in this tablespace and purged the recyclebin: the last free space chunk stops short of the end of the file by 6 blocks. The presence of the user_blocks column in dba_data_files helps to explain what’s going on. You can consider a datafile to be made of three components: the space management part, the part that can hold legally sized extents, and a part at the end of the file which is too small to hold the smallest extent that can legally be created in the tablespace.

The details depend on the version of Oracle, the definition of the tablespace, the initial size of the file, and how the file has grown. In recent versions of Oracle, and assuming you haven’t done something silly with a very small starting size and massive growth, the space management part is likely to be a chunk of 1MB at the start of the file (64KB for older versions). For a locally managed tablespace the chunk at the end of the file could be anything up to one block less than the defined size for “uniform” extent allocation, or one block short of 64KB for system allocated extents.

In my example I have blocks = 20,782 and user_blocks = 20,648: that’s because the tablespace was created in a recent version of Oracle with system allocated extents and 8KB blocks: 20,782 = 20,648 (user blocks) + 128 (space management header) + 6 (dead space at end of file); the value of user_blocks allows for 2,581 extents of 64KB, and the last six blocks of the file are (currently) unusable. (I have a more extreme example of wasted space in an example I published a couple of years ago.)
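If you want a pre-flight check for the optimistic approach, here’s a minimal sketch of the comparison – the tablespace name is a placeholder and the literal 128 is an assumption based on the 1MB space management header described above, so adjust it for other configurations. A file’s last free chunk reaches the end of its usable space if its last free block sits at position “header + user_blocks – 1”:

select
        f.file_id, f.blocks, f.user_blocks, s.last_free_block,
        case
                when s.last_free_block >= 128 + f.user_blocks - 1
                then    'free space reaches usable end of file'
                else    'used blocks above last free chunk - expect ORA-03297'
        end     status
from
        dba_data_files  f,
        (
        select
                file_id, max(block_id + blocks - 1) last_free_block
        from
                dba_free_space
        where
                tablespace_name = 'TEST_8K_ASSM_AUTO'
        group by
                file_id
        )               s
where
        f.tablespace_name = 'TEST_8K_ASSM_AUTO'
and     f.file_id = s.file_id
;

For file 6 above the test value is 128 + 20,648 – 1 = 20,775, which is exactly the last free block reported by dba_free_space, so the resize should succeed despite the six dead blocks at the end of the file.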


When the question first came up my first thought was simply to dump the tablespace space management block, but I realised just a bit too late that dba_free_space was a much easier option. If anyone does care to pursue the bitmap dump you’ll have to work out all the details for yourself because there are variations on the theme that are probably only going to appear with very large datafiles or if you’ve converted from dictionary managed to locally managed. The method starts with the dbms_space_admin package, which allows you to dump a tablespace bitmap into the session’s trace file:

execute dbms_space_admin.tablespace_dump_bitmaps('TEST_8K')

Header Control:
RelFno: 5, Unit: 128, Size: 294400, Flag: 1
AutoExtend: NO, Increment: 0, MaxSize: 0
Initial Area: 126, Tail: 294399, First: 8, Free: 2283
Deallocation scn: 148317558.2950
Header Opcode:
Save: No Pending Op
File Space Bitmap Block:
BitMap Control:
RelFno: 5, BeginBlock: 128, Flag: 0, First: 8, Free: 63472
FF00FF0000000000 0000000000000000 0000000000000000 0000000000000000
0000000000000000 0000000000000000 0000000000000000 0000000000000000

This tablespace was locally managed with a block size of 8KB and uniform extents of 1MB (which equates to 128 blocks), so we’re looking at a bitmap where one bit represents 128 blocks. Since the Oracle version is 11gR2, and the file doesn’t fall into the special “tiny” category, the header section is 1MB / 128 blocks; the bitmap starts in block 2 (the third block of the file), which is why the size of the “Initial Area” is 126 blocks rather than 128.  The first free extent is number 8 (counting from zero) and there are 2,283 free extents in the file.

If I use my space-reporting script to report the details of the free and used extents in the tablespace I can start to align the bitmap with the extents and work out how to interpret the ones and zeros. This is what I’ve got at present:

FILE_ID FIRST_BLOCK  LAST_BLOCK     BLOCKS OWNER      SEGMENT_NAME                 SEGMENT_TYPE
------- ----------- ----------- ---------- ---------- ---------------------------- ------------------
      5         128         255        128 TEST_USER  T1                           TABLE
                256         383        128 TEST_USER  T1                           TABLE
                384         511        128 TEST_USER  T1                           TABLE
                512         639        128 TEST_USER  T1                           TABLE
                640         767        128 TEST_USER  T1                           TABLE
                768         895        128 TEST_USER  T1                           TABLE
                896       1,023        128 TEST_USER  T1                           TABLE
              1,024       1,151        128 TEST_USER  T1                           TABLE
              1,152       2,175       1024 free       free
              2,176       2,303        128 TEST_USER  T3                           TABLE
              2,304       2,431        128 TEST_USER  T3                           TABLE
              2,432       2,559        128 TEST_USER  T3                           TABLE
              2,560       2,687        128 TEST_USER  T3                           TABLE
              2,688       2,815        128 TEST_USER  T3                           TABLE
              2,816       2,943        128 TEST_USER  T3                           TABLE
              2,944       3,071        128 TEST_USER  T3                           TABLE
              3,072       3,199        128 TEST_USER  T3                           TABLE
              3,200     294,399     291200 free       free

As you can see, the 8 x 1-bits (the leading FF) align with the first 8 allocated extents of 128 blocks each, then the 8 x 0-bits align with the 1,024 free blocks, followed by a further 8 x 1-bits and another 8 x 128 block extents.  Further investigation is left as an exercise for the interested reader.
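If you want to line the bits up with block addresses programmatically the arithmetic is simple: with a bitmap unit of 128 blocks and BeginBlock = 128 (both values reported in the dump above), bit N covers blocks 128 + N*128 through 128 + (N+1)*128 – 1. Here’s a trivial sketch generating the map for the first 24 bits:

select
        level - 1                       bit#,
        128 + (level - 1) * 128         first_block,
        128 +  level      * 128 - 1     last_block
from
        dual
connect by
        level <= 24
;

Comparing the output with the extent listing above: bits 0 to 7 (the leading FF) map to blocks 128 – 1,151, which is exactly where the T1 extents sit.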


March 11, 2015

Flashback Logging

Filed under: Flashback,Infrastructure,Locks,Oracle,Troubleshooting — Jonathan Lewis @ 3:21 pm GMT Mar 11,2015

One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB – contention” wait. You find that the “FB” enqueue has the following description and wait information when you query v$lock_type and v$event_name:

SQL> execute print_table('select * from v$lock_type where type = ''FB''')
TYPE                          : FB
NAME                          : Format Block
ID1_TAG                       : tablespace #
ID2_TAG                       : dba
IS_USER                       : NO
DESCRIPTION                   : Ensures that only one process can format data blocks in auto segment space managed tablespaces

SQL> execute print_table('select * from v$event_name where name like ''enq: FB%''')
EVENT#                        : 806
EVENT_ID                      : 1238611814
NAME                          : enq: FB - contention
PARAMETER1                    : name|mode
PARAMETER2                    : tablespace #
PARAMETER3                    : dba
WAIT_CLASS_ID                 : 1893977003
WAIT_CLASS#                   : 0
WAIT_CLASS                    : Other

This tells us that a process will acquire the lock when it wants to format a batch of blocks in a segment in a tablespace using ASSM – and prior experience tells us that this is a batch of 16 consecutive blocks in the current extent of the segment; so when we see a wait for an FB enqueue we can assume that two sessions have simultaneously tried to format the same new batch of blocks and one of them is waiting for the other to complete the format. In some ways this wait can be viewed (like the “read by other session” wait) in a positive light – if the second session weren’t waiting for the first session to complete the block format it would have to do the formatting itself, so the end-user sees a reduced response time. On the other hand the set of 16 blocks picked by a session is dependent on its process id, so the second session might have picked a different set of 16 blocks to format, in which case, in the elapsed time of one format call, the segment could have had 32 blocks formatted – this wouldn’t have improved the end-user’s response time, but it would mean that more time would pass before another session had to spend time formatting blocks. Basically, in a highly concurrent system, there’s not a lot you can do about FB waits (unless, of course, you do some clever partitioning of the hot objects).

There is actually one set of circumstances where you can have some control of how much time is spent on the wait, but before I mention it I’d like to point out a couple more details about the event itself. First, the parameter3/id2_tag is a little misleading: you can work out which blocks are being formatted (if you really need to), but the “dba” is NOT a data block address (which is what you might think if you look at the name and a few values). There is a special case when the FB enqueue is being held while you format the blocks in the 64KB extents that you get from system allocated extents, and there’s probably a special case (which I haven’t bothered to examine) if you create a tablespace with uniform extents that aren’t a multiple of 16 blocks, but in the general case the “dba” consists of two parts – a base “data block address” and a single (hex) digit offset identifying which batch of 16 blocks will be formatted.

For example: a value of 0x01800242 means start at data block address 0x01800240, count forward 2 * 16 blocks then format 16 blocks from that point onwards. Since the last digit can only range from 0x0 to 0xf this means the first 7 (hex) digits of a “dba” can only reference 16 batches of 16 blocks, i.e. 256 blocks. It’s not coincidence (I assume) that a single bitmap space management block can only cover 256 blocks in a segment – the FB enqueue is tied very closely to the bitmap block.
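As a quick illustration of that arithmetic (a sketch of my own, assuming the base part of the id2 value uses the standard absolute file/block dba format): if you catch the id2 of an FB enqueue from v$lock you can strip the final hex digit to get the base data block address and use the dbms_utility translation functions to turn it into a file and block number. The 25,166,402 below is just the decimal equivalent of the example value 0x01800242:

select
        dbms_utility.data_block_address_file(trunc(25166402/16) * 16)    file#,
        dbms_utility.data_block_address_block(trunc(25166402/16) * 16)
                + mod(25166402, 16) * 16                                 first_block_to_format
from
        dual
;

The first expression decodes the base dba 0x01800240; the second adds the offset of 2 * 16 blocks to give the first block of the batch of 16 that the session was formatting.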

So now it’s time to ask why this discussion of the FB enqueue appears in an article titled “Flashback Logging”. Enable the 10704 trace at level 10, along with the 10046 trace at level 8 and you’ll see. Remember that Oracle may have to log the old version of a block before modifying it and if it’s a block that’s being reused it may contribute to “physical reads for flashback new” – here’s a trace of a “format block” event:

*** 2015-03-10 12:50:35.496
ksucti: init session DID from txn DID:
        ksqlkdid: 0001-0023-00000014

*** 2015-03-10 12:50:35.496
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0023-00000014
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0023-00000014
ksqgtl: RETURNS 0
WAIT #140627501114184: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1 obj#=192544 tim=1425991835501051
WAIT #140627501114184: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1 obj#=192544 tim=1425991835501761
WAIT #140627501114184: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1 obj#=192544 tim=1425991835502278
WAIT #140627501114184: nam='db file sequential read' ela= 522 file#=6 block#=739 blocks=1 obj#=192544 tim=1425991835502831
WAIT #140627501114184: nam='db file sequential read' ela= 460 file#=6 block#=740 blocks=1 obj#=192544 tim=1425991835503326
WAIT #140627501114184: nam='db file sequential read' ela= 1148 file#=6 block#=741 blocks=1 obj#=192544 tim=1425991835504506
WAIT #140627501114184: nam='db file sequential read' ela= 443 file#=6 block#=742 blocks=1 obj#=192544 tim=1425991835504990
WAIT #140627501114184: nam='db file sequential read' ela= 455 file#=6 block#=743 blocks=1 obj#=192544 tim=1425991835505477
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=744 blocks=1 obj#=192544 tim=1425991835505985
WAIT #140627501114184: nam='db file sequential read' ela= 591 file#=6 block#=745 blocks=1 obj#=192544 tim=1425991835506615
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=746 blocks=1 obj#=192544 tim=1425991835507157
WAIT #140627501114184: nam='db file sequential read' ela= 489 file#=6 block#=747 blocks=1 obj#=192544 tim=1425991835507684
WAIT #140627501114184: nam='db file sequential read' ela= 375 file#=6 block#=748 blocks=1 obj#=192544 tim=1425991835508101
WAIT #140627501114184: nam='db file sequential read' ela= 463 file#=6 block#=749 blocks=1 obj#=192544 tim=1425991835508619
WAIT #140627501114184: nam='db file sequential read' ela= 685 file#=6 block#=750 blocks=1 obj#=192544 tim=1425991835509400
WAIT #140627501114184: nam='db file sequential read' ela= 407 file#=6 block#=751 blocks=1 obj#=192544 tim=1425991835509841

*** 2015-03-10 12:50:35.509
ksqrcl: FB,16,18002c2
ksqrcl: returns 0

Note: we acquire the lock (ksqgtl), read 16 blocks by “db file sequential read”, write them to the flashback log (buffer), format them in memory, and release the lock (ksqrcl). That lock can be held for quite a long time – in this case 13 milliseconds. Fortunately all the single block reads after the first have been accelerated by O/S prefetching; your timings may vary.

The higher the level of concurrent activity the more likely it is that processes will collide trying to format the same 16 blocks (the lock is exclusive, so the second will request and wait, then find that the blocks are already formatted when it finally gets the lock). This brings me to the special case where waits for the FB enqueue might have a noticeable impact … if you’re running parallel DML and Oracle decides to use “High Water Mark Brokering”, which means the parallel slaves insert data into a single segment instead of each using its own private segment and leaving the query co-ordinator to clean up round the edges afterwards. I think this is most likely to happen if you have a tablespace using fairly large extents and Oracle thinks you’re going to process a relatively small amount of data (e.g. small indexes on large tables) – the trade-off is between collisions between processes and wasted space in the private segments.

March 9, 2015

Flashback logging

Filed under: Infrastructure,Oracle,redo,undo — Jonathan Lewis @ 2:44 pm GMT Mar 9,2015

When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

Features and circumstances change, of course, and someone recently pointed out that if your production system is multi-terabyte and you’re running with a dataguard standby, then when some minor catastrophe forces you to switch to the standby you don’t really want to be running without a standby for the time it would take to restore and recover an old backup to create a new one; there may be cases where you could flashback the original primary to a point before the catastrophe and turn it into the standby from that point onward. Sounds like a reasonable argument to me – but you might still need to think very carefully about how to minimise the impact of enabling database flashback, especially if your database is a data warehouse, DSS, or mixed system.

Imagine you have a batch process that revolves around loading data into an empty table with a couple of indexes – it’s a production system so you’re running with archivelog mode enabled, and then you’re told to switch on database flashback. How much impact will that have on your current loading strategies ? Here’s a little bit of code to help you on your way – I create an empty table as a clone of the view all_objects, create one index, then insert 1.6M rows into it. I’ve generated 4 different sets of results: flashback on or off, and either maintaining the index during loading or marking it unusable then rebuilding it after the load. Here’s the minimum code:

create table t1
segment creation immediate
tablespace test_8k
as
select  *
from    all_objects
where   rownum < 1
;

create index t1_i1 on t1(object_name, object_id) tablespace test_8k_assm_auto;
-- alter index t1_i1 unusable;

insert /*+ append */ into t1
with object_data as (
        select --+ materialize
                *
        from
                all_objects
        where
                rownum <= 50000
),
counter as (
        select  --+ materialize
                rownum id
        from    dual
        connect by
                level <= 32
)
select
        /*+ leading(ctr obj) use_nl(obj) */
        obj.*
from
        counter         ctr,
        object_data     obj
;

commit;

-- alter index t1_i1 rebuild;

Here’s a quick summary of the timings I got before I talk about the effects:

Flashback off:
Maintain index in real time: 138 seconds
Rebuild index at end: 66 seconds

Flashback on:
Maintain index in real time: 214 seconds
Rebuild index at end: 112 seconds

It is very important to note that these timings do not allow you to draw any generic conclusions about optimum strategies for your systems. The only interpretation you can put on them is that different circumstances may lead to very different timings, so it’s worth looking at what you could do with your own systems to find good strategies for different cases.

Most significant, probably, is the big difference between the two options where flashback is enabled – if you’ve got to use it, how do you limit the damage ? Here are some key figures, namely the file I/O stats and some instance activity stats – I/O stats first:

"Real-time" maintenance
Tempfile Stats - 09-Mar 11:41:57
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
    1       1,088      22,454      20.638        .063         296        .000     1,011      22,455        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp

Datafile Stats - 09-Mar 11:41:58
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
    3      24,802      24,802       1.000        .315      24,802        .315           0        .000          2,386      20,379        .239    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         718      22,805      31.762        .001           5        .000         713        .002            725      22,814        .002    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
    6       8,485       8,485       1.000        .317       8,485        .317           0        .000            785       6,938        .348    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

Mark Unusable and Rebuild
Tempfile Stats - 09-Mar 11:53:04
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
    1       1,461      10,508       7.192        .100           1        .017       407      10,508        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp

Datafile Stats - 09-Mar 11:53:05
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
    3          17          17       1.000       5.830          17       5.830           0        .000             28          49       1.636    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         894      45,602      51.009        .001           2        .002         892        .001            721      22,811        .026    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
    6       2,586       9,356       3.618        .313         264       3.064       2,322        .001          2,443       9,214        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

There are all sorts of interesting differences in these results due to the different way in which Oracle handles the index. For the “real-time” maintenance the session accumulates the key values and rowids as it writes the table, then sorts them, then does a cache-based bulk update to the index. For the “rebuild” strategy Oracle simply scans the table after it has been loaded, sorts the key values, then writes the index to disc using direct path writes; you might expect the total work done to be roughly the same in both cases – but it’s not.

I’ve shown 4 files: the temporary tablespace, the undo tablespace, the tablespace holding the table and the tablespace holding the index; and the first obvious difference is the number of blocks written and read and the change in average read size on the temporary tablespace. Both sessions had to spill to disc for the sort, and both did a “one-pass” sort; the difference in the number of blocks written and read appears because the “real-time” session wrote the sorted data set back to the temporary tablespace one more time than it really needed to – it merged the sorted data in a single pass but wrote the data back to the temporary tablespace before reading it again and applying it to the index (for a couple of points on tracing sorts, see this posting). I don’t know why Oracle chose to use a much smaller read slot size in the second case, though.

The next most dramatic thing we see is that real-time maintenance introduced 24,800 single block reads with 20,000 blocks written to the undo tablespace (with a few thousand more that would eventually be written by dbwr – I should have included a “flush buffer_cache” in my tests), compared to virtually no activity in the “rebuild” case. The rebuild generates no undo; real-time maintenance (even starting with an empty index) generates undo because (in theory) someone might look at the index and need to see a read-consistent image of it. So it’s not surprising that we see a lot of writes to the undo tablespace – but where did the reads come from? I’ll answer that question later.

It’s probably not a surprise to see the difference in the number of blocks read from the table’s tablespace. When we rebuild the index we have to do a tablescan to acquire the data; but, again, we can ask why we saw 22,800 blocks read from the table’s tablespace when we were doing the insert with real-time maintenance. On a positive note those reads were multiblock reads, but what caused them? Again, I’ll postpone the answer.

Finally we see that the numbers of blocks read (reason again postponed) and written to the index’s tablespace are roughly similar. The writes differ because the rebuild is doing direct path writes while the real-time maintenance is done in the buffer cache, so there are some outstanding index blocks still to be written. The reads are similar, though one test is exclusively single block reads and the other is doing (small) multiblock reads – which is just a little bit more efficient.  The difference in the number of reads is because the rebuild was at the default pctfree=10 while the index maintenance was a massive “insert in order” which would have packed the index leaf blocks at 100%.

To start the explanation – here are the most significant activity stats – some for the session, a couple for the instance:

"Real-time" maintenance
Name                                                                     Value
----                                                                     -----
physical reads for flashback new                                        33,263
redo entries                                                           118,290
redo size                                                          466,628,852
redo size for direct writes                                        187,616,044
undo change vector size                                            134,282,356
flashback log write bytes                                          441,032,704

Mark Unusable and Rebuild
Name                                                                     Value
----                                                                     -----
physical reads for flashback new                                           156
redo entries                                                            35,055
redo size                                                          263,801,792
redo size for direct writes                                        263,407,628
undo change vector size                                                122,156
flashback log write bytes                                          278,036,480

The big clue is the “physical reads for flashback new”. When you modify a block, if it hasn’t been dumped into the flashback log recently (as defined by the hidden _flashback_barrier_interval parameter) then the original version of the block has to be written to the flashback log before the change can be applied; moreover, if a block is being “newed” (Oracle-speak for being reformatted for a new use) it will also be written to the flashback log. Given the way that the undo tablespace works it’s not surprising if virtually every block you modify in the undo tablespace has to be written to the flashback log before you use it. The 33,263 blocks read for “flashback new” consist of the 24,800 blocks read from the undo tablespace when we were maintaining the index in real-time plus a further 8,460 from “somewhere” – which, probably not coincidentally, almost matches the number of blocks read from the index tablespace as the index was maintained. The odd thing is that we don’t see the 22,800 reads on the table’s tablespace (which don’t occur when flashback is off) reported as “physical reads for flashback new”; this looks like a reporting error to me.

So the volume of undo requires us to generate a lot of flashback log as well as the usual increase in the amount of redo. As a little side note, we get confirmation from these stats that the index was rebuilt using direct path writes – there’s an extra 75MB of redo for direct writes.


If you are running with flashback enabled in a system that’s doing high volume data loading, remember that the “physical reads for flashback new” could be a major expense. It is particularly expensive for index maintenance, which can result in a large number of single block reads of the undo tablespace. The undo costs you three times – once for the basic cost of undo (and associated redo), once for the extra reads, and once for writing the flashback log. Although you have to do tablescans to rebuild indexes, the cost of an (efficient, possibly direct path) tablescan may be much less than the penalty of the work relating to flashback.

Footnote: since you can’t (officially) load data into a table with an unusable unique index or constraint, you may want to experiment with using non-unique indexes to support unique/PK constraints and disabling the constraints while loading.
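Here’s a minimal sketch of that idea (the object names are mine, not from the tests above): because the supporting index is non-unique it survives disabling the constraint, can be marked unusable for a direct path load, and is simply rebuilt before the constraint is re-enabled:

create table t2 (id number(6), padding varchar2(100));

create index t2_pk_idx on t2(id);               -- deliberately non-unique

alter table t2 add constraint t2_pk
        primary key (id)
        using index t2_pk_idx
;

alter table t2 disable constraint t2_pk;        -- the non-unique index is retained
alter index t2_pk_idx unusable;

-- direct path load goes here, e.g. insert /*+ append */ ...

alter index t2_pk_idx rebuild;
alter table t2 enable constraint t2_pk;         -- validates the data against the rebuilt index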

February 19, 2015

255 columns

Filed under: Infrastructure,Oracle,Performance — Jonathan Lewis @ 12:45 am GMT Feb 19,2015

You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

  • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
  • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
  • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache and it won’t be cached (see 1st comment below).  As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10 minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain.
  • An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
  • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.

If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes storage per row.  Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.
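If you want to see the effect on your own data, here’s a trivial sketch of the check I’d run – the table t_wide and its column names are hypothetical, any table with more than 255 columns will do. Compare the session statistic “table fetch continued row” before and after querying a column that lives in a later row piece:

select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   sn.statistic# = ms.statistic#
and     sn.name = 'table fetch continued row'
;

select  count(column_256) from t_wide;          -- a column in a later row piece

select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   sn.statistic# = ms.statistic#
and     sn.name = 'table fetch continued row'
;

If the statistic climbs by (roughly) the number of rows in the table you know that every row needs a second visit to report that column.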

January 26, 2015

In-memory DB

Filed under: 12c,in-memory,Infrastructure,Oracle — Jonathan Lewis @ 8:18 am GMT Jan 26,2015

A recent thread on the OTN database forum supplied some code that seemed to show that In-memory DB made no difference to performance when compared with the traditional row-store mechanism and asked why not.  (It looked as if the answer was that almost all the time for the tests was spent returning the 3M row result set to the SQL*Plus client 15 rows at a time.)

The responses on the thread led to the question:  Why would the in-memory (column-store) database be faster than simply having the (row-store) data fully cached in the buffer cache ?

Maria Colgan has addressed this question in part 3 of her series on In-Memory Database (see catalogue below), but I thought I’d repeat the basic ideas with a few imaginative numbers thrown in to give a better flavour of what’s going on. So imagine you have a table sized at 100GB, with 100 columns of data where every column holds data of a similar size and pattern, and you want to execute a query of the form: select {list of columns} from big_table where colX >= {some constant}.

Traditional Tablescan (approximation) with table fully cached

For each block of the 100GB, Oracle has to acquire the “cache buffers chains” latch, pin the block, drop the latch, and scan the block testing each row, then acquire the latch, unpin the block, and drop the latch.  Scanning the block requires a walk through the row directory and, for each row pointer, jumping to the correct location in the block for the row, stepping along the row one column at a time to get to the correct column, and then checking the column value. If the column matches the predicate, extract, format and return the required columns from that row.

It’s a lot of memory to scan, in a large number of small steps, involving a lot of latching and pinning – which translates into a lot of CPU. On the plus side, although it’s very expensive to identify the required rows, it’s very cheap to construct and return a row once you’ve identified it.

In-memory scan (approximation)

  1. Given the way I’ve described the table (100GB, 100 similar columns), Oracle can recreate it in memory as 100 lists of 1GB each; so we can identify the rows we want by scanning one of those lists and applying the predicate – so only 1GB of (fairly contiguous) memory to scan, rather than 100GB, and virtually no latching and pinning to find that memory, and no jumping around following pointers and counting along rows.
  2. But it’s probably NOT 1GB of memory to scan, because Oracle has some compression/deduplication methods that it can apply to the data that often reduce the memory requirement by a factor of four or five – so perhaps it’s only 250MB of memory to scan. (For a way of checking the ratio you actually get, see the sketch after this list.)
  3. But Oracle breaks large lists into chunks, so rather than 250MB of contiguous memory, it’s likely to be 250 chunks of 1MB; and as part of the description of each chunk Oracle records the lowest and highest value in the chunk [ed: Christian Antognini says that the metadata lists every distinct value for the chunk], so it can check the predicate against the boundary values of the chunk and be in a position to discard entire chunks without looking at their content. So, depending on the actual content and pattern of the data, we may examine only a handful of chunks in detail, dropping the scan from 250MB to, perhaps, 10MB.
  4. And we still haven’t finished there, because the code that handles the column-store is able to take advantage of the SIMD (Single Instruction, Multiple Data) operations in the CPU to check the predicate against 4, or possibly even 8, values simultaneously at a speed matching the single column comparison that has to be used for the traditional cached row-store. (Given the way that Oracle (probably) handles the compression, I suspect that this final advantage is only significant for range-based predicates – but that’s purely conjectural.)
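As an aside (a sketch of my own, not part of the argument above): if you want to check the compression factor you’re actually getting rather than taking my “four or five” as a given, v$im_segments reports both the on-disk and in-memory sizes of each populated segment:

select
        segment_name,
        bytes,
        inmemory_size,
        round(bytes / inmemory_size, 1)   compression_ratio,
        populate_status
from
        v$im_segments
where
        segment_name = 'BIG_TABLE'
;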

So the huge benefit you get from the In-memory column store, compared to the “fully cached row-store”, is that you are likely to scan far less memory to identify the rows that match your predicate, and do it with far less “infrastructure” activity like latching and pinning. The potential saving in CPU usage is huge.

There is, of course, a penalty to pay. As you identify the rows of interest you can (in effect) construct a bitmap representing the position of those rows in the table (and if you have predicates on more than one column you can use bitmap operations on the individual column bitmaps to identify the rows you want in the final result), but then you have to construct the rows that go into the result set. If your query is interested in just 5 columns that means using the bitmap to locate the correct entry in each of 5 separate column lists; if your query is interested in 99 columns that means extracting the correct entry from each of 99 separate column lists. Identifying the rows you want can be very quick; building the final result may be relatively slow.

Soundbite summary

  • Using the In-memory Database, you can identify the rows you want very quickly but it’s relatively slow to reconstruct them.
  • Using a fully cached traditional row-store, it’s relatively slow to identify the rows you want, but once you’ve found them you spend no time reconstructing them.

Bear in mind that this is an extremely simplified analysis and ignores all sorts of details about read-consistency, the probability of physical reads, the probability of scanning blocks instead of scanning chunks, and so on; my focus is only on the type of activity that differentiates row-store handling from column-store handling when all the data is in memory so that you can have some appreciation of why the benefits available from In-memory DB can vary with the pattern of the data and the way you use it.

Catalogue of blog posts by Maria Colgan and Andy Rivenes:

Other articles on In-memory DB:


January 21, 2015

LOB Space

Filed under: ASSM,Infrastructure,LOBs,Oracle — Jonathan Lewis @ 1:26 pm GMT Jan 21,2015

Following on from a recent “check the space” posting, here’s another case of the code not reporting what you thought it would, prompted by a question on the OTN database forum about a huge space discrepancy in LOBs.

There’s a fairly well-known package called dbms_space that can give you a reasonably good idea of the space used by a segment stored in a tablespace that’s using automatic segment space management. But what can you think when a piece of code (written by Tom Kyte, no less) reports the following stats about your biggest LOB segment:

Unformatted Blocks .....................             107
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................         859,438
Total Blocks............................       1,746,304
Total Bytes.............................  14,305,722,368

Of the available 1.7M blocks approximately 890,000 seem to have gone missing!

I tend to think that the first thing to do when puzzled by unexpected numbers is to check for patterns in the arithmetic. First (though not particularly interesting) the LOB segment seems to be using the standard 8KB blocksize: 1,746,304 * 8192 = 14,305,722,368; more interestingly, although only approximately true, the number of full blocks is pretty close to half the total blocks – does this give you a hint about doing a little test ?

create table test_lobs (
        id              number(5),
        text_content    clob
)
lob (text_content) store as text_lob(
        disable storage in row
        chunk 32K
        tablespace test_8k_1m_assm
)
;

begin
        for i in 1..1000 loop
                insert into test_lobs values(
                        i, 'x'
                );
        end loop;
        commit;
end;
/

I’ve created a table with a LOB segment, storing LOBs out of row with a chunk size of 32KB in a tablespace which (using my naming convention) is locally managed, 8KB blocksize, uniform 1MB extents, using ASSM. So what do I see if I try to check the space usage through calls to the dbms_space package ? (There’s some sample code to do this in the comments of the blog I linked to earlier, but I’ve used some code of my own for the following – the first figure reported is the blocks, the second the bytes):


Unformatted                   :      119 /    3,899,392
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :    1,000 /   32,768,000

Segment Total blocks: 4224
Object Unused blocks: 0

Apparently the segment has allocated 4,224 blocks, but we’ve only used 1,000 of them, with 119 unformatted and 3,105 “missing”; strangely, though, the 1,000 “Full blocks” are simultaneously reported as 32,768,000 bytes … and suddenly the light dawns. The dbms_space package is NOT counting blocks, it’s counting chunks; more specifically it’s counting “bits” in the bitmap space management blocks for the LOB segment and (I think I’ve written this somewhere, possibly as far back as Practical Oracle 8i) the bits in a LOB segment represent chunks, not blocks.
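For reference, here’s a minimal sketch of the kind of call that produces those figures – it uses the documented dbms_space.space_usage() interface for ASSM segments (the variable names are mine, and you’ll want “set serveroutput on”):

declare
        v_unf   number;         v_unf_bytes     number;
        v_fs1   number;         v_fs1_bytes     number;
        v_fs2   number;         v_fs2_bytes     number;
        v_fs3   number;         v_fs3_bytes     number;
        v_fs4   number;         v_fs4_bytes     number;
        v_full  number;         v_full_bytes    number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'TEXT_LOB',
                segment_type            => 'LOB',
                unformatted_blocks      => v_unf,
                unformatted_bytes       => v_unf_bytes,
                fs1_blocks => v_fs1,    fs1_bytes => v_fs1_bytes,
                fs2_blocks => v_fs2,    fs2_bytes => v_fs2_bytes,
                fs3_blocks => v_fs3,    fs3_bytes => v_fs3_bytes,
                fs4_blocks => v_fs4,    fs4_bytes => v_fs4_bytes,
                full_blocks => v_full,  full_bytes => v_full_bytes
        );
        dbms_output.put_line('Unformatted: ' || v_unf  || ' / ' || v_unf_bytes);
        dbms_output.put_line('Full       : ' || v_full || ' / ' || v_full_bytes);
end;
/

Remember when reading the output for a LOB segment that (as described above) the “blocks” figures are actually counting chunks.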


The OP has set a 16KB chunk size with an 8KB block size. His numbers look fairly self-consistent: 1,746,304 – (2 * 859,438) – 107 = 27,321; the difference is about 1.6% of the total allocation, which is in the right ballpark for the space management blocks, especially if the segment is in a tablespace using 1MB uniform extents.


January 16, 2015

Spatial space

Filed under: Infrastructure,Oracle,Statistics,Troubleshooting — Jonathan Lewis @ 1:00 pm GMT Jan 16,2015

One thing you (ought to) learn very early on in an Oracle career is that there are always cases you haven’t previously considered. It’s a feature that is frequently the downfall of “I found it on the internet” SQL.  Here’s one (heavily paraphrased) example that appeared on the OTN database forum a few days ago:

select table_name,round((blocks*8),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

The result from the first query is 704 kb, the result from the second is 25.4 kb … fragmentation, rebuild, CTAS etc. etc.

The two queries are perfectly reasonable approximations (for an 8KB block size, with pctfree of zero) for the allocated space and actual data size for a basic heap table – and since the two values here don’t come close to matching it’s perfectly reasonable to consider doing something like a rebuild or shrink space to reclaim space and (perhaps) to improve performance.

In this case it doesn’t look as if the space reclaimed is likely to be huge (less than 1MB), on the other hand it’s probably not going to take much time to rebuild such a tiny table; it doesn’t seem likely that the rebuild could make a significant difference to performance (though apparently it did), but the act of rebuilding might cause execution plans to change for the better because new statistics might appear as the rebuild took place. The figures came from a test system, though, so maybe the table on the production system was much larger and the impact would be greater.

Being cautious about wasting time and introducing risk, I made a few comments about the question – and learned that one of the columns was of type SDO_GEOMETRY. This makes a big difference to what you do next, because dbms_stats.gather_table_stats() doesn’t process such columns correctly, which results in a massive under-estimate for the avg_row_len (which is basically the sum of avg_col_len for the table). Here’s an example (run on 12c, based on some code taken from the 10gR2 manuals):

drop table cola_markets purge;

CREATE TABLE cola_markets (
  mkt_id NUMBER,
  name VARCHAR2(32),
  shape SDO_GEOMETRY);

INSERT INTO cola_markets VALUES(
  1,
  'cola_a',
  SDO_GEOMETRY(
    2003,  -- two-dimensional polygon
    NULL,
    NULL,
    SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
    SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
          -- define rectangle (lower left and upper right) with
          -- Cartesian-coordinate data
  )
);

insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
insert into cola_markets select * from cola_markets;
commit;

execute dbms_stats.gather_table_stats(user,'cola_markets')

select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

analyze table cola_markets compute statistics;

select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

If you care to count the number of times I execute the “insert as select” it’s 10, so the table ends up with 2^10 = 1024 rows. The 7,200 in the calculated column converts bytes to approximate blocks on the assumption of 8KB blocks and pctfree = 10. Here are the results following the two different methods for generating object statistics:

PL/SQL procedure successfully completed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
         14       1024        124               2

Table analyzed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
        109       1024        124              16

Where does the difference in Expected_blocks come from ? (The Blocks figure is 124 because I’ve used 1MB uniform extents – 128 blocks – under ASSM, which means 4 space management blocks at the start of the first extent.)

Here are the column lengths after the call to dbms_stats: as you can see the avg_row_len is the sum of avg_col_len.

select column_name, data_type, avg_col_len
from   user_tab_cols
where  table_name = 'COLA_MARKETS'
order by
       column_id        -- assumption: the original sort key was lost from the listing
;

COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             3
NAME                 VARCHAR2                           7
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             0
SYS_NC00004$         NUMBER                             4
SYS_NC00005$         NUMBER                             0
SYS_NC00006$         NUMBER                             0
SYS_NC00007$         NUMBER                             0

The figures from the analyze command are only slightly different, but fortunately the analyze command uses the row directory pointers to calculate the actual row allocation, so picks up information about the impact of inline varrays, LOBs, etc. that the dbms_stats call might not be able to handle.

COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             2
NAME                 VARCHAR2                           6
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             1
SYS_NC00004$         NUMBER                             3
SYS_NC00005$         NUMBER                             1
SYS_NC00006$         NUMBER                             1
SYS_NC00007$         NUMBER                             1

As a basic reminder – whenever you do anything slightly non-trivial (e.g. something you couldn’t have done in v5, say) then remember that all those dinky little script things you find on the Internet might not actually cover your particular case.

December 23, 2014

Just in case

Filed under: Infrastructure,Oracle,Performance,redo — Jonathan Lewis @ 10:37 am GMT Dec 23,2014

For those who don’t read Oracle-l and haven’t found Nikolay Savvinov’s blog, here’s a little note pulling together a recent question on Oracle-L and a relevant (and probably unexpected) observation from the blog. The question (paraphrased) was:

The developers/data modelers are creating all the tables with varchar2(4000) as standard by default “Just in case we need it”. What do you think of this idea?

The general answer was that it’s a bad idea (and unnecessary anyway), and one specific threat that got mentioned was the problem of creating indexes and Oracle error ORA-01450; but coincidentally Nikolay Savvinov had written about a performance-related “bug” in August this year, which turned out, only last week, to be expected behaviour. You can read his articles for the details, but since he used a 4KB block size to demonstrate it I thought I’d repeat the exercise using an 8KB block size.
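As a quick reminder of the ORA-01450 threat (this sketch is mine, not from the thread): with an 8KB block size the maximum index key length is 6,398 bytes, so an index on just two “just in case” varchar2(4000) columns can’t even be created:

create table t_wide (v1 varchar2(4000), v2 varchar2(4000));

create index t_wide_i1 on t_wide(v1, v2);
-- ORA-01450: maximum key length (6398) exceeded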


drop table t1 purge;
create table t1 (id number(6), v1 varchar(40), v2 varchar2(40), v3 varchar2(40));
create unique index t1_i1 on t1(id);

execute snap_redo.start_snap

insert into t1 
select	object_id, object_name, object_name, object_name 
from	all_objects
where	rownum <= 10000
;

execute snap_redo.end_snap

drop table t1 purge;
create table t1 (id number(6), v1 varchar(4000), v2 varchar2(4000), v3 varchar2(4000));
create unique index t1_i1 on t1(id);

execute snap_redo.start_snap

insert into t1 
select	object_id, object_name, object_name, object_name 
from	all_objects
where	rownum <= 10000
;

execute snap_redo.end_snap

I’ve dropped and created the same table twice, once with varchar2(40) columns and once with varchar2(4000) columns.

I’ve created an index on a (non-character) column – the specific results vary depending on whether the index is unique or non-unique, and whether or not you have the index, and whether or not the table already holds data, and the effective clustering on the index columns etc. etc. but the key difference between the two sets of results doesn’t go away.

I’ve inserted object_name values (maximum usage 32 bytes) into the varchar2() columns, inserting 10,000 rows.

The snap_redo package is one of my simple pre/post packages that calculates changes in values in some dynamic performance view – in this case it’s looking at v$sysstat (system statistics) for statistics relating to redo generation, which means you need to run this test on an otherwise idle instance. Here are the two sets of results:

varchar2(40) columns:
Name                                                                     Value
----                                                                     -----
messages sent                                                               11
messages received                                                           11
calls to kcmgcs                                                            313
calls to kcmgas                                                             37
calls to get snapshot scn: kcmgss                                           74
redo entries                                                               769
redo size                                                            1,317,008
redo wastage                                                             3,888
redo writes                                                                 11
redo blocks written                                                      2,664
redo write time                                                             10
redo blocks checksummed by FG (exclusive)                                2,242
redo ordering marks                                                          1
redo subscn max counts                                                       1
redo synch time                                                              7
redo synch time (usec)                                                  88,875
redo synch time overhead (usec)                                          1,810
redo synch time overhead count (<2 msec)                                    11
redo synch writes                                                           11
redo write info find                                                        11
undo change vector size                                                261,136
rollback changes - undo records applied                                      2
IMU undo allocation size                                                17,184

varchar2(4000) columns:
Name                                                                     Value
----                                                                     -----
messages sent                                                                8
messages received                                                            8
calls to kcmgcs                                                            222
calls to kcmgas                                                             56
calls to get snapshot scn: kcmgss                                           52
redo entries                                                            20,409
redo size                                                            5,606,872
redo buffer allocation retries                                               1
redo wastage                                                             1,248
redo writes                                                                  6
redo blocks written                                                     11,324
redo write time                                                             26
redo blocks checksummed by FG (exclusive)                                  571
redo ordering marks                                                         32
redo subscn max counts                                                       1
redo synch time                                                              6
redo synch time (usec)                                                  60,230
redo synch time overhead (usec)                                            159
redo synch time overhead count (<2 msec)                                     1
redo synch writes                                                            1
redo write info find                                                         1
undo change vector size                                              1,590,520
IMU undo allocation size                                                   144

Notice, particularly, the great change in the number of redo entries and the total redo size when the character columns are defined as varchar2(4000). Note also that the number of redo entries is roughly "2 * number of rows inserted" – for each row that's one entry for the row and one for the index entry. You can check the redo log content by dumping the log file, of course (and Nikolay did), or you can take my word for it that Oracle is doing the equivalent of single row processing in the varchar2(4000) case and array processing in the varchar2(40) case.

When Oracle calculates that the (declared, potential) row length is larger than the block size it falls back to single row processing; this can increase your redo generation significantly and, since the rate at which you can pump out redo is ultimately the rate at which you can load data, it could have a significant impact on your data loading times. Declaring character columns as varchar2(4000) "just in case" is a bad idea.
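If you want to reproduce the comparison, a minimal sketch along the following lines should be enough. The table and column names are my own invention, and I'm assuming an 8KB block size – three varchar2(4000) columns make sure the calculated row length exceeds the block size:

rem
rem     Sketch only: take a snapshot of v$mystat joined to v$statname
rem     before and after each insert and compare "redo entries" and
rem     "redo size" for the two tables.
rem

create table t_narrow(id number, v1 varchar2(40),   v2 varchar2(40),   v3 varchar2(40));
create table t_wide  (id number, v1 varchar2(4000), v2 varchar2(4000), v3 varchar2(4000));

create index t_narrow_i1 on t_narrow(id);
create index t_wide_i1   on t_wide(id);

insert into t_narrow
select rownum, rpad('x',30), rpad('x',30), rpad('x',30)
from   dual
connect by level <= 10000
;

commit;

insert into t_wide
select rownum, rpad('x',30), rpad('x',30), rpad('x',30)
from   dual
connect by level <= 10000
;

commit;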


November 14, 2014

Shrink Tablespace

Filed under: fragmentation,Infrastructure,Oracle — Jonathan Lewis @ 6:16 pm GMT Nov 14,2014

If you start moving objects around to try and reclaim space in a tablespace there are all sorts of little traps that make it harder than you might hope to get the maximum benefit for the minimum effort. I've written a couple of notes in the past about how to proceed and, more recently, about one of the difficulties involved. This is just a brief note about a couple of ideas to make life a little easier.

  • Check that you've emptied the recyclebin before you start.
  • Before you try moving/rebuilding an object check that the total free space "below" that object is greater than the size of the object, or you'll find that parts of the object move "up" the tablespace (see the sketch after this list).
  • Before moving a table, mark its indexes as unusable. If you do this then (in recent versions of Oracle) the default behaviour is for the index space to be freed as the segment vanishes, and you may find that the extents of the table can move further "down" the tablespace. (If you've kept tables and indexes in different tablespaces this is irrelevant, of course.)
  • When you move an object think a little carefully about whether specifying a minimum initial extent size would help or hinder the move.
  • Don't assume that moving the "highest" object first is the best strategy – work out where you expect the final tablespace HWM to be, and you may find that it makes more sense to move other objects that are above that point first.
  • Moving objects with several small (64KB) extents first may allow you to free up larger (1MB, 8MB) gaps that can be used by other, larger objects.
  • Creating a new tablespace and moving objects to it from the old tablespace "top down" may be the quickest way to proceed – work towards dropping the old tablespace, resizing its files downwards as the HWM drops.
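As a sketch of the second check in that list: the owner, table, and tablespace names below are my own placeholders, and I'm assuming a single-file tablespace so that comparing block_id values is safe (with multiple files you'd have to bring file_id into the logic):

select
        (select nvl(sum(blocks),0)
         from   dba_free_space
         where  tablespace_name = 'MY_TS'
         and    block_id < low.min_block
        )       free_blocks_below,
        (select sum(blocks)
         from   dba_extents
         where  owner        = 'MY_USER'
         and    segment_name = 'MY_TABLE'
        )       object_blocks
from    (
        select  min(block_id) min_block
        from    dba_extents
        where   owner        = 'MY_USER'
        and     segment_name = 'MY_TABLE'
        )       low
;

If free_blocks_below comes back smaller than object_blocks then moving the object is guaranteed to push part of it "up" the tablespace.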

November 7, 2014

Quiz night

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 6:37 pm GMT Nov 7,2014

An email from Yves Colin (who'll be presenting on the Tuesday of UKOUG Tech14) prompted me to dig out a little script I wrote some years ago and re-run an old test, leading to this simple question: what's the largest size array insert that Oracle will handle ?

If you’re tempted to answer, watch out – it’s not exactly a trick question, but there is a bit of a catch.


There is an internal limit of 255 on the size of array that Oracle can insert as a unit. I leave it as an exercise to the reader to decide whether or not this makes any significant difference to performance, since the effects of row size, number of indexes maintained, and disk I/O requirements may make the effects of the limit virtually undetectable. To demonstrate the limit all we need do is insert a few hundred rows into a table and do a block dump, optionally followed by a dump of the redo log.

create table t1 (n1 number not null) segment creation immediate;

alter system switch logfile;

insert into t1 select rownum from all_objects where rownum <= 1000;

execute dump_log

start rowid_count t1

   FILE_ID   BLOCK_ID ROWS_IN_BLOCK
---------- ---------- -------------
         5        180           660
         5        184           340

2 rows selected.

ROWS_IN_BLOCK     BLOCKS
------------- ----------
          340          1
          660          1

2 rows selected.

alter system flush buffer_cache;

-- SQL> alter system dump datafile 5 block 180;

The dump_log procedure is simply a pl/sql wrapper for a call to ‘alter system dump logfile {current log}’; and the script rowid_count.sql extracts the file and block numbers from rowids in the given table, aggregating them in different ways – the reason for running it is to find a table block with a lot of rows in it.
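I haven't published rowid_count.sql here, but the core of such a script is easy to sketch with the dbms_rowid package – this is my approximation of the first aggregation, not the script itself:

select
        dbms_rowid.rowid_relative_fno(rowid)    file_id,
        dbms_rowid.rowid_block_number(rowid)    block_id,
        count(*)                                rows_in_block
from
        t1
group by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
order by
        file_id, block_id
;

The block I dumped actually held the first 660 rows of the insert; here's a tiny extract from the block dump (with one little comment added):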

tab 0, row 0, @0x1904			-- 6,400 dec
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 02
tab 0, row 1, @0x190a
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 03
tab 0, row 2, @0x1910
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 04
tab 0, row 3, @0x1916
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1

This shows the first four rows inserted, and you can see that the offset to the first row is 6,400 bytes, and then each subsequent row is 6 bytes further down the block (when the numbers get larger the row lengths will increase to 7 bytes). The positioning of these rows is, at first sight, a little odd – you might have guessed that they would either start at offset “zero” and work down to offset “8K”, or start at “8K” and work backwards up to zero – why is the first row 3/4 of the way down the block ?

Rather than show you more row dumps, I’ll give you a carefully selected extract from the row directory:

0x12:pri[0]     offs=0x1904
0x14:pri[1]     offs=0x190a


0x20c:pri[253]  offs=0x1f8a
0x20e:pri[254]  offs=0x1f91		End of first 255 rows
0x210:pri[255]  offs=0x120e		Start of second 255 rows
0x212:pri[256]  offs=0x1215


0x40a:pri[508]  offs=0x18f6
0x40c:pri[509]  offs=0x18fd		End of second 255 rows
0x40e:pri[510]  offs=0xdf5		Start of last 150 rows
0x410:pri[511]  offs=0xdfc


0x536:pri[658]  offs=0x1200		End of last 150 rows
0x538:pri[659]  offs=0x1207

The first 255 rows inserted are stacked at the bottom of the block at offsets 0x1904 to 0x1f91.
The second 255 rows inserted are stacked above them at offsets 0x120e to 0x18fd (note 0x18fd + 6 = 0x1903)
The last 150 rows inserted are stacked above them at offsets 0xdf5 to 0x1207 (note 0x1207 + 6 = 0x120d)

No matter how large your attempted array insert, the maximum number of rows (or index entries) Oracle can insert into a block in a single internal array operation is 255.

Further corroboration comes from the redo log dump – here’s a tiny bit of a single change vector (i.e. a single atomic change to a single Oracle block) from the redo generated while this insert was going on:

CHANGE #18 CON_ID:0 TYP:0 CLS:1 AFN:5 DBA:0x014000c7 OBJ:95876 SCN:0x0000.007528dd SEQ:1 OP:11.11 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x01  ver: 0x01
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x0007.007.000009f2    uba: 0x01000bda.02a9.14
KDO Op code: QMI row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x014000c7  hdba: 0x01400083
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 lock: 1 nrow: 255
slot[0]: 0
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 02
slot[1]: 1
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 03
slot[2]: 2
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 04

Although the CON_ID in line 1 tells you this dump is from 12c, the same limit holds across all (modern) versions of Oracle. The operation (OP 11.11 – in line 1) is "Insert Multiple Rows", and at line 9 you can see: "nrow: 255". I haven't included the rest of the change vector, but all it does is show the remaining 252 rows.
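If you want to drive the test from the client side, a large array insert is easy to generate in PL/SQL – the following is just a sketch using the t1 table above: however big the bound array, the block and redo dumps will still show the rows arriving in batches of at most 255 per block.

declare
        type num_tab is table of number index by pls_integer;
        m_tab           num_tab;
begin
        for i in 1..10000 loop
                m_tab(i) := i;
        end loop;

        --      One array insert of 10,000 rows from the client's
        --      perspective; internally Oracle still inserts at most
        --      255 rows per block in a single change (OP 11.11).
        forall i in 1..10000
                insert into t1 values (m_tab(i));

        commit;
end;
/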

October 13, 2014


Filed under: Infrastructure,Oracle — Jonathan Lewis @ 5:24 pm GMT Oct 13,2014

On a client site recently, experimenting with a T5-2 – fortunately a test system – we decided to restart an instance with a larger SGA. It had been 100GB, but with 1TB of memory and 256 threads (2 sockets, 16 cores per socket, 8 threads per core) it seemed reasonable to crank this up to 400GB for the work we wanted to do.

It took about 15 minutes for the instance to start; worse, it took 10 to 15 seconds for a command-line call to SQL*Plus on the server to get a response; worse still, if I ran a simple “ps -ef” to check what processes were running, the output started to appear almost instantly but stopped after about 3 lines and hung for about 10 to 15 seconds before continuing. The fact that the first process name to show after the “hang” was one of the Oracle background processes was a bit of a hint, though.

Using truss on both the SQL*Plus call and on the ps call, I found that almost all the elapsed time was spent in a call to shmat (shared memory attach); a quick check with “ipcs -ma” told me (as you might guess) that the chunk of shared memory identified by truss was one of the chunks allocated to Oracle’s SGA. Using pmap on the pmon process to take a closer look at the memory I found that it consisted of a few hundred pages sized at 256MB and a lot of pages sized at 8KB; this was a little strange since the alert log had told me that the instance was allocating about 1,600 memory pages of 256MB (i.e. 400GB) and 3 pages of 8KB – clearly a case of good intentions failing.

It wasn’t obvious what my next steps should be – so I bounced the case off the Oak Table … and got the advice to reboot the machine. (What! – it’s not my Windows PC, it’s a T5-2.) The suggestion worked: the instance came up in a few seconds, with shared memory consisting of a few 2GB pages, a fair number of 256MB pages, and a few pages of other sizes (including 8KB, 64KB and 2MB).

There was a little more to the advice than just rebooting, of course; and there was an explanation that fitted the circumstances. The machine was using ZFS and, in the absence of a set limit, the file system cache had at one point managed to acquire 896GB of memory. In fact when we first tried to start the instance with a 400GB SGA Oracle couldn’t start up at all until the system administrator had reduced the filesystem cache and freed up most of the memory; even then so much of the memory had originally been allocated in 8KB pages that Oracle made a complete mess of building a 400GB memory map.

I hadn’t passed all these details to the Oak Table but the justification for the suggested course of action (which came from Tanel Poder) sounded like a perfect description of what had been happening up to that point. In total his advice was:

  • limit the ZFS ARC cache (with two possible strategies suggested – see the sketch after this list)
  • use sga_target instead of memory_target (to avoid a similar problem on memory resize operations)
  • start the instance immediately after the reboot
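For reference, the first two items might translate into something like the following – the ARC figure is an arbitrary example (you'd pick a limit to suit the machine), I'm showing only the simpler of the two ARC strategies, and the /etc/system setting takes effect at the next reboot:

* /etc/system (Solaris) - "*" is the comment character
* Cap the ZFS ARC at (e.g.) 64GB; the value is in bytes
set zfs:zfs_arc_max = 68719476736

and on the Oracle side (sizes to taste):

alter system set memory_target = 0    scope = spfile;
alter system set sga_target    = 400G scope = spfile;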

Maxim: Sometimes the solution you produce after careful analysis of the facts looks exactly like the solution you produce when you can’t think of anything else to do.

October 1, 2014

Shrink Tablespace

Filed under: fragmentation,Infrastructure,Oracle — Jonathan Lewis @ 2:55 pm GMT Oct 1,2014

In a comment on my previous post on shrinking tablespaces Jason Bucata and Karsten Spang both reported problems with small objects that didn’t move to the start of the tablespace. This behaviour is inevitable with dictionary managed tablespaces (regardless of the size of the object), but I don’t think it’s likely to happen with locally managed tablespaces if they’ve been defined with uniform extent sizes. Jason’s comment made me realise, though, that I’d overlooked a feature of system allocated tablespaces that made it much harder to move objects towards the start of file. I’ve created a little demo to illustrate the point.

I created a new tablespace as locally managed, ASSM, and auto-allocate, then created a few tables of various sizes. The following minimal SQL query reports the resulting extents in block_id order, adding in a “boundary_1m” column which subtracts 128 blocks (1MB) from the block_id, then divides by 128 and truncates, to show which “user megabyte” of the file the extent starts in. (Older versions of Oracle typically have an 8 block space management header; recent versions expanded this from 64KB to 1MB – possibly as a little performance aid to Exadata.)

select
        segment_name, block_id, blocks, trunc((block_id - 128)/128) boundary_1M
from
        dba_extents
where
        owner = 'TEST_USER'
order by
        block_id
;
SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T1                              128       1024           0
T1                             1152       1024           8
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

As you can see t3 and t4 are small tables – 1 extent of 64KB each – and t5, which I created after t4, starts on the next 1MB boundary. This is a feature of auto-allocate: not only are extents (nearly) fixed to a small number of possible extent sizes, the larger extents are restricted to starting on 1MB boundaries and the 64KB extents are used preferentially to fill in “odd-sized” holes. To show the impact of this I’m going to drop table t1 (at the start of file) to make some space.

SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

Now I’ll move table t3 – hoping that it will move to the start of file and use up some of the space left by t1. However there’s a 1MB area (at boundary 32) which is partially used,  so t3 moves into that space rather than creating a new “partly used” megabyte.

SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T4                             4232          8          32
T3                             4240          8          32
T5                             4352        128          33

It’s a little messy trying to clear up the tiny fragments and make them do what you want. In this case you could, for example, create a dummy table with storage(initial 64K next 64K minextents 14) to use up all the space in the partly used megabyte, then move t3 – which should go to the start of file – then move table t4 – which should go into the first partly-used MB (i.e. start of file) rather than taking up the hole left by t3.
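In outline the sequence might look like the following – a sketch only: the dummy table name is my own, the tablespace name is whatever you're tidying up, and the minextents value has to match the number of 64KB extents needed to fill the partial megabyte:

create table dummy_fill (n1 number)
segment creation immediate              -- or no extents are allocated
tablespace my_ts
storage (initial 64K next 64K minextents 14)
;

alter table t3 move;
alter table t4 move;

drop table dummy_fill purge;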

Even for a trivial example it’s messy – imagine how difficult it can get when you have to cycle through building and dropping suitable dummy tables and moving objects in the right order, when you’ve got objects with several small extents scattered through the file and objects with a mixture of small extents and large extents.

September 21, 2014

Partitioned Clusters

Filed under: 12c,clusters,Infrastructure,Oracle — Jonathan Lewis @ 7:28 pm GMT Sep 21,2014

In case you hadn’t noticed it, partitioning has finally reached clusters in 12c. They’re limited to hash clusters with range partitioning, but it may be enough to encourage more people to use the technology. Here’s a simple example of the syntax:

create cluster pt_hash_cluster (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
-- single table
hashkeys 10000
hash is id
size 700
partition by range (d_date) (
        partition p2011Jan values less than (to_date('01-Feb-2011','dd-mon-yyyy')),
        partition p2011Feb values less than (to_date('01-Mar-2011','dd-mon-yyyy')),
        partition p2011Mar values less than (to_date('01-Apr-2011','dd-mon-yyyy'))
)
;
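For what it's worth, a table would go into the cluster with the usual clustered-table syntax – this is a sketch from memory rather than a tested example:

create table pt_hash (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
cluster pt_hash_cluster (id)
;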

I’ve been waiting for them to appear ever since the TPC-C benchmark that Oracle did with clusters – they’ve been a long time coming (check the partition dates – that gives you some idea of when I wrote this example).

Just to add choice (a.k.a. confusion) 12c has also introduced attribute clustering, so you can cluster data in single tables without creating clusters – but only while doing direct path loads or table moves. The performance intent is similar, though the technology and circumstances of use are different.

September 19, 2014

Shrink Tablespace

Filed under: fragmentation,Infrastructure,Oracle — Jonathan Lewis @ 12:10 pm GMT Sep 19,2014

A recent question on the OTN database forum raised the topic of returning free space in a tablespace to the operating system by rebuilding objects to fill the gaps near the start of files and leave the empty space at the ends of files so that the files could be resized downwards.

This isn’t a process that you’re likely to need frequently, but I have written a couple of notes about it, including a sample query to produce a map of the free and used space in a tablespace. While reading the thread, though, it crossed my mind that recent versions of Oracle introduced a feature that can reduce the amount of work needed to get the job done, so I thought I’d demonstrate the point here.

When you move a table its indexes become unusable and have to be rebuilt; but when an index becomes unusable, the more recent versions of Oracle will drop the segment. Here’s a key point – if the index becomes unusable because the table has been moved the segment is dropped only AFTER the move has completed. Pause a minute for thought and you realise that the smart thing to do before you move a table is to make its indexes unusable so that they release their space BEFORE you move the table. (This strategy is only relevant if you’ve mixed tables and indexes in the same tablespace and if you’re planning to do all your rebuilds into the same tablespace rather than moving everything into a new tablespace.)

Here are some outputs demonstrating the effect in a database. I have created (and loaded) two tables in a tablespace of 1MB uniform extents, 8KB block size; then I’ve created indexes on the two tables. Running my ts_hwm.sql script I get the following results for that tablespace:

FILE_ID FIRST_BLOCK  LAST_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 TEST_USER  T1              TABLE
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1_I1           INDEX
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free

Notice that it was a nice new tablespace, so I can see the two tables followed by the two indexes at the start of the tablespace. If I now move table t1 and re-run the script this is what happens:

alter table t1 move;

FILE_ID FIRST_BLOCK  LAST_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 free       free
                512         639 TEST_USER  T2_I1           INDEX
                640         767 TEST_USER  T1              TABLE
                768      65,535 free       free

Table t1 is now situated past the previous tablespace highwater mark and I have two gaps in the tablespace where t1 and the index t1_i1 used to be.

Repeat the experiment from scratch (drop the tables, purge, etc. to empty the tablespace) but this time mark the index unusable before moving the table and this is what happens:

FILE_ID FIRST_BLOCK  LAST_BLOCK OWNER      SEGMENT_NAME    SEGMENT_TYPE
------- ----------- ----------- ---------- --------------- ------------------
      5         128         255 free       free
                256         383 TEST_USER  T2              TABLE
                384         511 TEST_USER  T1              TABLE
                512         639 TEST_USER  T2_I1           INDEX
                640      65,535 free       free

Table t1 has moved into the space vacated by index t1_i1, so the tablespace highwater mark has not moved up.
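For reference, the critical steps in the second experiment were simply the following (using the object names above – you'd still rebuild the index once the move was complete):

alter index t1_i1 unusable;
alter table t1 move;
alter index t1_i1 rebuild;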

If you do feel the need to reclaim space from a tablespace by rebuilding objects, you can find it quite hard to decide the order in which the objects should be moved/rebuilt to minimise the work you (or rather, Oracle) have to do; if you remember that any table you move will release its index space anyway, and insert a step to mark those indexes unusable before you move the table, you may find it’s much easier to work out a good order for moving the tables.

Footnote: I appreciate that some readers may already take advantage of the necessity of rebuilding indexes by dropping indexes before moving tables – but I think it’s a nice feature that we can now make them unusable and get the same benefit without introducing a risk of error when using a script to recreate an index we’ve dropped.


September 8, 2014

ASSM Truncate.

Filed under: ASSM,Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 11:34 am GMT Sep 8,2014

Here’s one that started off with a tweet from Kevin Closson, heading towards a finish that shows some interesting effects when you truncate large objects that are using ASSM. To demonstrate the problem I’ve set up a tablespace using system allocation of extents and automatic segment space management (ASSM).  It’s the ASSM that causes the problem, but it requires a mixture of circumstances to create a little surprise.

create tablespace test_8k_auto_assm
	datafile	-- OMF
	size 1030M
	autoextend off
	blocksize 8k
	extent management local
	segment space management auto
;

create table t1 (v1 varchar2(100)) pctfree 99 tablespace test_8k_auto_assm storage(initial 1G);

insert into t1 select user from dual;

alter system flush buffer_cache;

truncate table t1;

I’ve created a table with an initial definition of 1GB, which means that (in a clean tablespace) the autoallocate option will jump straight to extents of 64MB, with 256 table blocks mapped per bitmap block for a total of 32 bitmap blocks in each 64MB extent. Since I’m running a version recent enough to default to deferred segment creation, and haven’t included “segment creation immediate” in the definition, I won’t actually see any extents until I insert the first row.

So here’s the big question – when I truncate this table (using the given command) how much work will Oracle have to do ?

Exchanging notes over twitter (140 characters at a time) and working from a model of the initial state, it took a little time to understand what was (probably) happening and then produce this silly example – but here’s the output from a snapshot of v$session_event for the session across the truncate:

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
local write wait                                    490           0          83.26        .170          13
enq: RO - fast object reuse                           2           0         104.90      52.451         105
db file sequential read                              47           0           0.05        .001           0
db file parallel read                                 8           0           0.90        .112           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.67        .067         153
events in waitclass Other                             2           0           0.04        .018         109

The statistic I want to highlight is the number recorded against “local write wait”: truncating a table of one row we wait for 490 blocks to be written! We also have 8 “db file parallel read”  waits which, according to a 10046 trace file, were reading hundreds of blocks. (I think the most significant time in this test – the RO enqueue wait – may have been waiting for the database writer to complete the work needed for an object checkpoint, but I’m not sure of that.)

The blocks written were the space management bitmap blocks for the extent(s) that remained after the truncate – even the ones that referenced extents above the high water mark for the table. Since we had set the table’s initial storage to 1GB, we had a lot of bitmap blocks. At 32 per extent and 16 extents (64MB * 16 = 1GB) we might actually expect something closer to 512 blocks, but Oracle had formatted the last extent with only 8 space management blocks, and the first extent had an extra 2 to cater for the level 2 bitmap block and the segment header block, giving: 32 * 15 + 8 + 2 = 490.

As you may have seen above, the impact on the test that Kevin was doing was quite dramatic – he had set the initial storage to 128GB (lots of bitmap blocks), partitioned the table (more bitmap blocks) and was running RAC (so the reads were running into waits for global cache grants).

I had assumed that this type of behaviour happened only with the “reuse storage” option of the truncate command, and I hadn’t noticed before that it also appeared even if you didn’t reuse storage – but that’s probably because the effect applies only to the bit you keep, which typically means a relatively small first extent. It’s possible, then, that in most cases this is an effect that isn’t going to be particularly visible in production systems – but if it is, can you work around it ? Fortunately another tweeter asked the question “What happens if you ‘drop all storage’?” Here’s what happened when I added that clause to my test case:
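The modified statement, for reference, simply appends the clause to the original truncate:

truncate table t1 drop all storage;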

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
enq: RO - fast object reuse                           1           0           0.08        .079           0
log file sync                                         1           0           0.03        .031           0
db file sequential read                              51           0           0.06        .001           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.56        .056         123
events in waitclass Other                             3           0           0.87        .289         186

Looking good – if you don’t keep any extents you don’t need to make sure that their bitmaps are clean. (The “db file sequential read” waits are almost all about the data dictionary, following on from my “flush buffer cache”).

Footnote 1: the same effect appears in
Footnote 2: it’s interesting to note that the RO enqueue wait time seems to parallel the local write wait time: perhaps a hint that there’s some double counting going on. (To be investigated, one day).
