Oracle Scratchpad

February 18, 2022

Index ITL Limit

Filed under: deadlocks,Indexing,Infrastructure,Locks,Oracle,trace files — Jonathan Lewis @ 5:02 pm GMT Feb 18,2022

Here’s a little script that I wrote more than 10 years ago when investigating some undesirable behaviour with indexes. I’ve just rediscovered it after seeing it mentioned in a comment to an old article that I had been prompted to revisit. This isn’t going to help you solve any specific problem, but it might give you some feel for how much work Oracle has to do to cater for efficient index maintenance.

The script is just a dirty little hack to crash a session by calling a procedure recursively until something breaks – in this case when all the ITL slots of an index leaf block are full and the block doesn’t split for the next insert (which was a little surprising in its own right).

rem
rem     Script:         itl_limit.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2010
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             11.1.0.6
rem  

create table itl_limit(n1 number) pctfree 0;
create unique index il_01 on itl_limit(n1) pctfree 0;

create or replace procedure recursive_itl(i_tx_count number)
as
        pragma autonomous_transaction;
begin
        if i_tx_count != 0 then
                insert into itl_limit values(200 - i_tx_count);
                --
                -- Recurse before committing: every call is its own autonomous
                -- transaction, so all the inserts stay uncommitted (each needing
                -- its own ITL entry in the block) until the deepest call returns.
                --
                recursive_itl(i_tx_count - 1);
                commit;
        end if;
end;
/

alter session set events '10046 trace name context forever, level 8';

execute recursive_itl(200);

alter system checkpoint;

alter session set events '10046 trace name context off';

prompt  ==========================================
prompt  If there is no index on the table then you
prompt  should see 169 rows in one block and 31 in
prompt  the other. But if there is an index there
prompt  should be no rows thanks to the rollback
prompt  caused by the error.
prompt  ==========================================

select 
        dbms_rowid.rowid_block_number(rowid), count(*) 
from 
        itl_limit
group by 
        dbms_rowid.rowid_block_number(rowid)
;

prompt  =================================
prompt  Try for a tree dump on the index
prompt  after which you can dump the root
prompt  block to see the ITL entries
prompt  =================================

column object_id new_value m_object_id

select  object_id, object_type, object_name
from    user_objects
where   object_name = 'IL_01'
/

alter session set events 'immediate trace name treedump level &m_object_id ';

If you comment out the creation of the index il_01 then the script completes very quickly (complaining, of course, about the attempt to do a treedump with a null level: ORA-49100: Failed to process event statement). Assuming you’re using an 8KB block size the rowid count query will show that you’ve got 169 rows in one block and 31 rows in the other – and if you dump the block with 169 rows you will find that (a) there’s loads of empty space in the block, and (b) the number of ITL entries has reached 169, and that’s the limiting factor that restricted the number of rows we could insert.
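
If you want to confirm that for yourself, here’s a minimal follow-on sketch (my own addition – the column and substitution-variable names are made up, but it uses the same new_value trick as the main script) to find and dump the 169-row table block:

column file_no  new_value m_file_no
column block_no new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                row_ct
from
        itl_limit
group by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
having
        count(*) > 100
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;

The trace file produced by the dump should show the long list of (mostly empty) ITL entries, just like the index block dump further down this note.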

If you create the index then you’ll have to wait roughly 142 seconds for the procedure call to fail (with an ORA-00060: deadlock detected error) and the script to complete. And when you generate the tkprof output from the trace file you’ll find that most of the time is spent in the following statement:

INSERT INTO ITL_LIMIT
VALUES
(200 - :B1 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    169      0.96     143.69         11      56324       4939         168
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      170      0.96     143.69         11      56324       4939         168

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 138     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  ITL_LIMIT (cr=5 pr=0 pw=0 time=192 us starts=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        11        0.00          0.00
  enq: TX - allocate ITL entry                   48        5.01        142.49
********************************************************************************

We’ve executed 169 times, but reported only 168 rows, which suggests something went wrong on one insert. The 48 waits for “enq: TX - allocate ITL entry” invite further investigation, of course – so let’s get back to the raw trace file and find them.

grep -T -n  "WAIT.*alloc" or19_ora_15879.trc

 571   :WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 1000682 name|mode=1415053316 usn<<16 | slot=10420232 sequence=2 obj#=94939 tim=5285776687
 574   :WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 1016171 name|mode=1415053316 usn<<16 | slot=1310726 sequence=6209 obj#=94939 tim=5286793280
 577   :WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 1001580 name|mode=1415053316 usn<<16 | slot=10223624 sequence=2 obj#=94939 tim=5287795235
...
 709   :WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 4999483 name|mode=1415053316 usn<<16 | slot=11468804 sequence=2 obj#=94939 tim=5423905081
42381  :WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 2999710 name|mode=1415053316 usn<<16 | slot=9633800 sequence=2 obj#=94939 tim=5426905028

All 48 waits occur after the 169th attempt to insert a row. Oracle rotates through 12 ITL slots waiting one second on each, then goes round the loop again waiting 2 seconds on each, then 4 seconds, then 5 seconds – except it doesn’t wait on the 12th ITL on the final loop; instead it reports “DEADLOCK DETECTED (ORA-00060)” in the trace file and dumps a Deadlock graph of the form:

Deadlock graph:
                                          ------------Blocker(s)-----------  ------------Waiter(s)------------
Resource Name                             process session holds waits serial  process session holds waits serial
TX-00930008-00000002-0039DED3-00000000         44      49     X        18979      44      49           S  18979

Note the self-deadlock – the holder and waiter are the same session. After the deadlock graph we get the usual stack dump and after 48,000 lines of trace we see the message “Attempting to break deadlock by signaling ORA-00060” after which the session waits on the 12th ITL for 3 seconds and then the whole stack of autonomous transactions rolls back:

WAIT #139861052053576: nam='enq: TX - allocate ITL entry' ela= 2999710 name|mode=1415053316 usn<<16 | slot=9633800 sequence=2 obj#=94939 tim=5426905028
EXEC #139861052053576:c=429678,e=142567211,p=0,cr=504,cu=11,mis=0,r=0,dep=1,og=1,plh=0,tim=5427342894
ERROR #139861052053576:err=60 tim=5427342914
CLOSE #139861052053576:c=1,e=1,dep=1,type=3,tim=5427343141
XCTEND rlbk=1, rd_only=0, tim=5427343240
XCTEND rlbk=1, rd_only=0, tim=5427343336
...
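
As a quick cross-check, that escalation pattern accounts almost exactly for the 48 waits and 142.49 seconds of “enq: TX - allocate ITL entry” time in the tkprof summary:

12 x 1 sec  +  12 x 2 sec  +  12 x 4 sec  +  11 x 5 sec                 = 139 seconds  (47 waits)
139 seconds  +  3 seconds (the final wait after ORA-00060 is signalled)  = 142 seconds  (48 waits)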

My test script reports the object_id (not the data_object_id) of the index and does a treedump of it, which should show just a root block (which is also a leaf block) with no index entries – something like:

----- begin tree dump
leaf: 0x400008b 67109003 (0: row:0.0 avs:3988)
----- end tree dump

Notice how the available space (avs) in this root/leaf block is only 3988 bytes rather than roughly 8,000 for an 8KB block size. That’s because the ITL area has taken advantage of its ability to grow to half the size of the block, and once it has grown it doesn’t shrink. (Note: there is a hard limit of 255 ITL entries, which will only become apparent with larger block sizes – and if you want to test that you’ll have to edit my script to change both occurrences of the constant 200 to (at least) 256.)
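
For scale, here’s the rough arithmetic (assuming the usual 24 bytes per ITL entry):

169 ITL entries x 24 bytes = 4,056 bytes       -- a little under half of the 8,192 byte block
8,192 - 4,056 = 4,136 bytes; allow roughly 148 bytes of fixed block and index overhead and you’re left with the 3,988 bytes reported as avs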

The data block address (DBA) of the root/leaf block is given in both hexadecimal and decimal; but if you can’t work out which file and block number this represents (I happen to know it’s file 16, and I can convert 0x8b to 139 decimal in my head) then you can check for the segment header block and add one to the block number; or use calls to the dbms_utility package to translate the DBA before doing a block dump:

SQL> select
  2          dbms_utility.data_block_address_file(67109003) file#,
  3          dbms_utility.data_block_address_block(67109003) block#
  4  from
  5          dual
  6  /

     FILE#     BLOCK#
---------- ----------
        16        139

1 row selected.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system dump datafile 16 block 139;

System altered.
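
If you’d rather do the arithmetic than call the package: a data block address is simply the relative file number in the top 10 bits and the block number in the bottom 22 bits, so

0x0400008b  =  file 0x10 (16), block 0x00008b (139)
67,109,003  =  16 x 4,194,304  +  139            (4,194,304 = 2^22)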

Here’s the section of the resulting trace file that shows you how bad things had got before the final insert failed:

 seg/obj: 0x172dd  csc:  0x0000000001311be1  itc: 169  flg: E  typ: 2 - INDEX
     brn: 0  bdba: 0x4000088 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x02   0xffff.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000001311be1
0x03   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x04   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
...
0xa8   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0xa9   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
Leaf block dump

A final comment: if you’re wondering why the table (without an index) can get to 169 rows while the index achieves 168 rows and fails on the 169th – there’s one ITL in the index ITL area that is reserved for the “service ITL” (see answer 2), the one that Oracle uses to register and hold the block on an “index node split”.

Footnote

One of the entertaining things about poking around with Oracle is the way that you discover unexpected details – some of which are side-effects that you realise you should have predicted, some of which are just a little bizarre. In this case the easter egg in the trace file was the following statement (reproduced from the tkprof output, and made highly visible because I used the sort=execnt option):

update /*+ rule */ undo$ set name=:2,file#=:3,block#=:4,status$=:5,user#=:6,
  undosqn=:7,xactsqn=:8,scnbas=:9,scnwrp=:10,inst#=:11,ts#=:12,spare1=:13
where
 us#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      336      0.00       0.00          0          0          0           0
Execute    336      0.02       0.02          3        336        531         336
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      672      0.03       0.03          3        336        531         336

The predictable side effect was that Oracle was going to create a number of new undo segments: as a standard detail of minimising contention Oracle tries to give every concurrent transaction (which means every one of my autonomous transactions) its own undo segment.

The surprising detail was the /*+ rule */ hint – still there in 19.11.0.0. I guess that was to ensure that Oracle executed the update through an index access – but possibly a suitable /*+ index() */ hint would be more appropriate for almost every version of Oracle.

There were a number of other segment/undo related statements that operated 140+ times in the course of this test – which did make me wonder if a very busy OLTP system (doing lots of tiny, concurrent transactions) with a lot of undo segments could spend a significant amount of its time managing undo segments – but that’s a thought for another day. Another “look at that some time” thing that appeared was the large number of selects and inserts on a table called undohist$.

January 18, 2019

DML Tablescans

Filed under: deadlocks,Infrastructure,Locks,Oracle,Parallel Execution,Performance — Jonathan Lewis @ 1:25 pm GMT Jan 18,2019

This note is a follow-up to a recent comment on a blog note about Row Migration:

So I wonder what is the difference between the two, parallel dml and serial dml with parallel scan, which makes them behave differently while working with migrated rows. Why might the strategy of serial dml with parallel scan case not work in parallel dml case? I am going to make a service request to get some clarifications but maybe I miss something obvious?

The comment also referenced a couple of MOS notes:

  • Bug 17264297 “Serial DML with Parallel scan performs single block reads during full table scan when table has chained rows in 11.2”
  • Doc ID 1514011.1 “Performance decrease for parallel DML on compressed tables or regular tables after 11.2 Upgrade”

The latter document included a comment to the effect that 11.2 uses a “Head Piece Scan” while 11.1 uses a “First Piece scan”, which is a rather helpful comment. Conveniently my blog note referenced an earlier note on the potential for differentiating between migrated and chained rows through a “flag” byte associated with each row piece. The flag byte has an H bit for the row head piece, an F bit for the row first piece, and an L bit for the row last piece; a row piece in the middle of a chained row has none of these bits set.

Side note:

      • A “typical” simple row will be a single row-piece with the H, F and L bits all set.
      • A simple migrated row will start with an “empty” row-piece in one block with the H bit set and a pointer (nrid – next rowid) to a row in another block that will have the F and L bits set and a pointer (hrid – head rowid) back to the head piece.
      • A chained row could start with a row piece holding a few columns and the H and F bits set, and a pointer to the next row piece; this might lead to a long chain of row pieces with no bits set, each pointing to the next row piece, until you get to a row piece with the L bit set. 
      • Alternatively you might have a row which had both migrated and chained – which means it could start with an empty row piece with just the H bit set and a pointer to the next row piece, then a row piece with the F bit set, a back pointer to the head piece, and a next pointer to the next row piece, which could lead to a long chain of row pieces with no bits set until you reach a row piece with the L bit set.
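
Rendered in the fb (flag byte) notation that appears in block dumps (and in the extracts in the footnote below), the four cases above look roughly like this, with the arrows standing in for the nrid pointers:

simple row:             --H-FL--
migrated row:           --H-----  ->  ----FL--                                  (second piece also carries an hrid back-pointer)
chained row:            --H-F---  ->  --------  ->  ...  ->  -----L--
migrated and chained:   --H-----  ->  ----F---  ->  --------  ->  ...  ->  -----L--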

Combining the comments about “head piece” and “first piece” scans with the general principles of DML and locking it’s now possible to start making some guesses about why the Oracle developers might want updates through tablescans to behave differently for serial and parallel tablescans. There are two performance targets to consider:

  • How to minimise random (single block) I/O requests
  • How to minimise the risk of deadlock between PX server processes.

Assume you’re doing a serial tablescan to find rows to update – assume for simplicity that there are no chained rows in the table. When you hit a migrated row (H bit only) you could follow the next rowid pointer (nrid) to find and examine the row. If you find that it’s a row that doesn’t need to be updated you’ve just done a completely redundant single block read; so it makes sense to ignore row pieces which are “H”-only row pieces and do a table scan based on “F” pieces (which will be FL “whole row” pieces thanks to our assumption of no chained rows). If you find a row which is an F row and it needs to be updated then you can do a single block read using the head rowid pointer (hrid) to lock the head row piece then lock the current row piece and update it; you only do the extra single block read for rows that need updates, not for all migrated rows. So this is (I guess) the “First Piece Scan” referenced in Doc ID 1514011.1. (And, conversely, if you scan the table looking only for row pieces with the H flag set this is probably the “Head Piece Scan”).

But there’s a potential problem with this strategy if the update is a parallel update. Imagine parallel server process p000 is scanning the first megabyte of a table and process p001 is scanning the second megabyte using the “first piece” algorithm.  What happens if p001 finds a migrated row (flags = FL) that needs to be updated and follows its head pointer back into a block in the megabyte being scanned by p000?  What if p000 has been busy updating rows in that block and there are no free ITLs for p001 to acquire to lock the head row piece? You have the potential for an indefinite deadlock.

On the other hand, if the scan is using the “head piece” algorithm p000 would have found the migrated row’s head piece and followed the next rowid pointer into a block in the megabyte being scanned by p001. If the row needs to be updated p000 can lock the head piece and the migrated piece.

At this point you might think that the two situations are symmetrical – aren’t you just as likely to get a deadlock because p000 now wants an ITL entry in a block that p001 might have been updating? Statistically the answer is “probably not”. When you do lots of updates it is possible for many rows to migrate OUT of a block; it is much less likely that you will see many rows migrate INTO a specific block. This means that in a parallel environment you’re more likely to see several PX servers all trying to acquire ITL entries in the same originating block than you are  to see several PX servers trying to acquire ITL entries in the same destination block. There’s also the feature that when a row (piece) migrates into a block Oracle adds an entry to the ITL list if the number of inwards migrated pieces is more than the current number of ITL entries.

Conclusion

It’s all guesswork of course, but I’d say that for a serial update by tablescan Oracle uses the “first piece scan” to minimise random I/O requests while for a parallel update by tablescan Oracle uses the “head piece scan” to minimise the risk of deadlocks – even though this is likely to increase the number of random (single block) reads.

Finally (to avoid ambiguity) if you’ve done an update which does a parallel tablescan but a serial update (by passing rowids to the query co-ordinator) then I’d hope that Oracle would use the “first piece scan” for the parallel tablescan because there’s no risk of deadlock when the query co-ordinator is the only process doing the locking and updating, which makes it safe to use the minimum I/O strategy. (And a parallel query with serial update happens quite frequently because people forget to enable parallel dml.)

Footnote

While messing around to see what happened with updates and rows that were both migrated and chained I ran the following script to create one nasty row so that I could dump a few table blocks to check for ITLs, pointers, and locks. The aim was to get a row with a head-only piece (“H” bit), an F-only piece, a piece with no bits set, then an L-only piece. With an 8KB block size and a 4,000 byte maximum for varchar2() this is what I did:


rem
rem     Script:         migrated_lock.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2019
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem

create table t1 (
        n1 number,
        l1 varchar2(4000),
        s1 varchar2(200),
        l2 varchar2(4000),
        s2 varchar2(200),
        l3 varchar2(4000),
        s3 varchar2(200)
);

insert into t1 (n1,l1,s1) values(0,rpad('X',4000,'X'),rpad('X',200,'X'));
commit;

insert into t1 (n1,l1) values(1,null);
commit;

update t1 set
        l1 = rpad('A',4000),
        s1 = rpad('A',200),
        l2 = rpad('B',4000),
        s2 = rpad('B',200),
        l3 = rpad('C',4000),
        s3 = rpad('C',200)
where
        n1 = 1
;

commit;

execute dbms_stats.gather_table_stats(user,'t1');

update t1 set
        s1 = lower(s1),
        s2 = lower(s2),
        s3 = lower(s3)
where
        n1 = 1
;

alter system flush buffer_cache;

select
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from
        t1
group by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
order by
        dbms_rowid.rowid_relative_fno(rowid),
        dbms_rowid.rowid_block_number(rowid)
;

The query with all the calls to dbms_rowid gave me the file and block number of the row I was interested in, so I dumped the block, then read the trace file to find the next block in the chain, and so on. The first block held just the head piece, the second block held the n1 and l1 columns (which didn’t get modified by the update), the third block held the s1 and l2 columns, and the last block held the s2, l3 and s3 columns. I had been expecting to see the split as (head piece), (n1, l1, s1), (l2, s2), (l3, s3) – but as it turned out the unexpected split was a bonus.
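
For reference, the dump commands would have looked something like this – the block numbers are the ones that appear in the extracts below, and the file number is my inference from the 0x008000xx addresses in the hrid/nrid pointers (0x00800000 corresponds to relative file 2):

alter system dump datafile 2 block 184;         -- head piece only
alter system dump datafile 2 block 137;         -- n1, l1 (unchanged by the final update)
alter system dump datafile 2 block 133;         -- s1, l2
alter system dump datafile 2 block 188;         -- s2, l3, s3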

Here are extracts from each of the blocks (in the order they appeared in the chain), showing the ITL information and the “row overhead” information. If you scan through the list you’ll see that three of the four blocks have an ITL entry for transaction id (xid) 8.1e.df3, using three consecutive undo records in undo block 0x0100043d. My update has locked three of the four row pieces – the head piece and the two that have changed. It didn’t need to “lock” the piece that didn’t change. (This little detail was the bonus of the unexpected split.)


Block 184
---------
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00b.00000ee1  0x01000bc0.036a.36  C---    0  scn  0x00000000005beb39
0x02   0x0008.01e.00000df3  0x0100043d.0356.2e  ----    1  fsc 0x0000.00000000

...

tab 0, row 1, @0xf18
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x00800089.0



Block 137       (columns n1, l1 - DID NOT CHANGE so no ITL entry acquired)
---------       (the lock byte relates to the previous, not cleaned, update) 
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00b.00000ee1  0x01000bc0.036a.35  --U-    1  fsc 0x0000.005beb39
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000

...

tab 0, row 0, @0xfcb
tl: 4021 fb: ----F--- lb: 0x1  cc: 2
hrid: 0x008000b8.1
nrid:  0x00800085.0



Block 133 (columns s1, l2)
--------------------------
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00b.00000ee1  0x01000bc0.036a.34  C---    0  scn  0x00000000005beb39
0x02   0x0008.01e.00000df3  0x0100043d.0356.2f  ----    1  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000

...

tab 0, row 0, @0xf0b
tl: 4213 fb: -------- lb: 0x2  cc: 2
nrid:  0x008000bc.0



Block 188 (columns s2, l3, s3)
------------------------------
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00b.00000ee1  0x01000bc0.036a.33  C---    0  scn  0x00000000005beb39
0x02   0x0008.01e.00000df3  0x0100043d.0356.30  ----    1  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000

...

tab 0, row 0, @0xe48
tl: 4408 fb: -----L-- lb: 0x2  cc: 3

Note, by the way, how there are nrid (next rowid) entries pointing forward in every row piece (except the last), but it’s only the “F” (First) row-piece that has the hrid (head rowid) pointer pointing backwards.

 

May 9, 2016

RI Locks

Filed under: deadlocks,Indexing,Locks,Oracle,trace files,Troubleshooting — Jonathan Lewis @ 12:24 pm BST May 9,2016

RI = Referential Integrity: also known informally as parent/child integrity, and primary (or unique) key/foreign key checking.

I’m on a bit of a roll with things that I must have explained dozens or even hundreds of times in different environments without writing them up on my blog, so here’s a blog note I could have used to answer a question that came up over the weekend on the Oracle database forum.

What happens in the following scenario:


create table parent (
        id        number(8,0),
        constraint par_pk primary key(id)
);

create table child  (
        id_p      number(8,0) not null references parent,
        id_c      number(8,0) not null,
        constraint child_pk primary key(id_p, id_c)
)
;

--
--      session 1
--

insert into parent values(1);

--
--      session 2
--

insert into child values(1,1);

Since the parent row corresponding to the child row doesn’t seem to exist as far as session 2 is concerned you might expect session 2 to respond immediately with an error message like:

ERROR at line 1:
ORA-02291: integrity constraint (TEST_USER.SYS_C0017926) violated - parent key not found

In fact, although the end-user session is not allowed to see the uncommitted parent row, the user’s shadow process can see the uncommitted row and will wait until session 1 commits or rolls back – so if you examine v$lock for the current locks for the two sessions you’d see something like this:

  1  select  sid, type, id1, id2, lmode, request, ctime, block
  2  from    V$lock
  3  where   sid in (select sid from V$session where username = 'TEST_USER')
  4  and     type != 'AE'
  5  order by
  6*         sid, type desc
  7  /

       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------- -- ---------- ---------- ---------- ---------- ---------- ----------
         3 TX     327709      12584          6          0        283          1
           TM     143734          0          2          0        283          0
           TM     143732          0          3          0        283          0

       250 TX     589829      12877          6          0        240          0
           TX     327709      12584          0          4        240          0
           TM     143734          0          3          0        240          0
           TM     143732          0          3          0        240          0


7 rows selected.

In the above, SID 250 is session 2: it’s holding a transaction lock (TX) in mode 6 (exclusive) because it has acquired an undo segment and has generated some undo; it’s also waiting for a transaction lock in mode 4 (share) and – checking id1 and id2 – we can see that the transaction table entry it’s waiting for is held by SID 3 (session 1) in mode 6 (and we also note that the lock held by SID 3 is marked as a blocker).

If SID 3 commits (thus releasing the transaction lock) SID 250 will continue processing the insert; if SID 3 rolls back SID 250 will raise error ORA-02291 and roll back its insert statement. (Note: if this were a multi-statement transaction it would only be the insert into child that would be rolled back; that’s another one of those details that is important but often isn’t stated explicitly, leaving people believing that the entire transaction would be rolled back.)
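
If you want to convince yourself of that last point, here’s a minimal sketch (my own addition, re-using the same tables and picking up from the scenario above, where session 1 has inserted parent row 1 but not committed):

--
--      session 2 (one transaction, two statements)
--

insert into parent values(99);          -- nothing to check, so no wait
insert into child values(1,1);          -- waits for session 1

--
--      session 1
--

rollback;

Session 2 now reports ORA-02291 for the insert into child, but a query against parent in session 2 still shows the (uncommitted) row 99 – only the statement that failed has been rolled back, not the whole transaction.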

Updates and deletes can produce the same effects. Imagine that we have just created the two tables, and then run the following:



--
--      session 1
--

insert into parent values(1);
commit;
delete from parent where id = 1;

--
--      session 2
--

insert into child values(1,1);

Again session 2 will wait for session 1 to commit or roll back. In this case if session 1 commits session 2 will raise Oracle error ORA-02291, if session 1 rolls back session 2 will continue with the insert.

Deadlocks

Whenever you can demonstrate a way of producing a wait chain you can also manage to produce a deadlock. Consider the following (starting, again, from empty tables):


-- (1) session 1
insert into parent values(1);

-- (2) session 2
insert into parent values(2);

-- (3) session 1
insert into child values(2,2);

-- (4) session 2
insert into child values(1,1);

Session 1 will start waiting for session 2 to commit (or roll back) at step 3, then session 2 will start to wait for session 1 at step 4 – with the result that session 1 will recognise the deadlock after about three seconds and roll back its last statement, raising exception ORA-00060 and dumping a trace file. (Note: session 1 will not, as many people think, roll back the entire transaction, it will roll back only the single statement that allowed the deadlock to develop). Session 2 will still be waiting for session 1 to commit or roll back its insert into parent.

Contrary to the popular claim, Oracle will not “resolve” the deadlock, it will simply break the deadlock leaving one session waiting for the other session to respond appropriately to the deadlock error.

For reference, here’s the deadlock graph (from a 12c trace file) produced by session 1 (SID = 3) for this demo:


Deadlock graph:
                                          ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name                             process session holds waits  process session holds waits
TX-00010017-000026C7-00000000-00000000          6       3     X             33     250           S
TX-000A000D-000026F8-00000000-00000000         33     250     X              6       3           S

session 3: DID 0001-0006-00000004       session 250: DID 0001-0021-00000041
session 250: DID 0001-0021-00000041     session 3: DID 0001-0006-00000004

Rows waited on:
  Session 3: no row
  Session 250: no row

When you see a deadlock graph with TX waits of type S (share, mode 4) it’s a very good bet that the wait has something to do with indexes – which may mean referential integrity as discussed here, but may mean collisions on primary keys, and may mean something to do with simple collisions on index-organized tables. You’ll notice that the “Rows waited on:” section shows no row – unfortunately in earlier versions of Oracle you may find a spurious row entry here because the wait information from some other (block) wait has been left in the relevant columns in v$session.
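
For example, the “collisions on primary keys” case is easy to reproduce – a minimal sketch (my own table name, starting from two fresh sessions):

create table pk_demo (id number, constraint pk_demo_pk primary key(id));

--      session 1
insert into pk_demo values(1);

--      session 2
insert into pk_demo values(1);

Session 2 immediately blocks, requesting session 1’s TX lock in mode 4 (S), and stays there until session 1 commits (session 2 then raises ORA-00001) or rolls back (session 2’s insert succeeds). Make the two sessions collide symmetrically – each also inserting the other’s value – and you get the same style of deadlock graph with mode 4 (S) waiters.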

October 22, 2014

Deadlocks

Filed under: deadlocks,Locks,Oracle — Jonathan Lewis @ 6:34 pm BST Oct 22,2014

A recent question on the OTN forum asked about narrowing down the cause of deadlocks, and this prompted me to set up a little example. Here’s a deadlock graph of a not-quite-standard type:


Deadlock graph:
                                          ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name                             process session holds waits  process session holds waits
TX-00040001-000008EC-00000000-00000000         50     249     X             48       9           X
TX-000A001F-000008BC-00000000-00000000         48       9     X             50     249           S


My session (the one that dumped the trace file) is 249, and I was blocked by session 9. The slight anomaly, of course, is that I was waiting on a TX lock in mode 4 (Share) rather than the more common mode 6 (eXclusive).

There are plenty of notes on the web these days to tell you that TX mode 4 relates in some way to a unique index (or some associated referential integrity) or an ITL wait. (Inevitably there are a couple of other less frequently occurring and less well documented reasons, such as waits for tablespaces to change state, but I’m going to ignore those for now.) The question is, how do I tell whether this example is related to uniqueness (indexing) or ITLs? For recent versions of Oracle the answer is in the rest of the trace file, which now holds the “Current Wait Stack” and “Session Wait History” for the session that dumped the trace file.

Reading down my trace file, past the line which says “Information for THIS session”, I eventually get to this:


    Current Wait Stack:
     0: waiting for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=80 seq_num=81 snap_id=1
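
In case the p1/p2 values are unfamiliar, the decode is straightforward (it’s the standard enqueue parameter encoding):

name|mode = 0x54580004    ->   0x54 = 'T', 0x58 = 'X', 0x0004 = requested mode 4 (Share)
usn<<16 | slot = 0xa001f  ->   undo segment number 10 (0xa), transaction table slot 31 (0x1f)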
 

So it didn’t take me long to find out I had an ITL problem (which should be a pretty rare occurrence in newer versions of Oracle); but there’s more:

...

    There is at least one session blocking this session.
      Dumping 1 direct blocker(s):
        inst: 1, sid: 9, ser: 40192
      Dumping final blocker:
        inst: 1, sid: 9, ser: 40192
    There are 2 sessions blocked by this session.
    Dumping one waiter:
      inst: 1, sid: 357, ser: 7531
      wait event: 'enq: TX - allocate ITL entry'

...

    Session Wait History:
        elapsed time of 0.000035 sec since current wait
     0: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=79 seq_num=80 snap_id=1
        wait times: snap=5.002987 sec, exc=5.002987 sec, total=5.002987 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000047 sec of elapsed time
     1: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=78 seq_num=79 snap_id=1
        wait times: snap=1 min 4 sec, exc=1 min 4 sec, total=1 min 4 sec
        wait times: max=1 min 4 sec
        wait counts: calls=22 os=22
        occurred after 0.000032 sec of elapsed time

...
     8: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=71 seq_num=72 snap_id=1
        wait times: snap=5.001902 sec, exc=5.001902 sec, total=5.001902 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000042 sec of elapsed time
     9: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=70 seq_num=71 snap_id=1
        wait times: snap=4.005342 sec, exc=4.005342 sec, total=4.005342 sec
        wait times: max=4.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000031 sec of elapsed time

...

    Sampled Session History of session 249 serial 3931
    ---------------------------------------------------

    The history is displayed in reverse chronological order.

    sample interval: 1 sec, max history 120 sec
    ---------------------------------------------------
      [9 samples,                                          11:14:50 - 11:14:58]
        waited for 'enq: TX - allocate ITL entry', seq_num: 81
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0xa001f
          p3: 'sequence'=0x8bc
          time_waited: >= 8 sec (still in wait)
      [5 samples,                                          11:14:45 - 11:14:49]
        waited for 'enq: TX - allocate ITL entry', seq_num: 80
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0x5000c
          p3: 'sequence'=0xa39
          time_waited: 5.002987 sec (sample interval: 4 sec)
...

The little report that follows the initial wait state shows that the situation was a little messy – session 9 was my first and last blocker, but there was another session tangled up in the chain of waits, session 357.

Following this there’s a set of entries from my v$session_wait_history – and if you look carefully at the slot and sequence that appears on the second line of each wait you’ll notice that my waits have been alternating between TWO other sessions/transactions before I finally crashed.
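
Decoding the “usn<<16 | slot” values (the same arithmetic as above) makes the alternation obvious:

0xa001f, sequence 0x8bc   ->  undo segment 10, slot 31   (the TX-000A001F-000008BC resource held by session 9)
0x5000c, sequence 0xa39   ->  undo segment  5, slot 12   (a second blocking transaction)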

Finally there’s a set of entries for my session extracted from v$active_session_history. (Question: I’m only allowed to query v$active_session_history if I’ve licensed the Diagnostic Pack – so should I shut my eyes when I get to this part of the trace file?) This breakdown also shows my session alternating between waits on the two different blockers, giving me a pretty good post-event breakdown of what was going on around the time of the deadlock.

March 13, 2014

Shrink Space

Filed under: Bugs,deadlocks,fragmentation,Index Rebuilds,Indexing,Locks,Oracle — Jonathan Lewis @ 7:08 am GMT Mar 13,2014

Here’s an example of a nasty accident that can be seen in a slightly unusual output from v$lock (on 11.2.0.4).

Rather conveniently it demonstrates a comment I’ve often made about deadlocks not being “resolved” automatically, with the result that incorrectly written applications can end up with two sessions taking it in turns (see-sawing) to block each other.
(more…)

April 13, 2013

Deadlocks

Filed under: deadlocks,Locks,Oracle,Troubleshooting — Jonathan Lewis @ 11:36 am BST Apr 13,2013

Here’s a deadlock graph that might cause a little confusion:

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
 Resource Name          process session holds waits  process session holds waits
 TX-001a0015-00014787        34      90     X             32       3           S
 TX-00190008-0000601b        32       3     X             34      90           S

 session 90: DID 0001-0022-00000327      session 3: DID 0001-0020-000009E9
 session 3: DID 0001-0020-000009E9       session 90: DID 0001-0022-00000327

 Rows waited on:
  Session 90: obj - rowid = 00030CE5 - AAAwzlAAGAABDiuAAA
  (dictionary objn - 199909, file - 6, block - 276654, slot - 0)
  Session 3: obj - rowid = 00030CE5 - AAAwzlAAGAABp8gAAA
  (dictionary objn - 199909, file - 6, block - 433952, slot - 0)

(more…)

March 20, 2013

Lock Bug

Filed under: Bugs,deadlocks,Locks,Oracle — Jonathan Lewis @ 12:08 pm GMT Mar 20,2013

Here’s an oddity that I ran into a little while ago while trying to prepare a sample trace file showing a particular locking pattern; it was something that I’d done before, but trace files can change with different versions of Oracle so I decided to use a copy of 11.2.0.2 that happened to be handy at the time to check if anything had changed since the previous (11gR1) release. I never managed to finish the test; here are the steps I got through:
(more…)

February 22, 2013

Deadlock Detection

Filed under: deadlocks,Locks,Oracle,Troubleshooting — Jonathan Lewis @ 6:19 pm GMT Feb 22,2013

By some strange coincidence, the “London Bus” effect perhaps, there have been three posts on the OTN database forum in the last couple of days relating to deadlocks; and they have prompted me to indulge in a little rant about the myth of Oracle and deadlock detection; it’s the one that goes:

“Oracle detects and resolves deadlocks automatically.”

Oracle may detect deadlocks automatically, but it doesn’t resolve them, it simply reports them (by raising error ORA-00060 and rolling back one statement) then leaves the deadlocked sessions stuck until the session that received the report resolves the problem or an external agent resolves the problem.

Consider the following example (which, I have to admit, I wrote without access to a live instance):

(more…)

August 29, 2011

Deadlock

Filed under: deadlocks,Indexing,Locks,Oracle,Troubleshooting — Jonathan Lewis @ 5:25 pm BST Aug 29,2011

Here’s a deadlock graph that appeared on Oracle-L and OTN a couple of days ago.

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-001a0002-0002a0fe       196     197     X            166    1835           S
TM-0000c800-00000000       166    1835    SX            196     197    SX   SSX

It’s a little unusual because instead of the common TX mode 6 (eXclusive) crossover we have one TX and one TM lock, the TX wait is for mode 4 (S) and the TM wait is for a conversion from 3 (SX) to 5 (SSX).

(more…)

April 8, 2011

Deadlock

Filed under: deadlocks,Locks,Oracle,Troubleshooting — Jonathan Lewis @ 5:57 pm BST Apr 8,2011

Here’s an example of a slightly less common data deadlock graph (dumped from 11gR2, in this case):
(more…)
