I usually think about table locks (and the meaning of the lmode and request columns in v$lock) in terms of numbers, and never seem to remember which number goes with which name – apart from mode 6 = exclusive. So I’ve finally put up a little list somewhere I’ll always be able to find it if I need it. Note: This list is specifically about the lock modes for a TM lock:
Value 0: No lock
    n/a

Value 1: Null lock (NL)
    Used during some parallel DML operations (e.g. update) by the PX slaves while the QC is holding an exclusive lock.

Value 2: Sub-share (SS), also known as Row-share (RS)
    Until 9.2.0.5/6: taken by "select for update".
    Since 9.2.0.5/6 (and until 11.1): used at the opposite end of RI during DML.
    Lock table in row share mode.
    Lock table in share update mode.

Value 3: Sub-exclusive (SX), also known as Row-exclusive (RX)
    Update (also "select for update" from 9.2.0.5/6).
    Lock table in row exclusive mode.
    Since 11.1: used at the opposite end of RI during DML.

Value 4: Share (S)
    Lock table in share mode.
    Can appear during parallel DML with id2 = 1, in the PX slave sessions.
    Common symptom of the "foreign key locking" (missing index) problem.
    Note that bitmap indexes on the child DON'T address the locking problem.

Value 5: Share sub-exclusive (SSX), also known as Share row exclusive (SRX)
    Lock table in share row exclusive mode.
    Less common symptom of "foreign key locking", but likely to be more frequent if the FK constraint is defined with "on delete cascade".

Value 6: Exclusive (X)
    Lock table in exclusive mode.
    Create index -- duration and timing depend on options used.
    Insert /*+ append */.
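As a quick illustration of how the table maps onto what you actually see, here's a sketch (assuming a hypothetical table t1 with a column id) of taking a "select for update" lock and then checking v$lock; in recent versions the TM lock should appear in mode 3:

```sql
-- Session 1: hypothetical table t1; from 9.2.0.5/6 onwards this
-- takes a TM lock in mode 3 (RX/SX) rather than mode 2 (RS/SS).
select * from t1 where id = 1 for update;

-- Same (or another) session: list the TM and TX locks held.
-- lmode is the mode currently held, request the mode being waited for.
select  sid, type, id1, id2, lmode, request
from    v$lock
where   type in ('TM', 'TX')
order by
        sid, type;
```

The id1 of the TM lock should match the object_id of t1 in dba_objects if you want to confirm which table the lock is protecting.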
(See Lock Horror for notes on the mode 3 lock during RI in 11g).
It’s a minor irritant, of course, that some of the lock modes have two names and two abbreviations – and some documents use one name with the other abbreviation (e.g. talking about row-exclusive locks, then using the SX abbreviation). The various lock trace files seem to favour SS, SX, and SSX over RS, RX and SRX.
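Since the numbers are the part I can never remember, a decode() along the following lines (just a sketch against v$lock, showing both naming conventions at once) saves having to look them up:

```sql
-- Translate the numeric lmode/request values of TM locks into
-- names, showing both abbreviation conventions side by side.
select
        sid, id1, id2,
        decode(lmode,
                0, 'none',
                1, 'null (NL)',
                2, 'row-share (RS / SS)',
                3, 'row-exclusive (RX / SX)',
                4, 'share (S)',
                5, 'share row exclusive (SRX / SSX)',
                6, 'exclusive (X)',
                   to_char(lmode)
        )       mode_held,
        decode(request,
                0, 'none',
                1, 'null (NL)',
                2, 'row-share (RS / SS)',
                3, 'row-exclusive (RX / SX)',
                4, 'share (S)',
                5, 'share row exclusive (SRX / SSX)',
                6, 'exclusive (X)',
                   to_char(request)
        )       mode_requested
from    v$lock
where   type = 'TM'
order by
        sid
;
```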
The change to mode 3 for “select for update” is described in Metalink (MOS) in note 3646162.8 – which references a bug of the same number describing a false deadlock issue in RAC; and it may still be possible to reverse the change in Oracle 11 (according to bug note 4969880) by setting the “_fix_control” hidden parameter to ‘4969880:ON’.
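If you did want to experiment with reverting the behaviour, the mechanism mentioned above would look something like the following sketch (hidden parameters shouldn't be set on a production system without guidance from Oracle Support):

```sql
-- Sketch: reverting "select for update" to its old mode 2 TM lock
-- using the fix-control number quoted in the text.
alter session set "_fix_control" = '4969880:ON';

-- Check the current setting for this session (assuming the fix
-- is visible in v$session_fix_control on your version).
select  bugno, value, description
from    v$session_fix_control
where   bugno = 4969880;
```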
The requirement for the ‘mode 2 at opposite end of RI’ may relate to problems of checking referential integrity while executing parallel DML (unpublished bug 2435787 may apply, but I have a note to myself about parallel DML resulting in deadlocks although I can’t find anything about this on Metalink).
I also have a note about a session in 9.2 getting two TM locks on the same object during parallel dml, one in mode 1 with id1 = 0 and one in mode 4 with id1 = 1. I haven’t looked at this for a long time, though.
Update 18th Aug:
Following a thread on OTN relating to deadlocks and wait-for-graphs (WFG) from the lock manager daemon (LMD), I found the following link that showed GES (Global Enqueue Service) enqueues using a different set of values for the lock mode, viz:
#define KJUSERNL 0    /* no permissions */    (Null)
#define KJUSERCR 1    /* concurrent read */   (Row-S (SS))
#define KJUSERCW 2    /* concurrent write */  (Row-X (SX))
#define KJUSERPR 3    /* protected read */    (Share)
#define KJUSERPW 4    /* protected write */   (S/Row-X (SSX))
#define KJUSEREX 5    /* exclusive access */  (Exclusive)
(You can see the “define” names appearing in gv$ges_enqueue if you want to set up a few tests in a RAC system – or you could just check the racdiag.sql script from Metalink (MOS) to see them used in a monitoring query.)
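A query against gv$ges_enqueue to see those names in action might look something like the following sketch (column names as I remember them from gv$ges_enqueue; the racdiag.sql script uses a similar query to report blockers and waiters):

```sql
-- Sketch: list GES enqueues that are blocking or blocked, with
-- their KJUSER* grant and request levels.
select
        inst_id, handle,
        grant_level, request_level,
        resource_name1, resource_name2,
        blocked, blocker
from    gv$ges_enqueue
where   blocked = 1
or      blocker = 1
;
```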
So when LMD dumps a deadlock trace with something like the following:
Global Wait-For-Graph(WFG) at ddTS[0.4] :
BLOCKED 4098bcd08 5 [0x130025][0x1475c9],[TX] [65595,583] 0
BLOCKER 412275b78 5 [0x130025][0x1475c9],[TX] [131100,2] 1
BLOCKED 41228b128 5 [0x70001][0x178a52],[TX] [131100,2] 1
BLOCKER 4098bade8 5 [0x70001][0x178a52],[TX] [65595,583] 0
you should be able to look at the column of 5’s – in this example – and assume that you have “crossed” “TX mode 6” locks in the instances, which means you have a standard data collision except that it’s across nodes.