Oracle Scratchpad

July 26, 2009

Empiricism

Filed under: Infrastructure,Troubleshooting — Jonathan Lewis @ 7:25 pm BST Jul 26,2009

Here’s a problem I solved for a client recently:

In a test-run of a mixed OLTP system with a reasonable degree of concurrency, the test would stall from time to time with many sessions (at one point 80 of them) waiting on the event “enq: TX – row lock contention”.

The statement being run by the sessions reporting the wait was always the same: “update tableX set foreign_key_column = :b2 where primary_key = :b1” – and there was no way that two sessions were trying to update the same row (I know because I set up an autonomous trigger to catch all the values on every attempted insert, update, and delete – and because I know the way the test code is supposed to work).
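
The audit trigger described might be sketched like this (a sketch only – the audit table, trigger name, and column list here are hypothetical, not the client’s actual code; the autonomous transaction lets the audit row survive even if the triggering DML rolls back):

```sql
-- Hypothetical audit table; the real trigger captured "all the values".
create table tableX_audit (
    audit_ts   timestamp,
    operation  varchar2(6),
    pk_value   number,
    fk_value   number
);

create or replace trigger tableX_aiud
after insert or update or delete on tableX
for each row
declare
    pragma autonomous_transaction;  -- audit row commits independently of the DML
begin
    insert into tableX_audit
    values (
        systimestamp,
        case
            when inserting then 'INSERT'
            when updating  then 'UPDATE'
            else                'DELETE'
        end,
        nvl(:new.primary_key, :old.primary_key),
        nvl(:new.foreign_key_column, :old.foreign_key_column)
    );
    commit;  -- mandatory before an autonomous transaction ends
end;
/
```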

After thinking about it for a bit, I decided to re-run the test after executing: “alter system set shared_servers = 300”. That solved the problem.

So, if you take the empirical approach to tuning Oracle systems, would you now use this method for dealing with waits for that event? ;)

31 Comments »

  1. Hi Jonathan,
    Was there any other trial before this one or was this the first one you tried ? and Why ?

    Comment by coskan — July 26, 2009 @ 8:08 pm BST Jul 26,2009 | Reply

    • Coskan,

      The thinking went like this: “the shared_servers and large_pool_size are inappropriate (which I was able to infer from observation that I have not mentioned) – so I’ll just re-run the test with more appropriate settings while I try to work out why this strange locking problem is here”.

      The change made the locking problem “invisible” – a fact which I could check by dynamically altering shared_servers as the test ran.

      Comment by Jonathan Lewis — July 27, 2009 @ 7:39 am BST Jul 27,2009 | Reply

  2. The update needs to ensure the parent row exists and, if it exists, that it isn’t being deleted. If the parent row is locked, the update needs to wait for the locking transaction to end.
    You can have transaction 1 locking the parent row, and 80 shared server processes locked waiting for transaction 1 to end. The session owning transaction 1 then can’t get a shared server process to complete the transaction. Hence the stall.

    Big jump from 80 to 300, but it is one solution (though it could still fail if transaction 1 goes on for long enough). Another would be to change the UPDATE to a SELECT FOR UPDATE…NOWAIT on the parent followed by an UPDATE. Or the operation updating the parent could be done through a dedicated connection so it can’t be blocked. Or maybe a resource profile to kick out the locked updates before they consume all the shared server connections.
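
    The SELECT FOR UPDATE…NOWAIT variant might be sketched like this (“parent” and “parent_key” are assumed names for the referenced table and column, not taken from the post):

    ```sql
    -- Fail fast (ORA-00054) instead of queueing behind a locked parent row.
    select 1
    from   parent
    where  parent_key = :b2
    for    update nowait;

    -- Only reached if the parent row was lockable.
    update tableX
    set    foreign_key_column = :b2
    where  primary_key = :b1;
    ```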

    Comment by Gary — July 26, 2009 @ 11:59 pm BST Jul 26,2009 | Reply

    • Gary,

      Your explanation of why a change in shared_servers made a difference is correct. The lock holder was running through a shared server, and when the waiters hit a problem the holder couldn’t get back onto a shared server for a while because all available shared servers were in use and a few more had to be started.

      The problem was self-escalating because all the waiters were holding their shared servers, so the more waiters you got the more shared servers had to be started and the longer the problem persisted.

      Your description of a possible locking problem is also correct – but not relevant in this case. I failed to mention that the wait was requested in mode 6 – whereas index and RI locking issues display as mode 4 waits.
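
      As a sketch, the requested lock type and mode can be decoded from p1 of the waiting sessions (the low-order two bytes of p1 hold the requested mode):

      ```sql
      select sid,
             event,
             chr(bitand(p1, -16777216) / 16777216) ||
             chr(bitand(p1,  16711680) / 65536)     as lock_type,      -- e.g. 'TX'
             bitand(p1, 65535)                      as requested_mode  -- 4 or 6
      from   v$session
      where  event like 'enq: TX%';
      ```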

      Comment by Jonathan Lewis — July 27, 2009 @ 7:44 am BST Jul 27,2009 | Reply

  3. “alter system set shared_servers = 300”. That solved the problem.

    Well, it works for all my enq: TX – row lock contention problems

    Comment by Doug Burns — July 27, 2009 @ 11:45 am BST Jul 27,2009 | Reply

    • Doug,

      You’re supposed to include a smiley when you say things like that or people will take you seriously, and may even quote you.

      Comment by Jonathan Lewis — July 27, 2009 @ 8:10 pm BST Jul 27,2009 | Reply

      • Funny you should mention that. As Mr. Put-Smileys-After-Almost-Everything, I considered quite carefully whether I should in this one case when one was clearly warranted and decided not to because it would be funnier. I’m like that.

        It’s probably best that you mentioned it though and I must confess I’d forgotten the likelihood of random out-of-context quotes appearing elsewhere.

        Comment by Doug Burns — July 28, 2009 @ 4:49 am BST Jul 28,2009 | Reply

  4. “So, if you take the empirical approach to tuning Oracle systems, would you now use this method for dealing with waits for that event ?”

    No, one data point is insufficient to test a hypothesis; perhaps if I were a faith-based DBA I’d use this all the time… although the only faith-based DBA I’m aware of has stated that no real production environment uses a Shared Server configuration these days as memory is so cheap. Maybe some more memory and switching to dedicated server was the right thing to do. :)

    Comment by Chris_c — July 27, 2009 @ 12:30 pm BST Jul 27,2009 | Reply

    • Chris_C,
      But I got multiple data points – if I wanted to see TX waits I just set shared_servers to 100; if I wanted LOTS of TX waits I set it to 75. I changed it (in mid-test) about six times in a row and found that the number of TX waits went down as the setting of shared_servers went up (and vice versa).

      But you are right, of course: It’s only one database running one application – so it’s only one data point.

      Then we ALWAYS have to remember that correlation is not the same as causation.

      Comment by Jonathan Lewis — July 27, 2009 @ 8:05 pm BST Jul 27,2009 | Reply

  5. The sessions were blocking each other although they were trying to update different rows. Hmmm… My guess is some problem at the block level:
    1. The block’s transaction list is too short, hence not all sessions wanting to place a lock on a row in a given block were able to do so immediately. I think this would lead to exactly the situation described.
    2. A long shot: the table being updated is compressed, hence from time to time additional time was needed to decompress a block.
    3. A really long shot: a bitmap index on foreign_key_column.

    Comment by Todor Botev — July 27, 2009 @ 1:22 pm BST Jul 27,2009 | Reply

    • Todor,

      1) Good idea – but the wait would be “enq: TX – allocate ITL entry”.

      2) No compression being done.

      3) Good idea – but no bitmap indexes on the table. Also, as with other index-related waits, the wait would be mode 4 not mode 6. In passing, and not related to your suggestion, it’s not commonly known but the presence of a bitmap index will not stop the “foreign key locking” problem – it has to be a b-tree index.

      Comment by Jonathan Lewis — July 27, 2009 @ 8:09 pm BST Jul 27,2009 | Reply

  6. I think the reason for the lock is simpler than I thought in the first place. I assume the following happens:

    1. Session 1 updates the PK of the parent table, e.g. from 1 to 2.

    2. Session 2 executes:

    update tableX set foreign_key_column = :b2 — b2 is 1 or 2
    where primary_key = :b1; — b1 is the updated PK on step 1

    3. Session 2 has to wait until Session 1 issues commit or rollback.

    If this is the situation, a reasonable question would be: does the PK really need to be updated? As this happens in a (probably automated) test, it looks like “overtesting” – simulating a situation that does not happen in reality.

    If the PK really needs to be updated – even often – in the real application, this seems to be a design problem.

    Comment by Todor Botev — July 28, 2009 @ 8:13 am BST Jul 28,2009 | Reply

  7. I see I made a mistake in the UPDATE statement in my previous post:

    update tableX set foreign_key_column = :b2 — b2 is 1 or 2
    where primary_key = :b1;

    Of course :b1 can be any primary key in tableX and has nothing to do with the parent table.

    Comment by Todor Botev — July 28, 2009 @ 8:17 am BST Jul 28,2009 | Reply

    • Todor,

      Same comment as for bitmap indexes – in this case the wait would be for a TX mode 4, not mode 6.

      Basically, if a TX wait involves an index (or index-organized table (IOT)) then it’s going to be mode 4.

      Comment by Jonathan Lewis — July 28, 2009 @ 7:43 pm BST Jul 28,2009 | Reply

      • Jonathan,

        Thank you for clarifying this! Do you mean that in this case the lock on the entries in the PK index is the one that blocks the updates?

        Comment by Todor Botev — July 29, 2009 @ 10:04 am BST Jul 29,2009 | Reply

  8. Well,

    At least we’ve got more ammo! ^_^

    Comment by lascoltodelvenerdi — July 28, 2009 @ 8:40 am BST Jul 28,2009 | Reply

  9. Could be splitting hairs, but “The statement responsible for the wait” may be ambiguous. If that was the statement issued by the client, but a second SQL statement was executed because of a trigger (or an ON COMMIT MV), then either or both could be deemed “responsible” for the wait.
    I’m wondering about row movement (cluster, hash partition, or these newfangled reference partitions) but it may be something really simple like a LOCK TABLE IN EXCLUSIVE MODE.

    Comment by Gary — July 28, 2009 @ 11:19 pm BST Jul 28,2009 | Reply

    • Gary,

      You’re right – that statement was very badly worded. I’ve now changed it to make it clear that it was the waiting sessions that were reporting that statement.

      There were no triggers (apart from the autonomous one I introduced briefly), no MVs, no row movement (which could have resulted in TX/4) nothing in the slightest new-fangled. (My best bet is a bug in the front end code relating to loss of identity in connection pools – but I’ve got no hard evidence for it at all).

      I’ll have to check for “lock table” or “alter table move” – I hadn’t thought of that (although there’s no reason why any part of the code should do either of those).

      Comment by Jonathan Lewis — August 3, 2009 @ 9:36 pm BST Aug 3,2009 | Reply

      • That front-end code doesn’t do a SELECT..FOR UPDATE does it ? Wouldn’t fire any triggers, but if the connection pool gets confused the app may have issued a SELECT…FOR UPDATE in one transaction and then tried to update the locked row in a separate transaction. Either the first transaction eventually gets rolled-back by some cleanup, or the confusion means it gets committed as part of another transaction.

        Comment by Gary — August 3, 2009 @ 11:04 pm BST Aug 3,2009 | Reply

        • Gary,

          There’s no “select for update”. The only statements in the library cache are:

          insert into tabX
          update tabX where pk =
          delete from tabX where pk =

          I think the underlying problem is one of confusion in the connection pooling mechanism – and the apparent resolution from changing the number of shared servers is simply a timing benefit.

          One point I haven’t mentioned, by the way, is that this is a benchmark-style test being driven by LoadRunner. It’s quite possible that there’s a flaw in the driving data, or in the way LoadRunner has been configured, that allows “business transactions” to interfere with each other.

          Comment by Jonathan Lewis — August 6, 2009 @ 4:33 pm BST Aug 6,2009

  10. Was it that same old issue with the missing index on the FK column in the child table? Update the PK or delete a row in the parent, and the whole child table must be locked and full-scanned to check the RI. In this (short) time the updates on the child have to wait. I’m not sure about the mode though.

    In such case the solution is to add an index on foreign_key_column in tableX.
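
    The fix described above would be along these lines (the index name is, of course, arbitrary):

    ```sql
    -- A b-tree index on the foreign key column lets Oracle avoid taking the
    -- full child-table (TM mode 4) lock when the parent key is updated or deleted.
    create index tablex_fk_idx on tableX (foreign_key_column);
    ```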

    “The statement responsible for the wait” – does it mean “causing the wait” or “waiting” ?

    Comment by Todor Botev — July 29, 2009 @ 5:07 am BST Jul 29,2009 | Reply

    • Todor,

      a) The parent/child locking problem shows up as a TM mode 4.
      b) See my reply to Gary about the poor choice of wording.

      Comment by Jonathan Lewis — August 3, 2009 @ 9:37 pm BST Aug 3,2009 | Reply

  11. [...] Lewis gets things rolling with his post, Empiricism. Jonathan asks his readers if an empirical approach to tuning would be appropriate for a particular [...]

    Pingback by Log Buffer #156: a Carnival of the Vanities for DBAs | Pythian Group Blog — July 31, 2009 @ 10:32 pm BST Jul 31,2009 | Reply

  12. Hi Jonathan

    I am facing a similar problem at a client site.
    There are around 500 sessions waiting for enq: TX – row lock contention.

    The SQL being fired from 550 different sessions is DELETE FROM APP_USER_FUNCTION WHERE DOMAIN_ID = :B2 AND USER_ID = :B1.

    Following were my observations

     SID EVENT                          P1          P2         SECONDS_IN_WAIT STATE
    ---- ------------------------------ ----------- ---------- --------------- ----------
    1016 enq: TX - row lock contention  1415053318  1835049    0               WAITING
    1017 enq: TX - row lock contention  1415053318  1835049    0               WAITING
    ...

    The table in question was APP_USER_FUNCTION.
    This table doesn’t have a primary key or a foreign key on it. Also, an index is present on the domain_id and user_id columns (a concatenated index).

    Can you tell me whether increasing the INITRANS value of this table from 1 to, say, 10 would help me in any way?

    Or is there any other way to tune this type of contention?

    Regards

    Amit bhube

    Comment by Amit Bhube — August 6, 2009 @ 12:01 pm BST Aug 6,2009 | Reply

    • If I understand correctly, you have 550 sessions all doing something like:

      DELETE FROM APP_USER_FUNCTION WHERE DOMAIN_ID = 1000000 AND USER_ID = 9999999

      then in this case, I think, increasing the value of INITRANS would not help you.
      When a session has to delete a row, it places a lock on it, and the other session(s) must wait until a commit or rollback.

      INITRANS can help you if you have a lot of inserts.
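
      For reference, INITRANS would be raised with something like this (a sketch – existing blocks only pick up the new value after a rebuild):

      ```sql
      alter table app_user_function initrans 10;
      alter table app_user_function move;   -- rebuild so existing blocks get the new ITL count
      -- any indexes invalidated by the move would then need a rebuild
      ```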

      To tune this… mumble… I would place a commit after the delete IF AND ONLY IF this doesn’t change the flow of the program.

      I find it a bit strange that 550 concurrent sessions issue the same delete.

      Comment by lascoltodelvenerdi — August 6, 2009 @ 2:20 pm BST Aug 6,2009 | Reply

    • Amit,

      Your enqueue wait is for a TX lock in mode 6 – which should be a simple data collision on a heap table.

      Your users are queueing because they want to update (delete) the same row (or rows) in the table.

      It looks like you need to find a way to stop doing this, or find a way of committing the change as quickly as possible.

      Comment by Jonathan Lewis — August 6, 2009 @ 4:07 pm BST Aug 6,2009 | Reply

  13. Hi Jonathan,

    Oracle releases a shared server process back to other sessions only when the database call has finished. Even if a session is hung waiting for a lock (or sleeping due to dbms_lock.sleep, waiting for I/O, etc.), the same server process will remain exclusively associated with that single session.

    This rules out issues like “not being able to get a process to complete the database call” as a process is always associated with the session during a call.

    If it had been a real hang (circular wait chain) then deadlock detection would have rolled someone’s call back… but if you didn’t see that it’s probably just that a lot of sessions ended up waiting (indirectly) for the same blocker, and this issue may have been amplified by other symptoms like CPU starvation due to many shared server process startups.

    And that leads to the question: how long were the stalls you saw? Seconds rather than milliseconds, minutes rather than seconds, or more?

    Or, as you said, it could have been some LoadRunner load profile timing issue…

    Comment by Tanel Poder — August 10, 2009 @ 3:24 pm BST Aug 10,2009 | Reply

    • Tanel,

      I didn’t phrase my comment very well – the “waiter that couldn’t get back” was probably waiting in the common queue for a server to become available, not “on a server waiting and then quitting” (although the client did have some “select for update … skip locked” statements – which have some very funny side effects on enqueue behaviour).

      If I recall correctly the wait times were of the order of a very small number of seconds. Tracking down the exact cause of the problem is a non-issue since it went away when the system was configured properly – and since the suggested change had the predicted result (as in “and this might be the cause of your locking problem”) I never got past the “hand-waving” explanation of why it was an appropriate solution and on to the exact mechanism.

      Comment by Jonathan Lewis — August 10, 2009 @ 9:48 pm BST Aug 10,2009 | Reply

      • Jonathan,

        I have a current issue with large waits for enq: TX – row lock contention in a RAC system. The primary wait “appears” to be on a table of line items: about 85 non-PK columns with a two-column PK, both NUMBER. The application code seems to do an insert of just the two primary key columns, then one more update (or two or three, based on dba_tab_modifications) of all 85 non-PK columns. Over 90% of the AWR report on both nodes shows enq: TX. The line item table also has eight indexes, and 20 columns in the table have FKs to small(er) code tables.

        My initial thought would be to reduce hot blocks. The primary key is, by the way, a monotonically increasing id plus a few line numbers, usually fewer than 10. I am assuming that the primary key is introducing the hot blocks, and that using either an id built from instance_id/sessionid/sequence_no, or a hash-clustered primary key, would reduce some of the contention.

        In this case, I believe there is a lot of contention on the table itself, which leads me to look at a single-table hash cluster design as a potential resolution.
        My problem is in properly designing the cluster hashkeys and size parameters if needed. I am not sure whether to size based on avg. row size / block size, or on the total number of rows ever expected to be stored in the table.

        The multiple total-row update pattern is alarming to me, but I am stuck with it for the moment.
        My current SQL is:

        create cluster quote_line_cluster
        ( quote_id         number,
          quote_line_no    number
        )
        single table
        hashkeys 1000;

        CREATE TABLE "QUOTE_LINE_CL"
          (
            "QUOTE_ID"                       NUMBER NOT NULL ENABLE,
            "QUOTE_LINE_NO"                  NUMBER NOT NULL ENABLE,....
        /*snipped column definitions--about 85*/
          )
        CLUSTER quote_line_cluster ( quote_id, quote_line_no );

        And for the sequence:

        create sequence rac_seq start with 1 cache 10000
        /
        create or replace package rac_seq_pkg
        is
            function get_seq return number;
        end;
        /
        create or replace package body rac_seq_pkg
        is
            function get_seq return number
            is
                l_id number;
            begin
                select to_number(
                           TO_CHAR(USERENV('INSTANCE')) ||
                           TO_CHAR(USERENV('SESSIONID')) ||
                           to_char(rac_seq.nextval)
                       )
                into   l_id
                from   dual;
                return l_id;
            end get_seq;
        end;
        /
        

        I think the sequence will help resolve the initial primary key insert hot spots. But will a hash cluster potentially resolve the table hot spots, or am I off track in pursuing this approach?

        Any advice would be appreciated.

        Thanks,

        Dana

        Comment by Dana — July 8, 2012 @ 9:37 am BST Jul 8,2012 | Reply

        • Dana,
          The first point I need to make is that this is not a forum – if you want help solving problems (and aren’t at the point where you need to hire a consultant) you can try the OTN database forum, or Oracle-L for informal aid.

          Having said that:
          a) You need to check whether the TX enqueues are mode 4 or mode 6; it could make a big difference to the diagnosis. Since you are presumably licensed for the performance pack you could query ASH (v$active_session_history, or the dba_hist_ equivalent) to check in more detail. If the waits are numerous and long-lasting you could also trace a session for a few minutes to see exactly what it is doing.

          b) TX enqueues are generally not related to “hot blocks”, they are about data competition, although there can be some odd index leaf block split effects. If you have any evidence that the problem relates to index blocks being delayed as they split and move about a key RAC-related detail is to check for the sequence CACHE size that you are using for your monotonically increasing value. If the problem relates to RAC and high-value leaf blocks then a large sequence cache size (in the order of 1,000 to 10,000) may be sufficient to resolve the problem.

          c) The phrase “Over 90% of the AWR report on both nodes shows enq: TX” is not very clear. If it means that 90% of your DB Time is spent on enq: TX waits, and it’s on this PK index, then why do you think there is “a lot of contention on the table” as well? If you do raise your problem on the OTN database forum, make sure you give some quantitative information rather than vague indicators – e.g. the Load Profile and Top Timed Events from both nodes. You’ll probably also be asked to supply some “Segments by …” sections as well.

          d) The single table hash cluster is very limited in its applicability. It is not appropriate for continuously growing data, only for data of a fixed (or strictly bounded) size. Since you’re talking about “line items” with a sequence-based key I suspect you have a data set that is, nominally, growing without bound.
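
          The ASH check in (a) might look something like this (a sketch, assuming the relevant licensing; the low-order two bytes of p1 hold the requested mode):

          ```sql
          select event,
                 bitand(p1, 65535) as requested_mode,  -- 4 or 6
                 count(*)          as samples
          from   v$active_session_history
          where  event like 'enq: TX%'
          group  by event, bitand(p1, 65535)
          order  by samples desc;
          ```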

          Comment by Jonathan Lewis — July 12, 2012 @ 7:28 pm BST Jul 12,2012

        • Thank you for your response. I apologize for inappropriate posting. I’ll pursue in an appropriate forum.

          Comment by Dana Day — July 12, 2012 @ 7:32 pm BST Jul 12,2012

