Oracle Scratchpad

January 25, 2018

gc buffer busy

Filed under: Uncategorized — Jonathan Lewis @ 2:12 pm GMT Jan 25,2018

I had to write this post because I can never remember which way round Oracle named the two versions of gc buffer busy when it split them. There are two scenarios to cover when my session wants my instance to acquire a global cache lock on a block and some other session is already trying to acquire that lock (or is holding it in an incompatible fashion):

  • The other session is in my instance
  • The other session is in a remote instance

One of these cases is reported as “gc buffer busy acquire”, the other as a “gc buffer busy release” – and I always have to check which is which. I think I usually get it right first time when I see it, but I always manage to convince myself that I might have got it wrong and end up searching the internet for Riyaj Shamsudeen’s blog posting about it.

The “release” is waiting for another instance to surrender the lock to my instance; the “acquire” is waiting for another session in my instance to finish acquiring the lock from the other instance.
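As a quick way of seeing the two events side by side on a live system, a sketch along these lines against gv$session would show which sessions are currently waiting on each flavour of the event (column names as in recent versions of Oracle; for these events p1 and p2 report the file and block being waited for):

```sql
rem
rem     Sketch only: list sessions currently waiting on either
rem     flavour of gc buffer busy, with the block they want.
rem

select
        inst_id, sid, event,
        p1      file#,
        p2      block#,
        seconds_in_wait
from
        gv$session
where
        event in ('gc buffer busy acquire', 'gc buffer busy release')
order by
        inst_id, event
;
```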

I decided to jot down this note so I didn’t have to keep searching for Riyaj’s post, and also because a little problem on OTN at the moment showed a couple of AWR reports with an unlikely combination of acquire (180,000,000) and release (2,000) waits.

If you’re wondering why this looks odd: if I’m waiting for an acquire, someone else in my instance must be waiting for a release. Obviously many sessions could be waiting for one release, and if acquirers time out very rapidly (though they’re not reported as doing so) then the ratio could get very high – but 90,000 acquires per release doesn’t look right.
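If you want to check whether your own system shows a similarly skewed ratio over time, a sketch along the following lines (assuming access to the AWR tables, licence permitting) would report the per-snapshot wait counts for the two events. Note that dba_hist_system_event holds cumulative counts, so the deltas have to be derived with lag():

```sql
rem
rem     Sketch only: acquire vs. release wait counts per AWR snapshot.
rem     total_waits is cumulative since instance startup, so we take
rem     the difference from the previous snapshot for each instance.
rem

select
        snap_id, instance_number, event_name,
        total_waits - lag(total_waits) over (
                partition by instance_number, event_name
                order by snap_id
        )       waits_in_period
from
        dba_hist_system_event
where
        event_name in ('gc buffer busy acquire', 'gc buffer busy release')
order by
        snap_id, instance_number, event_name
;
```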



  1. Riyaj is one of the most knowledgeable people I know of, especially in the RAC space.

    Comment by Amir — January 25, 2018 @ 2:18 pm GMT Jan 25,2018 | Reply

  2. I believe he could try disabling the result cache for a while and then verify, since the I/O profile shows reads much higher than writes. With 77G allocated to the buffer cache it is worth checking the I/O, since it is “direct path” reads that are running at 6.566M per second, with 7.4GB from the data files (probably LOBs of some data being loaded by Siebel services). In turn TM locks are being taken across the RAC instances, which is why we are seeing the implicit RAC events (CR and current block grants). Secondly, _gc_bypass_readers is false, as is _gc_read_mostly_locking; if most of the readers bypass the lock then grants would be downgraded until an explicit lock is required for writing (X), so he could set it to true and bounce the instance, which should reduce the grant enqueue messages across the RAC cluster at the interconnect level.

    As an initial step he could try that; it should provide some relief. Apologies if my understanding of the issue is wrong.

    Comment by Pavan Kumar N — January 26, 2018 @ 4:17 am GMT Jan 26,2018 | Reply

    • Pavan Kumar N,

      It would probably be better to comment on the ODC thread if you have an account there.

      My first thought, though, is that while your comments about the _gc_ parameters may be particularly relevant to some of the timing and counting anomalies (and may be a clue that a couple of “bugs fixed in” are not completely fixed), their effects may be a secondary issue due to the change in the workload caused by the online change to the level of the optimizer_features_enable parameter and a few bad execution plans now being chosen by the optimizer.

      The OP has now posted an AWR report from an interval when the older setting for OFE was in place, and the set of resource-intensive SQL statements has changed dramatically – though the number of lob reads and writes, direct path reads, and result cache actions has hardly changed.

      Comment by Jonathan Lewis — January 26, 2018 @ 9:03 am GMT Jan 26,2018 | Reply
