Oracle Scratchpad

December 23, 2008

Library Cache

Filed under: Infrastructure,Troubleshooting — Jonathan Lewis @ 7:52 pm BST Dec 23,2008

A question came up on the OTN forum a couple of days ago about a problem with library cache latches and a rapid increase in sessions connected. 

The description allowed some room for interpretation, but I read it as saying there had been a one-off incident rather than a new, and ongoing, problem.  So I asked for a little clarification and gave an example of a possible cause for a random spike like this.

The oddity is one that can happen quite easily with Web-based applications that haven’t been configured to behave reasonably under stress. I first wrote about this issue nearly two years ago – in a little case study I pointed the OP to as part of my reply.

Be careful when you configure any type of web application server that uses connection or session pooling. Creating an Oracle connection, and then creating a session on top of it, can be very labour-intensive.  A session needs a big block of memory from the shared pool for the “session parameters”, and if it can’t find that memory the side effects can be nasty.  If (say) ten web application servers simultaneously decide that they all want 5 new sessions each, then the impact on the library cache and shared pool latches can be catastrophic.

A quick and dirty bit of damage limitation – check the setting of your shared_pool_reserved_size parameter, check the value for the sessions parameter, and increase the shared_pool_reserved_size by something like 26KB for every session. This might leave you enough memory in the “reserved” part of the shared pool for all the new sessions when the web application servers have a panic attack. [Ed 2nd Jan 2009: This strategy is no longer relevant if you are on versions 10.2 or later, see comment below from Tanel Poder].
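The arithmetic behind that suggestion can be sketched in a few lines (a hypothetical sizing helper, not an Oracle-supplied tool; the 26KB figure is the per-session “session parameters” allocation discussed in the comments below, and it varies by version and platform):

```python
# Rough sizing sketch for the "quick and dirty damage limitation" above.
# Assumes ~26KB of reserved shared pool per potential session, the upper
# end of the 20KB - 26KB range mentioned in the comments.

SESSION_PARAM_BYTES = 26 * 1024  # assumed upper-bound allocation per session

def suggested_reserved_increase(sessions: int) -> int:
    """Extra bytes to add to shared_pool_reserved_size for `sessions` sessions."""
    return sessions * SESSION_PARAM_BYTES

# Example: an instance with sessions = 500
print(suggested_reserved_increase(500))  # 13312000 bytes, i.e. roughly 12.7MB
```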

The other thing to do is ensure that the configuration of the web application servers minimises the rate at which new sessions might be needed – so configure it to hold as many sessions as you might need at normal peak levels, and ensure that sessions can only be created or destroyed one at a time, rather than in batches.
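One way to picture that last point: a pool that pre-creates its normal-peak session count up front and serialises any further growth, so the database never sees a batch of simultaneous connect requests. A minimal sketch (a toy pool for illustration, not any particular vendor's implementation):

```python
import threading

class SerialGrowthPool:
    """Toy connection pool: pre-creates `initial` sessions, and only ever
    grows by one session at a time, serialised by a lock."""

    def __init__(self, connect, initial, maximum):
        self._connect = connect             # callable that creates one session
        self._max = maximum
        self._grow_lock = threading.Lock()  # serialises all session creation
        self._idle = [connect() for _ in range(initial)]  # warm the pool up front
        self._total = initial

    def acquire(self):
        if self._idle:
            return self._idle.pop()
        # Grow one session at a time: concurrent callers queue on the lock
        # instead of hitting the shared pool with a batch of new sessions.
        with self._grow_lock:
            if self._total >= self._max:
                raise RuntimeError("pool exhausted")
            self._total += 1
            return self._connect()

    def release(self, session):
        self._idle.append(session)

# Example with a dummy "connect" that just hands out numbers:
counter = iter(range(1000))
pool = SerialGrowthPool(lambda: next(counter), initial=3, maximum=5)
s = pool.acquire()   # served from the pre-created sessions, no new connect
pool.release(s)
```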


  1. Jonathan,
    Merry Christmas and thank you for this thought- and action-provoking post. Some comments:
    2) Some DRCP white papers mention “connection storms”, which seem to refer to what you describe here. PHP Web apps are probably worse culprits than J2EE apps, since PHP runs in separate processes that can persist connections but not share them.
    3) You mention the work of creating a connection and a session; in addition, the first SQL statements submitted in the new session have to be soft parsed at the least, which increases the latching even more.
    4) Do you know how to configure any of the following “clients” to avoid creating lots of sessions at once: JDBC, J2EE, PHP/OCI8, DRCP?
    5) Looks like 11g has a listener-side solution with the Connection Rate Limiter :

    Comment by Stew Ashton — December 25, 2008 @ 11:51 am BST Dec 25,2008 | Reply

    • Stew,
      Thanks for the comment.
      1) Yes – it is shared_pool_reserved_size, and I have corrected the posting accordingly.
      2) The point about PHP is useful – and for anyone who doesn’t recognise the abbreviation that’s the 11g “database resident connection pooling” feature. Yet another layer introduced by Oracle to improve the scalability of “unfriendly” front-ends.
      3) True – and it will be interesting to see how (or if) DRCP can co-operate with the session_cached_cursors parameter to cut down the overheads of sessions that are always connecting and parsing their SQL for the first time.
      4) I rarely learn that stuff – if the symptoms show up, I call in the resident expert in the relevant tool, explain what Oracle is seeing, explain the sort of thing that’s happening, and then ask them if they know how to change the tool configuration to modify the behaviour.
      5) I’ll take a close look at that link the moment I have some spare time – at first glance I didn’t see anything telling me whether excess connections are queued to obey the limit or whether they get an error.

      Comment by Jonathan Lewis — December 29, 2008 @ 9:35 pm BST Dec 29,2008 | Reply

  2. Hi Jonathan,
    How did you arrive at the 26KB figure for each session?

    Satheesh Babu S

    Comment by Satheesh Babu S — December 25, 2008 @ 4:20 pm BST Dec 25,2008 | Reply

    • Satheesh Babu S.

      I have a query that I’ve run against x$ksmspr from time to time to check for anomalies and new features, and it usually shows a few allocations with a name like “session param v” (this varies a little with version) and the sizing information for this allocation comes out at values in the range of 20KB to 26KB, dependent on version and platform.

      In fact, having just written this note in answer to your question, I’ve just run the query against a small laptop system running, and that particular allocation no longer occurs. I’ll have to put something on my to-do list about tracking down how the information has been broken into chunks – but for recent versions of Oracle that very specific threat seems to have been eliminated.

      Comment by Jonathan Lewis — December 29, 2008 @ 10:12 pm BST Dec 29,2008 | Reply

  3. All,

    Just going off at a subtle tangent here, but still related to database abuse from web applications. You should always use connection pooling when accessing the database. However, some connection pools have min and max settings; if there is a rapid increase in connections from the pool, i.e. from min to max, not only can this lead to the problems described in Jonathan’s original post, but to connection storms as well. Therefore set the min and max settings for the pool to the same value.


    For J2EE / JDBC you should use a pooled JDBC data source which has been set up on the application server itself. In an ideal world, the database should only be hit with new connections when the application or web server(s) start up. In the J2EE/JDBC world, OCI8 will be used with something called a ‘thick’ JDBC driver; I know that WebSphere supports this, and I presume most other mainstream J2EE app servers support it too. I don’t know if you can have connection pooling with the thick driver, but I would hazard a guess that you can. Most people tend to stick with ‘thin’ pure Java based JDBC drivers. In 11g the ‘thick’ OCI based driver also gives you access to the client side results cache.

    My apologies if I have come across as having hijacked this thread . . . just my 2 cents


    Comment by Chris Adkin — December 30, 2008 @ 10:32 am BST Dec 30,2008 | Reply

    • Chris,
      Thanks for the comment – adding relevant information doesn’t count as hijacking; and I’d welcome any thoughts that others may have, particularly in relation to your response to Steve.

      Regarding your first point, though, I wouldn’t be quite so strict with the min and max setting – I’d set the min to allow for the largest load that I could reasonably expect, and set the max to “a little more”, to cater for anomalies and to give me some warning of the growth in popularity of my wonderful internet application.

      Comment by Jonathan Lewis — December 31, 2008 @ 11:24 am BST Dec 31,2008 | Reply

  4. Jonathan,

    The session parameter array is broken down into chunks of at most 2KB in 10.2.x; the allocations will therefore be smaller than _shared_pool_reserved_min_alloc, and so will be made from the generic shared pool area.

    Comment by Tanel Poder — December 30, 2008 @ 7:23 pm BST Dec 30,2008 | Reply

    • Tanel,

      Thanks – I just love it when someone else does the work for me ;)
      I’ve already found a couple of the bits – but I haven’t managed to account for anything like the whole 20 – 26KB yet.
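      Tanel’s arithmetic can be sketched roughly like this (the 4400-byte figure is the commonly reported default for _shared_pool_reserved_min_alloc; treat it as an assumption):

```python
# Sketch of Tanel's point: once the ~26KB session-parameter array is split
# into chunks of at most 2KB, every chunk falls below the reserved-pool
# threshold, so none of the allocations come from the reserved area.

CHUNK_BYTES = 2 * 1024          # max chunk size in 10.2.x, per Tanel
RESERVED_MIN_ALLOC = 4400       # assumed default _shared_pool_reserved_min_alloc
TOTAL_BYTES = 26 * 1024         # upper end of the session-parameter allocation

chunks = -(-TOTAL_BYTES // CHUNK_BYTES)        # ceiling division: 13 chunks
from_reserved = CHUNK_BYTES >= RESERVED_MIN_ALLOC

print(chunks, from_reserved)   # 13 chunks, none from the reserved pool
```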

      Comment by Jonathan Lewis — December 31, 2008 @ 11:32 am BST Dec 31,2008 | Reply

  5. Jonathan,
    Concerning the Connection Rate Limiter and “whether excess connections are queued to obey the limit or whether they get an error”, this little White Paper describes a test of 150 simultaneous connection requests, which appear to be queued and honored at a regular rate :
    If you mean me, I’m Stew not Steve ;)
    Yes, Oracle provides two JDBC drivers: the “thick” one uses OCI (not OCI8) and the “thin” one stands alone. Both provide connection pooling. “OCI8” is the name of a “thick” driver included in PHP.
    Your point is well taken that connection pools can be tamed by setting the min and max values the same (or nearly so, as Jonathan prefers).

    Comment by Stew Ashton — January 1, 2009 @ 5:28 pm BST Jan 1,2009 | Reply

  6. Stew,

    Thanks for the link.

    A thought occurs to me: is there any reason to stop people from installing an Oracle 11g home and running an Oracle 11g listener that only listens for connections for their existing pre-11g instances? That way they could get the benefit of the rate limiter BEFORE they upgrade their database to 11g.

    I can think of no reason why, in principle, this shouldn’t work. (Although I should point out that I haven’t yet read the manuals to see if the rate limiter only applies when the target database is an 11g database).

    Comment by Jonathan Lewis — January 2, 2009 @ 9:29 am BST Jan 2,2009 | Reply

  7. Jonathan,

    I was intrigued by your idea of using the 11g listener with an older database, so I did another search on OTN and found the following :

    “With Oracle Database 10g Release 2 Patchset 2, the listener has the capability to throttle the rate at which connections are spawned to the database.”

    So it appears to already be available, just not fully documented until 11g :)

    Comment by Stew Ashton — January 4, 2009 @ 12:52 am BST Jan 4,2009 | Reply

  8. Stew,

    Fantastic. I just love the way that blogging makes it possible for bits of information like this to get pulled together. (Must start a Wiki one day).


    Comment by Jonathan Lewis — January 4, 2009 @ 7:10 pm BST Jan 4,2009 | Reply

  9. Hi Jonathan,

    Thanks for the note. Though this is not totally related to the topic, it’s not completely away from it either.

    Regarding the Library Cache, the Oracle documentation says:

    “Another key statistic is the amount of free memory in the shared pool at peak times. The amount of free memory can be queried from V$SGASTAT, looking at the free memory for the shared pool. Optimally, free memory should be as low as possible, without causing any reloads on the system.”

    While we know that low memory in the shared pool causes lots of issues (reloads), would having excess memory in the shared pool cause any additional overhead for Oracle? In most of my customer base, if the shared pool is quite large they don’t bother to shrink it, as the general perception is that this should not cause any harm to performance (in fact, it may help if there is a load surge).

    Is there any performance metric for the library cache which would indicate the cost of maintaining this excessive free memory? Or do we just not need to bother much about this, as the overhead is quite minimal?


    Comment by jagatheesh — May 28, 2014 @ 5:44 pm BST May 28,2014 | Reply

    • jagatheesh,

      As a general principle I don’t think that there’s any way that the presence of unused memory in the shared pool could cause performance problems (unless you make your shared pool so big that the machine starts paging – but that’s a silly extreme). There are, perhaps, two special cases to consider. First: if you generate a lot of “literal string, use once” SQL then the larger the shared pool the more garbage it will accumulate, and monitoring scripts may become increasingly expensive to run; secondly, you may run into some odd bug that leaves you with a large number of child cursors for a single parent – e.g. anomalies of cursor-sharing (whether adaptive in 11g, or forced/similar) – again the large shared pool could allow you to accumulate more of these, which could result in an increase in library cache latch contention (waits and CPU).

      You could argue that if you’ve got a lot of free memory in the shared pool (and the various other pools) then it’s memory that might give a little performance benefit if moved to the buffer cache. Two points about that, though: first, in many cases modern systems tend to use “automatic shared memory management (ASMM)” with fixed lower limits for the db_cache_size and shared_pool_size, so that Oracle can move memory granules between the two if it seems to be a good idea; secondly, it is possible (though a little unlucky, and likely to be the result of a bug) to introduce performance problems by having an over-large buffer cache – there have been odd cases in the past, for example, where very large numbers of CR copies of a single block have appeared, leading to contention for the cache buffers chains latch.

      I have often seen systems, though, with hundreds of megabytes of free memory reported in various of the (non-cache) memory pools and no indication that the excess was the direct cause of a problem. (I usually point out the waste, though, and suggest moving most of the excess into the db cache as a low impact but easy change.)

      Comment by Jonathan Lewis — May 30, 2014 @ 6:32 am BST May 30,2014 | Reply
