Some time ago, I wrote about resizing the SGA and the problems of automatic SGA management. In the follow-up, one of the suggestions was to set a minimum value for the db cache and the shared pool to avoid the risk of the memory management thrashing memory back and forth between the two areas.
Since then, I’ve taken to checking the frequency of SGA resizing operations whenever I visit a site that uses ASMM, and using the results as a guideline to suitable minimum sizes for the different memory areas – or even as a threat indicator that ASMM should be disabled. Based on the results below, for example, I’d think about setting the shared pool size to a minimum of 800M.
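For reference, the change is just a sketch of an explicit floor; it assumes ASMM is enabled (sga_target &gt; 0), in which case a non-zero shared_pool_size acts as a lower bound rather than a fixed size, and the 800M figure is simply the value suggested by the output below:

```sql
rem     Sketch only: with sga_target > 0 an explicit shared_pool_size
rem     is treated as a minimum - ASMM can still grow the pool above
rem     this value, but will not shrink it below.
alter system set shared_pool_size = 800M scope = both;
```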
rem
rem     Script:         sga_resize_ops.sql
rem     Author:         Jonathan Lewis
rem     Dated:          December 2006
rem     Purpose:
rem
rem     Last tested
rem             11.2.0.4
rem             11.1.0.7
rem             10.2.0.3
rem     Not tested
rem     Not relevant
rem             9.2.0.8
rem             8.1.7.4
rem
rem     Notes:
rem     Quick and Dirty for 10g
rem     SGA resizing operations
rem
rem     Watch out for rapid fluctuations
rem
rem     In 10.2 check also v$sgastat for KGH: NO ACCESS
rem     which is memory in a shared pool (or other) granule
rem     slowly being acquired by the db cache.
rem
rem     Will have to write a new version for 11g which
rem     uses v$memory_resize_ops to deal with PGA/SGA
rem     interaction (and ASM) as well
rem
rem     Look at v$sga_dynamic_components (and in 11g the
rem     equivalent v$memory_dynamic_components) for summary
rem     view of granule sizes (which implies a future option
rem     for varying granule sizes).
rem
rem     See also: V$SGA_DYNAMIC_FREE_MEMORY (x$ksge)

start setenv
set timing off

column component        format a25
column parameter        format a30
column initial_size     format 99,999,999,999
column target_size      format 99,999,999,999
column final_size       format 99,999,999,999
column started_at       format a15
column ended_at         format a15

set linesize 90
set pagesize 60

spool sga_resize_ops

prompt  ============
prompt  Full History
prompt  ============

select
        component, oper_type, oper_mode, parameter,
        initial_size, target_size, final_size, status,
        to_char(start_time,'dd-mon hh24:mi:ss')  started_at,
        to_char(end_time,'dd-mon hh24:mi:ss')    ended_at
from
        v$sga_resize_ops
where
        initial_size != 0
or      target_size  != 0
or      final_size   != 0
order by
        start_time
;

prompt  ==============
prompt  Since midnight
prompt  ==============

set linesize 120
set pagesize 60
break on timed_at skip 1

column oper_type format a12
column component format a25
column parameter format a22
column timed_at  format a10

select
        to_char(start_time,'hh24:mi:ss')  timed_at,
        oper_type, component, parameter, oper_mode,
        initial_size, final_size
from
        v$sga_resize_ops
where
        start_time > trunc(sysdate)
order by
        start_time, component
;

spool off
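Before running the full report it can be handy to get a quick feel for how busy the memory broker has been. The following aggregate is a sketch I've added for this post, using only columns from v$sga_resize_ops; a high operation count for a single component is the warning sign:

```sql
select
        component, oper_type, oper_mode,
        count(*)                           ops,
        to_char(min(start_time),'dd-mon')  first_seen,
        to_char(max(start_time),'dd-mon')  last_seen
from
        v$sga_resize_ops
group by
        component, oper_type, oper_mode
order by
        count(*) desc
;
```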
Here’s an extract from the output I saw at one site. As you can see, there are some fairly busy moments during the day; and one really bizarre storm of activity around 2:20 in the morning. I don’t know what was going on at 2:20, or again at 14:06, but there were a couple of spikes of waits for “cursor: pin S wait on X” at about the same time.
TIMED_AT OPER_TYPE  COMPONENT            PARAMETER            OPER_MODE INITIAL_SIZE FINAL_SIZE
-------- ---------- -------------------- -------------------- --------- ------------ ----------
01:41:48 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1358954496 1392508928
         SHRINK     shared pool          shared_pool_size     DEFERRED     738197504  704643072

01:42:54 SHRINK     DEFAULT buffer cache db_cache_size        DEFERRED    1224736768 1191182336
         GROW       shared pool          shared_pool_size     DEFERRED     872415232  905969664

01:44:18 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1392508928 1426063360
         SHRINK     shared pool          shared_pool_size     DEFERRED     704643072  671088640

01:44:24 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1191182336 1224736768
         SHRINK     shared pool          shared_pool_size     DEFERRED     905969664  872415232

01:45:24 SHRINK     DEFAULT buffer cache db_cache_size        DEFERRED    1224736768 1191182336
         GROW       shared pool          shared_pool_size     DEFERRED     872415232  905969664

01:48:54 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1191182336 1224736768
         SHRINK     shared pool          shared_pool_size     DEFERRED     905969664  872415232

02:20:25 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1291845632 1325400064
         SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1325400064 1308622848
         SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1308622848 1291845632
         SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1291845632 1275068416
         SHRINK     shared pool          shared_pool_size     DEFERRED     805306368  771751936
         GROW       shared pool          shared_pool_size     IMMEDIATE    771751936  788529152
         GROW       shared pool          shared_pool_size     IMMEDIATE    788529152  805306368
         GROW       shared pool          shared_pool_size     IMMEDIATE    805306368  822083584

14:01:22 SHRINK     DEFAULT buffer cache db_cache_size        DEFERRED    1241513984 1207959552
         GROW       shared pool          shared_pool_size     DEFERRED     855638016  889192448

14:02:52 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1207959552 1241513984
         SHRINK     shared pool          shared_pool_size     DEFERRED     889192448  855638016

14:04:22 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1241513984 1275068416
         SHRINK     shared pool          shared_pool_size     DEFERRED     855638016  822083584

14:06:52 SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1308622848 1291845632
         GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1275068416 1308622848
         SHRINK     shared pool          shared_pool_size     DEFERRED     822083584  788529152
         GROW       shared pool          shared_pool_size     IMMEDIATE    788529152  805306368

14:06:55 SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1291845632 1275068416
         GROW       shared pool          shared_pool_size     IMMEDIATE    805306368  822083584

14:06:58 SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1275068416 1258291200
         GROW       shared pool          shared_pool_size     IMMEDIATE    822083584  838860800

14:07:01 SHRINK     DEFAULT buffer cache db_cache_size        IMMEDIATE   1258291200 1241513984
         GROW       shared pool          shared_pool_size     IMMEDIATE    838860800  855638016

14:08:53 GROW       DEFAULT buffer cache db_cache_size        DEFERRED    1241513984 1275068416
         SHRINK     shared pool          shared_pool_size     DEFERRED     855638016  822083584

14:09:23 SHRINK     DEFAULT buffer cache db_cache_size        DEFERRED    1275068416 1241513984
         GROW       shared pool          shared_pool_size     DEFERRED     822083584  855638016
Update Sept 2009: If you think that you are suffering problems due to excess resizing operations, check Metalink for document id 7189722.8 – which describes this behaviour as a bug “confirmed as affecting 10.2.0.4”. The bug is fixed in 10.2.0.5 and 11.2, but there is a workaround offered through a hidden parameter:
alter system set "_memory_broker_stat_interval" = 999;
The interval is set in seconds, and the default is 30 seconds. I doubt if there’s any specific significance to the value 999 used in the demonstration code from the bug note.
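If you want to see the current setting before changing it, the usual (unsupported) query against the x$ structures works when connected as sysdba. This is a sketch only; the x$ tables are undocumented internals and can change between versions:

```sql
rem     Sketch: current value of a hidden parameter.
rem     x$ksppi holds the parameter names, x$ksppcv the current values.
select
        i.ksppinm       name,
        cv.ksppstvl     value,
        cv.ksppstdf     is_default
from
        x$ksppi         i,
        x$ksppcv        cv
where
        cv.indx = i.indx
and     i.ksppinm = '_memory_broker_stat_interval'
;
```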
On the downside – bug 6528336 was confirmed in 10.2.0.3 and reported fixed in 10.2.0.4 – so you may still want to be a little cautious about ASMM even in the latest releases.
…a guideline to… what?
Comment by joel garry — April 17, 2007 @ 3:38 am BST Apr 17,2007 |
Joel, Thanks for the alert. I can’t think how that bit just above the code went missing; thumb trouble with cut-n-paste I guess. I’ve put the missing bit back in.
Comment by Jonathan Lewis — April 17, 2007 @ 6:27 am BST Apr 17,2007 |
[…] 17th, 2007 · No Comments Reading the latest post on Jonathan Lewis’s excellent blog, I was once again struck by Oracle’s mental acronym […]
Pingback by Oracle MAS - Mental Acronym Strategy « OraStory — April 17, 2007 @ 9:45 am BST Apr 17,2007 |
Jonathan, how did you determine that 800m minimum for db_cache_size is enough? Or will that be the minimum for shared_pool_size?
Comment by Yas — April 17, 2007 @ 7:23 pm BST Apr 17,2007 |
Yas – I’m not doing too well with this post; that should have been the shared pool (I’ve now corrected it in the text), based on the way that, around 2:00 am and 2:00 pm, the db cache tries to shrink the shared pool and meets prompt resistance and push-back.
Comment by Jonathan Lewis — April 19, 2007 @ 9:19 pm BST Apr 19,2007 |
[…] Update (2-May-2007) Jonathan Lewis recently provided some updated information on this subject on his blog – SGA Resizing. […]
Pingback by Oracle10g - Automatic SGA Sizing « Mike R’s Blog — May 2, 2007 @ 8:22 pm BST May 2,2007 |
This is exactly what happens in our load test environment. We see “cursor: pin S wait on X” when sga_target is set to a non-zero value and shared_pool_size and large_pool_size are set to zero.
We had to set the large pool and shared pool to around 1G to make this wait event disappear. (Testing 1,000 users in a shared server configuration.)
Comment by reega — February 1, 2008 @ 3:22 pm GMT Feb 1,2008 |
Are you now in favor of enabling ASMM and setting minimum values for shared pool and buffer cache or are you in favor of disabling ASMM per https://jonathanlewis.wordpress.com/2006/12/04/resizing-the-sga/
Our highly active 3 node RAC OLTP system experienced a severe 3 minute slowdown recently and we’re using ASMM. Oracle is suggesting to just set minimums for shared pool and buffer cache, but I’m wondering if we should just disable ASMM.
Comment by Steve Martin — May 13, 2009 @ 1:46 pm BST May 13,2009 |
Steve,
I still have a fairly strong bias against using the automatic SGA management – especially with RAC; but if you had a clear need for varying amounts of memory over a long time span (a day/night cycle, say) I would be prepared to use it but set some lower limits for the key areas to avoid thrashing near the boundary values.
In your case, I’d start by checking v$sga_resize_ops to see if there were anything strange around the time the slowdown occurred; and I’d check the history (AWR or statspack depending which you use) for the SGA memory allocation to see if there were regular significant changes in size between the start and end of snapshots. The other thing I’d look out for in the SGA memory usage report is a large amount of memory marked “KGH NO ACCESS” – which is shared pool memory being used as buffer cache.
If you see any clues that the buffer cache or shared pool can put extreme demands on each other, I’d be more inclined to fix the memory size.
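Checking for that memory is straightforward – in 10.2 the chunks show up under their own name in v$sgastat, so a quick sketch of the check would be:

```sql
rem     Sketch: report shared pool granules being consumed by the
rem     buffer cache - a large figure here suggests memory thrashing.
select
        pool, name, round(bytes/1048576) mb
from
        v$sgastat
where
        name = 'KGH: NO ACCESS'
;
```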
Comment by Jonathan Lewis — May 13, 2009 @ 7:26 pm BST May 13,2009 |
[…] too low, leading to possible interference problems as new servers have to be started up. Checking v$sga_resize_ops for automatic resizing of the large pool could also give you some clues about how appropriate your […]
Pingback by Shared Server « Oracle Scratchpad — July 27, 2009 @ 5:09 pm BST Jul 27,2009 |
[…] Resizing the SGA Filed under: Infrastructure, Troubleshooting — Jonathan Lewis @ 6:32 pm UTC Dec 4,2006 [More on resizing the SGA] […]
Pingback by Resizing the SGA « Oracle Scratchpad — September 11, 2009 @ 3:27 pm BST Sep 11,2009 |
Bug #8879595 describes a case where setting “_memory_broker_stat_interval”=999 didn’t help, with KGH: NO ACCESS reaching 2GB!
BTW, do you know what ‘DEFERRED’ in OPER_MODE really means ?
Hemant K Chitale
Comment by Hemant K Chitale — September 16, 2009 @ 2:28 am BST Sep 16,2009 |
Deferred means that the shared pool granule’s transition to a buffer cache one was deferred, not really done. The granule becomes a composite granule, all the memory belonging to shared pool heap, but the shared pool heap manager gives whatever memory it can from it to buffer cache manager as KGH: NO ACCESS chunks…
Comment by Tanel Poder — September 18, 2009 @ 10:18 am BST Sep 18,2009 |
Tanel,
Thanks for that explanation. The example also has a deferred shrink of the db cache which, by symmetry, suggests that we are releasing space in a db cache granule as buffers get written or become unpinned.
Do you know if Oracle has a different labelling mechanism when going the other way, or does it immediately label these granules as sga heap granules with some KGH: NO ACCESS space ?
Comment by Jonathan Lewis — September 19, 2009 @ 3:49 pm BST Sep 19,2009 |
[…] stats: a script to report the start time and run time of the automatic stats collection job SGA Resizing: a script to report resizing operations (v$sga_resize_ops) What have I done: a script to report the […]
Pingback by Simple scripts « Oracle Scratchpad — February 25, 2010 @ 5:56 pm GMT Feb 25,2010 |
Hi Jonathan,
So you are effectively saying that the ‘TARGET’ and ‘FINAL’ columns are what Oracle is aiming for when the shared pool shrinks with a DEFERRED operation mode, but not the actual value the shared pool could ultimately be set to.
1) Then why is this pseudo ‘TARGET’ value used again as the ‘INITIAL’ value the next time the shared pool is asked to GROW or SHRINK?
Doesn’t this make the ‘TARGET’ column of v$sga_resize_ops and the ‘CURRENT_SIZE’ column of v$sga_dynamic_components a bit misleading, since they don’t take account of KGH: NO ACCESS?
2) Now suppose the shared pool GROWs IMMEDIATE by one granule after a DEFERRED SHRINK of the shared pool by one granule.
Does this ensure that the KGH: NO ACCESS memory given away to the db buffer cache by the earlier DEFERRED SHRINK is reclaimed by the shared pool, so that KGH: NO ACCESS in v$sgastat shows 0?
Comment by Soumik — May 16, 2012 @ 1:27 pm BST May 16,2012 |
Soumik,
I haven’t said (or implied) anything about a ‘TARGET’ column. If you mean the target_size column in the v$sga_resize_ops view, the description in the manuals says that this is what you asked for (adjusted to allow for the appropriate granule size), while the final_size is what you finally get. For example, in one test I asked for a 5MB shared_pool_size, which Oracle rounded up to one granule – but there was a minimum memory requirement for my setup, so the final size didn’t go below 96M.
As far as I am aware, it’s the previous final_size that becomes the current initial_size – and yes, there are no doubt cases where columns like current_size, or initial_size, or final_size are misleading because they don’t allow for the amount of memory which hasn’t yet been released from one function to the other.
To your question (2) – the only valid answer I can give is: I don’t know, I didn’t write the code or the specification. It might be possible to engineer a few tests to check, but that could be a little difficult. We might hope that if there was a partially converted granule on its way to becoming a buffer cache granule, and you asked Oracle to increase the shared pool then it would be that granule that was used to grow the shared pool – but there may be circumstances where such behaviour is not possible.
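As a footnote to the rounding behaviour: the granule size that requests get rounded to is reported in v$sga_dynamic_components, so a quick sanity check of the current, minimum and granule sizes is just a sketch like this:

```sql
rem     Sketch: current size, minimum size and granule size for each
rem     dynamic SGA component (zero-sized components excluded).
select
        component, current_size, min_size, granule_size
from
        v$sga_dynamic_components
where
        current_size != 0
;
```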
Comment by Jonathan Lewis — May 24, 2012 @ 12:45 pm BST May 24,2012 |
[…] I thought of looking at v$memory_resize_ops and found that the automatic memory management had switched a lot of memory to the PGA, allowing […]
Pingback by Interval Partition Problem | Oracle Scratchpad — February 21, 2018 @ 8:40 am GMT Feb 21,2018 |