Oracle Scratchpad

April 30, 2007

System Statistics

Filed under: CBO,Statistics,System Stats,Tuning — Jonathan Lewis @ 8:21 pm BST Apr 30,2007

In chapter 2 of Cost Based Oracle – Fundamentals, I made the following comment about system statistics.

“… you could simply calibrate your hardware (or at least the I/O subsystem) for absolute performance figures …”

I recently received an email asking me how to ‘calibrate the hardware’, and to my surprise I discovered that I had never explained in the book what I meant.

The idea is simple – to my mind, the purpose of system statistics is to tell Oracle how the hardware can behave when under stress; and for most people the most critical hardware is the set of discs themselves.

So my theory is that we would like to tell Oracle about the load that the discs can bear when under pressure.  However, for various reasons – including SAN caches, read-ahead algorithms and the like – Oracle loses sight of the discs and can easily gather very misleading statistics, particularly for the mreadtim.

To calibrate the devices independently of Oracle, simply create a huge data file (a few hundred gigabytes would be realistic nowadays) and start doing a lot of random reads.

To start with, make the random reads the same size as your Oracle block size, using offsets from the start of file that are always a multiple of the Oracle block size. This is to emulate random indexed access.

Then pick your db_file_multiblock_read_count and change the read size to db_file_multiblock_read_count * db_block_size, still calculating the offsets as a multiple of the single block size. This is to emulate Oracle’s multiblock reads, but avoiding the confusion caused by read-ahead and predictive caching. We do this because the physical I/O still takes place and still impacts on all other I/O activity, even though one individual process may benefit from the read-ahead.
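To make the mechanics concrete, here is a hypothetical sketch of both tests: timed random reads at block-aligned offsets. It is written in Python for portability (Jonathan's own tools were C programs, as he notes in the comments below); the file path and sample count are assumptions, and the data file must be much larger than every cache between you and the discs for the averages to mean anything.

```python
import os
import random
import time

def avg_read_ms(path, block_size, blocks_per_read, samples=1000):
    """Average elapsed milliseconds per random read of
    blocks_per_read * block_size bytes, at offsets that are
    always a multiple of the single block size."""
    read_size = block_size * blocks_per_read
    fd = os.open(path, os.O_RDONLY)          # POSIX only (uses os.pread)
    try:
        file_size = os.fstat(fd).st_size
        max_block = (file_size - read_size) // block_size
        start = time.perf_counter()
        for _ in range(samples):
            # block-aligned random offset, as in the tests described above
            offset = random.randint(0, max_block) * block_size
            os.pread(fd, read_size, offset)
        return (time.perf_counter() - start) * 1000.0 / samples
    finally:
        os.close(fd)

# Hypothetical usage: 8KB blocks, db_file_multiblock_read_count = 16
# sreadtim_est = avg_read_ms('/u01/bigfile.dat', 8192, 1)     # single-block test
# mreadtim_est = avg_read_ms('/u01/bigfile.dat', 8192, 16)    # multiblock test
```

The first call emulates random indexed access, the second Oracle's multiblock reads; note that without direct I/O the filesystem cache will flatter both figures.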

Set the MBRC to your chosen db_file_multiblock_read_count, use the average read time from the first test as the sreadtim, and the average read time from the second test as the mreadtim.
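Once you have those two averages, the values can be written into the data dictionary with dbms_stats.set_system_stats. A sketch, using purely illustrative figures for a system where the chosen db_file_multiblock_read_count is 16:

```sql
begin
	dbms_stats.set_system_stats('MBRC', 16);        -- chosen db_file_multiblock_read_count
	dbms_stats.set_system_stats('SREADTIM', 5.2);   -- avg single-block read time (ms) from the first test
	dbms_stats.set_system_stats('MREADTIM', 11.8);  -- avg multiblock read time (ms) from the second test
end;
/
```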

Footnote: in 10g, if you don’t create system statistics Oracle “invents” them based on your db_file_multiblock_read_count and (usually notional) values for the seek time and transfer rate of your disks; and the default values used by Oracle seem to work quite well. If in doubt, setting MBRC to 8, sreadtim to 12 and mreadtim to 26 – the effective defaults for 10g – will probably be better than taking no action in 9i for most OLTP systems.

Warning: System statistics have a global impact on your database. You shouldn’t change them on a production system without going through your usual acceptance test cycle first.

[More on System Statistics]


  1. This is another very interesting post, and something I’d like to try out. How do you usually go about populating your huge tablespace? Do you use existing data in the database, or your own scripts which manufacture data?

    Comment by Chris Adkin — May 2, 2007 @ 8:58 am BST May 2,2007 | Reply

  2. uhm…

    I think that Jonathan is talking about “off db” testing e.g. a little C program or something like that.

    Or not?

    Comment by Antonio — May 2, 2007 @ 12:23 pm BST May 2,2007 | Reply

  3. You can use a utility like Filebench to help gather this kind of info for Solaris as well as define your IO subsystem performance characteristics.

    Easier than writing C code !

    Comment by Jeroen Wilms — May 3, 2007 @ 6:01 am BST May 3,2007 | Reply

  4. @Jeroen

    but it’s only for Solaris!

    I want to be…portable! ^_^

    Really, I hope that Jonathan can explain a bit more: is he talking about an external program (an O.S. program, not an Oracle one) or an internal one (a PL/SQL program, a particular query, etc.)?

    Jeroen, thank you for the uri, I’ll check it as soon as possible!

    Comment by Antonio — May 3, 2007 @ 12:03 pm BST May 3,2007 | Reply

  5. This one could help too.

    Comment by Oscar de la Torre — May 3, 2007 @ 2:26 pm BST May 3,2007 | Reply

  6. The intention was to do a completely “off-db” test, using an external program. I used to use a pair of very simple C programs I wrote about 10 years ago (and still available at this URL on my website) to do this, but since then IOZone has been published, and Oracle recently launched Orion to do something similar. (I haven’t tested either product).

    Comment by Jonathan Lewis — May 4, 2007 @ 5:59 am BST May 4,2007 | Reply

  7. While reading the I/O Calibration section of the Oracle 11g New Features Guide, I immediately remembered this blog entry.
    It looks like Oracle is listening to its users more and more. In 11g there is a new procedure, dbms_resource_manager.CALIBRATE_IO(), that does exactly what was described in this blog entry :)

    Best regards,

    Comment by Edgar Chupit — August 10, 2007 @ 2:30 pm BST Aug 10,2007 | Reply

  8. In reference to one of the earlier comments, as of April 2009, Filebench is now available for Linux and MacOSX as well as for Solaris.

    Comment by Andy Zitelli — April 11, 2009 @ 12:48 am BST Apr 11,2009 | Reply

  9. […] we set the environment – I’ve disabled system statistics (a.k.a. CPU costing) for the purposes of the demonstration to get a stable […]

    Pingback by Analytic Agony « Oracle Scratchpad — September 7, 2009 @ 5:33 pm BST Sep 7,2009 | Reply

  10. Hi Jonathan,
    I’ve found your blog to be invaluable when trying to decipher performance issues. Hopefully this is a quick answer for you. We have a large number of 10g databases and I’ve been checking the general state of their optimizer setup. For system statistics, most are using NoWorkload, but I’ve noticed something odd: some databases have unexpectedly low values for CpuSpeedNw even though they are running on identical servers. This is because they were migrated from older servers and sys stats were not regathered. I know that if I delete the old stats they will be automatically regathered on database restart. What I don’t know is the impact of doing this: how critical is the value of CpuSpeedNw to the optimizer?

    Many thanks!
    Mark Teehan

    Comment by mark teehan — January 11, 2010 @ 7:14 am BST Jan 11,2010 | Reply

    • Mark,

      Sorry about the late reply – I sometimes lose sight of questions if I’ve let a batch of comments accumulate.

      The answer to your question is that the CPUSPEEDNW doesn’t usually have a big impact on the total cost of a query – in most cases the majority of the cost comes from the I/O component. Unfortunately there are always exceptions, and sometimes even a very small change in total cost is enough to make the optimizer choose a different execution plan.

      As a quick check, I just executed dbms_stats.delete_system_stats on a database and the values for the three noworkload statistics all changed immediately – and bouncing the database didn’t make them change again. Then gathering system stats with the noworkload option changed only the CPUSPEEDNW. So I wouldn’t want to predict exactly what’s going to happen in your case if you delete the system stats and bounce the database.
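      For reference, that delete-and-regather sequence is just the following (a sketch, taking the defaults for every other parameter; run it only on a test system, since the stats change immediately):

```sql
begin
	dbms_stats.delete_system_stats;
	dbms_stats.gather_system_stats('noworkload');
end;
/
```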

      One thought – rather than coping with the possibility of two separate changes in the values, why don’t you just set the stats to values that match those from the most similar system that you’ve got? A piece of pl/sql like the following should be adequate (with serveroutput enabled):

      	declare
      		m_value		number;
      		m_status	varchar2(64);
      		m_start		date;
      		m_stop		date;
      	begin
      		dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEEDNW', m_value);
      		dbms_output.put_line('CPUSPEEDNW :' || m_value);
      		dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOSEEKTIM', m_value);
      		dbms_output.put_line('IOSEEKTIM  :' || m_value);
      		dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOTFRSPEED', m_value);
      		dbms_output.put_line('IOTFRSPEED :' || m_value);
      	end;
      	/

      Comment by Jonathan Lewis — January 25, 2010 @ 8:47 am BST Jan 25,2010 | Reply

  11. Hi Jonathan,
    Is there any plan invalidation process that occurs after the gathering of system statistics?


    Comment by Alon Principal — January 12, 2010 @ 12:08 pm BST Jan 12,2010 | Reply

  12. Alon,

    I’ve not seen one (although I haven’t checked 11g yet). I would be a little surprised if it happened because it would mean invalidating the entire cursor cache, which could have a dire effect on the library cache and shared pool latches.

    Since Oracle only invalidates dependent cursors over a period of time (5 hours, according to the hidden parameter _optimizer_invalidation_period) when you gather stats on an object, I don’t think it’s likely to do something this aggressive when you gather system stats.

    Comment by Jonathan Lewis — January 12, 2010 @ 1:30 pm BST Jan 12,2010 | Reply

  13. […] Lewis has several posts on his blog about System Stats. Here are links to a couple: JL on System Stats JL on System Stats 2 Be sure and read through the comments as there is some excellent information […]

    Pingback by Kerry Osborne’s Oracle Blog » Blog Archive Oracle 10gR2 Autotuned db_file_multiblock_read_count — January 14, 2010 @ 4:01 am BST Jan 14,2010 | Reply

  14. I have come to hear that system statistics do not help on Exadata, and hence the recommendation is not to gather them there. Is this because the optimizer does not have any statistics on the performance of smart scans, and hence would be misled into picking a less efficient execution plan (one which does not use a smart scan)? Any ideas?

    Comment by Mukundhan — August 9, 2012 @ 1:56 pm BST Aug 9,2012 | Reply
