Oracle Scratchpad

February 23, 2011

AWR Reports

Filed under: AWR,Oracle,Statspack,Troubleshooting — Jonathan Lewis @ 6:51 pm BST Feb 23,2011

A little thought for those who use OEM to generate AWR reports – if you generate the standard HTML form and then try posting it into (say) the OTN database forum when you want some help interpreting it, it won’t look very readable (even if you do remember to put the right tags before and after to reproduce it in fixed font).

So it’s a good idea to know how to produce the text format – which is what I do most of the time (especially since I often end up with a simple telnet or putty session into a client server). Take a look at $ORACLE_HOME/rdbms/admin for all the scripts starting with “awr” – there’s quite a lot of them, and the number keeps growing. Apart from finding a script that will give you the standard AWR report in a good old-fashioned text format, you may find that newer versions of Oracle include a few useful variations on the theme.

Here’s a list from the 11.2.0.2 home ($ORACLE_HOME/rdbms/admin) with (mostly) the one-line description from the start of each file. I’ve put the most useful ones in the first list:

awrrpt.sql      -- basic AWR report
awrsqrpt.sql    -- Standard SQL statement Report

awrddrpt.sql    -- Period diff on current instance

awrrpti.sql     -- Workload Repository Report Instance (RAC)
awrgrpt.sql     -- AWR Global Report (RAC)
awrgdrpt.sql    -- AWR Global Diff Report (RAC)

awrinfo.sql     -- Script to output general AWR information

For most people the awrrpt.sql and awrsqrpt.sql are likely to be sufficient, but the “difference between two periods” can be very useful – especially if you do things like regularly forcing an extra snapshot at the start and end of the overnight batch so that you can (when necessary) find the most significant differences in behaviour between the batch runs on two different nights.
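As a sketch of that snapshot-forcing idea – the package call is standard, but the scheduling around it and the snap_ids you end up with are entirely down to your own system (check dba_hist_snapshot):

```sql
-- Force an extra snapshot at the batch boundary so that diff reports
-- line up exactly with the start and end of the run:
execute dbms_workload_repository.create_snapshot;

-- Then identify the snapshots bracketing the two batch runs you want
-- to compare (these snap_ids are what you feed to awrddrpt.sql):
select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
order by snap_id;
```

(The “execute” is SQL*Plus shorthand – from other tools wrap the call in an anonymous pl/sql block.)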

If you get into the ‘RAC difference report’ you’ll need a very wide page – and very good eyesight !

There are also a lot of “infrastructure and support” bits – some of the “input” files give you some nice ideas about how you can write your own code to do little jobs like: “run the most recent AWR report automatically”:

awrblmig.sql    -- AWR Baseline Migrate
awrload.sql     -- AWR LOAD: load awr from dump file
awrextr.sql     -- AWR Extract

awrddinp.sql    -- Get inputs for diff report
awrddrpi.sql    -- Workload Repository Compare Periods Report

awrgdinp.sql    -- Get inputs for global diff reports
awrgdrpi.sql    -- Workload Repository Global Compare Periods Report

awrginp.sql     -- AWR Global Input
awrgrpti.sql    -- Workload Repository RAC (Global) Report

awrinpnm.sql    -- AWR INput NaMe
awrinput.sql    -- Get inputs for AWR report

awrsqrpi.sql    -- Workload Repository SQL Report Instance

I usually carry copies of the scripts with me when I’m troubleshooting in case I need them at client sites – sometimes I’m not allowed the privileges I really need to do rapid troubleshooting, but if I can persuade the DBA to give me execute privileges on package dbms_workload_repository and select privileges on a couple of the necessary tables and views then I can run the reports from an otherwise “minimal” account.
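As an illustration of what that minimal set of privileges allows – the dbid, instance number and snap_ids below are placeholders, and you’d pick the real values from v$database, v$instance and dba_hist_snapshot:

```sql
-- Pull a text-format AWR report directly from the package, without
-- needing the awrrpt.sql script or access to $ORACLE_HOME:
select output
from   table(
         dbms_workload_repository.awr_report_text(
           1234567890,   -- dbid            (placeholder)
           1,            -- instance number (placeholder)
           1234,         -- begin snap_id   (placeholder)
           1235          -- end snap_id     (placeholder)
         )
       );
```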

There are also a couple of deceptively named files that you might miss in 11.2:

spawrrac.sql  -- Server Performance AWR RAC report
spawrio.sql  -- AWR IO Intensity Report
spadvrpt.sql -- Streams Performance ADVisor RePorT

Although the initial letters in the names suggest that these files might fall in with statspack, they actually report from the AWR tables – however the first one (spawrrac.sql) was only a temporary measure, and prints out the warning message:

This script will be deprecated.  The official release of the Global AWR report is awrgrpt.sql

So if you’re using the spawrrac.sql – stop it.

20 Comments »

  1. It’s good to know those awr report scripts for RAC.

    Comment by sync — February 25, 2011 @ 4:51 am BST Feb 25,2011 | Reply

  2. Remember everyone, that using AWR – even the command line scripts – requires EE + additional Tuning Pack license for your database.
    Statspack is still alive and well worth the money!

    Comment by Sam — February 26, 2011 @ 7:14 am BST Feb 26,2011 | Reply

  3. hey , cool awr report script collection!

    Comment by Maclean Liu — February 26, 2011 @ 6:24 pm BST Feb 26,2011 | Reply

  4. […] couple of weeks ago I listed a number of scripts from 11.2.0.2 relating to AWR reports – it seems only sensible to publish a corresponding list […]

    Pingback by Statspack Reports « Oracle Scratchpad — June 15, 2012 @ 7:08 am BST Jun 15,2012 | Reply

  5. […] 1-statspack examples. 2-Analyaz statspack. 3-Active Session History. 4-Statspack Article. 5-About Statspack. 6-Using Statspack. 7-AWR Reports […]

    Pingback by Understand AWR Report « AbbasGol's Blog — October 10, 2012 @ 6:42 am BST Oct 10,2012 | Reply

  6. […] AWR Reports […]

    Pingback by Viewing Figures « Oracle Scratchpad — December 10, 2012 @ 10:52 am BST Dec 10,2012 | Reply

  7. Can anyone recommend good references for AWR and statspack output? I don’t really mean interpretation of them – this is somewhere between an art and a science and there are many articles out there – I really mean one that simply and clearly defines what all the contents are. For example, I recently found myself with a colleague looking over the “Tablespace IO Stats -> ordered by IOs” section and we both realised we were not 100% sure what “Av Buf Wt(ms)” signified – is this figure averaged over reads and writes? I think the report is full of little subtleties like this which seem obvious until someone questions you closely about what they mean! But shurely (sic) these are fully documented somewhere?

    Comment by cam — March 7, 2013 @ 10:01 am BST Mar 7,2013 | Reply

    • Hi,
      As AWR is coming from Statspack, I usually check in statspack scripts how the numbers are calculated.
      For example:
      – Searching for ‘Av Buf Wt(ms)’ in sprepins.sql shows that it is calculated from time / wait_count from stats$filestatxs
      – Searching in spcusr.sql shows that they come from x$kcbfwait
      – x$kcbfwait sums the waits you see in v$waitstat
      So I can guess that ‘Av Buf Wt(ms)’ is the average wait time for all wait events (the ‘Buffer Waits’ column) on the file blocks. Wt is for ‘Wait’, not for ‘Write’, and it concerns read or write events.
      Regards,
      Franck.

      Comment by Franck Pachot — March 9, 2013 @ 9:53 pm BST Mar 9,2013 | Reply

    • Cam,

      I don’t think there are any good references for either that give the detail we would both like to see.
      As Franck has pointed out, the best we can do is look at statspack – where we have the pl/sql source – when we want to try and work backwards to the printed figures. Even then, as with your “Av Buf Wt(ms)”, you also have to know that stats$filestatxs is one of the views created by statspack and that the x$kcbfwait it references (which is not commonly documented) is the summary of buffer busy waits by file number – making the figures you see the average buffer busy wait time by file.

      Unfortunately, dependent on version of Oracle, you have to guess whether the buffer busy waits data (x$kcbwait and x$kcbfwait) includes or excludes the “read by other session” special case that was isolated in 10g; and then you have to cross check with the “Segments by Buffer Busy Waits” from v$segstat. Nobody does this type of analysis in a vacuum, so no-one has a list of the thousands of cross-referenced details that would be necessary to make it possible to automate the interpretation of the numbers.

      Comment by Jonathan Lewis — March 10, 2013 @ 11:49 am BST Mar 10,2013 | Reply

      • Thanks for the responses – what a shame that so much useful information is collected, summarised and wrapped in useful reports but that these reports are inadequately documented!

        Comment by macartain — March 11, 2013 @ 1:44 pm BST Mar 11,2013 | Reply

  8. […] AWR Reports […]

    Pingback by Viewing Figures | Oracle Scratchpad — October 8, 2013 @ 5:34 pm BST Oct 8,2013 | Reply

  9. […] a555.net(jeff.a1):/app/support/SLOB: ./slob2-rac-analyze.sh rac_awr_12jul2013/awr.20.032/awr.20.032.txt > slob.csv […] Jonathan Lewis has a nice article covering the different AWR Reports. […]

    Pingback by RAC aware SLOB2 analyze script (Flash DBA) – Oramoss Ltd. — May 2, 2016 @ 9:20 pm BST May 2,2016 | Reply

  10. Hi Jonathan,

    First of all thanks for always sharing good information!!! :)

    I have a question about AWR reports, and I’d appreciate your opinion.
    When analysing peak values (IOPS/throughput, for example) in order to size a new database server (or, in my situation, to size the best cloud configuration in AWS – we currently have an Exadata X4-2 quarter rack but need to migrate to AWS), is the default AWR report (awrrpt.sql) the best way to get the peak values?

    I’m asking because, for a specific snap_id (apparently the IOPS/throughput peak), the “physical read total IO requests” and “physical read total bytes” values (reads only, to keep this short) in the “Other Instance Activity Stats” section of the AWR report are the values from the AVERAGE column of the dba_hist_sysmetric_summary view.

    If I query dba_hist_sysmetric_summary for the same snap_id used in the AWR report, the MAXVAL column shows me a much higher value than the AWR report does.

    AWR report (“Other Instance Activity Stats”, “per Second” column):

    physical read total IO requests = 3,338
    physical read total bytes       = 3,439,127,524.47

    dba_hist_sysmetric_summary view, AVERAGE and MAXVAL columns:

    physical read total IO requests = 3346.80963095357    83169.0579227696
    physical read total bytes       = 3447972895.57073    86387923055.1265

    Both from the same snap_id (RAC, but only instance 1 is in use).
    So, for sizing goals, do you agree that the AWR report alone is not enough?
    The most important question: can I trust this MAXVAL from dba_hist_sysmetric_summary for peak values in my sizing?

    Thank you very much!!

    Regards

    Comment by Wll — March 28, 2018 @ 3:44 pm BST Mar 28,2018 | Reply

    • Wll,

      It’s definitely not sensible to assume that the average figures from an AWR report are a good guideline to peak requirements, and the sysmetric summaries help to improve the picture. You do need to be a little cautious, though, about jumping to the max value from the sysmetric, because by itself that still doesn’t give you a complete picture – what if the maximum persisted for (say) 5 seconds in a 1 hour window: would you want to increase your capacity by a factor of 3 because of a little blip that might stretch to 15 or 20 seconds if you provisioned a smaller machine ?

      So do look at the sysmetric figures, but look at smaller granularities – you can get 1 minute and 5 minute metrics for intervals where the one hour figures look threatening – to get a better picture of how significant the max values are. (And the standard deviation can be helpful as a guide to variation in load as well; and don’t forget that a requirement for excess throughput or I/O counts might be resolved by scheduling rather than extra hardware.)
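      For example, a sketch of the sort of query I mean – the metric names are the standard v$sysmetric names, and the snap range is a placeholder:

```sql
-- Compare average, peak and spread for the read I/O metrics
-- across a range of snapshots:
select snap_id, metric_name,
       round(average)            avg_per_sec,
       round(maxval)             max_per_sec,
       round(standard_deviation) stddev_per_sec
from   dba_hist_sysmetric_summary
where  metric_name in ('Physical Read Total IO Requests Per Sec',
                       'Physical Read Total Bytes Per Sec')
and    snap_id between 1234 and 1240   -- placeholder range
order by snap_id, metric_name;
```

      A large stddev relative to the average is the warning sign that the hourly average is hiding something worth a closer look at the finer-grained metrics.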

      Comment by Jonathan Lewis — April 5, 2018 @ 7:01 pm BST Apr 5,2018 | Reply



Comments and related questions are welcome.
