Starting from a comment on an old statspack/AWR page, with a near-simultaneous thread appearing on OTN (do read both), here’s a quick summary of getting statspack onto 12c with containers. (For non-container databases it’s a standard install.)
July 4, 2013
March 29, 2013
From time to time I’ve looked at an AWR report and pointed out to the owner the difference in workload visible in the “SQL ordered by” sections of the report when they compare the summary figure with the sum of the work done by the individual statements. Often the summary will state that the SQL captured in the interval represents a percentage of the total workload somewhere in the high 80s to mid 90s; sometimes, though, you might see a statement that the capture represents a really low percentage, perhaps in the 30s or 40s.
You have to be a little sensible about interpreting these figures, of course – at one extreme it’s easy to double-count the cost of SQL inside PL/SQL, at the other you may notice that every single statement reported does about the same amount of work so you can’t extrapolate from a pattern to decide how significant a low percentage might be. Nevertheless I have seen examples of AWR reports where I’ve felt justified in suggesting that at some point in the interval some SQL has appeared, worked very hard, and disappeared from the library cache before the AWR managed to capture it.
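As a sketch of the cross-check described above, with entirely made-up numbers: sum the per-statement figures from the “SQL ordered by” section and compare them with the interval total to see how much work went uncaptured.

```python
# Hypothetical figures: per-statement elapsed seconds from the
# "SQL ordered by Elapsed Time" section, plus the interval's total
# DB time. None of these numbers come from a real report.
captured_statements = [412.0, 388.5, 190.2, 55.3]   # elapsed seconds (made up)
total_db_time = 2600.0                              # seconds (made up)

captured = sum(captured_statements)
captured_pct = 100.0 * captured / total_db_time
uncaptured_seconds = total_db_time - captured

print(f"captured SQL accounts for {captured_pct:.1f}% of DB time")
print(f"unaccounted-for work: {uncaptured_seconds:.1f} seconds")
```

If the captured percentage is low (and the per-statement pattern doesn’t explain it), the missing seconds may belong to SQL that worked hard and was flushed from the library cache before the snapshot could capture it.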
Now, from Nigel Noble, comes another explanation for why the AWR report might be hiding expensive SQL – a bug, which doesn’t get fixed until 12.2 (although there are backports in hand).
February 18, 2013
Here’s a little follow-on from Friday’s posting. I’ll start it off as a quiz, and follow up tomorrow with an explanation of the results (though someone will probably have given the correct solution by then anyway).
I have a simple heap table t1(id number(6,0), n1 number, v1 varchar2(10), padding varchar2(100)). The primary key is the id column, and the table holds 3,000 rows where id takes the values from 1 to 3,000. There are no other indexes. (I’d show you the code, but I don’t want to make it too easy to run; I want you to try to work it out in your heads.)
February 15, 2013
It’s very easy to get a lot of information from an AWR (or Statspack) report – provided you remember what all the numbers represent. From time to time I find that someone asks me a question about some statistic and my mind goes completely blank about the exact interpretation; but fortunately it’s always possible to cross check because so many of the statistics are cross-linked. Here’s an example of a brief mental block I ran into a few days ago – I thought I knew the answer, but realised that I wasn’t 100% sure that my memory was correct:
January 7, 2013
A recent (Jan 2013) post on the OTN database forum reported a performance problem on an old (pre-10g, so no AWR) release of Oracle, and posted a complete statspack report to one of the public file-sharing sites. It’s been some time since I did a quick run through the highlights of trouble-shooting with statspack, so I’ve picked out a few points from this one to comment on.
As usual, although this specific report is Statspack, the same analysis would apply to a more modern AWR report; I will, however, make a couple of comments at the end about the extra material available by default in an AWR report that would have helped us help the OP.
November 13, 2011
After going through some AWR reports (the Instance Efficiency Percentages) I observed that they have a low Execute to Parse % but a high Soft Parse %.
Please share whether you have faced such an issue, and any suggestions for solving it.
September 26, 2011
Time for another of those little surprises that catch you out after the upgrade.
Take a look at this “Top N” from a standard AWR report, from an instance running 11.2:
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
direct path read                  3,464,056       6,593      2   33.5 User I/O
DB CPU                                            3,503          17.8
db flash cache single block ph    2,293,604       3,008      1   15.3 User I/O
db file sequential read             200,779       2,294     11   11.6 User I/O
enq: TC - contention                     82       1,571  19158    8.0 Other
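The columns of a report like this are cross-linked, so you can sanity-check them. With the figures above (the derived numbers below are a sketch, not part of the report’s output):

```python
# Cross-checking the Top 5 figures: the average wait should be
# Time(s) / Waits, and the "% DB time" column lets you back out an
# estimate of total DB time for the interval.
waits, time_s = 3_464_056, 6_593          # direct path read row
avg_wait_ms = 1000.0 * time_s / waits     # ~1.9 ms, reported (rounded) as 2
print(f"direct path read avg wait: {avg_wait_ms:.2f} ms")

pct_db_time = 33.5                        # % DB time for the same row
total_db_time_s = time_s / (pct_db_time / 100.0)
print(f"implied total DB time: {total_db_time_s:.0f} s")
```

If the recomputed average wait doesn’t agree with the printed one, you’ve probably misread one of the columns, which is exactly the sort of check that saves you from a wrong diagnosis.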
March 9, 2011
A couple of weeks ago I listed a number of scripts relating to AWR reports – it seems only sensible to publish a corresponding list for Statspack. In fact, there are two such lists – one for “traditional” Statspack, and one for “standby statspack” – a version of statspack you can pre-install so that you can run statspack reports against a standby database.
Statspack – the basic source of information is $ORACLE_HOME/rdbms/admin/spdoc.txt
spreport.sql -> Generates a Statspack Instance report
sprepins.sql -> Generates a Statspack Instance report for the database and instance specified
sprepsql.sql -> Generates a Statspack SQL report for the SQL Hash Value specified
sprsqins.sql -> Generates a Statspack SQL report for the SQL Hash Value specified, for the database and instance specified
sppurge.sql  -> Purges a limited range of Snapshot Ids for a given database instance
sptrunc.sql  -> Truncates all performance data in Statspack tables
spuexp.par   -> An export parameter file supplied for exporting the whole PERFSTAT user
Standby Statspack – the basic source of information is $ORACLE_HOME/rdbms/admin/sbdoc.txt
sbreport.sql - Create a report
sbaddins.sql - Add a standby database instance to the configuration
sblisins.sql - List instances in the standby configuration
sbdelins.sql - Delete an instance from the standby configuration
sbpurge.sql  - Purge a set of snapshots
A warning note about standby statspack – there is a significant structural change between recent patch releases, allowing support of multiple standby databases. I haven’t looked closely at it yet, but it did cross my mind that it might be possible to install the newer version in a database running the older release to avoid late upgrade issues – I may get around to testing the idea one day.
February 25, 2011
A couple of days ago I mentioned, as a passing comment, that you could take AWR snapshots at the start and end of the overnight batch processing. Then, if you ever had to answer the question “Why did last night’s batch overrun by two hours?”, you would have the option of creating the AWR report for the latest batch run, comparing it with the report from a previous run (perhaps the corresponding night the previous week), and checking for any significant differences. Moreover, from 10.2 (at least), Oracle supplies you with the code to compare and report such differences: the script $ORACLE_HOME/rdbms/admin/awrddrpt.sql.
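For what it’s worth, the comparison report can be launched from SQL*Plus like this (the script prompts for everything else it needs – roughly speaking, the report format, then the dbid, instance and begin/end snapshot ids for each of the two periods being compared):

```sql
SQL> @?/rdbms/admin/awrddrpt.sql
```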
The thing I didn’t mention at the time was how to take a snapshot on demand. It’s very easy if you have the appropriate execute privilege on package dbms_workload_repository.
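Assuming you have that privilege, the basic call is the documented create_snapshot procedure; there is also a function form that returns the new snapshot id, as in this sketch:

```sql
execute dbms_workload_repository.create_snapshot('TYPICAL')

rem
rem     Or, from PL/SQL, capturing the new snapshot id:
rem

declare
        snap_id number;
begin
        snap_id := dbms_workload_repository.create_snapshot(
                flush_level => 'ALL'
        );
        dbms_output.put_line('Snapshot: ' || snap_id);
end;
/
```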
The single input parameter can be ‘TYPICAL’ (the default) or ‘ALL’.
I keep this one liner in a little script called awr_launch.sql – because I can never remember the exact name of the package without looking it up. (And sometimes I run it with sql_trace enabled so that I can see how much work goes into a snapshot as we change versions, features, configuration, workload and platforms.)
February 23, 2011
A little thought for those who use OEM to generate AWR reports – if you generate the standard HTML form and then try posting it into (say) the OTN database forum when you want some help interpreting it, it won’t look very readable (even if you do remember to put the right tags before and after to reproduce it in fixed font).
January 14, 2011
Some time ago I was on a client site which was busy setting up a RAC cluster using Oracle 10g. Although I wasn’t involved with that particular system there were a couple of coffee-breaks where I ended up chatting with the consultant that the client had hired to install the cluster. In one of our breaks he surprised me by making the casual comment: “Of course, you can’t run Statspack on RAC because it locks up the system.”
April 14, 2010
Part 12 is about a 2-node RAC system. Someone recently posted links to a matched pair of AWR reports in this thread on the OTN Database General forum and, after I had made a couple of quick comments on them, gave me permission to use them on my blog.
March 21, 2010
There is a lot of confusion around about the significance of the statistic “parse calls”. The important thing to remember is that it is simply counting a certain type of call from the OCI library – the amount of work done by a parse call may vary enormously depending upon circumstances, and sometimes the amount of work is so tiny that it’s not worth worrying about.
A “parse call” may:
a) Have to optimise the statement because it failed to find it after searching the library cache
b) Find the statement after searching the library cache, and still have to optimise it for various reasons, e.g. the previous plan has been flushed from memory or the same text applies to different objects depending on who is executing it.
c) Find the statement after searching the library cache and not have to optimise it because the plan is still available and the user has the appropriate privileges.
d) Operate through the session cursor cache or pl/sql cursor cache allowing it to use a short cut to the statement’s location in the library cache without having to search the cache.
When Oracle increments the counter for “parse calls” you still have to work out whether that call turned into (a), (b), (c) or (d).
Just to confuse the issue, Oracle may also record a “parse count (hard)” without recording a “parse call”.
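If you want to watch these counters move for your own session, a simple sketch is to run the following query before and after a test and compare the deltas (the statistic names below exist in recent versions of v$statname, but check your own version):

```sql
select
        sn.name, ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
and     sn.name in (
                'parse count (total)',
                'parse count (hard)',
                'session cursor cache hits',
                'execute count'
        )
order by
        sn.name
;
```

A parse call satisfied through the session cursor cache will bump “session cursor cache hits” alongside “parse count (total)”, which helps you tell case (d) apart from the others.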
Update Jan 2011
Randolf Geist has been looking at adaptive cursor sharing, and has noted that the parse call – including parse calls that go through the session cursor cache – seems to be the point in the code where adaptive cursor sharing can take place: in other words, it’s not an event that gets triggered or flagged by executions.
September 10, 2009
Trouble-shooting with Statspack / AWR:
Something has to be top of the “Top 5 Timed Waits” … even when there are no performance problems.
July 10, 2009
Of course, the exciting part for me was sitting down with a batch of Statspack and AWR reports that I had been supplied with in the previous couple of days and doing a “real-time” analysis of them.