It’s very easy to get a lot of information from an AWR (or Statspack) report provided you remember what all the numbers represent. From time to time I find that someone asks me a question about some statistic and my mind goes completely blank about the exact interpretation; but fortunately it’s always possible to cross check because so many of the statistics are cross-linked. Here’s an example of a brief mental block I ran into a few days ago – I thought I knew the answer but realised that I wasn’t 100% sure that my memory was correct:
In this Load Profile (for an AWR report of 60.25 minutes), what does that Transactions figure actually represent ?
Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:             57,567.09             12,028.39
              Logical reads:             48,043.83             10,038.54
              Block changes:                314.07                 65.62
             Physical reads:                667.70                139.51
            Physical writes:                 46.25                  9.66
                 User calls:                619.33                129.41
                     Parses:                505.67                105.66
                Hard parses:                 36.94                  7.72
                      Sorts:                313.05                 65.41
                     Logons:                  0.56                  0.12
                   Executes:              1,165.42                243.51
               Transactions:                  4.79

  % Blocks changed per Read:    0.65    Recursive Call %:    95.18
 Rollback per transaction %:   24.95       Rows per Sort:    25.09
And, while we’re at it, what does the “Rollback per transaction %” actually mean and what, if anything, can we infer from the value ?
Since we’re looking at transactions, it’s a fairly safe bet that the figure is something to do with commits – but is it counting only committed transactions, or does it include rolled back transactions as well? [Philosophical question – is a change that’s never committed really a transaction, since it “never happened” as far as the rest of the world is concerned?] Fortunately we can look at the Instance Activity Statistics to check.
First, though, let’s convert the “Per Second” figure into an absolute value by multiplying by the duration of the report in seconds (3,615 – a figure which is itself subject to rounding). Allowing for rounding errors at both ends we’re looking for a value between ceiling(4.785 * 3615) and floor(4.795 * 3616), i.e. between 17,298 and 17,338. Let’s also keep in mind that the “Rollback per transaction %” is very close to 25%.
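If you’d rather let the database do the arithmetic, here’s a trivial sanity check on the bounds (nothing Oracle-specific beyond ceil() and floor()):

select
        ceil(4.785 * 3615)      low_bound,      -- 17,298
        floor(4.795 * 3616)     high_bound      -- 17,338
from    dual;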
Here, then, are a couple of useful figures from the Instance Activity:
Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
user commits                                 12,986            3.6           0.8
user rollbacks                                4,316            1.2           0.3
Almost immediately you can see that user rollbacks are roughly one third of user commits, which means the rollback percentage is just “user rollbacks / (user commits + user rollbacks)”. As a further sanity check, 4,316 + 12,986 = 17,302, which falls nicely into our required range. Ta-da, job done: “Transactions” is the sum of user commits and user rollbacks (as a first approximation – but is it the whole answer?).
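If you’re connected to the live instance you can reproduce the same cross-check without waiting for a report – a minimal sketch against v$sysstat (bear in mind these are cumulative instance-level figures since startup, not interval figures, so they won’t match an AWR snapshot range exactly):

select
        round(
                100 * sum(decode(name, 'user rollbacks', value, 0)) / sum(value),
                2
        )       rollback_per_transaction_pct
from    v$sysstat
where   name in ('user commits', 'user rollbacks')
;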
Of course, I picked a fairly extreme example from my AWR library to make a point – which means you might now be asking whether all those user rollbacks pose some sort of threat. How serious are they ? Luckily there are a couple more statistics that tell us:
Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
rollback changes - undo records                 222            0.1           0.0
transaction rollbacks                            59            0.0           0.0
The name of the first statistic has been trimmed a little in this text report; the full name is: “rollback changes – undo records applied”. As you can see we’ve applied only 222 undo records in our 4,316 rollback calls, so (most of the time) we’re not really rolling anything back. Moreover, the “transaction rollbacks” statistic corroborates this observation – we have attempted to roll back only 59 “real” (data-changing) transactions. Most of those rollbacks are probably the default “rollback after every call” that some web application servers make.
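If you want to see the untrimmed statistic names you can always pull them from v$statname – the second like pattern below also picks up the two consistent-read statistics mentioned in the footnote at the end of this note:

select  name
from    v$statname
where   name like 'rollback changes%'
or      name like '%undo records applied%'
;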
As a closing thought: if you’re responsible for several different systems it’s convenient to keep a couple of “reference” AWR or Statspack reports from busy, normal, and quiet periods for each system; that way, if you use the figures from one system to work out the meaning of some derived value you can use the figures from another system to check if your hypothesis is correct.
Footnote: there are two other statistics that include the text “undo records applied” (not that there’s enough space for those words to become visible in the textual AWR report), but they both relate to creating read-consistent copies of blocks.
Footnote 2: Is anyone getting worried by my “first approximation” comment, and has anyone started wondering if there may be more to transaction rollbacks than user rollbacks, and if so where they fit into the arithmetic ? To be continued. [Late update – here’s part 2]
Thanks Jonathan.
Could you explain ‘Recursive Call %: 95.18’? This bothers me a lot :) – it seems quite high.
Regards
GregG
Comment by goryszewskig — February 15, 2013 @ 9:26 am GMT Feb 15,2013 |
It could just be the result of cursor loops in PL/SQL, especially with plsql_optimize_level < 2.
Comment by Valentin Nikotin — February 15, 2013 @ 8:20 pm GMT Feb 15,2013 |
GregG
Every SQL statement inside a pl/sql block is recursive SQL – in my case I did little more than call an SQL statement inside a pl/sql loop – as Valentin surmised.
Comment by Jonathan Lewis — February 16, 2013 @ 9:15 am GMT Feb 16,2013 |
Jonathan, I meant that just one cursor loop may cause a lot of recursive calls, as every successful fetch call is counted as recursive. This is easy to see with plsql_optimize_level < 2, with a select for update cursor loop (where there is no array optimization), or with manual fetches, of course.
Comment by Valentin Nikotin — February 16, 2013 @ 7:49 pm GMT Feb 16,2013 |
Here is some simple code to check recursive call counts in AWR reports, and an example:
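A minimal sketch along those lines (table t1, its 1,000 rows, and the session-level check through v$mystat are assumptions, not the original test):

create table t1 as
select rownum id from dual connect by level <= 1000;

-- recursive calls so far in this session
select  ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name = 'recursive calls';

begin
        -- with plsql_optimize_level < 2 each fetch from this loop is a
        -- separate recursive call; at level 2 the loop is array-fetched
        -- 100 rows at a time so the count rises far more slowly
        for r in (select id from t1) loop
                null;
        end loop;
end;
/

-- recursive calls after the loop - compare with the first figure
select  ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name = 'recursive calls';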
Comment by Valentin Nikotin — February 17, 2013 @ 7:06 am GMT Feb 17,2013
Valentin,
Thanks for the demonstration. I’ve often wished that Oracle would introduce a “fetch count” statistic to match the parse count and execute count. (Splitting recursive calls into sys-recursive and user-recursive would be nice as well).
Comment by Jonathan Lewis — February 18, 2013 @ 7:16 am GMT Feb 18,2013
I guess “rollback to savepoint” and statement and row level restarts can affect these statistics. I’ll try to check this in a while.
Comment by Valentin Nikotin — February 15, 2013 @ 10:07 am GMT Feb 15,2013 |
I had the same thought.
A quick test did not increment any of user commits, user rollbacks or transaction rollbacks.
Comment by Dom Brooks — February 15, 2013 @ 11:22 am GMT Feb 15,2013 |
Checked for rollback to savepoint and statement restart – only “rollback changes – undo records applied” increased.
Comment by Valentin Nikotin — February 15, 2013 @ 8:42 pm GMT Feb 15,2013 |
Dom,
Great, isn’t it?
You can spend an awful lot of your time deliberately rolling back – but not quite far enough – and the “obvious” statistics tell you nothing about it. As Valentin points out, you have to cross-check with “rollback changes – undo records applied” to get an idea of how much work you are doing while rolling back. (And, of course, the rolling back from “write consistency” – as well as various other things – shows up in that statistic as well.)
Comment by Jonathan Lewis — February 16, 2013 @ 9:18 am GMT Feb 16,2013 |
Jonathan
If you issue a superfluous COMMIT (a commit that changes nothing) the user commits statistic is not incremented.
If you issue a superfluous ROLLBACK (a rollback that rolls nothing back, such as those probably issued by web application servers) the user rollbacks statistic is incremented while “rollback changes – undo records applied” is not.
If you issue a real rollback (a rollback that rolls something back) both the user rollbacks statistic and the undo records applied are incremented.
If you issue a rollback to savepoint the user rollbacks statistic is not incremented, while the undo records applied statistic is incremented for a real rollback to savepoint and not incremented for a superfluous one.
That is: for commits we can trust the user commits statistic, but for rollbacks we had better trust “rollback changes – undo records applied” – see the sketch below.
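A minimal sketch of that sort of test (table t1 is an assumption; re-run the query after each step and watch which figures move):

-- session-level figures for the three statistics of interest
select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name in (
                'user commits', 'user rollbacks',
                'rollback changes - undo records applied'
        );

commit;                                   -- superfluous commit: nothing moves
rollback;                                 -- superfluous rollback: user rollbacks + 1 only

update t1 set id = id + 1 where rownum = 1;
rollback;                                 -- real rollback: user rollbacks + 1,
                                          -- undo records applied + 1

update t1 set id = id + 1 where rownum = 1;
savepoint s1;
update t1 set id = id + 1 where rownum = 1;
rollback to savepoint s1;                 -- user rollbacks unchanged,
                                          -- undo records applied + 1
rollback;                                 -- real rollback of the remaining update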
Comment by Mohamed — February 16, 2013 @ 12:20 pm GMT Feb 16,2013
Jonathan, what is this “rollback after every call” you refer to?
Possibly I have never heard of this (“possibly” because I may have just forgotten it and will soon have an ‘Ah hah!’ moment).
Comment by Jared — February 15, 2013 @ 4:43 pm GMT Feb 15,2013 |
Jared,
I can’t quote names and versions off the top of my head, but various middle-tier tools handling connection pooling will (or used to) “isolate” the work done by each request by issuing a rollback extremely frequently.
Comment by Jonathan Lewis — February 16, 2013 @ 9:21 am GMT Feb 16,2013 |
Jonathan,
“Allowing for rounding errors we’re looking for a value between 4.785 *3616 and 4.795 * 3615, i.e. between 17.298 and 17,333. ”
You obviously want to take the lowest and highest possible values – so I suppose you mean:
” 4.785 *3615 and 4.795 * 3616, i.e. between 17.298 and 17,333. “
Comment by Todor Botev — February 15, 2013 @ 4:45 pm GMT Feb 15,2013 |
even better:
” 4.785 *3615 and 4.795 * 3616, i.e. between 17.298 and 17,339.”
Comment by Todor Botev — February 16, 2013 @ 4:14 am GMT Feb 16,2013 |
Todor,
I got there in the end (and even got the . changed to a ,).
I’ve taken the ceiling() of the lower and the floor() of the higher value, though.
Comment by Jonathan Lewis — February 16, 2013 @ 9:27 am GMT Feb 16,2013 |
Jonathan,
I am glad you touched on this topic since I was looking at my AWR last week and wondering what the peak load on my system was … so my question to you is: can I use the “Transactions per second” statistic to determine when the peak load is on my production system?
Comment by Lou — February 17, 2013 @ 5:04 am GMT Feb 17,2013 |
Lou,
To answer that question indirectly – which is the higher load: 1,000 transactions that update one row each, or 100 transactions that update 10 rows each ?
What if each type of transaction does a 10MB tablescan, what if the smaller transaction maintains 10 indexes per row, while the large transaction doesn’t maintain any indexes ?
There are four or five key resources and measures that have different significance to different people. The same single measure will not be appropriate for everyone.
Comment by Jonathan Lewis — February 18, 2013 @ 7:12 am GMT Feb 18,2013 |
[…] a little follow-on from Friday’s posting. I’ll start it off as a quiz, and follow up tomorrow with an explanation of the results […]
Pingback by Transactions 2 « Oracle Scratchpad — February 18, 2013 @ 7:54 am GMT Feb 18,2013 |
[…] Jedi Master Jonathan Lewis wrote a good post about Transactions and this kind of AWR metric here. […]
Pingback by Oracle TPS: Evaluating Transaction per Second – |GREP ORA — May 25, 2016 @ 11:39 am BST May 25,2016 |