Oracle Scratchpad

November 27, 2006

Event 10132

Filed under: Execution plans,trace files,Troubleshooting — Jonathan Lewis @ 12:00 pm BST Nov 27,2006

From 9i onwards, if you enable event 10132 in your session, then every statement you subsequently optimise will be dumped into your trace file, along with the structure of the actual execution plan that was used. For example, from an early version of 9i:

Current SQL statement for this session:
select * from t1 where v1  = :n1
Plan Table
| Operation          | Name |Rows |Bytes |Cost | TQ |IN-OUT| PQ Distrib |Pstart| Pstop |
| SELECT STATEMENT   |      |   0 |    0 |  10 |    |      |            |      |       |
|  TABLE ACCESS FULL | T1   |   1 |  123 |  10 |    |      |            |      |       |

You will notice that the plan allows for parallel and partitioned execution, although in this example neither feature was used, so the relevant columns stay blank. You’ll also notice that there is no dump of the predicate information – which is a little disappointing.
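For anyone who wants to reproduce this, the event is switched on and off with the standard event syntax (the same syntax appears in the comments below with level 12) – a minimal session-level sketch; level 1 is assumed to be sufficient here:

```sql
-- Sketch: enable event 10132 for the current session, optimise a
-- statement, then switch the event off again. Every statement optimised
-- between the two commands is dumped to the session trace file.
alter session set events '10132 trace name context forever, level 1';

select * from t1 where v1 = 'X';    -- statement to be captured

alter session set events '10132 trace name context off';
```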

In 10g you get a lot more information, including the predicate information, peeked values of bind variables, optimizer environment, and a couple of other odds and ends. Moreover, the plan structure is trimmed of redundant columns.

One thought you might want to consider before your next upgrade, if you don’t have a library of SQL statements with expected execution plans: you could enable this event system-wide for a couple of days before the upgrade and collect all the SQL with its execution plans.

After the upgrade, if you find some SQL performing very badly, this gives you the option of searching through the trace files to find the pre-upgrade execution plan, which should give you some idea of how best to fix the problem.

Of course, you only want to enable this system-wide if you’re fairly sure that the overhead won’t be excessive – after all, you get a dump into the session trace file for every statement optimised – so you probably want to check the statistic ‘parse count (hard)’ to see how much overhead you’re going to get from all the extra I/O to the dump directory.
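As a quick sanity check before enabling the event system-wide, the hard parse rate can be sampled from v$sysstat – a sketch:

```sql
-- Sample the hard parse count; the delta between two samples taken a
-- few minutes apart gives an idea of how many plan dumps to expect.
select  name, value
from    v$sysstat
where   name = 'parse count (hard)';
```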


  1. We talked about taking all the data from v$sql_plan and storing it in a base table, then doing a ‘diff’ once a day to see if any plans changed.
    That seems easier than dumping to a trace file.
    You can then use email notification if something changed, and use the history data to extend it.

    Comment by Ryan — November 27, 2006 @ 3:00 pm BST Nov 27,2006 | Reply

  2. Ryan, the drawback (or trade-off) with your approach is the frequency with which you dump the v$sql_plan view.
    If you don’t dump frequently you may miss some SQL activity completely, as plans can age out of memory very quickly.
    If you dump frequently the workload can rapidly become prohibitive, especially if you try to correlate the plans you already have dumped with the plans that are currently in memory.
    A slightly cheaper alternative may be to dump the hash_value and plan_hash_value from v$sql, and only dump a plan when you find a hash_value with a new plan_hash_value.

    Comment by Jonathan Lewis — November 27, 2006 @ 3:45 pm BST Nov 27,2006 | Reply
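    A sketch of the cheaper alternative Jonathan describes above. The table plan_history and its columns are hypothetical, invented for illustration:

```sql
-- Hypothetical repository of (hash_value, plan_hash_value) pairs seen so far.
create table plan_history (
        hash_value       number,
        plan_hash_value  number,
        first_seen       date
);

-- Record only the combinations we have not seen before; a full plan dump
-- is then needed only for the rows this statement inserts.
insert into plan_history (hash_value, plan_hash_value, first_seen)
select  distinct s.hash_value, s.plan_hash_value, sysdate
from    v$sql s
where   not exists (
                select  null
                from    plan_history h
                where   h.hash_value      = s.hash_value
                and     h.plan_hash_value = s.plan_hash_value
        );
```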

  3. Jonathan,

    When 10132 is set system-wide, does this also dump the SQL run by the Oracle background processes, as when we enable instance-level trace?

    I guess this would be better than the 10053 event.


    Comment by Syed Jaffar Hussain — November 27, 2006 @ 4:40 pm BST Nov 27,2006 | Reply

  4. Jaffar, this would be vastly better than the 10053 as (a) it is shorter, (b) it is more readable, and (c) it happens for a rule-based query. It does seem to work for background processes if you do ‘alter system’ – but many of the background processes reuse their SQL quite regularly, and some seem to hold their cursors – so unless the event goes in the spfile (init.ora) there may be background SQL that you don’t see. Mind you, you can’t really tune background SQL.

    Comment by Jonathan Lewis — November 29, 2006 @ 7:33 pm BST Nov 29,2006 | Reply

  5. I do agree with you, Jonathan, that we can’t tune the background SQL. I was just wondering whether, when we enable this event system-wide, it won’t just create large trace files and slow down the performance of production. I do understand that it’s very rare for people to enable this event system-wide unless it’s really important to do so.
    I will do some R&D on this event – thanks for discussing it.


    Comment by Syed Jaffar Hussain — November 30, 2006 @ 2:44 pm BST Nov 30,2006 | Reply

  6. […] you want to cover all options, you could also set event 10132 just before executing the query (which means you wouldn’t need to dump the query, as it would […]

    Pingback by How parallel « Oracle Scratchpad — March 14, 2007 @ 7:17 am BST Mar 14,2007 | Reply

  7. […] child cursors produced for the query had the same optimizer environment settings. I even did a 10132 event dump of running the query in the 2 schemas, but I could not for the life of me put my finger […]

    Pingback by techblog » Blog Archive » Oracle Linguistic Indexes — January 4, 2008 @ 2:48 pm BST Jan 4,2008 | Reply

  8. You can also query SYS.WRH$_SQL_PLAN based on the sql_id in Oracle 10g. That will give you a pretty good idea of all of the different query plans used for a particular sql statement.

    Comment by Mike Andrews — May 29, 2008 @ 5:33 pm BST May 29,2008 | Reply
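    A sketch of the query Mike suggests. The column names are assumed to follow v$sql_plan (id, operation, options, object_name) – check them against your version; in 10g the supported interface is the dba_hist_sql_plan view, which sits on top of wrh$_sql_plan:

```sql
-- List the distinct plans AWR has captured for one statement.
select  plan_hash_value, id, operation, options, object_name
from    sys.wrh$_sql_plan
where   sql_id = '&m_sql_id'
order by
        plan_hash_value, id;
```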

  9. Mike,

    That’s a pretty useful repository for queries that are known to be expensive – if there are variations in execution plans, you may find them there. But AWR typically captures a small percentage of the SQL you run, and only the expensive SQL (by a variety of classifications).

    When you upgrade it is often a small collection of previously inexpensive queries that suddenly change execution path. And those are less likely to appear in wrh$_sql_plan. (You may get lucky, of course, and find that wrh$_sql_plan – which you can query with awrsqrpt.sql – has captured some cheap plans as well as the expensive plans).

    Comment by Jonathan Lewis — May 30, 2008 @ 9:26 am BST May 30,2008 | Reply

  10. If we set topnsql to ‘MAXIMUM’, then it can capture the complete set of SQL in the cursor cache. Does this mean it can be an alternative to 10132?

    Comment by Ananda — September 28, 2010 @ 1:01 pm BST Sep 28,2010 | Reply

    • Ananda,

      Probably not, since you’re still only taking a snapshot at a moment in time. There could be lots of SQL that has come and gone from memory since the last snapshot.

      Comment by Jonathan Lewis — September 29, 2010 @ 2:58 pm BST Sep 29,2010 | Reply

  11. I want to capture all the sql oracle executes when I drop a table.

    So I did the following.

    create table t20 as select * from dba_objects;

    exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'T20');

    alter session set timed_statistics = true;

    alter session set statistics_level=ALL;

    alter session set max_dump_file_size=UNLIMITED;

    alter session set tracefile_identifier='tab_imp_drop';

    alter session set events '10132 trace name context forever, level 12';

    drop table t20;

    alter session set events '10132 trace name context off';

    But the resulting trace file has got only the following deletes.

    delete from idl_ub1$ where obj#=:1
    delete from idl_char$ where obj#=:1
    delete from idl_ub2$ where obj#=:1
    delete from idl_sb4$ where obj#=:1
    delete from error$ where obj#=:1

    As I have gathered stats etc., why can’t I see deletes from tables like obj$, HISTGRM$, HIST_HEAD$ etc. that hold the histogram and object information?


    Comment by Ananda — March 15, 2011 @ 2:56 pm BST Mar 15,2011 | Reply

    • Quoting from the article:

        “… then every statement you subsequently optimise …”

      So the first guess is that those statements are already cached and don’t need to be optimised.

      If you just want to see the SQL you should probably use event 10046 – level 4 would get you the bind value, level 12 would get you wait states as well. If you want to see the execution plans you would probably have to flush the shared_pool before the drop when using 10132 – but that’s not usually a nice thing to do to a production system.

      Comment by Jonathan Lewis — March 15, 2011 @ 10:07 pm BST Mar 15,2011 | Reply
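      The 10046 syntax mentioned above follows the same pattern as 10132 – a sketch of the test with bind values (level 4; level 12 would add wait events):

```sql
alter session set events '10046 trace name context forever, level 4';

drop table t20;

alter session set events '10046 trace name context off';
```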

  12. Thanks Jonathan.
    That explains why I am not getting all the deletes. When I tried the same drop repeatedly, I couldn’t get even a single delete statement in the 10132 trace file – which means it got all the SQL from the cache.
    I have tried 10046 and got 42 delete statements against different dictionary tables.


    Comment by Ananda — March 17, 2011 @ 11:15 am BST Mar 17,2011 | Reply

  13. Isn’t this totally awesome stuff: if I can afford to flush the shared pool (or shut down the DB) and set this event a couple of days before the upgrade, I would have all the details in the trace files, and then grep would do all the work, if required.

    Comment by Amardeep Sidhu — March 18, 2011 @ 3:55 am BST Mar 18,2011 | Reply

    • Amardeep,

      With the passing of time new options become available (or viable), so I’ll just take advantage of your comment to point out that a related strategy for upgrading from 11.1 to 11.2 is to enable SQL Plan capture (of SQL Plan Baselines) before the upgrade, without enabling the use of SQL Plan Baselines – then, if you’ve got all the old plans available when something goes wrong after the upgrade, you can use dbms_xplan.display_sql_plan_baseline() to show the plans you used to have for a query.

      On the plus side it’s easier to do the rediscovery if something goes wrong. On the minus side the capture will increase the I/O on the data files and redo log files rather than hitting just the file system.

      Comment by Jonathan Lewis — March 21, 2011 @ 8:48 am BST Mar 21,2011 | Reply
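      A sketch of the baseline-capture strategy described above. The parameter settings are as I recall them from the 11g documentation – verify against your version; the sql_handle would come from dba_sql_plan_baselines:

```sql
-- Before the upgrade: capture baselines, but don't let the optimizer use them.
alter system set optimizer_capture_sql_plan_baselines = true;
alter system set optimizer_use_sql_plan_baselines     = false;

-- After the upgrade: report the captured plan(s) for a problem statement.
select  *
from    table(dbms_xplan.display_sql_plan_baseline(sql_handle => '&m_sql_handle'));
```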
