Oracle Scratchpad

February 11, 2007

SQL Profiles (10g)

Filed under: CBO,Execution plans,Hints,Performance,Statistics,Tuning — Jonathan Lewis @ 7:43 pm GMT Feb 11,2007

[Forward to part 2]

When the Tuning Advisor suggests that you accept a SQL Profile, what is it offering you? If you want to find out, the following SQL seems to be the appropriate query to run before you accept the profile:

select
        rat.attr1        -- 10g column
--      rat.attr5        -- 11g column (see comment 23 below)
from
        wri$_adv_tasks     tsk,
        wri$_adv_rationale rat
where = {sql tuning task name}
and     rat.task_id =
order by

The {sql tuning task name} will be the task name generated and printed at the top of the Enterprise Manager screen where you can see the recommendations. The output might list a few lines like the following:
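For illustration only: the line below is the opt_estimate() hint quoted in comment 6 of this post, and your own report will show hints specific to your statement (comment 3 mentions TABLE, INDEX_SCAN and JOIN variants of the same hint):

OPT_ESTIMATE(@"SEL$1", TABLE, "REQUESTS"@"SEL$1", SCALE_ROWS=0.0004355485551)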


These are actually hints – they will be accepted if you include them in the usual fashion in your SQL (although that wouldn’t be a good idea at present as they are undocumented, unsupported, and liable to change in their implementation).

Like Stored Outlines, the SQL Profile consists of a stored SQL statement, and a set of hints that will be brought into play when that SQL has to be optimised. Unlike Stored Outlines, the hints for SQL Profiles do not attempt to dictate execution mechanisms directly. Instead they supply arithmetical correction factors to the optimizer as it does its arithmetic because, even with a 100% sample size, it is still possible for the optimizer to misinterpret your statistics and produce an unsuitable execution path.
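As a purely hypothetical illustration of the arithmetic (the base estimate of 1,000,000 rows is invented; the scaling factor is the one quoted in comment 6 below): if the optimizer's initial cardinality estimate for a table access were 1,000,000 rows and the profile supplied SCALE_ROWS=0.0004355485551, the corrected estimate would be roughly:

1,000,000 * 0.0004355485551 ≈ 436 rows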

In principle, if the data distributions do not change, then a stored profile will ensure that the optimizer “understands” your data and does the right thing – even when the data volume changes.

[Forward to part 2]


  1. if the data distributions do not change

    Good point. That makes it comparable to aging statistics, so don’t forget to re-evaluate SQL profiles periodically.

    Actually, SQL profiles might be the next step in the CBO after statistics. They allow better precision for specific cases. Perhaps Oracle should merge SQL profiles with table stats and produce join statistics/profiles, for example. Or multiple-column selectivity (i.e. account for correlation between column values).

    Comment by Alex Gorbachev — February 12, 2007 @ 4:15 am GMT Feb 12,2007 | Reply

  2. right! the big “if” (re: data distributions); so that means, never do a data reorg/data purge or add new “business” data which will skew the distribution… otherwise, with this feature, you’re bound to run into trouble! fixing something to break something else! ;-)

    good point! more (valid) darts shot at the new features! ;-)

    Comment by Cos — February 12, 2007 @ 4:17 am GMT Feb 12,2007 | Reply

    I’ll be very interested by what you think the 1st “OPT_ESTIMATE(..INDEX_SCAN..)” means. If you take a 10053 trace after you’ve accepted the profile and re-issue an SQL parse, you (sorry, I’d better say I) probably won’t see this 1st correction appearing in the trace file. This is different in the case of “OPT_ESTIMATE(..TABLE..)” or “OPT_ESTIMATE(..JOIN..)”.

    I would be very interested to see an example of how this kind of HINT is demonstrated in a plan calculation.

    Best Regards,


    Comment by Gregory — February 12, 2007 @ 8:02 am GMT Feb 12,2007 | Reply

  4. I echo what Gregory said–seeing any details you can discern of how it works under the hood would be interesting. I don’t know, maybe what’s true for Hollywood movies is true for Oracle too: Some of us really like to see guts. :)

    Comment by Jason Bucata — February 12, 2007 @ 6:11 pm GMT Feb 12,2007 | Reply

  5. Alex, I would emphasise this differently. If you understand your data, then you use the appropriate tool to solve particular problems. In this case, if you know that the data has a stable relationship that the optimizer cannot detect, you create the profile once – and leave it. I wouldn’t use a profile if I thought it was going to be necessary to keep recreating it.

    Gregory, Jason – addendum on its way.

    Comment by Jonathan Lewis — February 12, 2007 @ 7:14 pm GMT Feb 12,2007 | Reply

  6. Hi,

    OPT_ESTIMATE(@”SEL$1″, TABLE, “REQUESTS”@”SEL$1″,SCALE_ROWS=0.0004355485551)

    What does all this mean? Can you explain?

    For example, 10g SQL Tuning is suggesting a profile with a 100% benefit.

    Before I implement anything on Production I have to get approval from management / a change request. Most of the time we need a rollback plan.

    If we want to implement a SQL Profile not from Grid or OEM, how can I do that?
    How can I roll back the SQL profile / keep track of profiles?


    Comment by Mohammed — May 28, 2008 @ 12:43 am GMT May 28,2008 | Reply

  7. dbms_sqltune.accept_sql_profile(
        task_name => '',
        category  => 'MY_CATEGORY');


    Once satisfied with Plan, change category to default.
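    A minimal sketch of that last step (the profile name is a placeholder, and this assumes that dbms_sqltune.alter_sql_profile() accepts a CATEGORY attribute):

    begin
            dbms_sqltune.alter_sql_profile(
                    name           => 'my_profile_name',    -- placeholder profile name
                    attribute_name => 'CATEGORY',
                    value          => 'DEFAULT'
            );
    end;
    /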

    Comment by JM — May 29, 2008 @ 6:27 pm GMT May 29,2008 | Reply

  8. you may also want to check out this metalink note for profile creation/management

    Subject: Automatic SQL Tuning – SQL Profiles
    Doc ID: Note:271196.1 Type: BULLETIN
    Last Revision Date: 15-MAY-2008 Status: PUBLISHED

    Comment by JM — May 29, 2008 @ 6:49 pm GMT May 29,2008 | Reply

  9. Mohammed,

    If you go on to the second item about SQL Profiles, I’ve made some comments there about how to interpret some opt_estimate() hints.

    In answer to your other questions: the view dba_sql_profiles lets you see information about profiles – including their names; and you can drop a profile with a call like:

    dbms_sqltune.drop_sql_profile( 'some_profile_name' )

    Thanks for supplying the reference. It’s interesting that Metalink has produced an example which shows the optimizer “ignoring” a hint when there is a SQL profile hidden behind the scenes.

    You may be interested to know that there is an extra parameter to accept_sql_profile() that can be very useful: force_match => true. The default is false.

    If you accept a profile but set this parameter to true, then a profile for a query with a predicate like: “colX = 99” will also be accepted for the same query with the predicate changed to read “colX = 100”. In other words, it does for SQL profiles roughly the same thing that cursor_sharing does for literal string SQL.
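    A minimal sketch of such a call (the task name and profile name are placeholders):

    begin
            dbms_sqltune.accept_sql_profile(
                    task_name   => 'my_tuning_task',      -- placeholder task name
                    name        => 'my_forced_profile',   -- placeholder profile name
                    force_match => true                   -- default is false
            );
    end;
    /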

    Comment by Jonathan Lewis — May 30, 2008 @ 9:32 am GMT May 30,2008 | Reply

  10. […] Execution plans, Hints, trace files — Jonathan Lewis @ 8:11 pm UTC Feb 12,2007 Following yesterday’s note on SQL Profiles, someone asked how I detect that an opt_estimate hint had been used – with specific […]

    Pingback by SQL Profiles - 2 « Oracle Scratchpad — May 30, 2008 @ 9:44 am GMT May 30,2008 | Reply

  11. How do I get a list of enabled SQL profiles (including the profile names)?

    Comment by dan — June 25, 2008 @ 11:44 am GMT Jun 25,2008 | Reply

  12. Dan,

    select name, status
    from dba_sql_profiles
    where status = 'ENABLED'

    Comment by Jonathan Lewis — June 25, 2008 @ 10:20 pm GMT Jun 25,2008 | Reply

  13. Hi Jonathan,
    I cannot change the parameter cursor_sharing in my database, so I use SQL profiles with the force_match option set to true to fix certain queries that use literals. I cannot change the code of those queries, so SQL profiles are very helpful.

    I have certain queries that use literals where the plan seems optimal, the plan is fine and the performance of the queries is also fine; the problem is that I have hundreds of versions of the same query, like the following: “select field from table where field = A_number_that_is_always_different;”, so I have a hard parsing issue with those queries. Remember I cannot change cursor_sharing and cannot change the code.

    In that case, when I create a tuning task on those queries, the tuning task does not recommend creating a SQL profile. It just says: “There is no recommendation.” Since I have no recommendation to create a SQL profile, I cannot accept one with the force_match option, because the tuning task does not recommend one. How can I force the database to recommend a SQL profile on those queries so that I can accept it with the force_match parameter?

    Comment by steeve — September 3, 2008 @ 4:18 pm GMT Sep 3,2008 | Reply

  14. Steeve,

    That’s tricky. The optimizer thinks it’s doing the best that can be done – so it’s not going to offer you any adjustments to the plan.

    The first idea that springs to mind is to fiddle with the optimizer environment (e.g. adjusting the db_file_multiblock_read_count) on a test database or schema until the tuning tool decides there is a better plan than the one you’ve currently got. That sounds like a lot of trial and error, though.

    Alternatively you could construct a set of suitable hints (which is all a profile is) that behaves properly, and use the import_sql_profile procedure to insert them into the database.
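    A minimal sketch of that approach, assuming the (undocumented) import_sql_profile() signature with sql_text, profile, name and category parameters; the hint text, SQL text and profile name here are just placeholders:

    declare
            l_hints sys.sqlprof_attr := sys.sqlprof_attr(
                    'OPT_ESTIMATE(@"SEL$1", TABLE, "T1"@"SEL$1", SCALE_ROWS=0.01)'
            );
    begin
            dbms_sqltune.import_sql_profile(
                    sql_text => 'select count(*) from t1 where colX = 99',
                    profile  => l_hints,
                    name     => 'my_manual_profile',
                    category => 'DEFAULT'
            );
    end;
    /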

    It’s probably not supported – although Oracle Corp might be hard pushed to say why not.

    Christian Antognini has a very good presentation on SQL Profiles that describes what SQL profiles are and shows the method in this pdf file.

    Comment by Jonathan Lewis — September 6, 2008 @ 9:37 pm GMT Sep 6,2008 | Reply

  15. […] Lewis has a couple of notes about SQL Profiles on his site here SQL Profiles (10g) and here SQL Profiles – 2 that are worth a look. I totally agree with his advice to make sure you […]

    Pingback by Kerry Osborne’s Blog » Blog Archive » SQL Tuning Advisor — September 24, 2008 @ 1:34 pm GMT Sep 24,2008 | Reply

  16. Jonathan,

    Great info as always. One of the things that I have noticed is that sql profiles can and do “sour” over time (as mentioned by Alex). Since they do most of their work by applying scaling factors via the opt_estimate hint, when things change (statistics for example) the scaling factors can end up being way off. I’ve had some success using a profile to get the optimizer to do what it should and then creating an outline for the statement to lock it (while I try to figure out why it went wrong in the first place).

    Also, I have been experimenting with setting up a blog and we have installed wordpress on one of our servers to play with. WordPress is pretty smart and it automatically created the pingback entry above. I didn’t know it would do that – pretty cool! Although it was totally unintentional as we aren’t even sure that’s where it will live yet. I guess everything really is easy if you don’t know what you’re doing. Anyway, keep up the great work.

    Comment by Kerry Osborne — September 24, 2008 @ 2:17 pm GMT Sep 24,2008 | Reply

  17. Kerry,

    Thanks for the pingback, and the note.

    SQL Profiles and Outlines are ways of addressing two different problems – the profiles help if the data ratios are going to stay constant over time, regardless of data volume; the outlines help if a specific execution path is always going to be a good idea as time passes, regardless of how the data ratios change. So each has its purpose.

    I like your idea, though, of treating both as a temporary measure, and using an SQL profile as a quick way of getting to an outline that keeps you going for a while.

    In passing, outlines are deprecated in 11g – presumably because the technology is just a subset of the SQL Plan Management (SPM) technology.

    Comment by Jonathan Lewis — September 26, 2008 @ 1:50 pm GMT Sep 26,2008 | Reply

  18. Hi Jonathan,

    I have been performing some tuning using dbms_sqltune. Several recommendations have come back for the use of a SQL Profile as well as additional indexes. As some of the tables are rather large I was trying to create the suggested indexes using the nosegment parameter and specifying the _use_nosegment_indexes session level parameter and manually running dbms_sqltune again. Unfortunately all attempts fail to pick up the nosegment indexes. Is this normal or is there a way I can get dbms_sqltune to consider the nosegment index? I have used dbms_stats.gather_index_stats but this has not helped.

    Your response would be much appreciated.

    Comment by Chris — October 30, 2008 @ 3:26 am GMT Oct 30,2008 | Reply

  19. Chris,

    If you enable sql_trace (or 10046 at level 4 in particular) you will see that for each index Oracle checks the number of bytes in dba_segments. This will always return null (which is nvl()ed to zero) for the no_segment indexes. It is probably this check that makes the tuning task ignore such indexes.

    Bear in mind, the tuning task code knows about no_segment indexes, because one of its internal mechanisms is to “create” such indexes anyway.
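    A small sketch of the effect (the table, column and index names are hypothetical):

    create index t1_ns on t1(colX) nosegment;
    alter session set "_use_nosegment_indexes" = true;

    select nvl(sum(bytes),0) from dba_segments where segment_name = 'T1_NS';
    -- returns 0 because a nosegment index has no segment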

    Comment by Jonathan Lewis — October 30, 2008 @ 7:57 am GMT Oct 30,2008 | Reply

  20. […] SQL Profiles (10g) […]

    Pingback by Summary Stats « Oracle Scratchpad — April 27, 2009 @ 7:05 pm GMT Apr 27,2009 | Reply

  21. […] SQL Profiles (10g) […]

    Pingback by Viewing Figures « Oracle Scratchpad — April 20, 2010 @ 6:39 pm GMT Apr 20,2010 | Reply

  22. […] As per Jonathan Lewis post, […]

    Pingback by Plan Stability using Sql Profiles and SQL Plan Management | Weblog — December 20, 2011 @ 5:45 am GMT Dec 20,2011 | Reply

  23. Hello,

    I have noticed that the following query returns an empty attr1 in my DB version;
    attr5 contains the actual hint. Just for information.

    select rat.attr1
    from
            wri$_adv_tasks     tsk,
            wri$_adv_rationale rat
    where = {sql tuning task name}
    and     rat.task_id = ;
    SQL> select rat.attr1
      2  from
      3          sys.wri$_adv_tasks     tsk,
      4          sys.wri$_adv_rationale rat
      5  where
      6 = 'TASK_1291'
      7  and     rat.task_id = ;
    25 rows selected.
    Elapsed: 00:00:00.04
    SQL> 1 select rat.attr5
    SQL> r
      1  select rat.attr5
      2  from
      3          sys.wri$_adv_tasks     tsk,
      4          sys.wri$_adv_rationale rat
      5  where
      6 = 'TASK_1291'
      7* and     rat.task_id =
    COLUMN_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", "SEQUENCE", scale, length=3 distinct=10000 nulls=0 min=1 max=10000)
    COLUMN_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", "SEQUENCE", scale, length=3 distinct=10000 nulls=0 min=1 max=10000)
    COLUMN_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", "TXNKEY", scale, length=2 distinct=19 nulls=0 min=2 max=20)
    COLUMN_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", "TXNKEY", scale, length=2 distinct=19 nulls=0 min=2 max=20)
    TABLE_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", scale, blocks=28954 rows=182916.895)
    TABLE_STATS("SYS"."STREAMS$_APPLY_SPILL_MSGS_PART", scale, blocks=28954 rows=182916.895)
    25 rows selected.
    Elapsed: 00:00:00.14
    SQL> select banner from v$version;
    Oracle Database 11g Enterprise Edition Release - 64bit Production
    PL/SQL Release - Production
    CORE      Production
    TNS for Solaris: Version - Production
    NLSRTL Version - Production
    Elapsed: 00:00:00.62

    Comment by Mikhail — May 11, 2012 @ 7:25 am GMT May 11,2012 | Reply

    • Mikhail,

      Thanks for the update.
      That’s the problem with publishing comments on undocumented mechanisms – Oracle changes the mechanisms.
      I’ve also added a footnote to the article.

      Comment by Jonathan Lewis — May 24, 2012 @ 10:49 am GMT May 24,2012 | Reply

  24. Hi,
    Can I change the plan hash value at run time, I mean while the query is running? I want to change its plan immediately, without starting a new execution of the statement. What needs to be set for this in 12c?
    I have a query which is running and taking time. We got a better plan for that query after gathering stats; now, without interrupting the statement, I want the currently running SQL statement to use the newly generated better plan.


    Comment by PP — August 19, 2015 @ 9:01 am GMT Aug 19,2015 | Reply

  25. PP

    Using a feature known as “adaptive execution”, 12c automatically creates plans that allow it to decide, as a query executes, which of two join methods (nested loop or hash) it should use at various points through the plan, but:

    (a) the optimizer designs the whole adaptive plan once based on the information available when the query is optimised
    (b) there has to be some “uncertainty” in the basic optimizer arithmetic to make it decide that such an approach is necessary
    (c) the run-time choices are made based on the actual amount of data that starts to pass through the plan as the query is running, not on the latest state of the statistics

    Beyond the variability allowed by this feature there is no way that a query that is currently running can change plans in mid-execution; and gathering new statistics won’t affect a running query – even one that allows for adaptive execution.

    Note: depending on how you gather stats, you may find that your plan is unchanged even after you’ve gathered some stats that ought to make a difference: there is a “delayed invalidation” mechanism that appeared some versions back, so you may need to ensure that you’ve set the no_invalidate parameter in the gather stats call to false.
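    As a minimal sketch of that last point (the table name is a placeholder):

    begin
            dbms_stats.gather_table_stats(
                    ownname       => user,
                    tabname       => 'T1',         -- placeholder table name
                    no_invalidate => false         -- don't delay invalidation of dependent cursors
            );
    end;
    /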

    Comment by Jonathan Lewis — August 19, 2015 @ 9:14 am GMT Aug 19,2015 | Reply

    • Thanks for the quick help.
      I have one more doubt: if I set OPTIMIZER_ADAPTIVE_FEATURES = TRUE, will it affect my database? Some of my queries are running perfectly fine; will it not change their plans as well?
      I have an environment where I want to use this feature for some queries, not for all.
      Would it be better to set this feature where there is heavy insertion or modification of data? I don’t want it for some queries where there is hardly any modification.
      Please suggest what the best way to do this would be.


      Comment by PP — August 19, 2015 @ 10:51 am GMT Aug 19,2015 | Reply

      • PP

        You really can’t expect specific advice on how to deal with production database issues in a comment to a blog.
        Since optimizer_adaptive_features defaults to TRUE, I have to assume that someone in your organisation has deliberately set it to FALSE. If that’s the case then there should be a document justifying the decision and pointing at the problems caused by setting it to TRUE; this may help you decide how to proceed.

        There is a generic strategy for upgrade issues of this type – if it’s possible:

        EITHER enable the feature globally, then disable it for specific SQL statements with a hint
        OR disable the feature globally, then enable it for specific SQL statements with a hint

        Bearing in mind that optimizer_adaptive_features is the “master switch” for several different features, we can’t quite follow that advice, but if you want to take advantage of adaptive plans for just a few queries there’s a hint: /*+ adaptive_plan */ that might be a short term option for you. (Then you can always capture SQL Plan Baselines for the best plans once they’ve adapted themselves).

        The question of extremely dynamic data (which, in principle, could mean a requirement to change plans frequently and rapidly) isn’t met 100% by adaptive plans. The feature may help – especially if the critical queries tend to use literals rather than bind variables and therefore need to be optimised every time – but the topic is too large for a blog comment.

        Comment by Jonathan Lewis — August 19, 2015 @ 7:19 pm GMT Aug 19,2015 | Reply

        • Thanks Jonathan :)
          Last question: to enable it for a SQL statement we can use the hint /*+ adaptive_plan */;
          what would be the option to disable it for a particular SQL statement if we had enabled it globally?


          Comment by PP — August 22, 2015 @ 2:42 pm GMT Aug 22,2015

        • PP

          If you set the parameter to disable adaptive plans then the hint /*+ adaptive_plan */ in a query will (I assume, but haven’t tested) enable adaptive plans for that query. If you have allowed adaptive plans by leaving the parameter at its default then the hint /*+ no_adaptive_plan */ in a query will (again, I assume) stop the optimizer from generating an adaptive plan for that query.

          Note that there is a parameter optimizer_adaptive_reporting_only. If you set this to true the optimizer will create plans that are adaptive so that you can report them to see what the alternatives are and whether or not they would be taken, but it will not use them. Tim Hall has a number of articles on this and related features.
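          A small sketch of that reporting mode (this assumes the 12c 'adaptive' format modifier to dbms_xplan.display_cursor()):

          alter session set optimizer_adaptive_reporting_only = true;
          -- run the query, then report its plan including the adaptive details
          select * from table(dbms_xplan.display_cursor(format => '+adaptive'));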

          Comment by Jonathan Lewis — August 22, 2015 @ 7:14 pm GMT Aug 22,2015

  26. Thanks, I will implement and let you know the result :)

    Comment by PP — August 24, 2015 @ 10:40 pm GMT Aug 24,2015 | Reply
