Oracle Scratchpad

February 12, 2007

SQL Profiles – 2

Filed under: CBO, Execution plans, Hints, trace files — Jonathan Lewis @ 8:11 pm GMT Feb 12,2007

Following yesterday’s note on SQL Profiles, someone asked how I detect whether an opt_estimate hint has been used – with specific reference to the “index_scan” option. The reason for this particular choice is that the other opt_estimate hints have a highly visible impact in the 10053 trace files, but this one doesn’t.
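
If you want to reproduce the effect, the test can be driven along these lines – a minimal sketch, where t1, its column n1, and the literal predicate are hypothetical stand-ins for a suitable test case:

   alter session set events '10053 trace name context forever, level 1';

   select  /*+ opt_estimate(@sel$1, table, t1@sel$1, scale_rows=10) */
           *
   from    t1
   where   n1 = 15;

   alter session set events '10053 trace name context off';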

Here are a few extracts from the Single Table Access section of the trace:

a) Following: opt_estimate(@sel$1, table, t1@sel$1, scale_rows=10) (multiply row count by 10)

   Table: T1  Alias: T1
    Card: Original: 4000    >> Single Tab Card adjusted from: 10.00  to: 100.00
  Rounded: 100  Computed: 100.00  Non Adjusted: 10.00      

b) Following: opt_estimate(@sel$1, join, (t2@sel$1, t1@sel$1), scale_rows=225) (multiply join cardinality by 225)

  >> Join Card adjusted from 0.00  to: 1.00, prelen=2
Adjusted Join Cards: adjRatio=2666.67 cardHjSmj=1.00 cardHjSmjNPF=1.00
cardNlj=1.00 cardNSQ=1.00 cardNSQ_na=0.00

As you can see, the impact of the opt_estimate() shows up very clearly in these two cases. But when we try: opt_estimate(@sel$1, index_scan, t1@sel$1, t1_i1, scale_rows=0.05) (divide index selectivity by 20), we have to compare the before and after trace files (note particularly the values for ix_sel and ix_sel_with_filters):
Before:

   Access Path: index (AllEqRange)
    Index: T1_I1
    resc_io: 168.00  resc_cpu: 1278402
    ix_sel: 0.05  ix_sel_with_filters: 0.05
    Cost: 168.19  Resp: 168.19  Degree: 1
  Best:: AccessPath: IndexRange  Index: T1_I1
         Cost: 168.19  Degree: 1  Resp: 168.19  Card: 10.00  Bytes: 0     

After:

   Access Path: index (AllEqRange)
    Index: T1_I1
    resc_io: 12.00  resc_cpu: 89967
    ix_sel: 0.0025  ix_sel_with_filters: 0.0025
    Cost: 12.01  Resp: 12.01  Degree: 1
  Best:: AccessPath: IndexRange  Index: T1_I1
         Cost: 12.01  Degree: 1  Resp: 12.01  Card: 10.00  Bytes: 0     

In this case, we don’t get any indication of “adjusted values” – we actually have to compare the before and after traces to see what has changed, and infer from the drop in cost that a reduced selectivity has been used. This can be particularly awkward to interpret in cases where the targeted index is not a suitable candidate for a single table access path, but becomes available as an access path later in the join evolution.

[Back to part 1]

28 Comments »

  1. Thank you !

    Comment by Gregory — February 12, 2007 @ 10:06 pm GMT Feb 12,2007 | Reply

  2. I have a 10053 trace with these lines:
    Table: NARROW_TABLE Alias: NARROW_TABLE
    Card: Original: 132616.000000 >> Single Tab Card adjusted from:11.441291 to:28.000000
    Some SQL Profile was used for this SQL statement.
    Where is the information about this adjustment located?
    Is there a way to view this information from the data dictionary?

    Thank you.

    Comment by Yuri A.P. — January 28, 2008 @ 8:43 pm GMT Jan 28,2008 | Reply

  3. Yuri,

    The view dba_sql_profiles lists the profiles. The text of the query is stored as a CLOB column called sql_text in this view.

    If you have lots of profiles and it’s hard to find the relevant profile by querying this view, you could use explain plan to generate an execution plan for the query. It should report (in the notes section) something like:

    SQL profile “SYS_SQLPROF_01456abbc16a4000” used for this statement

    I think the value reported is stored as the name in the view.
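
    For example – a minimal sketch, where the statement being explained is a hypothetical stand-in:

    select name, category, status, sql_text
    from   dba_sql_profiles;

    explain plan for
    select * from t1 where n1 = 15;

    select * from table(dbms_xplan.display);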

    Comment by Jonathan Lewis — January 28, 2008 @ 10:25 pm GMT Jan 28,2008 | Reply

  4. Thanks for your answer!
    But I’m interested in something slightly different: in what form is the SQL Profile stored, and on what profile information does the CBO base a cardinality adjustment such as “Single Tab Card adjusted from:11.441291 to:28.000000”?
    I know a little about SQL Profiles, for example:

    SQL> select p.other_xml
      2  from dba_sqltune_plans p, dba_sql_profiles s
      3  where p.task_id=s.task_id
      4  and s.name='profile_lastdoc_1'
      5  and p.attribute='Using SQL profile'
      6  and p.id=1;

    But in this column I can't find anything about the cardinality adjustment factor.

    Comment by Yuri A.P. — January 29, 2008 @ 8:39 pm GMT Jan 29,2008 | Reply

  5. The content of the SQL profile is not stored in a “public” view, but it can be found in table sqlprof$attr – for example:

    
    select
           sp.sp_name, sa.attr#, sa.attr_val
    from
           sqlprof$      sp,
           sqlprof$attr  sa
    where
           sp.signature = sa.signature
    and     sp.category  = sa.category
    order by
           sp.sp_name,
           sa.attr#
    ;
    
    SP_NAME                             ATTR#
    ------------------------------ ----------
    ATTR_VAL
    -------------------------------------------------------------------------
    SYS_SQLPROF_0145f1e5167c4000            1
    OPT_ESTIMATE(@"SEL$1", JOIN, ("T2"@"SEL$1", "T1"@"SEL$1"), SCALE_ROWS=15)
    
    SYS_SQLPROF_0145f1e5167c4000            2
    OPT_ESTIMATE(@"SEL$1", TABLE, "T2"@"SEL$1", SCALE_ROWS=200)
    
    SYS_SQLPROF_0145f1e5167c4000            3
    OPTIMIZER_FEATURES_ENABLE(default)
    

    In this example, we have two adjustments – the single table cardinality of table t2 in query block sel$1 should be multiplied by 200; and the join cardinality of (t1,t2) when they are joined in query block sel$1 should be multiplied by 15.

    Comment by Jonathan Lewis — January 29, 2008 @ 10:22 pm GMT Jan 29,2008 | Reply

  6. Thank you very much!
    By the way, I have discovered that on 11g database such information can be retrieved from non-documented view DBMSHSXP_SQL_PROFILE_ATTR.
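
    Before relying on it, its definition (and exactly which columns it exposes) can be checked in the dictionary:

    select text
    from   dba_views
    where  view_name = 'DBMSHSXP_SQL_PROFILE_ATTR';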

    Comment by Yuri A.P. — January 30, 2008 @ 8:34 pm GMT Jan 30,2008 | Reply

  7. I have yet another question. I’m testing new features of 11g, for example multi-column statistics. I have written a script to repeat my experiment. The script includes the following steps: drop/create the table; gather “traditional” statistics and generate the plan; gather multi-column statistics and generate the plan; and finally execute a SQL tuning task, which creates the SQL profile (my previous question was about this), and generate the plan again. I have run this script many times with minor modifications, but at some run N+1 the SQL profile was not created! Can you recommend any method for diagnosing why the CBO does not produce the SQL profile? Is there a suitable trace event?

    Comment by Yuri A.P. — February 7, 2008 @ 8:24 pm GMT Feb 7,2008 | Reply

  8. Yuri, you might look at event _stn_trace – try setting it to 255 or 8191 (from memory it’s a bitmapped parameter, but I haven’t used it since the 10g beta).
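
    Something like this – a sketch only, since it’s a hidden (unsupported) parameter, so test it before relying on it:

    alter session set "_stn_trace" = 255;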

    Comment by Jonathan Lewis — February 7, 2008 @ 8:58 pm GMT Feb 7,2008 | Reply

  9. I reran my script without any changes: in one case a SQL profile was generated, in another case a different SQL profile was generated, and sometimes no SQL profile was generated at all. Looking into the SQL tuning task trace I discovered the following: each run of my script generated different SCALE_ROWS factors. Taking into account the test data generation method (dbms_random) and this statement from the “Performance Tuning Guide”:
    “SQL Profile addresses this problem by collecting additional information using sampling and partial execution techniques…”
    Is it possible that the combination of random data on the one hand and partial execution on the other leads to instability in SQL profile generation? Is there any “hidden” parameter that can force the CBO to use “full execution techniques”?

    Comment by Yuri A.P. — February 23, 2008 @ 11:50 am GMT Feb 23,2008 | Reply

  10. Yuri,

    Interesting observation – and perhaps all you need to demonstrate the problem is a 10046 trace so that you can see the sampling SQL that the tuning tool is producing.

    I haven’t checked this recently, but I guess the tuning tool samples each table (which may mean a random 32 blocks) using the visible predicates, then samples pair-wise joins, and so on.

    Your hypothesis that the randomness of (relatively small) samples could produce non-repeatable results sounds good to me, especially when combined with the effect of sampling on joins.

    If it’s the sampling that’s the issue there are two possible options I can think of that may help.

    One is to set optimizer_dynamic_sampling to 10 before running the test – if it applies at all in these circumstances, then it will cause full table sampling.

    The other is to set the time-limit on the optimisation phase to “unlimited”. The default is 30 minutes (I think) which may bias Oracle towards taking small samples.
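
    In code terms that translates to something like the following sketch – the sql_id and task name are hypothetical placeholders, and time_limit is in seconds:

    alter session set optimizer_dynamic_sampling = 10;

    declare
            v_task  varchar2(64);
    begin
            v_task := dbms_sqltune.create_tuning_task(
                    sql_id     => 'gwp663cqh5qbf',    -- hypothetical sql_id
                    task_name  => 'my_tuning_task',   -- hypothetical task name
                    time_limit => 36000               -- ten hours, effectively "unlimited"
            );
            dbms_sqltune.execute_tuning_task(v_task);
    end;
    /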

    Comment by Jonathan Lewis — February 24, 2008 @ 2:02 pm GMT Feb 24,2008 | Reply

  11. I have found an interesting result. When I do not gather optimizer statistics, and give the task enough time_limit, the SQL Tuning task reliably generates the best profile – probably because very aggressive dynamic sampling takes place. When optimizer statistics have been gathered, the unstable behaviour appears, and in that case the time_limit and optimizer_dynamic_sampling parameters have practically no influence on the SQL Tuning task results (and the time_limit is not too small, of course). I have published the scripts and results on my website (link below).

    Comment by Yuri A.P. — March 8, 2008 @ 2:36 pm GMT Mar 8,2008 | Reply

  12. Yuri,
    The switch to partition-wise joins in your test case is particularly interesting. It seems a little odd that this didn’t appear in every case.

    Perhaps this is a case of the tuning tool “trusting” the statistics as a basic guideline and working from there to correct the plan when the overall plan is so unsuitable that a sample is needed to adopt a completely different approach.

    One difficulty (for the optimizer) may be that you have a unique index that covers three columns where the first of the three is generated by a sequence and (if I’ve read the code correctly) is inherently unique.

    Comment by Jonathan Lewis — March 11, 2008 @ 6:31 am GMT Mar 11,2008 | Reply

  13. Yes, you have interpreted my code correctly. This is an imitation of my production system.
    Thank you very much.

    Comment by Yuri A.P. — March 20, 2008 @ 8:08 pm GMT Mar 20,2008 | Reply

  14. you may also want to check out:
    Subject: Automatic SQL Tuning – SQL Profiles
    Doc ID: Note:271196.1 Type: BULLETIN
    Last Revision Date: 15-MAY-2008 Status: PUBLISHED

    Comment by JM — May 29, 2008 @ 6:47 pm BST May 29,2008 | Reply

  15. […] SQL Profiles (10g) Filed under: CBO, Execution plans, Hints, Performance, Statistics, Tuning — Jonathan Lewis @ 7:43 pm UTC Feb 11,2007 [Forward to part 2] […]

    Pingback by SQL Profiles (10g) « Oracle Scratchpad — May 30, 2008 @ 9:43 am BST May 30,2008 | Reply

  16. […] Lewis has a couple of notes about SQL Profiles on his site here SQL Profiles (10g) and here SQL Profiles – 2 that are worth a look. I totally agree with his advice to make sure you know what the profile is […]

    Pingback by Kerry Osborne’s Blog » Blog Archive » SQL Tuning Advisor — October 2, 2008 @ 8:29 pm BST Oct 2,2008 | Reply

  17. […] Using My Profile, where he expresses some doubts as well – he’s also written a bit about SQL Profiles on his site as you might […]

    Pingback by Kerry Osborne’s Oracle Blog » Blog Archive SQLT - coe_xfr_sql_profile.sql - Kerry Osborne’s Oracle Blog — July 24, 2010 @ 8:00 pm BST Jul 24,2010 | Reply

  18. If I have a query which performs best under the rule-based (RBO) execution path, and I would like to use that path even in cost-based mode, can it be done using SQL profiles?

    This was done by the following steps:
    1. Alter the optimizer mode to rule.
    2. Create a profile of the query from step 1. (In dba_sql_profiles it puts a /*+ rule */ hint in the sql_text field.)
    3. Change the mode to cost.
    4. Alter the session to the category used in step 2.
    5. Alter the optimizer mode to cost.
    6. Run the query, followed by explain plan, to see if it has actually used the outline hints captured under rule.
    Since the SQL will not match – the /*+ rule */ hint is an extra literal in the sql_text of dba_sql_profiles – is there a way to update this, or some other way you can suggest to handle it?

    Basically, to avoid restructuring a query, can the optimizer be forced to ignore the CBO path and use a stored profile or outline instead?

    Thx

    Comment by Anil Bishnoie — October 6, 2010 @ 12:27 pm BST Oct 6,2010 | Reply

    • I am surprised that the stored text for the SQL adds a /*+ rule */ hint – but I haven’t tried this particular type of test.

      Since profiles are about numbers, rather than plans, I would use stored outlines to achieve your target – run the query with create_stored_outlines enabled and optimizer_mode set to rule, and the actual text and actual rule-based plan should be stored. Then switch back to cost-based and enable use_stored_outlines.

      I think there’s a feature in 11.2 that would then allow you to convert a stored outline into an SQL Baseline. (Remember, SQL Profiles allow execution plans to change over time because mostly they use “arithmetic” hints, but a stored outline or SQL Baseline fixes a plan through “micro-management” hints.)
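
      In outline form – a sketch of the session settings (note that use_stored_outlines can only be set by alter session/system; it is not an init.ora parameter):

      alter session set optimizer_mode = rule;
      alter session set create_stored_outlines = true;

      -- run the target query here so that its rule-based plan is captured

      alter session set create_stored_outlines = false;
      alter session set optimizer_mode = all_rows;
      alter session set use_stored_outlines = true;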

      Comment by Jonathan Lewis — October 7, 2010 @ 6:49 pm BST Oct 7,2010 | Reply

  19. “_stn_trace” does not seem to be available anymore in 11G R2.

    Comment by Guy Lambregts — September 15, 2011 @ 9:05 am BST Sep 15,2011 | Reply

  20. In 12c, “_stn_trace” has been transformed from an underscore parameter to an event

    Comment by Dontcheff — October 24, 2012 @ 12:02 am BST Oct 24,2012 | Reply

  21. […] if you check Jonathan Lewis’s article on SQL Profiles and the comments after it, you will notice that: _stn_trace does not seem to be available anymore […]

    Pingback by SQL Tuning Advisor Internals « Julian Dontcheff's Database Blog — January 27, 2013 @ 5:07 pm GMT Jan 27,2013 | Reply

  22. Hello,

    Thank you for the detailed information. I did something similar: a problematic SQL statement was taking longer than normal, so I slapped a SQL profile on it and it ran fine. But come the next day, the profile is still there and we are back to slowness again.

    Any idea why?

    Regards.

    Comment by pavan — August 26, 2014 @ 6:46 pm BST Aug 26,2014 | Reply

    • Pavan,

      I can tell you what you should be looking at:
      a) What was the normal plan before the profile
      b) What was the slow plan before the profile
      c) What was the fine plan after creating the profile
      d) What was the slow plan after creating profile

      A simple explanation would be that (a) and (b) were the same, but some feature of the data requested meant the normal plan had to take longer because it was processing more data than normal. Producing a profile for the abnormal data set may have produced a plan (c) that was faster for the abnormal data set, but turned out to be slower (d) for the typical data set.
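
      If the statement has been captured by the AWR you can pull its historical plans for comparison with something like this (the substitution variable stands in for the real sql_id):

      select * from table(dbms_xplan.display_awr('&m_sql_id'));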

      Comment by Jonathan Lewis — August 27, 2014 @ 7:32 am BST Aug 27,2014 | Reply

      • Hello Jonathan,

        Thank you for the insight into the issue. I noticed for this SQL that every time a new sql_id is generated, when a different set of bind variables is keyed in, it slows down, and then I am back to running the SQL tuning advisor, which recommends creating a new profile for that sql_id. Is there a way to bypass the issue, instead of creating multiple SQL profiles for multiple sql_ids?

        Thank You.

        Comment by pavan — August 27, 2014 @ 3:11 pm BST Aug 27,2014 | Reply

        • Pavan,

          You wouldn’t get a new SQL_ID for a new set of bind variables – it’s either a new set of literal values, or perhaps you’ve got a code generator that generates a new table alias (or set of column aliases) every time it creates the “same” SQL statement. In the latter case there’s nothing you can do about it; but if it’s a new set of literals with the text otherwise unchanged, and the same profile will be suitable for each different set of literals, then you can call the accept_sql_profile() procedure with its “force_match” input parameter set to true. (This has an effect analogous to the cursor_sharing init parameter – it allows the optimizer to match the profile to any statement that looks the same after hiding the literals.)
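
          A sketch of the call – the task and profile names here are hypothetical:

          declare
                  v_name  varchar2(64);
          begin
                  v_name := dbms_sqltune.accept_sql_profile(
                          task_name   => 'my_tuning_task',   -- hypothetical tuning task
                          name        => 'my_profile',       -- hypothetical profile name
                          force_match => true
                  );
          end;
          /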

          Comment by Jonathan Lewis — August 27, 2014 @ 4:14 pm BST Aug 27,2014

  23. […] SQL available to see the contents of an SQL Profile in 11g and 12c. (I published some simple code several years ago for 10g, but Oracle changed the base tables in 11g). The answer is yes, probably on the Internet somewhere, […]

    Pingback by dbms_sqldiag | Oracle Scratchpad — June 12, 2017 @ 12:48 pm BST Jun 12,2017 | Reply

