Oracle Scratchpad

May 11, 2009

Cardinality feedback

Filed under: CBO,Execution plans,Statistics,Troubleshooting — Jonathan Lewis @ 8:09 pm BST May 11,2009

One of my favourite presentations at Collaborate 09 was a quick tip (30 minutes) on Tuning by Cardinality Feedback – i.e. comparing the optimizer’s estimates of cardinality with the actual rowcounts returned when you run the query.

The strategy is one that I first saw demonstrated in a presentation by Wolfgang Breitling a few years ago at one of the Hotsos seminars – but this latest example demonstrated an important variation on the theme in a short, precise, lesson.

I’ve asked the author (Michelle Deng of Sanofi Aventis) if I could post the presentation on my blog and she said I could, so here’s a pdf file of the slides. Since you won’t have Michelle’s commentary running in time with the slides, I’ve set the file to open in a new window so that you can see the slides and read my comments at the same time.

The first few slides describe what we mean by cardinality and tell us ways in which we can get at the estimated cardinality (E-rows) and actual cardinality (A-rows).
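As a quick sketch of one way to capture both figures in a single pass (the table and column names here are invented for illustration): run the query with the gather_plan_statistics hint, then report the cursor’s run-time statistics:

```sql
-- enable row source statistics for this one execution
select /*+ gather_plan_statistics */ count(*)
from   demo_table t
where  t.status_col = 'OPEN';

-- report the plan with the Starts, E-Rows and A-Rows columns
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Alternatively, setting statistics_level to all at session level has the same effect without editing the SQL (at a potentially noticeable run-time cost).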

Slide 9 shows us a query that took more than 22 hours to complete and the view that the query was based on. The week_fact is a very large partitioned table in that view and you might notice it is referenced twice: once as a scalar subquery in the from clause (where a couple of “)” have turned into “|” symbols) and then again as the main source of data from the view.

Slide 10 shows the execution plan (and estimated cardinality) from running explain plan and calling dbms_xplan.display, and you can see that Michelle has highlighted operation 19 – the full scan of prod_fctr_mv and the filter predicates used.

You will note that operation 8 is a nested loop, which joins the result of a hash join at operation 9 (estimated cardinality one) to partition 5 of the week_fact table using the week_fact_idx_ie3 index at operations 23, 24 and 25.

Results with a cardinality of one should always be treated with a little suspicion, and this part of the plan is suspect. The cardinality of one starts with the (necessary) full scan of partition 5 of week_fact, which drives a nested loop with the full scan of prod_fctr_mv that Michelle has highlighted.

At this point, we could consider running the query again with the /*+ gather_plan_statistics */ hint in place – and in another 22 hours or so we would be able to compare estimates with actuals to see how accurate that cardinality estimate of one row really was!

Since running the whole query again was clearly not sensible (someone asked about using the 10g Tuning Advisor and was told that it hadn’t completed a run) Michelle used the filter predicates for operation 19 to query prod_fctr_mv and see how much data would be returned at that particular point in the plan.  The result, along with a few others, appears in slide 11.  Oracle’s estimate at this point is one row; but the actual number of rows is 42, which means that the big index range scan required by the plan would be executed 42 times, rather than once – and that’s a pretty big mistake given the amount of data that’s likely to be accessed by that range scan.

There were four separate conditions against prod_fctr_mv, and on slides 12 and 13  we see that the optimizer has frequency histograms on the four relevant columns, and can predict accurately the number of rows returned by each individual predicate – which means the problem is in the combination of the predicates.
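A sketch of the sort of dictionary check involved: the histogram column of user_tab_col_statistics shows which columns carry frequency histograms.

```sql
-- columns reported as FREQUENCY can be estimated accurately one at a
-- time; the error in this example comes from combining the predicates
select column_name, num_distinct, histogram
from   user_tab_col_statistics
where  table_name = 'PROD_FCTR_MV'
order by column_name;
```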

Slide 14 defines the problem and demonstrates the degree of correlation, and slide 15 lists a few strategies for dealing with the problem. In this case Michelle adopted the strategy of setting the parameter optimizer_dynamic_sampling to level 4 – which means that when a query applies two or more predicates to a table that table will be sampled (32 or 64 blocks by default, depending on whether or not the table already has any statistics) to check the dependency of those predicates, thus allowing the optimizer to generate a dynamic selectivity as the query is optimised.
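For reference, the change can be made session-wide, or targeted at the single problem table with a hint; a sketch (the predicate columns are invented for illustration):

```sql
-- session-wide: level 4 samples any table that has two or more
-- single-table predicates, to check for dependent columns
alter session set optimizer_dynamic_sampling = 4;

-- or target just the critical table with a hint
select /*+ dynamic_sampling(p 4) */ count(*)
from   prod_fctr_mv p
where  p.mkt_id  = 101
and    p.mkt_ssk = 202;
```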

With this setting the optimizer discovered the cardinality problem with prod_fctr_mv and changed the plan from one that took over 22 hours to one that completed in less than 44 minutes, using the plan shown in slide 16. The change in plan is largely due to the more accurate cardinality estimate (highlighted in line 34 of the plan) for the full scan of prod_fctr_mv.


The key point I wanted to emphasise about this example was that you can compare estimated cardinality with actual row counts to pinpoint the line in the execution plan where the optimizer broke down – but you don’t necessarily need to run the query to find that point. It’s a lesson that everyone should learn.


A common oversight when talking about comparing estimates and actuals is forgetting to mention that E-rows is per call to the operation while A-rows is summed over all calls, so you should be thinking of A-rows = E-rows * Starts. (See this blog note.)

Even then, for partitioned tables and parallel execution, there’s another complication to consider (discussed here).



  1. Oracle’s estimate at this point is one row – but the actual number of rows is 42

    You couldn’t make it up!

    That’s a very elegant and direct presentation – thanks to both of you for sharing it here.

    Comment by Doug Burns — May 11, 2009 @ 11:13 pm BST May 11,2009 | Reply

  2. For anyone puzzled by Doug’s starting comment, search Google for the phrase “the answer to life the universe and everything”.

    Comment by Jonathan Lewis — May 12, 2009 @ 6:44 am BST May 12,2009 | Reply

  3. Jonathan,

    Not sure why you referred to the inline view as a scalar subquery.

    Though the presentation and your article mention A-rows, is it fair to say that the test case here is all about tuning using the explain plan only? Having worked on a data warehouse at another pharma where selecting from v$session was a privilege I never enjoyed I recognize here the only tuning method I could use in that environment.

    There is obviously very good information in Michelle’s presentation and your article is as thorough and clear as usual. Looking for some conclusions though, would it be sensible to say that optimizer_dynamic_sampling should be set to 4 in a data warehouse environment? Or would you leave that at its default and maybe change it at session or query level only when deemed necessary?

    Not sure if the following are fair comments or not!

    Since I know the difference between “nrx” and “trx” I cannot but wonder if, at the end of the day, 44 minutes was acceptable for a query aggregating prescriptions for one market only down to a couple of rows. I know pharmas compete in general in about 10-12 therapeutic markets and, assuming they used that somehow in their partitioning scheme (it seems so), we’re looking at targeting a subset of 3-4 million rows if everything is done right.

    They seem to use these “%_rowid” columns as surrogate keys. The inline view on the fact table in order to have “f.week_rowid >= (maxweek.mx_week_rowid -51)” seems like a rather odd way to target the last 52 weeks of market data.

    I guess I’m just trying to highlight the difference between tuning a business query vs. tuning numbers in a query. The target of this presentation was obviously to show a shortcoming of the optimizer and ways to address it. But moving beyond that, maybe a high frequency of runaway queries like this is also an indication of a design in need of re-assessment.

    Comment by 42 — May 12, 2009 @ 8:40 pm BST May 12,2009 | Reply

  4. 42,

    Generically it is an inline view, but it is also a scalar subquery; I used that term only to emphasise the fact that it is expected to supply a single row and column to the rest of the query and doesn’t have any complicated impact on the rest of the query.

    Although I’ve mentioned A-rows without our having seen the A-rows column, it is still about comparing the optimizer’s prediction with “better knowledge” – so it’s not really just about explain plan.

    You could make a very good argument for setting optimizer_dynamic_sampling to 4 or 5 on a data warehouse; it doesn’t solve every problem, though, so the choice still depends on how often you think such a setting would make a significant difference. Personally, if I saw a lot of queries that were clearly going bad because of the dependent columns issue, I would advise testing at level 4 or 5. In this specific case, and in the absence of more detailed knowledge of the use of this query and the rest of the system, I might have chosen to associate a dynamic_sampling() hint with the critical table rather than setting a system parameter. [I don’t recall whether Michelle made any comment about being unable to modify the text or whether the application was 3rd party.]

    I agree with your differentiation between tuning queries – especially tuning queries by numbers – and tuning an application; and one of the first questions to ask about a long-running query is: “what would be a reasonable amount of work to derive this result?”.

    Beyond noticing the rather large partition scan to find a max value, though, I didn’t try to work out what the query was supposed to achieve and whether it could have been done completely differently, I just sat back and enjoyed the explanation of this particular tuning method.

    Comment by Jonathan Lewis — May 12, 2009 @ 9:01 pm BST May 12,2009 | Reply

  5. What about creating an index on those 3 columns to fix the problem? The explain plan showed that it was applying 3 separate conditions at the same time to that table, each condition on one of these columns. An index on (MKT_ID, MKT_SSK, EXTRACT_MKT_SSK) would satisfy that combined set of predicates.

    My actual reasoning was not about improving performance by eliminating the full table scan, but about providing the optimizer with better data about the number of distinct value combinations over those three columns. For an index there is the DISTINCT_KEYS statistic, visible in USER_IND_STATISTICS. I wonder if the presence of such an index with such a statistic would enable the optimiser to make a better guess on the number of matching rows within the execution plan i.e. the cardinality at that point in the plan. And as a result it would choose a different execution plan, because the costs were now different i.e. more accurate.

    As I said, I was thinking from the point of view of providing the optimiser with more accurate statistics, and wondering how else it might know that these columns were correlated. And I realised that it has this DISTINCT_KEYS statistic for indexes, which would be much more accurate than simply multiplying together the NUM_DISTINCT value of each column in this case. If this worked, it might also have the side benefit of avoiding the full table scan on PROD_FCTR_MV.

    Of course indexes take up space, etc., so the benefits versus the costs need to be weighed up. But it has also got me thinking whether it would be good practice to create composite indexes across correlated columns so that the optimizer has better statistics about them, and can produce better cardinality estimates within execution plans? Just an idea on my part. I can see that I need to create some correlated data and see what happens under different queries.
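    A sketch of the idea (index name invented; the column list is the one suggested above):

    ```sql
    create index prod_fctr_mv_ix1
        on prod_fctr_mv (mkt_id, mkt_ssk, extract_mkt_ssk);

    -- the number of distinct combinations across the three columns
    select index_name, distinct_keys
    from   user_ind_statistics
    where  index_name = 'PROD_FCTR_MV_IX1';
    ```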


    Comment by John Brady — May 13, 2009 @ 9:56 am BST May 13,2009 | Reply

    • John,

      Oracle can use the DISTINCT_KEYS of an index in some cases to get a more precise cardinality estimate (with an additional setting). See this topic for details.

      Comment by Timur Akhmadeev — May 14, 2009 @ 5:08 pm BST May 14,2009 | Reply

  6. Many thanks for that Jonathan (and Michelle), a lesson indeed not to be lost.

    Comment by SeánMacGC — May 13, 2009 @ 2:36 pm BST May 13,2009 | Reply

  7. John,

    That is another possible strategy and, given how small the critical table is, it’s a good one to consider.

    Just to emphasise your point – the intention is NOT to change a full tablescan into an indexed access path by creating an index, the intention is to tell the optimizer about the number of combinations available in the distinct_keys statistic of the index.

    Depending on the version of Oracle, and the uniqueness of the index, the distinct_keys statistic can be used in various ways by the optimizer.

    Comment by Jonathan Lewis — May 13, 2009 @ 7:18 pm BST May 13,2009 | Reply

  8. In 11g, isn’t this also a candidate for extended statistics on a virtual column, as described in ?

    Comment by James Park — May 14, 2009 @ 3:00 am BST May 14,2009 | Reply

  9. James,

    Correct – the “multi-column” variant of 11g’s extended statistics would probably have worked in this case.

    We might have to be a little careful, though, as we can’t be certain whether or not the number of distinct combinations was less than 255 – which would dictate whether we wanted to create a frequency histogram, or simply rely on the number of distinct combinations.
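    A sketch of the 11g mechanics (column names from the slides; the method_opt choice would depend on that distinct-combinations question):

    ```sql
    -- create the column group ("extension") over the correlated columns
    select dbms_stats.create_extended_stats(
               ownname   => user,
               tabname   => 'PROD_FCTR_MV',
               extension => '(MKT_ID, MKT_SSK, EXTRACT_MKT_SSK)'
           ) extension_name
    from   dual;

    -- then gather stats so the group gets its own num_distinct
    -- and, possibly, its own histogram
    begin
        dbms_stats.gather_table_stats(
            ownname    => user,
            tabname    => 'PROD_FCTR_MV',
            method_opt => 'for all columns size auto'
        );
    end;
    /
    ```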

    Comment by Jonathan Lewis — May 14, 2009 @ 8:50 am BST May 14,2009 | Reply

  10. Hi Jonathan,
    Okay, we can compare E-rows and A-rows to find out where the problem lies. Wouldn’t it be nice if Oracle could do this itself? After completing a query it can (theoretically) compare predicted and actual numbers and “fine-tune” statistics (probably new types of statistics) to reflect the difference. And the query will run faster next time, automagically.
    Too fantastic? What do you think about it?

    Comment by Egor — May 20, 2009 @ 7:49 am BST May 20,2009 | Reply

  11. Jonathan,

    I just wanted to point out that in cases where you have statements that simply take too long to complete for a full analysis (e.g. using DBMS_XPLAN.DISPLAY_CURSOR), it can still be useful to run them with row source statistics enabled, because (at least in recent releases) the row source statistics get refreshed even when canceling the execution after a while.

    So using e.g. DBMS_XPLAN.DISPLAY_CURSOR on that incomplete execution reveals the part of the execution plan where the time was spent so far, and allows you to pinpoint any discrepancies between estimates and actual row source statistics without having to wait for the statement to complete.

    I guess this should have worked with Michelle’s particular example, too.
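    As a sketch of the mechanics (the sql_id and child number would come from v$sql): run the statement with the gather_plan_statistics hint, cancel it after a while, then from any session:

    ```sql
    -- find the cursor of the canceled statement
    select sql_id, child_number
    from   v$sql
    where  sql_text like 'select /*+ gather_plan_statistics */%';

    -- the A-Rows and Starts columns show how far each operation had got
    select *
    from   table(dbms_xplan.display_cursor('&sql_id', &child_no, 'ALLSTATS LAST'));
    ```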


    Comment by Randolf Geist — May 20, 2009 @ 9:21 am BST May 20,2009 | Reply

  12. Egor,
    Isn’t that what adaptive cursor sharing in 11g is about – albeit limited to circumstances where there may be skew relating to bind variables?

    Comment by Gary — May 20, 2009 @ 10:39 pm BST May 20,2009 | Reply

  13. […] Lewis' review of a Collaborate '09 presentation on tuning by cardinality feedback, and the slides from that presentation, helped me refresh my approach to a report tuning […]

    Pingback by Only Four Left… » Recently relevant (to me) links — June 21, 2010 @ 10:16 pm BST Jun 21,2010 | Reply

  14. On the comment from Egor above: 11g provides “Cardinality Feedback used for this statement” and changes the plan dynamically. I have noticed this at one of my customer sites and plan to write about it. The plan changed for the better, in my case, after a few executions. This can be controlled by _optimizer_use_feedback.


    Comment by Vivek Sharma — September 14, 2010 @ 4:55 pm BST Sep 14,2010 | Reply

  15. Hi Jonathan,

    I have the following query and I generated the execution plan for it. I am not able to understand the cardinality.

    Query : 
    WHERE T_ACCOUNT.AC_SK            IN 
    AND T_ACCT_SUM.ACCT_BAL  <= 1000.00
    AND T_ACCOUNT.AC_SK               = T_ACCT_SUM.AC_SK
    AND T_ACCT_SUM.BAL_DATE = '31-OCT-13';
    Execution plan :
    | Id  | Operation                             | Name          | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                      |               |   132 |  4752 |   124   (2)|
    |   1 |  HASH UNIQUE                          |               |   132 |  4752 |   124   (2)|
    |   2 |   MERGE JOIN                          |               |   132 |  4752 |   123   (1)|
    |   3 |    PARTITION RANGE SINGLE             |               |   128K|  2501K|    85   (0)|
    |   4 |     INLIST ITERATOR                   |               |       |       |            |
    |*  5 |      TABLE ACCESS BY LOCAL INDEX ROWID| T_ACCT_SUM    |   128K|  2501K|    85   (0)|
    |*  6 |       INDEX RANGE SCAN                | PK_T_ACCT_SUM |     5 |       |    32   (0)|
    |*  7 |    SORT JOIN                          |               |   106 |  1696 |    39   (3)|
    |   8 |     INLIST ITERATOR                   |               |       |       |            |
    |   9 |      TABLE ACCESS BY INDEX ROWID      | T_ACCOUNT     |   106 |  1696 |    38   (0)|
    |* 10 |       INDEX UNIQUE SCAN               | PK_T_ACCOUNT  |   106 |       |    29   (0)|
    Predicate Information (identified by operation id):
       5 - filter("T_ACCT_SUM"."ACCT_BAL"<=1000.00)
       6 - access(("T_ACCT_SUM"."AC_SK"=167056 OR "T_ACCT_SUM"."AC_SK"=276364 OR
                  "T_ACCT_SUM"."AC_SK"=292510 OR "T_ACCT_SUM"."AC_SK"=375386 OR 
    "T_ACCT_SUM"."AC_SK"=375387 OR
                  "T_ACCT_SUM"."AC_SK"=410216 OR "T_ACCT_SUM"."AC_SK"=550806 OR 
    "T_ACCT_SUM"."AC_SK"=584324 OR
                  "T_ACCT_SUM"."AC_SK"=584325 OR "T_ACCT_SUM"."AC_SK"=585607 OR 
    "T_ACCT_SUM"."AC_SK"=594305 OR
                  "T_ACCT_SUM"."AC_SK"=594306 OR "T_ACCT_SUM"."AC_SK"=597625 OR 
    "T_ACCT_SUM"."AC_SK"=597626 OR
                  "T_ACCT_SUM"."AC_SK"=597627 OR "T_ACCT_SUM"."AC_SK"=597628 OR 
    "T_ACCT_SUM"."AC_SK"=597629 OR
                  "T_ACCT_SUM"."AC_SK"=597632 OR "T_ACCT_SUM"."AC_SK"=597633 OR 
    "T_ACCT_SUM"."AC_SK"=597634 OR
                  "T_ACCT_SUM"."AC_SK"=597636 OR "T_ACCT_SUM"."AC_SK"=597637 OR 
    "T_ACCT_SUM"."AC_SK"=597638 OR
                  "T_ACCT_SUM"."AC_SK"=597639 OR "T_ACCT_SUM"."AC_SK"=597640 OR 
    "T_ACCT_SUM"."AC_SK"=597641 OR
                  "T_ACCT_SUM"."AC_SK"=597642 OR "T_ACCT_SUM"."AC_SK"=597643 OR 
    "T_ACCT_SUM"."AC_SK"=597644 OR
                  "T_ACCT_SUM"."AC_SK"=597645 OR "T_ACCT_SUM"."AC_SK"=597646 OR 
    "T_ACCT_SUM"."AC_SK"=597647 OR
                  "T_ACCT_SUM"."AC_SK"=597648 OR "T_ACCT_SUM"."AC_SK"=597649 OR 
    "T_ACCT_SUM"."AC_SK"=597650 OR
                  "T_ACCT_SUM"."AC_SK"=597651 OR "T_ACCT_SUM"."AC_SK"=597652 OR 
    "T_ACCT_SUM"."AC_SK"=597653 OR
                  "T_ACCT_SUM"."AC_SK"=597654 OR "T_ACCT_SUM"."AC_SK"=597655 OR 
    "T_ACCT_SUM"."AC_SK"=597656 OR
                  "T_ACCT_SUM"."AC_SK"=597657 OR "T_ACCT_SUM"."AC_SK"=597658 OR 
    "T_ACCT_SUM"."AC_SK"=597659 OR
                  "T_ACCT_SUM"."AC_SK"=597660 OR "T_ACCT_SUM"."AC_SK"=597661 OR 
    "T_ACCT_SUM"."AC_SK"=597662 OR
                  "T_ACCT_SUM"."AC_SK"=597663 OR "T_ACCT_SUM"."AC_SK"=597664 OR 
    "T_ACCT_SUM"."AC_SK"=597665 OR
                  "T_ACCT_SUM"."AC_SK"=608683 OR "T_ACCT_SUM"."AC_SK"=709753 OR 
    "T_ACCT_SUM"."AC_SK"=739855 OR
                  "T_ACCT_SUM"."AC_SK"=769598 OR "T_ACCT_SUM"."AC_SK"=769599 OR 
    "T_ACCT_SUM"."AC_SK"=769600 OR
                  "T_ACCT_SUM"."AC_SK"=789267 OR "T_ACCT_SUM"."AC_SK"=831812 OR 
    "T_ACCT_SUM"."AC_SK"=831813 OR
                  "T_ACCT_SUM"."AC_SK"=831814 OR "T_ACCT_SUM"."AC_SK"=831815 OR 
    "T_ACCT_SUM"."AC_SK"=831816 OR
                  "T_ACCT_SUM"."AC_SK"=831817 OR "T_ACCT_SUM"."AC_SK"=886799 OR 
    "T_ACCT_SUM"."AC_SK"=891301 OR
                  "T_ACCT_SUM"."AC_SK"=891302 OR "T_ACCT_SUM"."AC_SK"=896746 OR 
    "T_ACCT_SUM"."AC_SK"=905441 OR
                  "T_ACCT_SUM"."AC_SK"=928452 OR "T_ACCT_SUM"."AC_SK"=952683 OR 
    "T_ACCT_SUM"."AC_SK"=971664 OR
                  "T_ACCT_SUM"."AC_SK"=971665 OR "T_ACCT_SUM"."AC_SK"=971666 OR 
    "T_ACCT_SUM"."AC_SK"=971667 OR
                  "T_ACCT_SUM"."AC_SK"=971668 OR "T_ACCT_SUM"."AC_SK"=971669 OR 
    "T_ACCT_SUM"."AC_SK"=971670 OR
                  "T_ACCT_SUM"."AC_SK"=986885 OR "T_ACCT_SUM"."AC_SK"=987360 OR 
    "T_ACCT_SUM"."AC_SK"=987859 OR
                  "T_ACCT_SUM"."AC_SK"=992765 OR "T_ACCT_SUM"."AC_SK"=1069330 OR 
    "T_ACCT_SUM"."AC_SK"=1070973 OR
                  "T_ACCT_SUM"."AC_SK"=1075750 OR "T_ACCT_SUM"."AC_SK"=1201860 OR 
    "T_ACCT_SUM"."AC_SK"=1201861 OR
                  "T_ACCT_SUM"."AC_SK"=1201862 OR "T_ACCT_SUM"."AC_SK"=1283345 OR 
    "T_ACCT_SUM"."AC_SK"=1283346 OR
                  "T_ACCT_SUM"."AC_SK"=1312578 OR "T_ACCT_SUM"."AC_SK"=1312579 OR 
    "T_ACCT_SUM"."AC_SK"=1312702 OR
                  "T_ACCT_SUM"."AC_SK"=1312703 OR "T_ACCT_SUM"."AC_SK"=1312704 OR 
    "T_ACCT_SUM"."AC_SK"=1415629 OR
                  "T_ACCT_SUM"."AC_SK"=1446798 OR "T_ACCT_SUM"."AC_SK"=1446799 OR 
    "T_ACCT_SUM"."AC_SK"=1520530 OR
                  "T_ACCT_SUM"."AC_SK"=1834159 OR "T_ACCT_SUM"."AC_SK"=1859876 OR 
    "T_ACCT_SUM"."AC_SK"=1894444 OR
                  "T_ACCT_SUM"."AC_SK"=1932920 OR "T_ACCT_SUM"."AC_SK"=1964377 OR 
    "T_ACCT_SUM"."AC_SK"=2033976 OR
                  "T_ACCT_SUM"."AC_SK"=2077465 OR "T_ACCT_SUM"."AC_SK"=2185914) AND
       7 - access("T_ACCOUNT"."AC_SK"="T_ACCT_SUM"."AC_SK")
      10 - access("T_ACCOUNT"."AC_SK"=167056 OR "T_ACCOUNT"."AC_SK"=276364 OR 
    "T_ACCOUNT"."AC_SK"=292510 OR
                  "T_ACCOUNT"."AC_SK"=375386 OR "T_ACCOUNT"."AC_SK"=375387 OR 
    "T_ACCOUNT"."AC_SK"=410216 OR
                  "T_ACCOUNT"."AC_SK"=550806 OR "T_ACCOUNT"."AC_SK"=584324 OR 
    "T_ACCOUNT"."AC_SK"=584325 OR
                  "T_ACCOUNT"."AC_SK"=585607 OR "T_ACCOUNT"."AC_SK"=594305 OR 
    "T_ACCOUNT"."AC_SK"=594306 OR
                  "T_ACCOUNT"."AC_SK"=597625 OR "T_ACCOUNT"."AC_SK"=597626 OR 
    "T_ACCOUNT"."AC_SK"=597627 OR
                  "T_ACCOUNT"."AC_SK"=597628 OR "T_ACCOUNT"."AC_SK"=597629 OR 
    "T_ACCOUNT"."AC_SK"=597632 OR
                  "T_ACCOUNT"."AC_SK"=597633 OR "T_ACCOUNT"."AC_SK"=597634 OR 
    "T_ACCOUNT"."AC_SK"=597636 OR
                  "T_ACCOUNT"."AC_SK"=597637 OR "T_ACCOUNT"."AC_SK"=597638 OR 
    "T_ACCOUNT"."AC_SK"=597639 OR
                  "T_ACCOUNT"."AC_SK"=597640 OR "T_ACCOUNT"."AC_SK"=597641 OR 
    "T_ACCOUNT"."AC_SK"=597642 OR
                  "T_ACCOUNT"."AC_SK"=597643 OR "T_ACCOUNT"."AC_SK"=597644 OR 
    "T_ACCOUNT"."AC_SK"=597645 OR
                  "T_ACCOUNT"."AC_SK"=597646 OR "T_ACCOUNT"."AC_SK"=597647 OR 
    "T_ACCOUNT"."AC_SK"=597648 OR
                  "T_ACCOUNT"."AC_SK"=597649 OR "T_ACCOUNT"."AC_SK"=597650 OR 
    "T_ACCOUNT"."AC_SK"=597651 OR
                  "T_ACCOUNT"."AC_SK"=597652 OR "T_ACCOUNT"."AC_SK"=597653 OR 
    "T_ACCOUNT"."AC_SK"=597654 OR
                  "T_ACCOUNT"."AC_SK"=597655 OR "T_ACCOUNT"."AC_SK"=597656 OR 
    "T_ACCOUNT"."AC_SK"=597657 OR
                  "T_ACCOUNT"."AC_SK"=597658 OR "T_ACCOUNT"."AC_SK"=597659 OR 
    "T_ACCOUNT"."AC_SK"=597660 OR
                  "T_ACCOUNT"."AC_SK"=597661 OR "T_ACCOUNT"."AC_SK"=597662 OR 
    "T_ACCOUNT"."AC_SK"=597663 OR
                  "T_ACCOUNT"."AC_SK"=597664 OR "T_ACCOUNT"."AC_SK"=597665 OR 
    "T_ACCOUNT"."AC_SK"=608683 OR
                  "T_ACCOUNT"."AC_SK"=709753 OR "T_ACCOUNT"."AC_SK"=739855 OR 
    "T_ACCOUNT"."AC_SK"=769598 OR
                  "T_ACCOUNT"."AC_SK"=769599 OR "T_ACCOUNT"."AC_SK"=769600 OR 
    "T_ACCOUNT"."AC_SK"=789267 OR
                  "T_ACCOUNT"."AC_SK"=831812 OR "T_ACCOUNT"."AC_SK"=831813 OR 
    "T_ACCOUNT"."AC_SK"=831814 OR
                  "T_ACCOUNT"."AC_SK"=831815 OR "T_ACCOUNT"."AC_SK"=831816 OR 
    "T_ACCOUNT"."AC_SK"=831817 OR
                  "T_ACCOUNT"."AC_SK"=886799 OR "T_ACCOUNT"."AC_SK"=891301 OR 
    "T_ACCOUNT"."AC_SK"=891302 OR
                  "T_ACCOUNT"."AC_SK"=896746 OR "T_ACCOUNT"."AC_SK"=905441 OR 
    "T_ACCOUNT"."AC_SK"=928452 OR
                  "T_ACCOUNT"."AC_SK"=952683 OR "T_ACCOUNT"."AC_SK"=971664 OR 
    "T_ACCOUNT"."AC_SK"=971665 OR
                  "T_ACCOUNT"."AC_SK"=971666 OR "T_ACCOUNT"."AC_SK"=971667 OR 
    "T_ACCOUNT"."AC_SK"=971668 OR
                  "T_ACCOUNT"."AC_SK"=971669 OR "T_ACCOUNT"."AC_SK"=971670 OR 
    "T_ACCOUNT"."AC_SK"=986885 OR
                  "T_ACCOUNT"."AC_SK"=987360 OR "T_ACCOUNT"."AC_SK"=987859 OR 
    "T_ACCOUNT"."AC_SK"=992765 OR
                  "T_ACCOUNT"."AC_SK"=1069330 OR "T_ACCOUNT"."AC_SK"=1070973 OR 
    "T_ACCOUNT"."AC_SK"=1075750 OR
                  "T_ACCOUNT"."AC_SK"=1201860 OR "T_ACCOUNT"."AC_SK"=1201861 OR 
    "T_ACCOUNT"."AC_SK"=1201862 OR
                  "T_ACCOUNT"."AC_SK"=1283345 OR "T_ACCOUNT"."AC_SK"=1283346 OR 
    "T_ACCOUNT"."AC_SK"=1312578 OR
                  "T_ACCOUNT"."AC_SK"=1312579 OR "T_ACCOUNT"."AC_SK"=1312702 OR 
    "T_ACCOUNT"."AC_SK"=1312703 OR
                  "T_ACCOUNT"."AC_SK"=1312704 OR "T_ACCOUNT"."AC_SK"=1415629 OR 
    "T_ACCOUNT"."AC_SK"=1446798 OR
                  "T_ACCOUNT"."AC_SK"=1446799 OR "T_ACCOUNT"."AC_SK"=1520530 OR 
    "T_ACCOUNT"."AC_SK"=1834159 OR
                  "T_ACCOUNT"."AC_SK"=1859876 OR "T_ACCOUNT"."AC_SK"=1894444 OR 
    "T_ACCOUNT"."AC_SK"=1932920 OR
                  "T_ACCOUNT"."AC_SK"=1964377 OR "T_ACCOUNT"."AC_SK"=2033976 OR 
    "T_ACCOUNT"."AC_SK"=2077465 OR

    Table T_ACCT_SUM is range partitioned on the BAL_DATE column. The ACCT_BAL column has a height-balanced histogram.
    The database is a 4-node RAC system.
    The execution plan was collected using the "SET AUTOTRACE ON" command.

    On line 6, the calculated cardinality is 5 from the index range scan, but on line 5 the cardinality
    has increased to 128K for the same table. My assumption was that the cardinality returned by an index
    should be greater than or equal to the cardinality for its corresponding table access. Here, the
    increase is huge. Can you please help me understand what I am missing?


    Comment by Amit — February 19, 2014 @ 8:56 pm GMT Feb 19,2014 | Reply

    • Amit,

      This isn’t a forum – your question is the type of thing that can be addressed by members of the OTN database forum or the Oracle-L list.

      Having said that – explain plan is probably showing you the plan that would be used at run-time; and it’s saying “partition range single”, so the stats used should be the partition-level stats. This being the case it ought to be relatively simple for the optimizer to be consistent between the index and table (in this version of Oracle). My guess is that the optimizer has managed to take a code path that doesn’t allow for the inlist iterator when considering index selectivity, and then used a different strategy for the table selectivity. Whatever it has done, it’s clearly a defect when the number of rows predicted for the table exceeds the number of rowids predicted by the index access, so I’d raise it with Oracle Support.

      Comment by Jonathan Lewis — February 20, 2014 @ 12:25 pm GMT Feb 20,2014 | Reply

      • Hi Jonathan,

        I apologize for posting a question here, but I wanted to get your opinion on this behavior. Thank you for providing your input.
        I have asked the DBA to check with Oracle Support.
        To understand this behavior further, I generated a trace for this SQL and found that Oracle is using the same plan and cardinality at runtime.

        Rows (1st) Rows (avg) Rows (max)  Row Source Operation
        ---------- ---------- ----------  ---------------------------------------------------
                13         13         13  HASH UNIQUE (cr=559 pr=0 pw=0 time=22742 us cost=123 size=4212 card=117)
                21         21         21   HASH JOIN  (cr=559 pr=0 pw=0 time=8073 us cost=122 size=4212 card=117)
               106        106        106    INLIST ITERATOR  (cr=206 pr=0 pw=0 time=2128 us)
               106        106        106     TABLE ACCESS BY INDEX ROWID T_ACCOUNT (cr=206 pr=0 pw=0 time=690 us cost=38 size=1696 card=106)
               106        106        106      INDEX UNIQUE SCAN PK_T_ACCOUNT (cr=149 pr=0 pw=0 time=308 us cost=29 size=0 card=106)(object id 597788)
                21         21         21    PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=353 pr=0 pw=0 time=376 us cost=83 size=1098040 card=54902)
                21         21         21     INLIST ITERATOR  (cr=353 pr=0 pw=0 time=356 us)
                21         21         21      TABLE ACCESS BY LOCAL INDEX ROWID T_ACCT_SUM PARTITION: KEY KEY (cr=353 pr=0 pw=0 time=1194 us cost=83 size=1098040 card=54902)
               214        214        214       INDEX RANGE SCAN PK_T_ACCT_SUM PARTITION: KEY KEY (cr=186 pr=0 pw=0 time=491 us cost=32 size=0 card=5)(object id 603269)

        Also, I tried changing the in-list condition “T_ACCOUNT.AC_SK IN (” to “T_ACCT_SUM.AC_SK IN (” (as this column is used in the join condition, this should not change the logic), and that SQL predicted the correct cardinality, changing the original hash join to nested loops.

        | Id  | Operation                              | Name                 | Rows  | Bytes | Cost (%CPU)|
        |   0 | SELECT STATEMENT                       |                      |    54 |  1944 |   104   (1)|
        |   1 |  HASH UNIQUE                           |                      |    54 |  1944 |   104   (1)|
        |   2 |   NESTED LOOPS                         |                      |       |       |            |
        |   3 |    NESTED LOOPS                        |                      |    54 |  1944 |   103   (0)|
        |   4 |     PARTITION RANGE SINGLE             |                      |    54 |  1080 |    87   (0)|
        |   5 |      INLIST ITERATOR                   |                      |       |       |            |
        |*  6 |       TABLE ACCESS BY LOCAL INDEX ROWID| T_ACCT_SUM           |    54 |  1080 |    87   (0)|
        |*  7 |        INDEX RANGE SCAN                | PK_T_ACCT_SUM        |     6 |       |    32   (0)|
        |*  8 |     INDEX UNIQUE SCAN                  | PK_T_ACCOUNT         |     1 |       |     1   (0)|
        |   9 |    TABLE ACCESS BY INDEX ROWID         | T_ACCOUNT            |     1 |    16 |     1   (0)|


        Comment by Amit — February 21, 2014 @ 5:20 pm GMT Feb 21,2014 | Reply

        • The tkprof output says that Oracle is doing a hash join instead of a merge join, it’s swapped the join order, and the estimate for table rows is 54,902 rather than 128K.

          Looking at the run-time plan we can now see that the “single” partition range has pstart/pstop of KEY/KEY. I should have noticed earlier that your input date is using a two-digit year, which means Oracle doesn’t know how to interpret it until run time – so instead of using partition-level stats throughout it’s (probably) used some partition-level and some global level stats, resulting in the contradictory estimates you see. Always use a proper date, or a complete date string with full explicit conversion.
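          Sketched against the predicate in the original query (assuming the intended year is 2013):

          ```sql
          -- ambiguous: two-digit year, resolved by implicit conversion at run time
          --   and t_acct_sum.bal_date = '31-OCT-13'

          -- explicit: full four-digit year and an explicit format mask
          and t_acct_sum.bal_date = to_date('31-OCT-2013', 'DD-MON-YYYY')
          ```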

          Comment by Jonathan Lewis — February 21, 2014 @ 5:40 pm GMT Feb 21,2014

  16. […] Following a comment from lascoltodelvenerdi, I’ve asked Michelle Deng if I can post a copy of her presentation with some notes on my blog, and she has given her permission for me to do so. The result is at this URL […]

    Pingback by IOUG Day 1 | Oracle Scratchpad — September 13, 2021 @ 7:10 pm BST Sep 13,2021 | Reply


Comments and related questions are welcome.
