Oracle Scratchpad

February 2, 2007

10g Upgrade

Filed under: Oracle,Statistics,Troubleshooting,Upgrades — Jonathan Lewis @ 7:17 pm GMT Feb 2,2007

When you upgrade from 9i to 10g (or higher) you may find lots of execution plans suddenly “go wrong”. There are three main reasons for this. The first is that 10g automatically enables CPU costing, and if you weren’t using it in 9i this is likely to make a difference – often an improvement but you may be unlucky. The second is that there are numerous new optimisation strategies in 10g, and some of them may not be good for your particular data patterns, or your particular types of query.

The other big reason is that the dbms_stats package keeps changing its default behaviour. Apart from the fact that there is an automatic job (see view dba_scheduler_jobs) created to gather stale stats every day, the code that you used to run in your own stats collection routines may be doing something completely different.
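As a quick check you can see whether the automatic job exists, and whether it is enabled, with something like the following (this assumes you have select privileges on the dba_ views; the job name is as it appears in 10g):

select  job_name, enabled, state
from    dba_scheduler_jobs
where   job_name = 'GATHER_STATS_JOB'
;
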

Here’s the definition of the procedure to gather table stats in its Oracle 9i form:

procedure gather_table_stats(
        ownname                 varchar2,
        tabname                 varchar2,
        partname                varchar2        default null,
        estimate_percent        number          default null,
        block_sample            boolean         default FALSE,
        method_opt              varchar2        default 'FOR ALL COLUMNS SIZE 1',
        degree                  number          default null,
        granularity             varchar2        default 'DEFAULT',
        cascade                 boolean         default FALSE,
        stattab                 varchar2        default null,
        statid                  varchar2        default null,
        statown                 varchar2        default null,
        no_invalidate           boolean         default FALSE
);

And here’s the same procedure from 10g:

procedure gather_table_stats (
        ownname                 varchar2,
        tabname                 varchar2,
        partname                varchar2        default null,
        estimate_percent        number
                        default to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
        block_sample            boolean         default FALSE,
        method_opt              varchar2
                        default get_param('METHOD_OPT'),
        degree                  number
                        default to_degree_type(get_param('DEGREE')),
        granularity             varchar2
                        default get_param('GRANULARITY'),
        cascade                 boolean
                        default to_cascade_type(get_param('CASCADE')),
        stattab                 varchar2        default null,
        statid                  varchar2        default null,
        statown                 varchar2        default null,
        no_invalidate           boolean
                        default to_no_invalidate_type(get_param('NO_INVALIDATE')),
        stattype                varchar2        default 'DATA',
        force                   boolean         default FALSE
);

Of course, it’s not easy to get a direct view of all the changes because of those calls to the function dbms_stats.get_param(), but the changes are a threat. Here’s how you can see what’s happened to the “method_opt”:

select dbms_stats.get_param('METHOD_OPT') from dual;           
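The same call works for each of the other parameters that has acquired a get_param() default, so you can list the current settings one at a time:

select dbms_stats.get_param('ESTIMATE_PERCENT')  from dual;
select dbms_stats.get_param('DEGREE')            from dual;
select dbms_stats.get_param('GRANULARITY')       from dual;
select dbms_stats.get_param('CASCADE')           from dual;
select dbms_stats.get_param('NO_INVALIDATE')     from dual;
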


Because of this particular change, if you didn’t specify a method_opt in your scripts under 9i you were not generating histograms; under 10g you will be collecting histograms on any columns that Oracle thinks might be suitable candidates. The effect could be dire.

So, before you start trouble-shooting specific SQL statements after the upgrade, go back to your stats collection scripts and modify the code to set any default values to the actual value they would have had in 9i. Then re-run the collection and see if most of your problems have gone away.
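As a sketch of what that might look like (t1 is a purely hypothetical table name here), you could pass the 9i defaults explicitly in your own scripts; alternatively, 10g lets you reset the global default with dbms_stats.set_param() — but note that set_param() needs appropriate privileges and changes the default for everyone, including the automatic job:

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 't1',
                estimate_percent => null,       -- the 9i default (compute)
                method_opt       => 'for all columns size 1',
                cascade          => false
        );
end;
/

begin
        dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
end;
/
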

As a quick check (before thrashing the system to death rebuilding all the statistics) you might like to see how many histograms you actually have. The following query will identify columns with (real) histograms.

select  owner, table_name, column_name, count(*)
from    dba_tab_histograms
group by
        owner, table_name, column_name
having
        count(*) > 2
;
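(A column with basic stats but no histogram shows just two endpoint rows in this view — the low and high values — hence the "greater than 2".) If the query turns up histograms you never asked for, one option is to remove the stats for an individual column (owner, table and column names below are hypothetical) — though note that delete_column_stats removes the basic column statistics as well as the histogram, so regathering the table with 'for all columns size 1' is usually the cleaner route:

begin
        dbms_stats.delete_column_stats(
                ownname => 'SCOTT',
                tabname => 'EMP',
                colname => 'SAL'
        );
end;
/
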

For further details on the dbms_stats package, check the script $ORACLE_HOME/rdbms/admin/dbmsstat.sql.


  1. Thanks Jonathan, most instructive. 10g dbms_stats is a major improvement… just so long as we’re aware of what exactly is going on. Lies, Damned Lies, and Statistics (but improving all the time)!

    Comment by SeanMacGC — February 2, 2007 @ 10:05 pm GMT Feb 2,2007 | Reply

  2. Nice. Some refreshing stuff. Interesting, neat, cleanly presented and to the point. And giving you something to think about and investigate some more. Jonathan Lewis style! ;-)

    Another interesting topic/article “Cost-Based Query Transformation in Oracle” at

    Thought you might want to comment/expand on that, as a topic for a future blog post ;-)


    Comment by Cos — February 2, 2007 @ 10:19 pm GMT Feb 2,2007 | Reply

  3. Sean, important point that I should have made clearer: as you say, it’s not that the new method is bad, it’s simply different and if you’re trying to identify the important issues you want to try and address one thing at a time.

    Cos, I’ve seen it; it’s a good read, but a little heavy going in places. It may help some people to recognise some of the changes that have appeared in their upgrade.

    Comment by Jonathan Lewis — February 2, 2007 @ 10:25 pm GMT Feb 2,2007 | Reply

  4. […] of DBAs are tackling upgrades to Oracle 10, and here are a couple items on this. One post on the 10g upgrade is from Jonathan Lewis’s Oracle Scratchpad: “When you upgrade to 10g from 9i you may […]

    Pingback by Pythian Group Blog » Log Buffer #31: a Carnival of the Vanities for DBAs — March 10, 2007 @ 3:06 am GMT Mar 10,2007 | Reply

  5. Simply one of the simplest explanations I have ever gone through. Now I better understand why I’m facing so many performance issues after upgrading to 10g. Thanks a lot Jonathan.

    Comment by Santosh Kumar — January 3, 2008 @ 3:59 pm GMT Jan 3,2008 | Reply

  6. […] degradation after 10g upgrade.  I quickly went there and did a search and found out this URL, excellently explaining the change in the behaviour of dbms_stats package.  In […]

    Pingback by 10g Upgrade - issues with DBMS_STATS package « Mani’s Oracle Scratchpad — January 28, 2008 @ 5:48 am GMT Jan 28,2008 | Reply

  7. Jonathan,

    I know I am late to the table, we have just finally upgraded to 10g. Performance was bad until we made sure all instances had good system stats.

    We have a couple of schemas which still perform badly. One is a warehousing schema which contains many partitioned tables. I am not sure why, but when we gather stats manually on the affected tables (those performing badly during data loads), performance returns to normal. It seems like the auto stats process is not gathering good stats on these tables. The script we have been using did not set METHOD_OPT so we were using the 10g default.
    The second schema is an OLTP system which performs many updates to quantity columns; only a small percentage of the other attributes change. Oracle has gathered many histograms on this schema.
    I would like to change the METHOD_OPT parameter to the 9i default. However, since the auto stats gathering process also gathers stats on system tables, I am concerned that changing the default METHOD_OPT will impact stats on the system tables. Is there a danger in setting ‘METHOD_OPT’ to ‘FOR ALL COLUMNS SIZE 1’ for the gathering of stats on the system tables?

    We have been using all the defaults with disastrous results using OEM to view tablespaces and files. Response is pitiful against LMT (as much as 5 minutes) but great against the system tablespace which is still DMT (1 -2 seconds). This applies only to 10g databases. We still have a few because 3rd party vendors are not ready. A TAR with oracle did not help. I gave up after more than a month of running queries and submitting trace file after trace file.

    Do you have any comments on this?

    Comment by Sam — August 12, 2008 @ 4:12 pm BST Aug 12,2008 | Reply

  8. Sam,

    Any change on a production system to a global feature like stats collection could have unpredictable side-effects – so I can’t tell you whether there is any danger in stopping histogram collection on the system schema. You could side-step that issue by writing your own job that does “gather schema stats” on the system schema before allowing Oracle’s default stats collection to run.

    The pitiful response against tablespaces and files could be related to a problem with statistics – it is possible for queries for used extents to run very inefficiently, a problem that could be exacerbated by partitioned tables with lots of partitions. If you are licensed to use the Performance Pack, you might try to capture the SQL used by OEM for these reports and see if the SQL tuning tool gives you any advice about statistics.

    Comment by Jonathan Lewis — August 19, 2008 @ 6:17 pm BST Aug 19,2008 | Reply

  9. What scripts can determine whether or not a histogram in 10g is worthwhile?

    I have my stats package using AUTO and I have over 600 tables with 80K columns.

    Comment by Steven — September 10, 2008 @ 12:59 am BST Sep 10,2008 | Reply

  10. I have same issue in Oracle Apps environment. How to deal with the situation when you are using FND_STATS for gathering stats?

    I tried the following but they didn’t help:
    alter session set “_optimizer_cost_based_transformation” =off;

    alter session set “_gby_hash_aggregation_enabled” = FALSE;

    Waiting for your reply.
    Santosh Kumar

    Comment by Santosh Kumar — November 24, 2008 @ 7:38 am GMT Nov 24,2008 | Reply

  11. […] Filed under: Infrastructure, Statistics — Jonathan Lewis @ 6:35 pm UTC Feb 20,2010 In a blog that’s now three years old I pointed out the issue of the automatic stats collection job and the way it could affect […]

    Pingback by Time on Stats « Oracle Scratchpad — February 20, 2010 @ 6:35 pm GMT Feb 20,2010 | Reply

  12. @ Santosh Kumar:
    You don’t need to specify anything for FND_STATS. The default for FND_STATS is to Invalidate the cursor.

                                 retcode OUT NOCOPY VARCHAR2,
                                 ownname     IN         VARCHAR2,
                                 tabname     IN         VARCHAR2,
                                 percent     IN         NUMBER,
                                 degree      IN         NUMBER,
                                 partname    IN         VARCHAR2,
                                 backup_flag IN         VARCHAR2,
                                 granularity IN         VARCHAR2,
                                 hmode       IN         VARCHAR2 DEFAULT 'LASTRUN',
                                 invalidate  IN         VARCHAR2 DEFAULT 'Y' )

    Cherrish Vaidiyan

    Comment by Cherrish Vaidiyan — August 28, 2014 @ 10:36 pm BST Aug 28,2014 | Reply

    • Cherrish,

      I don’t know why I missed the previous two questions in September 2008, but I doubt if the people who posted them are hoping for an answer 6 years later!
      And I think the question was about identifying requirements for histograms, not about invalidating cursors.

      Comment by Jonathan Lewis — September 1, 2014 @ 10:11 am BST Sep 1,2014 | Reply


Comments and related questions are welcome.
