Oracle Scratchpad

September 13, 2019

UKOUG 2019

Filed under: Advertisements — Jonathan Lewis @ 9:20 am BST Sep 13, 2019

At the UKOUG TechFest 2019 event in Brighton this year I’ll be chairing the CBO Panel session on Tuesday 3rd December at 16:45 (note the day and time have changed). The panellists will be Maria Colgan, Nigel Bayliss, Christian Antognini and Richard Foote.

It will be possible to propose questions on the day – written or spoken – but if you have a question that you’d like to give the panellists a little warning about then you can send it in advance by email, by tweet, or by direct message (DM).

Tweeting is particularly good (and I’ll copy any tweeted questions to this note) because 280 characters is long enough to be an interesting question but short enough for people to identify the key point.

By email

Are there any types of query, or ways of writing queries, that make it particularly difficult for the optimizer to find a good plan?

Is there a good way to deal with predicates like “status = ‘X’ and order_date > sysdate - 7” to get a good cardinality estimate? We have a histogram on status, but almost all the rows for status X are in the last 7 days and the optimizer’s estimate is low by a factor of a few hundred.
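
[ed: a minimal sketch of the shape of the problem, using a hypothetical orders table. A column group on (status, order_date) wouldn’t help here because column-group statistics apply only to equality predicates; a dynamic sampling hint on the statement is one option to consider:]

select /*+ dynamic_sampling(o 4) */ count(*)    -- sample at run-time to capture the correlation
from   orders o                                 -- hypothetical table
where  o.status     = 'X'
and    o.order_date > sysdate - 7;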

What’s your preferred method for adding hints to a pre-existing SQL statement: SQL Plan Baseline, SQL Profile, SQL Patch, or something else? How does your choice vary with circumstances?
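
[ed: for reference, a sketch of the SQL Patch option using dbms_sqldiag.create_sql_patch() – a documented function from 12.2 onwards. The statement text, hint, and patch name below are hypothetical:]

declare
        l_patch varchar2(128);
begin
        l_patch := dbms_sqldiag.create_sql_patch(
                sql_text  => 'select * from orders where status = :b1',
                hint_text => 'full(@sel$1 orders@sel$1)',
                name      => 'demo_sql_patch'
        );
        dbms_output.put_line('Created patch: ' || l_patch);
end;
/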

When you need to apply hints to a statement do you go all-in and include every transformation, access path, and join method for the whole plan, or do you fix only the part where the issue occurs and let the optimizer sort out all the other parts for itself?

If the optimizer insists on giving you a bad plan and you know that there’s a good plan possible (and you know what it is and why the optimizer isn’t getting it), would you choose to tell the optimizer what to do (e.g. index FFS, join order), or would you try to correct the arithmetic errors in its estimates (opt_estimate() hints)?
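
[ed: a sketch of the opt_estimate() approach – the hint is undocumented, so treat the syntax as indicative; the alias and scaling factor below are hypothetical:]

select /*+ opt_estimate(table, t, scale_rows=200) */ count(*)   -- scale the optimizer's row estimate for t up by a factor of 200
from   orders t
where  t.status = 'X';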


By Tweet

OUTLINE_LEAF – what is it and how should we think about it?
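
[ed: one way to start experimenting – pull the outline section of a plan from the library cache: each outline_leaf() entry names a query block that survived transformation and appears as a unit in the final plan. The sql_id below is hypothetical:]

select *
from   table(dbms_xplan.display_cursor('abcd1234xyz99', 0, 'outline'));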

If we reorg/rebuild a table or index with no data change, or perhaps compress the table data, how critical is it to collect new stats? Take datapump as an example: I believe it will import the old stats while the data is implicitly reorganised, rather than generating new stats.
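
[ed: a sketch of the cautious approach on that assumption – exclude statistics from the import and gather fresh ones afterwards; schema, table, and directory names are hypothetical:]

rem  impdp scott/tiger tables=orders exclude=statistics directory=dp_dir dumpfile=orders.dmp

begin
        dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'ORDERS');
end;
/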

Any plans on fixing issues with hinting ANSI joins? [ed: It’s messy – the first transformation to “traditional” syntax gives every table after the first two its own query block.]

Is it possible to supply extended stats through hints?

In early versions the WITH clause looked as if it could get a quite different plan even when not materialized (i.e. when it should have behaved as an inline view). It looks much improved over the years, but are there still known cases where care must be taken when using the WITH clause instead of an inline view?
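
[ed: the controls usually mentioned in this context are the (undocumented) materialize and inline hints – a minimal sketch with a hypothetical orders table:]

with recent as (
        select /*+ inline */ *          -- swap in /*+ materialize */ to force the temp-table transformation
        from   orders
        where  status = 'X'
)
select count(*)
from   recent
where  order_date > sysdate - 7;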

What is the best new CBO/performance feature (12.2 or later) and… the most problematic?


By DM

Why does a tablescan read from the start of the table (low extent_id) to the end (high extent_id) unless you set event 10460? Since most DML is likely to be applied to the most recently added data, this means there could be a lot of change that has to be undone by the time the tablescan gets to the end of the table. Wouldn’t it generally be more efficient to read from the end to the start?
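
[ed: for reference, the usual syntax for setting a numeric event like 10460 at session level – the level shown here is an assumption:]

alter session set events '10460 trace name context forever, level 1';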


6 Comments

  1. Why is the behaviour of function-based indexes on XMLTYPE columns so unpredictable?

    Comment by JAYT22 — November 24, 2019 @ 10:29 pm GMT

    • Jayt22,

      Thanks for the question. I’ll add it to the agenda.
      Could you give an example of the DDL for one of the function-based indexes you’ve found problematic? Thanks.

      Regards
      Jonathan Lewis

      Comment by Jonathan Lewis — November 25, 2019 @ 4:04 pm GMT

  2. Real-Time Statistics in 19c (Exadata only) augment existing statistics, but only with statistics that can be maintained by tracking rows during table monitoring and that do not require scans of the whole table: so row and block counts, and a widening of column min/max ranges. Not NDV, and not decreases to a max value or increases to a min value. No changes to histograms.
    Histograms take precedence over column statistics for cardinality calculations, and similarly for RTS they take precedence over real-time column statistics.
    Where does the introduction of RTS leave us with advice for creating histograms? Is this another argument for not using histograms by default, and only introducing them selectively?

    Comment by David Kurtz — November 25, 2019 @ 1:16 pm GMT
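
    [ed: for anyone checking their own system – real-time statistics are flagged in the notes column of the stats views. A minimal sketch; the table name is hypothetical:]

    select table_name, num_rows, notes     -- notes = 'STATS_ON_CONVENTIONAL_DML' marks real-time stats
    from   user_tab_statistics
    where  table_name = 'ORDERS';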

  3. Sorry Jonathan, my issue appears to be something hardware-related. In addition, the query was not using the FB index; it was using an XMLINDEX domain index – and it turned out the index had nothing to do with the problem.

    There was an FB index though, and I misdirected myself because the FB index was able to index only one element of a collection object… which led me on a wild goose chase.

    I suppose a version of the question could be to examine why FB indexes on XML types are being deprecated – and part of the answer is probably in my last sentence.

    Comment by JAYT22 — November 26, 2019 @ 3:40 pm GMT



Comments and related questions are welcome.
