A brief note on reading execution plans.
| Id  | Operation                     | Name    | Time     |
|   0 | SELECT STATEMENT              |         | 00:00:17 |
|*  1 |  HASH JOIN                    |         | 00:00:17 |
|   2 |   VIEW                        | VW_SQ_1 | 00:00:01 |
|   3 |    HASH GROUP BY              |         | 00:00:01 |
|   4 |     TABLE ACCESS FULL         | EMP     | 00:00:01 |
|   5 |   VIEW                        |         | 00:00:17 |
|   6 |    TABLE ACCESS BY INDEX ROWID| EMP     | 00:00:17 |
|   7 |     INDEX FULL SCAN           | E_D     | 00:00:01 |
Predicate Information (identified by operation id):
1 - access("DEPT_NO"="OUTER"."DEPT_NO")
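Plans in this format are easy to generate with dbms_xplan; here's a minimal sketch (the query is a made-up stand-in, not the statement behind the plan above):

```sql
-- Hypothetical example: a correlated subquery that the optimizer may
-- unnest into a VW_SQ_n view, as in the plan shown above.
explain plan for
select  *
from    emp
where   sal > (select avg(sal) from emp e2 where e2.dept_no = emp.dept_no);

-- Report the plan from the plan table:
select * from table(dbms_xplan.display);
```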
There was a question about the notorious dba_extents view on the comp.databases.oracle.server newsgroup a little while ago.
This is a view that has been popularly abused over the years despite warnings such as a note of mine from 2006 and, as far back as 2001, this one from Connor McDonald.
There are many reasons why this is a nasty view – and the number of reasons has grown as Oracle has evolved – but David Fitzjarrell has highlighted an important point in one of his blog postings that is worth a mention. So here it is.
Some time ago I posted an extract from a short presentation I had given at the UKOUG annual conference about 12 years previously.
When I found that set of slides, I also found the paper copies of another set of slides I had used at another UKOUG SIG event in 1995 where I had been explaining the mechanisms used by the cost based optimizer to decide whether or not to use an index. I thought it might be quite revealing to reproduce those slides to show how much (or how little) things have changed since then.
In a recent article on changes to the calculation of cardinality with frequency histograms I made a comment about writing programs to create fake but realistic and complete frequency histograms to work around the problems of dramatic changes in execution plan that can occur if you let Oracle create the histograms by gathering stats.
As a follow-up to that comment, here’s a little script to demonstrate the method (for a numeric column), and a few points to remember when using this method.
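Without reproducing the whole script, the core of the method is a call to dbms_stats.prepare_column_values() followed by dbms_stats.set_column_stats(); the table, column, values and frequencies below are illustrative assumptions, not taken from a real system:

```sql
declare
        m_distcnt       number := 4;            -- number of distinct values
        m_density       number := 1/1200;       -- assumed density
        m_nullcnt       number := 0;
        m_avgclen       number := 3;
        srec            dbms_stats.statrec;
        n_array         dbms_stats.numarray;
begin
        n_array     := dbms_stats.numarray(1, 2, 3, 4);   -- the column values, in order
        srec.epc    := 4;                                 -- end-point count = number of buckets
        srec.bkvals := dbms_stats.numarray(600, 1200, 3000, 1200);  -- frequency of each value

        dbms_stats.prepare_column_values(srec, n_array);

        dbms_stats.set_column_stats(
                ownname  => user,
                tabname  => 't1',               -- assumed table name
                colname  => 'n1',               -- assumed column name
                distcnt  => m_distcnt,
                density  => m_density,
                nullcnt  => m_nullcnt,
                srec     => srec,
                avgclen  => m_avgclen
        );
end;
/
```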
How do you find indexes that might be worth the effort of a rebuild?
One option is to report indexes that take up much more space than they need to – and there are some fairly easy ways to find those indexes. 10g, for example, gave us the create_index_cost procedure in the dbms_space package so, for a single index, we can write a little routine that does something like the following example – running on a 10.2.0.3 database:
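Here's a sketch of such a routine (the table and column names are made up for the example):

```sql
declare
        m_used_bytes    number;
        m_alloc_bytes   number;
begin
        dbms_space.create_index_cost(
                ddl         => 'create index t1_i1 on t1(col1)',
                used_bytes  => m_used_bytes,    -- bytes needed for the index data
                alloc_bytes => m_alloc_bytes    -- bytes that would be allocated in the tablespace
        );
        dbms_output.put_line('Used bytes:  ' || m_used_bytes);
        dbms_output.put_line('Alloc bytes: ' || m_alloc_bytes);
end;
/
```

Comparing the predicted allocation with the space currently allocated to the existing index gives a first indication of whether a rebuild might be worthwhile.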
Here’s another of those ideas that are so fundamental that you should always keep them in mind when dealing with an Oracle database.
The fundamental strategy embedded in the optimizer is based on just two key points:
- How much data are you after?
- Where did you put it?
If you “tune” SQL by fiddling with it until it goes fast enough, then you’ve missed the point and you’re wasting time.
If you start your tuning exercise by thinking about how much data you’re supposed to collect and how randomly scattered it is, then you’re going to minimise the time you spend working out the best way of acquiring that data efficiently.
[The Philosophy Series]
When I am in America for a few days I usually try to find a local user group and ask them if they’d like to arrange an evening slot for their members. (Last time I was at Oracle Open World I went so far as to do a day trip up to Calgary because I didn’t think I was likely to go there any other way).
Collaborate 09 was no exception, and I contacted the CFOUG who arranged an event on the Monday evening at which I did a presentation called “Optimising through Understanding” where I talk about the analysis that could go into one simple SQL statement, producing reasons why any of several different execution plans might be the most appropriate depending on circumstances.
The presentation is a variation of one I have now given several times – and there is even a voice-over webinar version of it that I linked to from an earlier blog item.
After the event James Taylor, the chairman of CFOUG, asked if I could let him have a copy of the presentation for their website – so I sent him a pdf file of the slides, and I’ve also posted the same pdf file on my blog.
This is the title of a presentation I have given a few times in the last couple of years – most recently at Collaborate 09 in Orlando.
The IOUG has now posted on their website the presentation and a short article I wrote to go with it; but that’s for members only, so it seemed reasonable to publish the same thing on my own blog as well.
So here are links for a pdf file of the presentation, and the article in Word format.
Not so much a little gem today as a little surprise and a few consequential thoughts. In a presentation on optimising star transformations the presenter pointed out that bitmap indexes are only available in Oracle Enterprise Edition.
I attended four sessions today (one of them was a 2-hour session), and the highlight of the day for me was a lunchtime “quick tip” from Michelle Deng of Sanofi Aventis who gave a 30-minute talk with the title “Cardinality Analysis – a life saver for DBAs and Developers”.
I try to avoid hinting SQL if possible as it is very hard to do well, but there are a few hints that give an overview of how a query should operate without trying to control the detail of what the optimizer does. These are the hints that I call the “strategic hints” (possibly “query block hints” would be a better name – but there may be hints at the query block level that I wouldn’t qualify as strategic), and an example came up on the comp.databases.oracle.server newsgroup recently that looks like an ideal example of how a couple of them could be used.
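As a flavour of what I mean (this is an illustration, not the newsgroup example itself), no_merge is one hint I'd put in the strategic category – it tells the optimizer to treat a whole query block as a unit without dictating join order or access paths:

```sql
-- Keep the aggregate view as a separate query block rather than
-- letting the optimizer merge it into the outer query.
select  /*+ no_merge(v) */
        v.dept_no, v.avg_sal
from    (
        select  dept_no, avg(sal) avg_sal
        from    emp
        group by dept_no
        ) v
where   v.avg_sal > 4000;
```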
A recent post on comp.databases.oracle.server asked:
I have read various articles on virtual columns? I still do not understand their advantages other than they save some disk space, one can put logic of virtual columns in a trigger which will save information in a real column. Real column will need some additional disk space, but it will save some cpu time when one is doing selects on “virtual” columns.
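The advantages go beyond saving disk space. Here's a small sketch (11g syntax, with invented names) showing that the optimizer can hold statistics – and even an index – on the expression itself, with no trigger code to maintain:

```sql
create table t1 (
        n1      number,
        n2      number,
        n_sum   number generated always as (n1 + n2) virtual
);

-- Indexing the virtual column is effectively a function-based index:
create index t1_i1 on t1(n_sum);

-- A predicate on the expression can now use the index and the column stats:
-- select * from t1 where n_sum = 99;
```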
Someone recently sent me a request about a piece of SQL they could not optimise. I don’t usually respond to private requests – it’s not an effective use of my time – but their example was something that pops up relatively frequently as a “bug” – so I thought I’d mention it here.
The SQL looked like this:
insert into tab3
select                          -- small result set
from    tab1@dblink t1          -- large data set
where   tab1.col1 in (
        select  col1
        from    tab2            -- small data set
)
;
When it comes to setting the optimizer_mode parameter you often hear people say that first_rows_N (for one of the legal values of N) should be used for OLTP systems and all_rows should be used for decision support and data warehouse systems.
There is an element of truth in the statement – but it’s really a hangover from the early days of CBO, and remembrance of the old first_rows optimizer mode (** See footnote).
It’s quite surprising that I still see people arguing about the fastest way to “count the rows in a table”; usually with suggestions that one or other of the following queries will be faster than the rest:
- select count(*) from tab;
- select count(1) from tab;
- select count(primary_key_column) from tab;
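A quick look at the execution plans usually ends the argument – a sketch of the check (table name assumed; the exact output depends on version and the indexes available):

```sql
explain plan for select count(1) from tab;
select * from table(dbms_xplan.display(null, null, 'basic +projection'));
-- The Projection section typically reports COUNT(*): the optimizer has
-- rewritten count(1) as count(*).  Similarly, count(pk_column) can use
-- the same index fast full scan as count(*) when the column is not null.
```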