I posted a little holiday quiz – timed to appear just before midnight (GMT) on 24th December – that asked about the number of rows sorted and the memory used for queries like:
rownum <= 10
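For context, the kind of query the quiz had in mind combines an order by with a rownum predicate – the classic "top-N" pattern (table and column names here are invented for illustration):

```sql
select	*
from	(
	select	owner, object_name
	from	t1
	order by
		object_name
	)
where	rownum <= 10
;
```

The interesting question is how much of t1 Oracle actually has to sort, and how much memory it needs, to return just ten rows.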
The number and variety of the responses were gratifying. It’s always interesting to see how many important little details appear as people start to tackle even fairly straightforward questions like this.
Here’s an interesting post and test case from Gregory Guillou (WeDoStreams blog). It features an SQL statement that is re-optimised the second time you run it.
Since it’s running on 11.2 your first thought is likely to be “SQL Plan Management”, or “Adaptive Cursor Sharing” – except the first feature wasn’t enabled, and the statement doesn’t include any bind variables.
Gregory emailed me about this one, and it was the thing that finally persuaded me to tear down a laptop and install 64-bit OEL with 11.2 – and I ran his test case and got the same results. (If you build his sample schema, you’ll need about 1GB of free space).
When I wrote Practical Oracle 8i, version 8.1.5 was the most recent release of Oracle, but 8.1.6 came out just before I finished writing – and the only thing in 8.1.6 I thought important enough to add to the book was a section on analytic functions, because they were the best new coding feature in the product.
Since then I’ve always warned people to be a little careful about how they use analytic functions because of the amount of sorting they can introduce. My suggestion has always been to crunch “large” volumes of data down to “small” volumes of data before applying any analytic functions to add “intelligence” to the intermediate result.
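As a sketch of that strategy (the schema is invented for illustration): aggregate the large data set down first, then apply the analytic function to the much smaller intermediate result, rather than letting the analytic sort every detail row:

```sql
select
	product_id,
	total_sales,
	rank() over (order by total_sales desc) as sales_rank
from	(
	select	product_id, sum(amount) as total_sales
	from	sales
	group by
		product_id
	)
;
```

The group by crunches the volume; the analytic sort then operates on one row per product rather than one row per sale.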
Here’s a mechanism for examining indexes in some detail if you think that something odd may be going on inside them. It’s a feature that I first described in Practical Oracle 8i, although the book doesn’t mention a problem that I discovered subsequently – one that means I always test the feature carefully on a small index before I use it on a large one. Here’s a demonstration – cut from an SQL*Plus session on 10.2.0.3:
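I’d guess (an assumption on my part – the post itself names the feature) that the mechanism in question is the treedump event, which takes the object_id of the index as its level. A sketch:

```sql
-- find the object_id of the index (name invented for illustration)
select	object_id
from	user_objects
where	object_name = 'T1_I1';

-- dump the index tree structure to the session trace file,
-- substituting the object_id you just found; on some versions
-- this can dump every block in the index and produce an enormous
-- trace file, hence the advice to test on a small index first
alter session set events 'immediate trace name treedump level 51297';
```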
I was planning to supply the answers to Index Quiz 2 as a comment – but there’s a lot of block dumps involved, and it’s easier to do that in postings.
Question 1: I’ve created a table and index with initrans 4, then inserted one row into the table. How many ITL (interested transaction list – see glossary) entries will there be in the first block of the index when you dump it?
Answer 1: The table block will show 4 entries in its ITL, obeying your setting for initrans, but the index block will show only two – unless you’re using Oracle 8i or earlier. (Basically, indexes tend to ignore the initrans setting except when you rebuild the index, or create it on existing data.)
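A sketch of the kind of test involved (the names are invented, and the file and block numbers in the final dump command have to be replaced with the values the preceding query reports):

```sql
create table t1 (id number, v1 varchar2(10)) initrans 4;
create index t1_i1 on t1(id) initrans 4;

insert into t1 values (1, 'xxx');
commit;

-- locate the first (root/leaf) block of the index:
-- it is the block immediately after the segment header
select	header_file, header_block + 1 as block_no
from	dba_segments
where	segment_name = 'T1_I1';

-- dump that block to the trace file, substituting the
-- file and block numbers returned by the query above
alter system dump datafile 4 block 131;
```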
Here’s a simple script that I created a short time ago while investigating a memory problem on a client site. The purpose of writing the script was, as always, to strip the client’s code back to a bare minimum in an attempt to work out the root cause of a problem. (Warning: if you want to run this script, your Oracle shadow process will grab about 1GB of PGA memory.)
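The script itself isn’t reproduced here, but a minimal sketch of the kind of thing that makes a shadow process grab a large amount of PGA is a PL/SQL collection populated with a few million entries (this is entirely my own illustration, not the client’s code):

```sql
declare
	type w_tab is table of varchar2(100) index by binary_integer;
	w	w_tab;
begin
	-- roughly 5M entries of ~100 bytes each, plus per-entry
	-- overhead, pushes the session's PGA towards the 1GB mark
	for i in 1 .. 5000000 loop
		w(i) := rpad('x', 100, 'x');
	end loop;
end;
/
```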
Today’s little quiz – just for fun, but prompted by a few comments in this posting on OTN: (more…)
Here’s a little oddity that I came across at a client site recently.
The client called me in because they were having problems with Oracle error “ORA-01555: snapshot too old” appearing in a particular task after a few thousand seconds (typically 5,000 to 6,000) even though they had set the undo_retention to 14,400 seconds and had a huge undo tablespace running with autoextend enabled on the data files.
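When investigating this kind of problem, the first thing I’d want to see is what the instance thinks it is actually achieving for undo retention – a sketch, assuming 10g or later where v$undostat reports the tuned figure:

```sql
select	begin_time,
	end_time,
	tuned_undoretention,	-- retention Oracle decided on, in seconds
	maxquerylen,		-- longest running query in the interval
	ssolderrcnt		-- ORA-01555 errors in the interval
from	v$undostat
order by
	begin_time;
```

If tuned_undoretention is dropping well below the 14,400 seconds requested, you’re looking at the right symptom.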
Someone posted a very pertinent question about block splits and index rebuilds on the OTN Database forum yesterday covering a detail of the cost/benefit equation that I don’t think I’ve mentioned before.
Since I answered the question on the forum I’ve posted a link here to make it available to a wider audience.
(This is a strategy I may adopt more frequently in the future – there’s a lot of useful material of mine all over the internet, and I really ought to make sure I don’t spend time repeating myself when a simple pointer would do.)
[Updated May 2009: The forum item has since gone missing – so maybe linking to notes I’ve written in public groups is not so smart after all]. Fortunately I happen to have been saving copies of the thread as it progressed, and the little point that I wanted to highlight was just the following:
“When you create or rebuild an index Oracle does not honour the pctfree setting in the branch blocks. Since the branch blocks are effectively 100% packed, the very first leaf block split in each branch block (except the last one) will almost certainly result in a branch block split. This means a single row insert into an 8KB block could result in 16KB+ of redo for the leaf block plus 16KB+ for the branch block split.”
[Further reading on rebuilding indexes]
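You can see the effect the quote describes by snapshotting your session’s redo generation around a single insert – a sketch (table name and inserted values invented for illustration):

```sql
-- snapshot the session's redo before the insert
select	s.value
from	v$mystat s, v$statname n
where	s.statistic# = n.statistic#
and	n.name = 'redo size';

insert into t1 values (100000, 'one row');
commit;

-- repeat the redo query: a leaf block split alone adds roughly a
-- block's worth of redo; a leaf split that cascades into a branch
-- block split adds roughly two blocks' worth
select	s.value
from	v$mystat s, v$statname n
where	s.statistic# = n.statistic#
and	n.name = 'redo size';
```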
I see that Tom Kyte has found a nasty little bug waiting to trap a few unlucky people as they patch to 10.2.0.4, or upgrade to 11g.
Here’s a surprising anomaly that showed up in a question on the OTN forums a little while ago. Consider a simple query that uses a hash join between two tables.
select	/*+ pq_distribute(t2 hash hash) */
	count(*)
from	t1, t2
where	t2.n1 = t1.n1
and	t2.small_vc = t1.small_vc
;
When it runs serially the join completes in memory and the only I/O you see comes from the two tablescans; when the query runs in parallel, something causes a spill to the temporary tablespace.
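One way to see the spill is to check the workarea statistics for the query – a sketch, assuming you’ve identified the statement’s sql_id (the substitution variable is mine):

```sql
select	sql_id,
	operation_type,
	last_memory_used,
	last_tempseg_size	-- non-null means the workarea spilled to disc
from	v$sql_workarea
where	sql_id = '&m_sql_id';
```

For the serial run last_tempseg_size stays null; for the parallel run it doesn’t.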
[Forward to Part 2]
Towards the end of April, I published a note about manual optimisation, and mentioned in one of the comments (#11) that as part of the discussion of the (slightly suspect) mechanism I had introduced I would eventually get around to talking about sorted hash clusters. So I’ve finally managed to make a start.
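For reference, a sorted hash cluster (a 10g feature) is declared by adding the sort keyword to one or more cluster columns – a minimal sketch with invented names:

```sql
create cluster orders_cluster (
	cust_id		number,
	order_date	date sort	-- rows come back in this order per cust_id
)
hashkeys 1000
hash is cust_id
size 8192;

create table orders (
	cust_id		number,
	order_date	date sort,
	details		varchar2(40)
)
cluster orders_cluster (cust_id, order_date);
```

The attraction is that rows for a given hash key can be returned in the sort order without a sort operation appearing in the execution plan.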
I’ve probably got a couple of comments about 10053 trace files lurking somewhere on this blog and on my website – and when I mention the 10053 I usually remember to say that it’s a last resort that I only use when I think there may be a bug that needs to be nailed.
So here’s a bit of a 10053 – which I only looked at because I thought it was going to show me a bug.
The worst type of Oracle bug is the one that seems to appear randomly and can’t be reproduced on demand – the kind that prompts Oracle Support to say “please send us a reproducible test case”.
Here’s one such (probable) bug that showed up at a client site that was reporting performance problems with a query that, on random days, chose a bad execution plan. The client was running one of the earlier versions of 9.2, and every night used a call to dbms_stats to collect fresh table stats for each table in turn.
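The exact parameters from the client’s script aren’t reproduced here, but a typical nightly gather of that era would have looked something like this (the table name and parameter values are purely illustrative):

```sql
begin
	dbms_stats.gather_table_stats(
		ownname          => user,
		tabname          => 'T1',
		estimate_percent => 10,
		method_opt       => 'for all columns size 1',
		cascade          => true
	);
end;
/
```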
If you ever thought you could understand what’s printed in a trace file, you might like to read this comment from a recent SR:
The trace file is reporting the steps that are needed to troubleshoot an issue. The comment tags and entries in the files may not be consistent in every scenario and their meaning can change between tasks and versions. As they are not designed or intended for direct interpretation by customers, no rationale or consistency is guaranteed.
Further, no official documentation is provided for the raw trace files. The only behaviour that is considered ‘expected’ behaviour is that which is contained in the documentation and manuals.
Of course, this does make life a little confusing – is a note that’s been published on Metalink (such as 39817.1 and 338137.1) not official after all?