I’ve recently spent some time working with a client to get the maximum benefit from their KEEP pool – I’ll be publishing some interesting demonstrations when I get some time – and thought that some of you would be keen to hear about bug 8897574.
The bug applies to 126.96.36.199 – luckily my client is still on 10.2 – and has the following abstract: KEEP BUFFER POOL DOES NOT WORK.
If you have looked at SQL Profiles (see for example Kerry Osborne’s blog) then you may have come across the force_match option for enabling or importing a SQL profile. I received an email recently asking a few questions about this feature. (more…)
If you run a query using first_rows_N optimisation you could run into a massive performance problem in cases where the optimizer thinks the complete result set is quite large when it is actually very small.
If both conditions are true the optimizer may choose a very resource-intensive execution path “expecting” to stop (or at least pause between fetches) after N rows – hoping to give the impression that it can respond very quickly – but find that the query has to run to completion because the N rows simply don’t exist.
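To make the trap concrete, here's a toy Python model of the arithmetic (this is an illustration only, not Oracle's optimizer): a plan that hopes to stop after N matching rows is cheap only if those N rows actually exist; when they don't, the "stop early" strategy degenerates into reading everything.

```python
# Toy model of a first_rows_N plan: the "stop early" strategy is only cheap
# when the N rows it hopes for actually exist. (Illustration only -- this is
# not Oracle's optimizer, just the arithmetic behind the trap.)

def rows_examined(table, predicate, n):
    """Walk the table in order, stopping once n matching rows are found;
    return how many rows had to be examined."""
    found = examined = 0
    for row in table:
        examined += 1
        if predicate(row):
            found += 1
            if found == n:
                break
    return examined

table = range(1_000_000)

# Case 1: plenty of matches -- the plan stops almost immediately.
cheap = rows_examined(table, lambda r: r % 100 == 0, 10)

# Case 2: only 3 matching rows in total -- "stop after 10" turns into
# a complete scan because the 10th row never arrives.
expensive = rows_examined(table, lambda r: r < 3, 10)

print(cheap)      # 901      (stopped at the 10th match)
print(expensive)  # 1000000  (had to read everything)
```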
I posted a little holiday quiz – timed to appear just before midnight (GMT) on 24th December – that asked about the number of rows sorted and the memory used for queries with a predicate like “rownum <= 10”.
The number and variety of the responses was gratifying. It’s always interesting to see how many important little details appear as people start to tackle even fairly straightforward questions like this.
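One of those details is that a top-N query doesn't need to sort the whole result set: it can maintain a bounded structure of just N rows. As a rough sketch (in Python rather than SQL, with a hypothetical top_n helper standing in for the top-N sort mechanism):

```python
import heapq
import random

# A top-N query (order by ... where rownum <= 10) can keep just the 10 best
# rows seen so far in a bounded heap, so the memory needed stays O(N) no
# matter how many rows flow through the sort. A sketch of the idea:

def top_n(rows, n):
    """Return the n smallest values, keeping at most n values in memory."""
    heap = []  # max-heap of size <= n (values negated to use Python's min-heap)
    for value in rows:
        if len(heap) < n:
            heapq.heappush(heap, -value)
        elif -heap[0] > value:           # current worst of the "best n" so far
            heapq.heapreplace(heap, -value)
    return sorted(-v for v in heap)

random.seed(42)
data = [random.randrange(1_000_000) for _ in range(100_000)]
print(top_n(data, 10) == sorted(data)[:10])   # same answer, ~10 rows of memory
```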
A few weeks ago I wrote a note demonstrating the way in which Oracle’s strategy for hash partitioning is engineered to give an even data distribution when the number of partitions is a power of two. In one of the comments, Christo Kutrovsky pre-empted my planned follow-up by mentioning the hashing function ora_hash() that appeared in 10g.
A few months ago Saibabu Devabhaktuni let me know of an interesting issue he had had with indexes misbehaving, and pointed me to a blog note he had written to describe it. In the note he supplies a test case where Oracle walks through hundreds of blocks on an index freelist to find a single block that could be used as the target for a leaf node split.
The most interesting thing to note is that Sai has demonstrated a case where issuing a commit between the bulk delete and the subsequent insert results in a performance problem. On the other hand if you read my description in this blog posting, you’ll see that I have described a scenario where the commit between the delete and the insert is a very good idea – especially if you can fit a coalesce in between to clear up the mess.
I’ve just jotted down a few notes about “log file sync” waits, “log file parallel write” waits, and the nologging option in response to a question on OTN about redo activity when creating a large index. The ensuing conversation also touches on various related topics, including backup, recovery and Data Guard.
It is fairly well-known that bitmap indexes are very dense structures that can behave badly if their underlying tables are subject to even fairly low levels of insert, update or delete activity. Problems include contention, space management and performance, and these problems have spawned a couple of well-known guidelines relating to bitmap indexes:
- Avoid concurrent modification of data by multiple processes – otherwise you end up with processes deadlocking
- Drop/disable bitmap indexes before data loads and rebuild them afterwards.
Of course, with a little care and experimentation, you may find that you don’t need to apply the second guideline in all cases – especially for bulk inserts.
In response to a question about a system that needed to set a very large shared_pool_size, I’ve just posted on the OTN database forum a brief summary of the impact of the cursor_sharing parameter on your system. It seemed like a waste not to link to it from the blog.
There’s an earlier posting on my blog about one of the little oddities that can appear when you use cursor_sharing.
There’s an interesting conversation about hash clusters growing on the OTN database forum at the moment; I’ve made a couple of contributions to it, and think it’s worth reading.
For some other comments I’ve made about hash clusters in the past:
I mentioned the hidden parameter _smm_isort_cap recently in my solution to a problem with analytic functions applied to large volumes of data; but in that note I didn’t give you much detail about what it does.
The description given in x$ksppi is: “maximum work area for insertion sort(v1)”, which refers to the mechanism used for all sort operations in earlier releases of Oracle. In 10gR2, however, Oracle introduced a new sorting mechanism (commonly called the V2 sort) which uses less memory, less CPU, and can reduce the chances of a sort operation spilling to disc. But the new sort mechanism doesn’t get used for all sort operations – so it’s worth checking in v$sql_workarea(_active) – and even the 10032 trace file – from time to time to see which operations have used the V1 sort and which have used the V2 sort. Here, for example, is a simple query against a 10.2.0.3 system just after startup:
I’ve been involved over the last few days in a discussion on the OTN Database forum of a strangely behaved index.
The thread started with someone asking if a corrupted index could be the root cause of a job slowing down [probably not, a corrupted index is more likely to cause a job to crash] but the interesting bit came along later as a follow-up post introducing an index that seemed to grow unreasonably large for no apparent reason – apparently bypassing the automatic reuse of empty index blocks that normally takes place.
I made a throwaway comment in a recent posting about using powers of two for the number of partitions when using hash partitioning. The article in question was talking about globally partitioned indexes, but the “power of 2” principle was first associated with tables.
Here’s a simple demonstration of hash partitioning in action demonstrating why Oracle adopted this “power of 2” rule. We start by creating a table that doesn’t obey the rule – with six partitions – and collect stats on it to see how many rows go into each partition:
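The arithmetic behind the resulting skew can also be sketched outside the database. The model below (Python, and an illustrative linear-hashing scheme rather than Oracle's actual hash function) shows why a non-power-of-two partition count leaves some partitions with a double share of the data:

```python
# Sketch of why a power-of-two partition count gives an even spread under a
# linear-hashing style scheme. Illustrative model only -- not Oracle's
# internal hash function.

def linear_hash_partition(h, n_parts):
    """Map hash value h to one of n_parts partitions: take h modulo the next
    power of two >= n_parts, then fold overflow buckets back down."""
    mask = 1
    while mask < n_parts:
        mask <<= 1                      # next power of two
    p = h & (mask - 1)
    if p >= n_parts:
        p &= (mask >> 1) - 1            # fold back into the lower half
    return p

def distribution(n_parts, n_rows=60_000):
    """Bucket counts for n_rows uniformly distributed hash values."""
    counts = [0] * n_parts
    for h in range(n_rows):
        counts[linear_hash_partition(h, n_parts)] += 1
    return counts

print(distribution(6))   # uneven: two partitions get a double share
print(distribution(8))   # even: every partition gets n_rows / 8
```

With six partitions the hash values are spread over eight buckets and the two "missing" buckets are folded back, so two of the six partitions end up with twice as many rows as the others; with eight partitions the spread is perfectly even.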
After writing a short series of notes on a problem with indexes wasting a lot of space and growing to three (or even four) times the size you might expect due to a problem with concurrency and ITL (interested transaction list) entries accumulating, I thought it would be a good idea to create a little index for the series to make it easier to read them in order:
After describing how to deal most effectively – but only after approval from Oracle Support – with the problem of indexes wasting space on unnecessary ITL entries, I left you with a short list of “supportable” options for addressing the problem. In this note I’m going to outline a few pros and cons of each of those options. The list was as follows: (more…)