I’ve written a few notes about anomalies in subquery factoring (with subquery) in the past, principally making a fuss about the fact that moving an inline view into a “with subquery” can cause a plan to change even when the internal code moves the subquery back inline. With the arrival of 12c one of my first sets of tests was to rerun all the examples to see how many of them had been addressed. I hadn’t written about as many examples as I had thought, and some of them had been fixed before 12c, but here are a few references to a couple of outstanding items that I thought worth a mention:
[Further reading on "subquery factoring"]
I’ve posted this note as a quick way of passing on an example prompted by a Twitter conversation with Timur and Maria about Bloom filters:
The Bloom filter (capital B because it’s named after a person) is not supposed to appear in Oracle plans unless the query is executing in parallel but here’s an example which seems to use a serial Bloom filter. I ran the example in two recent versions (the results shown come from the later one – the numbers are slightly different between versions):
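If you want to try reproducing the effect yourself, here’s a minimal sketch of the sort of query involved – the table and column names are my own invention, not from the original test. One reproducible case is a hash join into a hash-partitioned table, where a serial plan can show a “PART JOIN FILTER CREATE” operation building a Bloom filter (named something like :BF0000) to prune partitions:

```sql
rem  Hypothetical sketch - join a small dimension table to a
rem  hash-partitioned fact table, then check the serial plan
rem  for Bloom filter operations.

create table dim as
select rownum id, rownum n1 from dual connect by level <= 100;

create table fact (id number, padding varchar2(100))
partition by hash (id) partitions 16;

insert into fact
select mod(rownum,1000), rpad('x',100)
from dual connect by level <= 100000;

execute dbms_stats.gather_table_stats(user,'DIM')
execute dbms_stats.gather_table_stats(user,'FACT')

select  /*+ leading(d) use_hash(f) */
        count(f.padding)
from    dim d, fact f
where   f.id = d.id
and     d.n1 <= 10;

select * from table(dbms_xplan.display_cursor);
rem  Look for "PART JOIN FILTER CREATE" and ":BF0000" in the
rem  plan, even though the query is running serially.
```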
In part 2 of this mini-series I’ll be describing the new mechanism for the simple frequency histogram and the logic of the Top-N frequency histogram. In part 3 I’ll be looking at the new hybrid histogram.
You need to know about the approximate NDV before you start examining the 12c implementation of the frequency and top-frequency histograms – but there’s a thumbnail sketch at the end of the posting if you need a quick reminder.
There are a few enhancements in 12c that might make a big difference to performance for a small investment in effort. One of the important enhancements comes from changes in histograms – which improve the speed of collection without sacrificing accuracy of results. The changes are so significant that I chose the topic as my presentation at OpenWorld last year.
A comment from Greg Rahn in response to my posting yesterday prompted me to do a quick follow-up (test time ca. 3 minutes, write-up time ca. 50 minutes – thanks for the temptation, Greg ;). Greg asked if the “Top N” would push down for a parallel query, so all I had to do was re-run my script with a parallel hint in place. (Such is the joy of constructing test cases – when you get a new idea you may not need to do much work to test it.)
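The shape of the re-run is trivial to sketch – the table and column names here are hypothetical, not the ones from my script. The thing to check in the plan is whether the ranking operation appears below the PX SEND steps:

```sql
rem  Hypothetical re-run: the same Top-N query with a parallel
rem  hint added.  If the "Top N" pushes down you should see a
rem  WINDOW SORT PUSHED RANK operation executed by the slaves,
rem  below the PX SEND lines of the plan.

select  /*+ parallel(t1 4) */
        *
from    t1
order by
        n1
fetch first 10 rows only;

select * from table(dbms_xplan.display_cursor);
```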
There have been a couple of nice posts about the “Top N” (or First N / Next N) syntax that has appeared in 12c, here and here, for example. I particularly like the first set of examples because they include some execution plans that give you a good idea of what’s going on under the covers. “Under the covers” is important, because if you don’t actually have a large data set to test on you might not realise what impact a “Top N” query might have on a production data set.
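For reference, the new syntax looks like this (t1 and n1 are hypothetical names). Under the covers Oracle transforms the query into an analytic query using row_number() – which is why, without a suitable index, a “Top N” query may still have to sort (or at least rank) the entire data set:

```sql
rem  12c "Top N" / "Next N" syntax (hypothetical table):

select  *
from    t1
order by
        n1
offset 10 rows
fetch next 10 rows only;

rem  Roughly the internal rewrite - an analytic query that still
rem  has to examine and rank the whole result set:

select  *
from    (
        select t1.*, row_number() over (order by n1) rn
        from   t1
        )
where   rn between 11 and 20;
```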
Starting from a comment on an old statspack/AWR page, with a near-simultaneous thread appearing on OTN, (do read both) here’s a quick summary of getting statspack onto 12c with containers. (For non-container databases it’s a standard install).
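The critical detail – and this is my summary of the approach, so do read the comments and the OTN thread for the full story – is that the standard scripts have to be run inside the pluggable database rather than the root. A sketch, assuming a PDB called pdb1:

```sql
rem  Sketch, assuming a PDB called pdb1: run the standard
rem  statspack installation script from inside the PDB,
rem  not from cdb$root.

connect sys@pdb1 as sysdba
@?/rdbms/admin/spcreate

rem  Then take snapshots as usual, again from within the PDB:

connect perfstat@pdb1
execute statspack.snap
```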
Weighing in at a massive 54 characters – the longest parameter name in 12c is:
Followed very closely by (you guessed it)
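If you want to find the winners for yourself in whatever version you’re running, a simple query against v$parameter does the job:

```sql
rem  Report the longest parameter name(s) in the current version:

select  name, length(name) len
from    v$parameter
where   length(name) = (
                select max(length(name))
                from   v$parameter
        )
order by
        name;
```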
By my count there are 109 new v$ and gv$ dynamic performance views in 12c (so far) – and Glen Fawcett has posted a short note on a group that may be of particular benefit to anyone who finds they really have to delve into esoteric I/O problems from time to time. For the less exotic, there’s v$io_outliers and v$lgwrio_outliers which give details about any very slow I/Os – for the more exotic there’s v$kernel_io_outliers – which is the really fascinating one.
Here’s a short session capture focused on v$io_outliers:
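(The capture itself didn’t survive the formatting, so here’s a minimal sketch of the sort of session I mean – the column list varies with version, so describe the view before querying it:)

```sql
rem  Structure first, then content - column names may vary
rem  between versions, so check with describe:

describe v$io_outliers

select * from v$io_outliers;

rem  ... and the log-writer specific equivalent:

select * from v$lgwrio_outliers;
```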
Following a comment from Marcin Przepiorowski on my last post, it crossed my mind to check whether “with” functions can be deterministic – the answer seems to be “not yet”. Here’s a simple script that you can run from end to end to check current and future releases – it compares inline (with) and standalone functions when the “deterministic” keyword has been used.
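The shape of the test is easy to sketch – this is my own minimal version, not the full script: give the function a visible side effect so you can count how many times it is really called, then compare the inline declaration with a standalone function created with the same keyword:

```sql
rem  Minimal sketch: an inline ("with") function declared
rem  deterministic, with a side effect to count real calls.
rem  Remember: set serveroutput on

with
        function f(i number) return number
        deterministic
        is
        begin
                dbms_output.put_line('f() called');
                return i;
        end;
select f(1) from dual connect by level <= 4
/

rem  If "deterministic" were honoured for the inline function
rem  you would hope to see fewer than four "f() called" lines;
rem  repeat the test with a standalone function for comparison.
```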
Here’s a quirky little thing I discovered about 5 minutes after installing 12c Beta 1 (cut-n-pasted from SQL*Plus):
create or replace view v$my_stats
as
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
/

create or replace view v$my_stats
*
ERROR at line 1:
ORA-00999: invalid view name
You can’t create views with names that start with V$ or GV$ in the sys schema. Presumably this is to eliminate the risk of someone’s clever view definition accidentally overwriting and disabling one of the distributed dynamic performance views.
Now that 12c is out, here’s an idea that might save you some time even if you have no intention of migrating to, or even testing, the product for a couple of years. Download the “List of bugs fixed in 12c”: you may find that it’s the best starting point when you’re trying to solve a problem in your current version of Oracle.
A slightly more sophisticated version of the same thing – download and install the product, then take a dump of v$system_fix_control – that may also give you some insight into anomalies (that are not necessarily declared as bugs) in the way Oracle – and the optimizer in particular – behaves. I updated the referenced note to add in a couple of figures for 12.1 – but one figure that’s not there is the number of database parameters: now at 368 in the v$ and 3,333 in the x$ (in my Beta 3 release).
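Taking the dump is a one-liner – spool the output so you can diff it against the same query run in your current version:

```sql
rem  Capture the full list of bug-fix controls for later
rem  comparison with another version:

select  bugno, value, sql_feature, description
from    v$system_fix_control
order by
        bugno;
```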
Just one of those little snippets about 12c that might help someone.
Further to an earlier post, online rebuild works in 12c even when the key is “too long”. The internal code has changed completely, and there is no sign of the journal table that caused the problem in earlier versions.
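A sketch of the old problem – the sizes here are illustrative; the critical point is an index key close to the maximum allowed for the block size, which left no room for the extra rowid the journal table’s index needed:

```sql
rem  Hypothetical demonstration: an index whose key length is
rem  close to the limit for the block size.  In earlier versions
rem  the ONLINE rebuild failed because the journal table needed
rem  an index on (key columns + rowid), pushing it past the
rem  limit; in 12c the rebuild succeeds.

create table t1 (v1 varchar2(4000), v2 varchar2(2000));
create index t1_i1 on t1(v1, v2);

alter index t1_i1 rebuild online;
```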
The news is out that 12c is now available for download (Code, Docs and Tutorials). There are plenty of nice little bits in it, and several important additions or enhancements to the optimizer, but there’s one feature that might prove to be very popular:
SQL> alter table p1 move partition solo ONLINE;
For anyone looking for information on 12c, there are several posts about OpenWorld at the dbi Services blog (see links at right). In particular there’s a summary post about the “pluggable database” describing how you could plug a database into a “container” database, clone it inside the container database, then unplug it and put it somewhere else.
When I heard about the feature, it crossed my mind that there were two “obvious” targets for this technology that Oracle had in mind – in no particular order:
- Consolidation onto Exadata – I’ve seen an Oracle presentation about a customer who moved 18 databases from 14 servers onto 2 Exadata quarter racks; that’s a lot of processes that have to be running per rack simply to keep the many instances idling. If you plugged all the databases into a single instance you should get no application interference between databases and a minimal set of background processes.
- Applications as a Service (or should that be Software as a Service – SaaS): if a 3rd party wants to run your Peoplesoft system for you, they would probably prefer to run one database with multiple Peoplesoft databases plugged into it.
Currently “real” consolidation means lots of work to change multiple databases into multiple schemas in a single database, and worrying about public synonyms; running multiple copies of the same application in the same database demonstrates the most extreme example of how pluggable databases bypass the problem. Just think how nice it would be, as a service provider, to keep a single “empty” Peoplesoft pluggable database in your container database which you clone whenever you sign up with a new customer. And, as a customer, if you want to change your service provider, perhaps you could insist that your supplier unplugs your Peoplesoft database so that you can plug it in at your new service provider.