By my count there are 109 new v$ and gv$ dynamic performance views in 12c (so far) – and Glen Fawcett has posted a short note on a group that may be of particular benefit to anyone who finds they really have to delve into esoteric I/O problems from time to time. For the less exotic, there are v$io_outliers and v$lgwrio_outliers, which give details about any very slow I/Os; for the more exotic there’s v$kernel_io_outliers – which is the really fascinating one.
Here’s a short session capture focused on v$io_outliers:
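A couple of statements will generate a capture of the kind I mean (a sketch, not the original session):

describe v$io_outliers

select  *
from    v$io_outliers
where   rownum <= 10
;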
Following a comment from Marcin Przepiorowski on my last post, it crossed my mind to check whether “with” functions can be deterministic – the answer seems to be “not yet”. Here’s a simple script that you can run from end to end to check current and future releases – it compares inline (with) and standalone functions when the “deterministic” keyword has been used.
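A sketch of the core test (not the full script – demo_counter is a hypothetical helper package used purely to count function calls):

set serveroutput on

create or replace package demo_counter as
        n number := 0;          -- call counter; reset by hand between tests
end;
/

-- inline (with) function declared deterministic
with
        function f_inline(i number) return number
        deterministic
        is
        begin
                demo_counter.n := demo_counter.n + 1;
                return i;
        end;
select  f_inline(mod(rownum,2))
from    dual
connect by level <= 10
/

execute dbms_output.put_line('Inline calls: ' || demo_counter.n)

Repeat with a standalone function created from the same body and the same deterministic keyword, then compare the two call counts – if determinism were honoured you would expect (at most) two calls per fetch rather than one call per row.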
Here’s a quirky little thing I discovered about 5 minutes after installing 12c Beta 1 (cut-n-pasted from SQL*Plus):
create or replace view v$my_stats
as
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#;

create or replace view v$my_stats
                       *
ERROR at line 1:
ORA-00999: invalid view name
You can’t create views with names that start with V$ or GV$ in the sys schema – presumably to eliminate the risk of someone’s clever view definition accidentally overwriting and disabling one of the distributed dynamic performance views.
I thought I’d try to spend some of today catching up on old comments – first the easier ones, then the outstanding questions on Oracle Core.
The very first one I looked at was about pushing predicates, and the specific comment prompted me to jot down this little note about the 10053 trace file (the CBO trace).
Here’s an interesting anomaly from the OTN database forum – PL/SQL track. It’s a single-row delete from a table that does a huge number of db block gets and (in the first example shown) physical reads. Here’s the supplied output with autotrace enabled:
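(If you want to generate the same style of output for yourself the mechanism is simply the following – table name and predicate here are hypothetical:)

set autotrace on

delete from t1 where id = 1000;         -- hypothetical single-row delete
rollback;

set autotrace off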
I’ve written about dynamic sampling in the past, but here’s a little wrinkle that’s easy to miss. How do you get the optimizer to work out the correct cardinality for a query like (the table creation statement follows the query):
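Whatever the specific query, the relevant tool is the dynamic_sampling hint – a generic sketch with a hypothetical table and columns:

select
        /*+ dynamic_sampling(t1 4) */   -- level 4: sample when single-table predicates reference two or more columns
        count(*)
from    t1
where   col1 = 1
and     col2 = 1                        -- hypothetical correlated predicates
;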
Some time ago – actually a few years ago – I wrote a note about the hint /*+ gather_plan_statistics */, making some informal comments about the implementation and relevant hidden parameters. I’ve recently discovered a couple of notes from Alexander Anokhin describing the feature in far more detail, including some of the misleading side effects of the implementation. There are two parts (so far): part 1 and part 2.
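For anyone who hasn’t met the hint, the typical usage pattern looks like this (t1 is just a placeholder table):

select /*+ gather_plan_statistics */ count(*) from t1;

select  *
from    table(dbms_xplan.display_cursor(null, null, 'allstats last'));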
Dominic Brooks published a note recently about some very nasty SQL – originally thinking that it was displaying a run-time problem due to the extreme number of copies of the lnnvl() function the optimizer had produced. In fact it turned out to be a parse-time problem rather than a run-time problem, but when I first read Dominic’s note I was sufficiently surprised that I decided to try modelling the query.
Here’s a quick and dirty script to create a procedure (in the SYS schema – so be careful) to check the Hakan Factor for an object. If you’re not familiar with the Hakan Factor, it’s the value that gets set when you use the command “alter table minimize records_per_block;”.
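The key detail – as a sketch only, since the full script adds validation and formatting – is that the Hakan factor is held (along with some flag bits) in sys.tab$.spare1, so at its simplest the check is something like:

select
        o.owner, o.object_name,
        t.spare1        hakan_raw       -- low-order bits hold the Hakan factor
from
        dba_objects     o,
        sys.tab$        t
where
        o.owner       = 'TEST_USER'     -- hypothetical owner
and     o.object_name = 'T1'            -- hypothetical table
and     o.object_type = 'TABLE'
and     t.obj#        = o.object_id
;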
The problem of slow queries on v$lock just came up again on the OTN database forum, so I thought I’d better push out a post that’s been hanging around on my blog for the last few months. This is actually mentioned in MOS in note 1328789.1: “Query Against v$lock Run from OEM Performs Slowly” which points out that it is basically a problem of bad statistics and all you have to do is collect the stats.
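The standard first step, then, is simply:

begin
        dbms_stats.gather_fixed_objects_stats;
end;
/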
For your entertainment – there’s nothing up my sleeves, this was a simple cut-n-paste after real-time typing with no tricks:
20:39:51 SQL> create table t1 (t1 timestamp);

Table created.

20:39:55 SQL> insert into t1 values(systimestamp);
1 row created.
20:39:59 SQL> select t1 - systimestamp from t1;
1 row selected.
Here’s a deadlock graph that might cause a little confusion:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-001a0015-00014787        34      90     X             32       3          S
TX-00190008-0000601b        32       3     X             34      90          S

session 90: DID 0001-0022-00000327      session 3: DID 0001-0020-000009E9
session 3: DID 0001-0020-000009E9       session 90: DID 0001-0022-00000327

Rows waited on:
  Session 90: obj - rowid = 00030CE5 - AAAwzlAAGAABDiuAAA
  (dictionary objn - 199909, file - 6, block - 276654, slot - 0)
  Session 3: obj - rowid = 00030CE5 - AAAwzlAAGAABp8gAAA
  (dictionary objn - 199909, file - 6, block - 433952, slot - 0)
You’ve probably seen questions on the internet occasionally about finding out how frequently an object has been modified. The question is a little ambiguous – does it mean how much change has occurred, or how many DML statements have been executed; either may be an interesting measure. Of course, Oracle gave us a method of answering the first question a long time ago: v$segstat (or v$segment_statistics if you don’t mind suffering the join) and the resulting content in the AWR or Statspack reports:
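If you just want the cumulative figures for a single object without running a report, a query like the following (schema and table name hypothetical) does the job:

select
        statistic_name, value
from
        v$segment_statistics
where
        owner          = 'TEST_USER'    -- hypothetical schema
and     object_name    = 'T1'           -- hypothetical table
and     statistic_name in ('db block changes', 'physical writes')
order by
        statistic_name
;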
Here’s a summary of a recent posting on OTN:
I have two indexes, (REFNO, REFTYPESEQNO) and (REFNO, TMSTAMP, REFTYPESEQNO). When I run the following query the optimizer uses the second index rather than the first – even though the first is an exact match for the predicates – unless I hint it otherwise:
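The shape of the problem (a hypothetical rendition, not the poster’s actual SQL):

select  *
from    t1
where   refno        = :b1
and     reftypeseqno = :b2      -- exact match for the two-column index
;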
From time to time I’ve looked at an AWR report and pointed out to the owner the difference in workload visible in the “SQL ordered by” sections of the report when they compare the summary figure with the sum of the work done by the individual statements. Often the summary will state that the captured SQL in the interval represents some percentage of the total workload in the high 80s to mid 90s; sometimes you might see a statement that the capture represents a really low percentage, perhaps in the 30s or 40s.
You have to be a little sensible about interpreting these figures, of course – at one extreme it’s easy to double-count the cost of SQL inside PL/SQL; at the other extreme you may notice that every statement reported does about the same amount of work, so you can’t extrapolate from the pattern to decide how significant a low percentage might be. Nevertheless I have seen examples of AWR reports where I’ve felt justified in suggesting that at some point in the interval some SQL appeared, worked very hard, and disappeared from the library cache before the AWR managed to capture it.
Now, from Nigel Noble, comes another explanation for why the AWR report might be hiding expensive SQL – a bug, which doesn’t get fixed until 12.2 (although there are backports in hand).