Oracle Scratchpad

April 27, 2017

Quiz Night

Filed under: Oracle,Performance,Troubleshooting — Jonathan Lewis @ 5:29 pm BST Apr 27,2017

If this is the closing section of the tkprof output from the trace file of a single end-user session that has a performance problem, what’s the most obvious deduction you can make about the cause of the problem, and what sort of action would you take next ?


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      296      0.06       0.10          0        651          0           0
Execute    440      1.39       2.24          7       4664          0         146
Fetch      345     29.38      48.27          0    1709081          0         346
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1081     30.84      50.63          7    1714396          0         492

Misses in library cache during parse: 5
Misses in library cache during execute: 7

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     497        0.00          0.00
  SQL*Net message from client                   496       27.03         50.35
  direct path read                                1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   128199      6.94      11.46          0       2740          0           0
Execute 2274845    371.25     605.60         30   10031162          0       68200
Fetch   2225314     10.94      18.17          5     879297          0      577755
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4628358    389.14     635.23         35   10913199          0      645955

Misses in library cache during parse: 9701
Misses in library cache during execute: 134

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  direct path read                                1        0.00          0.00
  latch: shared pool                              3        0.00          0.00

12666  user  SQL statements in session.
  495  internal SQL statements in session.
13161  SQL statements in session.
********************************************************************************
Trace file: {concealed file name}.trc
Trace file compatibility: 11.1.0.7
Sort options: default

       1  session in tracefile.
   12666  user  SQL statements in trace file.
     495  internal SQL statements in trace file.
   13161  SQL statements in trace file.
    5266  unique SQL statements in trace file.
 39046623  lines in trace file.
     742  elapsed seconds in trace file.


There’s no absolutely right answer to the last question, by the way – there are a couple of variations in approach that are likely to get to the same result in the same time, and some of those variations might have made looking at the tail end data the 2nd or 3rd step in the sequence.

Update

One of the skills of using your time effectively when trouble-shooting is the way you balance the time you spend noting the little details that might be relevant against the time you then spend following up each detail to check for relevance and corroborating evidence. The amateur might throw themselves into in-depth research on the first detail they notice; or, having started with a short list of observations, simply spend too much time on the first before moving on to the second.

An exercise like looking at the tail end of this tkprof file and talking about what you spot and what you might do is just a little exercise in how to pace yourself as you tackle a problem. So, from my perspective, here are a couple of obvious starting points:

  • I said it was an end-user session complaining about performance – the last line of the file tells us that the elapsed time was 742 seconds, and the summary of recursive statements tells us there were 2.2 million executions. How can ANYTHING an end-user wants to do “quickly” require 3,000 executions per second for 12 minutes? I can ask that question because I know that “recursive” executions aren’t necessarily “sys-recursive”: SQL statements executed inside a PL/SQL block are also recursive. On top of that I can see, anyway, that there are 12,666 (probably all different) USER statements in the file – the user (or the user’s code) is doing something it almost certainly shouldn’t be doing. Those 2.2M executions are responsible for 10M buffer visits – does that suggest a lot of single row processing ?
  • Associated with the 13,161 statements in the trace file there are 9,700 misses in the library cache during parse – that means “hard” parsing, probably means the user is getting through about 800 “new” statements per minute – but there are only 5,266 unique statements so part of the problem is that some recently used statements (or child cursors, at least) are being flushed from the library cache: again that suggests that they’re doing something wrong.
  • As one of the commentators pointed out – a detail that I hadn’t noted initially – the number of “current” buffers is zero. This isn’t a mini-batch updating the database, it’s just a report (done badly).

What would I do next ? If it wasn’t already immediately obvious I’d look for the statements that were responsible for the very large numbers of executions; a call to tkprof with sort=execnt as a parameter would push the high execution counts to the top of the file. (The worst one had over 900,000 executions, the next 600,000.) Then I’d do a grep, sed, sort with uniq -c to find the statements which (I’d guess – and there were three of them, responsible for a few thousand variants each) were generated texts using concatenation instead of bind variables. Then I’d go and find the owner of the code and sit down with them to work out how it should be re-engineered.
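
If the system is still available to query, by the way, you don’t need grep and sed to spot the concatenation problem: grouping v$sql by force_matching_signature highlights statements that differ only in their literals. A minimal sketch (the threshold of 100 is arbitrary):

select
        force_matching_signature, count(*) variants
from
        v$sql
where
        force_matching_signature != 0
group by
        force_matching_signature
having
        count(*) > 100
order by
        variants desc
;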

 

April 21, 2017

Undo Understood

Filed under: Infrastructure,Oracle,Read Consistency,Troubleshooting,undo — Jonathan Lewis @ 10:45 am BST Apr 21,2017

It’s hard to understand all the ramifications of Oracle’s undo handling, and it’s not hard to find cases where the resulting effects are very confusing. A recent post on the OTN database forum resulted in one response insisting that the OP was obviously updating a table with frequent commits from one session while querying it from another, thereby generating a large number of undo reads in the querying session.

It’s a possible cause of the symptoms that had been described – although not the only possible cause, especially since the symptoms hadn’t been described completely. It’s actually possible to see this type of activity when there are no updates and no outstanding commits taking place at all on the target table. Unfortunately it’s quite hard to demonstrate this with a quick, simple script in recent versions of Oracle unless you do some insanely stupid things to make the problem appear – but I know how to do “insanely stupid” in Oracle, so here we go; first some data creation:

rem
rem     Script:         undo_rec_apply_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2017
rem

create table t2(v1 varchar2(100));
insert into t2 values(rpad('x',100));
commit;

create table t1
nologging
pctfree 99 pctused 1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 8e4 -- > comment to bypass WordPress formatting issue
;

alter table t1 add constraint t1_pk primary key(id)
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

The t2 table is there as a target for a large number of updates from a session other than the one demonstrating the problem. The t1 table has been defined and populated in a way that puts one row into each of 80,000 blocks (though, with ASSM and my specific tablespace definition of uniform 1MB extents, the total space is about 80,400 blocks). I’ve got a primary key declaration that allows me to pick single rows/blocks from the table if I want to.
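
If you want to verify the “one row per block” claim for yourself, a quick sanity check using dbms_rowid would be something like this (a sketch – it reads the whole table, so only suitable for small test data sets):

select
        count(*)                                                row_count,
        count(distinct dbms_rowid.rowid_block_number(rowid))   block_count
from
        t1
;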

At this point I’m going to do a lot of updates to the main table using a very inefficient strategy to emulate the type of thing that can happen on a very large table with lots of random updates and many indexes to maintain:


begin
        for i in 1..800 loop
                update t1 set v1 = upper(v1) where id = 100 * i;
                execute immediate 'alter system switch logfile';
                execute immediate 'alter system flush buffer_cache';
                commit;
                dbms_lock.sleep(0.01);
        end loop;
end;
/

set transaction read only;

I’m updating every 100th row/block in the table with single row commits, but before each commit I’m switching log files and flushing the buffer cache.

This is NOT an experiment to try on a production system, or even a development system if there are lots of busy developers or testers around – and if you’re running your dev/test in archivelog mode (which, for some of your systems, you should be) you’re going to end up with a lot of archived redo logs. I have to do the log file switch to ensure that the updated blocks are unpinned so that they will be written to disc and then flushed from the cache by the buffer cache flush. (This extreme approach would not have been necessary in earlier versions of Oracle, but the clever developers at Oracle Corp. keep adding “damage limitation” touches to the code that I have to work around to create small tests.) Because each block has been flushed from memory before its commit the session will record a “commit cleanout failures: block lost” on each commit. By the time this loop has run to completion there will be 800 blocks from the table on disc needing a “delayed block cleanout”.
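
If you want to confirm that the failed cleanouts really are happening, a simple check of the session statistics after the loop would do it – a sketch (take a before/after snapshot if you want exact deltas):

select
        sn.name, ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
and     sn.name like 'commit cleanout%'
;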

Despite the extreme brute force I use in this loop, there is a further very important detail that has to be set before this test will work (at least in 11.2.0.4, which is what I’ve used in my test runs). I had to start the database with the hidden parameter _db_cache_pre_warm set to false. If I hadn’t started the database with this feature disabled Oracle would have noticed that the buffer cache had a lot of empty space and would have “pre-warmed” the cache by loading a few thousand blocks from t1 as I updated one row – with the side effect that the update from the previous cycle of the loop would have been cleaned out on the current cycle of the loop. If you do run this experiment, remember to reset the parameter and restart the instance when you’ve finished.

I’ve finished this chunk of code with a call to “set transaction read only” – this emulates the start of a long-running query: it captures a point in time (through the current SCN) and any queries that run in the session from now on have to be read-consistent with that point in time. After doing this I need to use a second session to do a bit of hard work – in my case the following:

execute snap_rollstats.start_snap

begin
        for i in 1..10000 loop
                update t2 set v1 = upper(v1);
                update t2 set v1 = lower(v1);
                commit;
        end loop;
end;
/

execute snap_rollstats.end_snap

The calls to the snap_rollstats package simply read v$rollstat and give me a report of the changes in the undo segment statistics over the period of the loop. I’ve executed 10,000 transactions in the interval, which was sufficient on my system to use each undo segment header at least 1,000 times and (since there are 34 transaction table slots in each undo segment header) overwrite each transaction table slot about 30 times. You can infer from these comments that I have only 10 undo segments active at the time, your system may have many more (check the number of rows in v$rollstat) so you may want to scale up that 10,000 loop count accordingly.
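
I haven’t listed the snap_rollstats package, but the underlying data is simply v$rollstat, so a manual before/after pair of queries like the following would serve as a substitute (a sketch):

select
        usn, xacts, gets, waits, wraps
from
        v$rollstat
order by
        usn
;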

At this point, then, the only thing I’ve done since the start of my “long running query” is to update another table from another session. What happens when I do a simple count() from t1 that requires a full tablescan ?

alter system flush buffer_cache;

execute snap_filestat.start_snap
execute snap_my_stats.start_snap

select count(v1) from t1;

execute snap_my_stats.end_snap
execute snap_filestat.end_snap

I’ve flushed the buffer cache to get rid of any buffered undo blocks – again an unreasonable thing to do in production but a valid way of emulating the aging out of undo blocks that would take place in a production system – and surrounded my count() with a couple of packaged calls to report the session stats and file I/O stats due to my query. (If you’re sharing your database then the file I/O stats will be affected by the activity of other users, of course, but in my case I had a private database.)
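
Similarly, the snap_filestat package just reports deltas from v$filestat; a minimal manual equivalent would be (a sketch):

select
        df.file#, df.name, fs.phyrds, fs.phyblkrd, fs.singleblkrds
from
        v$filestat      fs,
        v$datafile      df
where
        df.file# = fs.file#
order by
        df.file#
;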

Here are the file stats:


--------------
Datafile Stats
--------------
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs    Max      Writes      Blocks   Avg Csecs    Max
File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------    ---      ------      ------   ---------    ---
    1          17          17       1.000        .065          17        .065           0        .000      6           0           0        .000     15
/u01/app/oracle/oradata/TEST/datafile/o1_mf_system_938s4mr3_.dbf
    3         665         665       1.000        .020         665        .020           0        .000      6           0           0        .000     15
/u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         631      80,002     126.786        .000           2        .045         629        .000      6           0           0        .000     17
/u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__cz1w7tz1_.dbf

As expected I’ve done a number of multiblock reads of my data tablespace for a total of roughly 80,000 blocks read. What you may not have expected is that I’ve done 665 single block reads of the undo tablespace.

What have I been doing with all those undo blocks ? Check the session stats:


Session stats
-------------
Name                                                                     Value
----                                                                     -----
transaction tables consistent reads - undo records applied              10,014
transaction tables consistent read rollbacks                                10

We’ve been reading undo blocks so that we can create read-consistent copies of the 10 undo segment headers that were active in my instance. We haven’t (and you’ll have to trust me on this, I can’t show you the stats that aren’t there!) reported any “data blocks consistent reads – undo records applied”.

If you want to see a detailed explanation of what has happened you’ll need to read Oracle Core (UK source), chapter 3 (and possibly chapter 2 to warm yourself up for the topic). In outline the following type of thing happens:

  • Oracle gets to the first block updated in t1 and sees that there’s an ITL (interested transaction list) entry that hasn’t been marked as committed (we flushed the block from memory before the commit cleanout could take place so the relevant transaction is, apparently, still running and the row is still marked as locked).
  • Let’s say the ITL entry says the transaction was for undo segment 34, transaction table slot 11, sequence 999. Oracle reads the undo segment header block for undo segment 34 and checks transaction table slot 11, which is now at sequence 1032. Oracle can infer from this that the transaction that updated the table has committed – but can’t yet know whether it committed before or after the start of our “long running query”.
  • Somehow Oracle has to get slot 11 back to sequence 999 so that it can check the commit SCN recorded in the slot at that sequence number. This is where we see “undo records applied” to make the “transaction table read consistent”. It can do this because the undo segment header has a “transaction control” section in it that records some details of the most recent transaction started in that segment. When a transaction starts it updates this information, but saves the old version of the transaction control and the previous version of its transaction table slot in its first undo record, consequently Oracle can clone the undo segment header block, identify the most recent transaction, find its first undo record and apply it to unwind the transaction table information. As it does so it has also wound the transaction control section backwards one step, so it can use that (older) version to go back another step … and so on, until it takes the cloned undo segment header so far back that it takes our transaction table slot back to sequence 999 – and the job is done, we can now check the actual commit SCN.  (Or, if we’re unlucky, we might receive an ORA-01555 before we get there)

So – no changes to the t1 table during the query, but lots of undo records read because OTHER tables have been changing.

Footnote:

In my example the tablescan used direct path reads – so the blocks that went through delayed block cleanout were in private memory, which means they weren’t in the buffer cache and didn’t get written out to disc. When I flushed the buffer cache (again to emulate aging out of undo blocks etc.) and repeated the tablescan Oracle had to go through all that work of creating read consistent transaction tables all over again.

Footnote 2:

Despite the grand claim implied by the title I hope everyone realises that this is just one little detail of how undo handling can interfere with their expectations of performance.

 

 

March 27, 2017

Index out of range

Filed under: CBO,Indexing,Oracle,Troubleshooting — Jonathan Lewis @ 8:42 am BST Mar 27,2017

I’ve waxed lyrical in the past about creating suitable column group statistics whenever you drop an index because even when the optimizer doesn’t use an index in its execution path it might have used the number of distinct keys of the index (user_indexes.distinct_keys) in its estimates of cardinality.
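
As a reminder, the replacement column group can be created ahead of the next stats collection with a call to dbms_stats.create_extended_stats – a sketch, using the n1 and n2 columns that will appear in the demo below:

select
        dbms_stats.create_extended_stats(
                ownname         => user,
                tabname         => 'T1',
                extension       => '(N1,N2)'
        )
from
        dual
;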

I’ve also highlighted various warnings (here (with several follow-on URLs) and here) about when the optimizer declines to use column group statistics. One of those cases is when a predicate on one of the columns goes “out of range” – i.e. is below the column low_value or above the column high_value. Last night it suddenly crossed my mind that if we drop an index and replace it with a column group we might see an example of inconsistent behaviour: what happens when the index exists but the predicate is out of range – would you find that dropping the index and replacing it with a column group would give you different cardinality estimates for out of range predicates ?

Here’s the demonstration of what happened when I created a simple test on 12.1.0.2:


rem
rem     Script:         index_v_colgrp.sql
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             12.1.0.2
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        mod(rownum-1,100)               n1,
        mod(rownum-1,100)               n2,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format problem
;

begin dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(n1, n2);

set autotrace traceonly explain

I’ve created a table with 1M rows, where n1 and n2 are perfectly correlated – there are 100 distinct pairs of values (ranging from (0,0) to (99,99)). Now with autotrace enabled I’ll execute three queries – two with an index on the table, of which one will be the baseline plan for predicates that are “in-range” and the other will take the predicates out of range, and the third after I’ve dropped the index and substituted a matching column group to see what I get for the “out of range” plan. The plans may produce different paths as the index disappears, of course, but in this experiment we’re only interested in the cardinality estimates.

Here’s the code to run the three queries:


select  padding
from    t1
where
        n1 = 50
and     n2 = 50
;

select  padding
from    t1
where
        n1 = 110
and     n2 = 110
;

drop index t1_i1;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for columns (n1, n2) size 1'
        );
end;
/

select  padding
from    t1
where
        n1 = 110
and     n2 = 110
;

And the three execution plans:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 10000 |  1044K|  2142   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 10000 |  1044K|  2142   (4)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=50 AND "N2"=50)


---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |    79 |  8453 |    83   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    79 |  8453 |    83   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |    79 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=110 AND "N2"=110)


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    79 |  8453 |  2142   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    79 |  8453 |  2142   (4)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=110 AND "N2"=110)

In summary:

  • With the index in place and the predicates in range the optimizer used user_indexes.distinct_keys to calculate cardinality.
  • With the index in place and the predicates (or just one of them, in fact) out of range the optimizer used the individual column selectivities with linear decay – the arithmetic is sketched after this list.
  • With a column group instead of an index the optimizer behaved exactly as it used to with the index in place.
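
For the record, here’s the linear decay arithmetic that produces the estimate of 79 – a sketch, assuming the commonly described model where the selectivity of an equality predicate fades linearly to zero as the value moves from the high value to one whole column range beyond it:

        column range:           high_value - low_value = 99 - 0 = 99
        distance out of range:  110 - 99 = 11
        decayed selectivity:    (1/100) * (1 - 11/99) ~= 0.0088889 per column
        combined selectivity:   0.0088889 * 0.0088889 ~= 7.901e-5
        estimated cardinality:  1e6 * 7.901e-5 = 79.01, which rounds to the 79 we saw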

So my concern about substituting column groups for indexes was unfounded – the optimizer was being silly (legal disclaimer: that’s just my opinion) with indexes, and the silly (ditto) behaviour with column groups means that nothing has changed.

I’ll have to go back a couple of versions of Oracle to repeat these tests – maybe this behaviour with user_indexes.distinct_keys in place is relatively recent, but it’s another reason why execution plans may change suddenly and badly as time passes when “nothing changed”.

 

March 23, 2017

min/max Upgrade

Filed under: Bugs,CBO,Execution plans,Indexing,Oracle,Troubleshooting — Jonathan Lewis @ 8:53 am BST Mar 23,2017

A question came up on the OTN database forum a little while ago about a very simple query that was taking different execution paths on two databases with the same table and index definitions and similar data. In one database the plan used the “index full scan (min/max)” operation while the other database used a brute force “index fast full scan” operation.

In most circumstances the starting point to address a question like this is to check whether some configuration details, or some statistics, or the values used in the query are sufficiently different to result in a significant change in costs; and the first simple procedure you can follow is to hint each database to use the plan from the opposite database to see if this produces any clues about the difference – it’s a good idea when doing this test to use one of the more verbose formatting options for the call to dbms_xplan.

In this case, though, the OP discovered a note on MoS reporting exactly the problem he was seeing:

Doc ID 2144428.1: Optimizer Picking Wrong ‘INDEX FAST FULL SCAN’ Plan vs Correct ‘INDEX FULL SCAN (MIN/MAX)’

which referred to

Bug 22662807: OPTIMIZER PICKING INDEX FFS CAN INSTEAD OF MIN/MAX

Conveniently the document suggested a few workarounds:

  • alter session set optimizer_features_enable = ‘11.2.0.3’;
  • alter session set “_fix_control” = ‘13430622:off’;
  • delete object stats [Ed: so that dynamic sampling takes place … maybe a /*+ dynamic_sampling(alias level) */ hint would suffice].

Of the three options my preference would (at least in the short term) be the _fix_control one. Specifically, from the v$system_fix_control view, we can see that it addresses the problem very precisely with the description: “index min/max cardinality estimate fix for filter predicates”.
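
You can check how the fix is described (and whether it’s currently enabled) on your own system with a query along these lines:

select
        bugno, value, optimizer_feature_enable, description
from
        v$system_fix_control
where
        bugno = 13430622
;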

The example in the bug note showed a very simple statement (even more simple than the OP’s query which was only a single table query anyway), so I thought I’d build a model and run a few tests to see what was going on. Luckily, before I’d started work, one of the other members of the Oak Table network sent an email to the list asking if anyone knew how the optimizer was costing an example he’d constructed – and I’ve finally got around to looking at his example, and here’s the model and answer(s), starting with the data set:


rem
rem     Script:         test_min_max.sql
rem     Dated:          March 2017
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem             11.2.0.3
rem

create table min_max_test nologging
as
with ids as (
        select /*+ Materialize */ rownum  id from dual connect by rownum <= 50000 -- > comment to protect formatting
),
line_nrs as (
        select /*+ Materialize */  rownum line_nr from dual connect by rownum <= 20 -- > comment to protect formatting
)
select
        id, line_nr, rpad(' ', 800, '*') data
from
        line_nrs, ids
order by
        line_nr, id
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'min_max_test',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index mmt_ln_id on min_max_test (line_nr, id) nologging;
create index mmt_id    on min_max_test (id)          nologging;

The table has two critical columns: each id has 20 line_nr values associated with it, but the way the data was generated means that the line numbers for a given id are scattered across 20 separate table blocks.

There are two indexes – one on the id which will allow us to find all the rows for a given id as efficiently as possible, and one (slightly odd-looking in this context) that would allow us to find a specific row for a given line_nr and id very efficiently. Two things about these indexes – first, in a live application they should both be compressed on the first (only, in the case of index mmt_id) column; second, the necessity of the mmt_id index is questionable and it might be an index you could drop if you reversed the order of the columns in mmt_ln_id. The thing about these indexes, though, is that they allow us to demonstrate a problem.
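
In case you want to try it, the create index statements above would simply acquire a compress clause – a sketch (check the change in leaf block counts before adopting the idea in production):

create index mmt_ln_id on min_max_test (line_nr, id) compress 1 nologging;
create index mmt_id    on min_max_test (id)          compress 1 nologging;

So let’s query the data – twice, hinting each index in turn: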


variable b1 number;
exec :b1 := 50000;

set serveroutput off

select
        /*+ index(t(id)) */
        min(line_nr)
from
        min_max_test t
where
        id = :b1
;

select * from table(dbms_xplan.display_cursor);

select
        /*+ index(t(line_nr, id)) */
        min(line_nr)
from
        min_max_test t
where
        id = :b1
;

select * from table(dbms_xplan.display_cursor);

It’s fairly safe to make a prediction about the execution plan and cost of the first query – it’s likely to be a range scan that accesses a couple of branch blocks, a leaf block and 20 separate table blocks followed by a “sort aggregate” – with a cost of about 23.

It’s a little harder to make a prediction about the second query. The optimizer could infer that the min(line_nr) has to be close to the left hand section of the index, and could note that the number of rows in the table is the same as the product of the number of distinct values of the two separate columns, and it might note that the id column is evenly distributed (no histogram) across the data, so it might “guess” that it need only range scan all the entries for the first line_nr to find the appropriate id. So perhaps the optimizer will use the index min/max range scan with a cost that is roughly 2 branch blocks plus total leaf blocks / 20 (since there are 20 distinct values for line_nr); maybe it would divide the leaf block estimate by two because “on average” – i.e. for repeated random selections of value for id – it would have to scan half the leaf blocks. There were 2,618 leaf blocks in my index, so the cost should be close to either 133 or 68.

Here are the two plans – range scan first, min/max second:


select  /*+ index(t(id)) */  min(line_nr) from  min_max_test t where id = :b1
-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |    23 (100)|          |
|   1 |  SORT AGGREGATE                      |              |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MIN_MAX_TEST |    20 |   160 |    23   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | MMT_ID       |    20 |       |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID"=:B1)

select  /*+ index(t(line_nr, id)) */  min(line_nr) from  min_max_test t where  id = :b1
-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |    22 (100)|          |
|   1 |  SORT AGGREGATE             |           |     1 |     8 |            |          |
|   2 |   FIRST ROW                 |           |     1 |     8 |    22   (0)| 00:00:01 |
|*  3 |    INDEX FULL SCAN (MIN/MAX)| MMT_LN_ID |     1 |     8 |    22   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ID"=:B1)

Spot on with the estimate for the simple range scan – but what did we do wrong with the estimate for the min/max scan ? You might notice in the first example the “table access by index rowid batched” operation and realise that this is running on 12c. Here’s the plan I get if I set the optimizer_features_enable back to 11.2.0.3 before running the second query again:


select  /*+ index(t(line_nr, id)) */  min(line_nr) from  min_max_test t where  id = :b1
-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |   136 (100)|          |
|   1 |  SORT AGGREGATE             |           |     1 |     8 |            |          |
|   2 |   FIRST ROW                 |           |     1 |     8 |   136   (1)| 00:00:01 |
|*  3 |    INDEX FULL SCAN (MIN/MAX)| MMT_LN_ID |     1 |     8 |   136   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ID"=:B1)

Using the 11.2.0.3 optimizer model the plan has a cost that’s very close to our prediction – we’ll see why there’s a slight difference in a moment. If we set the optimizer_features_enable to 11.2.0.4 the cost drops back to 22. So for our example 11.2.0.3 will use the simple “index range scan” and an upgrade to 11.2.0.4 (or higher) will switch to the “index full scan (min/max)”. If you look at the OTN posting the impact of the change in costing is exactly the other way around – 11.2.0.3 uses the min/max path, 11.2.0.4 uses the simple index range scan.

The techy bit

You really don’t need to know this – experimenting with the optimizer_features_enable (or _fix_control) will give you plans that show you all the numbers you need to see to check whether or not you’ve run into this particular problem – but if you’re interested here’s a little bit from the two 10053 trace files. We need only look at a few critical lines. From the 11.2.0.3 costing for the min/max scan:


Index Stats::
  Index: MMT_ID  Col#: 1
  LVLS: 2  #LB: 2202  #DK: 50000  LB/K: 1.00  DB/K: 20.00  CLUF: 1000000.00  NRW: 1000000.00
  Index: MMT_LN_ID  Col#: 2 1
  LVLS: 2  #LB: 2618  #DK: 1000000  LB/K: 1.00  DB/K: 1.00  CLUF: 125000.00  NRW: 1000000.00

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MIN_MAX_TEST[T]
  Column (#1): ID(NUMBER)
    AvgLen: 5 NDV: 50536 Nulls: 0 Density: 0.000020 Min: 1.000000 Max: 50000.000000
  Table: MIN_MAX_TEST  Alias: T
    Card: Original: 1000000.000000  Rounded: 20  Computed: 19.787874  Non Adjusted: 19.787874

 ****** Costing Index MMT_LN_ID
  Access Path: index (Min/Max)
    Index: MMT_LN_ID
    resc_io: 135.000000  resc_cpu: 961594
    ix_sel: 1.000000  ix_sel_with_filters: 1.9788e-05
    Cost: 135.697679  Resp: 135.697679  Degree: 1

I was running 12.1.0.2 so there were a few extra bits and pieces that I’ve deleted (mostly about SQL Plan Directives and in-memory). Critically we can see that the stats collection has a small error for the ID column – 50,536 distinct values (NDV) instead of exactly 50,000. This seems to have given us a cost for the expected index range scan of: 2 (blevel) + ceiling(2618 (leaf blocks) * 50536 / 1000000) = 2 + ceil(132.3) = 135, to which we add a bit for the CPU and get to 136. (Q.E.D.)

Then we switch to costing for 11.2.0.4:


SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MIN_MAX_TEST[T]
  Column (#1): ID(NUMBER)
    AvgLen: 5 NDV: 50536 Nulls: 0 Density: 0.000020 Min: 1.000000 Max: 50000.000000
  Table: MIN_MAX_TEST  Alias: T
    Card: Original: 1000000.000000  Rounded: 20  Computed: 19.787874  Non Adjusted: 19.787874

 ****** Costing Index MMT_LN_ID
  Access Path: index (Min/Max)
    Index: MMT_LN_ID
    resc_io: 21.787874  resc_cpu: 156872
    ix_sel: 1.000000  ix_sel_with_filters: 1.9788e-05
    Cost: 22.324608  Resp: 22.324608  Degree: 1

We still have the small error in the number of distinct values for id, so the estimated number of rows that we need to access from the table for a given id (before “aggregating” to find its minimum line_nr) is 19.787874 (Computed: / Non Adjusted:) rather than exactly 20. Notice, then, that the cost of using the index is 19.787874 + 2 which looks suspiciously like adding the blevel to the number of table blocks to get a cost and forgetting that we might have to kiss a lot of frogs before we find the prince. Basically, in this example at least, it looks like the costing algorithm has NOTHING to do with the mechanics of what actually has to happen at run-time.

Footnote

This is only an initial probe into what’s going on with the min/max scan; there are plenty more patterns of data that would need to be tested before we could have any confidence that we had produced a generic model of how the optimizer does its calculations – the only thing to note so far is that there IS a big change as you move from 11.2.0.3 to later versions: the case on OTN showed the min/max scan disappearing on the upgrade, the example above shows the min/max disappearing on the downgrade – either change could be bad news for parts of a production system.

There are a couple of related bugs that might also be worth reviewing.

  • Bug 11834402 : CBO CHOOSES A SLOW INDEX FULL SCAN OVER A MUCH FASTER INDEX RANGE SCAN
  • Bug 13430622 : INDEX SCAN IS VERY SLOW FOR ONE PREDICATE AND FAST FOR OTHERS

There is a note, though, that this last bug was fixed in 12.1.

Footnote 2

When experimenting, one idea to pursue as the models get more complex and you’re using indexes with more than two columns is to test whether the presence of carefully chosen column group statistics might make a difference to the optimizer’s estimates of cardinality (hence cost) of the min/max scan.

March 21, 2017

Deception

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 2:41 pm BST Mar 21,2017

One of the difficulties with trouble-shooting is that it’s very easy to overlook, or forget to go hunting for, the little details that turn a puzzle into a simple problem. Here’s an example showing how you can read a bit of an AWR report and think you’ve found an unpleasant anomaly. I’ve created a little model and taken a couple of AWR snapshots a few seconds apart so the numbers involved are going to be very small, but all I’m trying to demonstrate is a principle. So here’s a few lines of one of the more popular sections of an AWR report:

SQL ordered by Gets                       DB/Inst: OR32/or32  Snaps: 1754-1755
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets   as a percentage of Total Buffer Gets
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets:         351,545
-> Captured SQL account for   65.0% of Total

     Buffer                 Gets              Elapsed
      Gets   Executions   per Exec   %Total   Time (s)  %CPU   %IO    SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
      8,094          20        404.7    2.3        0.0 114.1   2.3 017r1rur8atzv
Module: SQL*Plus
UPDATE /*+ by_pk */ T1 SET N1 = 0 WHERE ID = :B1

We have a simple update statement which, according to the hint/comment (that’s not a real hint, by the way) and guessing from column names, is doing an update by primary key; but it’s taking 400 buffer gets per execution!

It’s possible, but unlikely, that there are about 60 indexes on the table that all contain the n1 column; perhaps there’s a massive read-consistency effect going on thanks to some concurrent long-running DML on the table; or maybe there are a couple of very hot hotspots in the table that are being constantly modified by multiple sessions; or maybe the table is a FIFO (first-in, first-out) queueing table and something funny is happening with a massively sparse index.

Let’s just check, first of all, that the access path is the “update by PK” that the hint/comment suggests (cut-n-paste):


SQL> select * from table(dbms_xplan.display_cursor('017r1rur8atzv',null));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
SQL_ID  017r1rur8atzv, child number 0
-------------------------------------
UPDATE /*+ by_pk */ T1 SET N1 = 0 WHERE ID = :B1

Plan hash value: 1764744892

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |       |       |       |     3 (100)|          |
|   1 |  UPDATE            | T1    |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| T1_PK |     1 |    14 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=:B1)

The plan is exactly as expected – so where do we look next to find out what’s going on? I’m a great believer in trying to make sure I have as much relevant information as possible; but there’s always the compromise when collecting information that balances the benefit of the new information against the difficulty of gathering it – sometimes the information that would be really helpful is just too difficult, or time-consuming, to collect.

Fortunately, in this case, there’s a very quick and easy way to enhance the information we’ve got so far. The rest of the AWR report – why not search for that SQL_ID elsewhere in the report to see if it gives us a clue ? Unfortunately the value doesn’t appear anywhere else in the report. On the other hand there’s the AWR SQL report (?/rdbms/admin/awrsqrpt.sql – or the equivalent drill-down on the OEM screen), and here’s a key part of what it tells us for this statement:


Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                                36            1.8     0.0
CPU Time (ms)                                    41            2.0     0.1
Executions                                       20            N/A     N/A
Buffer Gets                                   8,094          404.7     2.3
Disk Reads                                        1            0.1     0.0
Parse Calls                                      20            1.0     0.4
Rows                                          2,000          100.0     N/A
User I/O Wait Time (ms)                           1            N/A     N/A
Cluster Wait Time (ms)                            0            N/A     N/A
Application Wait Time (ms)                        0            N/A     N/A
Concurrency Wait Time (ms)                        0            N/A     N/A
Invalidations                                     0            N/A     N/A
Version Count                                     1            N/A     N/A
Sharable Mem(KB)                                 19            N/A     N/A
          -------------------------------------------------------------

Spot the anomaly?

We updated by primary key 20 times – and updated 2,000 rows!

Take another look at the SQL – it’s all in upper case (apart from the hint/comment) with a bind variable named B1 – that means it’s (probably) an example of SQL embedded in PL/SQL. Does that give us any clues ? Possibly, but even if it doesn’t we might be able to search dba_source for the PL/SQL code where that statement appears.
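
The search itself can be as crude as the following – a sketch, keying on a distinctive fragment of the statement text:

select
        owner, name, type, line, text
from
        dba_source
where
        upper(text) like '%BY_PK%'
order by
        owner, name, type, line
;

And this is what it looks like in the source: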

        forall i in 1..m_tab.count
                update  /*+ by_pk */ t1
                set     n1 = 0
                where   id = m_tab(i).id
        ;

It’s PL/SQL array processing – we register one execution of the SQL statement while processing the whole array, so if we can show that there are 100 rows in the array the figures we get from the AWR report now make sense. One of the commonest oversights I (used to) see in places like the Oracle newsgroup or listserver was people reporting the amount of work done but forgetting to consider the equally important “work done per row processed”. To me it’s also one of the irritating little defects with the AWR report – I’d like to see “rows processed” in various of the “SQL ordered by” sections of the report (not just the “SQL ordered by Executions” section), rather than having to fall back on the AWR SQL report.
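
A quick sanity check on the original numbers, by the way: 8,094 buffer gets for 2,000 rows updated is almost exactly 4 buffer gets per row – roughly what you’d expect for an update by primary key (a couple of branch blocks and a leaf block in the index, plus the table block), so once you know the row count the “anomaly” disappears completely.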

Footnote:

If you want to recreate the model and tests, here’s the code:


rem
rem     Script:         forall_pk_confusion.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017
rem
rem     Last tested
rem             12.1.0.2
rem

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        2 * trunc(dbms_random.value(1e10,1e12))         n1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format problem
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1', 
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(id);

declare

        cursor c1 is
        select  id
        from    t1
        where   mod(id,10000) = 1
        ;

        type c1_array is table of c1%rowtype index by binary_integer;
        m_tab c1_array;

begin

        open c1;

        fetch c1
        bulk collect
        into m_tab
        ;

        dbms_output.put_line('Fetched: ' || m_tab.count);

        close c1;

        forall i in 1..m_tab.count
                update  /*+ by_pk */ t1
                set     n1 = 0
                where   id = m_tab(i).id
        ;

        dbms_output.put_line('Updated: ' || sql%rowcount);

end;
/

select
        v.plan_table_output
from
        v$sql   sql,
        table(dbms_xplan.display_cursor(sql.sql_id, sql.child_number)) v
where
        sql_text like 'UPDATE%by_pk%'
;

select
        executions, rows_processed, disk_reads, buffer_gets
from    v$sql  
where   sql_id = '017r1rur8atzv'
;

March 9, 2017

Quiz Night

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 10:34 pm BST Mar 9,2017

The following is a straight, continuous, untouched, cut-n-paste from an SQL*Plus session on 12.1.0.2. How come the update doesn’t execute in parallel – noting that parallel DML has been enabled and the tablescan to identify rows to be updated does execute in parallel ?


SQL> desc t1
 Name                                                                            Null?    Type
 ------------------------------------------------------------------------------- -------- ------------------------------------------------------
 OWNER                                                                           NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                                     NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                           VARCHAR2(128)
 OBJECT_ID                                                                       NOT NULL NUMBER
 DATA_OBJECT_ID                                                                           NUMBER
 OBJECT_TYPE                                                                              VARCHAR2(23)
 CREATED                                                                         NOT NULL DATE
 LAST_DDL_TIME                                                                   NOT NULL DATE
 TIMESTAMP                                                                                VARCHAR2(19)
 STATUS                                                                                   VARCHAR2(7)
 TEMPORARY                                                                                VARCHAR2(1)
 GENERATED                                                                                VARCHAR2(1)
 SECONDARY                                                                                VARCHAR2(1)
 NAMESPACE                                                                       NOT NULL NUMBER
 EDITION_NAME                                                                             VARCHAR2(128)
 SHARING                                                                                  VARCHAR2(13)
 EDITIONABLE                                                                              VARCHAR2(1)
 ORACLE_MAINTAINED                                                                        VARCHAR2(1)

SQL> select * from t1 minus select * from all_objects;

OWNER           OBJECT_NAME          SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE             CREATED   LAST_DDL_
--------------- -------------------- ---------------------- ---------- -------------- ----------------------- --------- ---------
TIMESTAMP           STATUS  T G S       NAMESPACE EDITION_NAME         SHARING       E O
------------------- ------- - - - --------------- -------------------- ------------- - -
TEST_USER       T1                                              159331         159331 TABLE                   09-MAR-17 09-MAR-17
2017-03-09:22:16:36 VALID   N N N               1                      NONE            N


1 row selected.

SQL> alter session force parallel dml;

Session altered.

SQL> set serveroutput off
SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 121765358

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  UPDATE               | T1       |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property
   - PDML disabled because single fragment or non partitioned table used


29 rows selected.

SQL> select * from v$pq_tqstat;

DFO_NUMBER      TQ_ID SERVER_TYPE       NUM_ROWS      BYTES  OPEN_TIME AVG_LATENCY      WAITS   TIMEOUTS PROCESS         INSTANCE     CON_ID
---------- ---------- --------------- ---------- ---------- ---------- ----------- ---------- ---------- --------------- -------- ----------
         1          0 Producer              8997     363737 ##########           0         14          0 P004                   1          0
                                            9721     409075 ##########           0         12          0 P007                   1          0
                                            9774     408591 ##########           0         12          0 P005                   1          0
                                            9844     396816 ##########           0         12          0 P003                   1          0
                                            9965     403926 ##########           0         13          0 P006                   1          0
                                            9727     388829 ##########           0         12          0 P002                   1          0
                                            9951     399162 ##########           0         14          0 P001                   1          0
                                           10345     408987 ##########           0         13          0 P000                   1          0
                      Consumer             78324    3179123 ##########           0          0          0 QC                     1          0



9 rows selected.

If you want to see the fully parallel plan, it would look like this (after running the query above against v$pq_tqstat I executed one statement that I’m not showing before carrying on with the statements below):


SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 3991856572

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM) | :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    UPDATE             | T1       |       |       |            |          |  Q1,00 | PCWP |            |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property


28 rows selected.

SQL> select object_name, object_type from user_objects;

OBJECT_NAME          OBJECT_TYPE
-------------------- -----------------------
T1                   TABLE

1 row selected.

Answer coming some time tomorrow.

March 7, 2017

Guesswork

Filed under: Infrastructure,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 7:57 pm BST Mar 7,2017

A recent posting on the OTN database forum described a problem with an insert (as select) statement that sometimes ran extremely slowly: nothing interesting yet – there could be plenty of boring reasons for that to happen. The same SQL statement (by SQL_ID) might take 6 hours to insert 300K rows one night while taking just a few minutes to insert 900K another night (still nothing terribly interesting).

An analysis of the ASH data about the statement showed that the problem was on the “LOAD TABLE CONVENTIONAL” operation – which starts to get interesting if you also assume that someone who was competent to look at ASH would probably have noticed whether or not the time spent was on (the first obvious guess) some variant of “TX enqueue” waits or “log file” waits or something amazingly different. It’s also a little more interesting if you’ve noticed that the title of the posting is about “consuming a lot of CPU time” – so where could that be going if we see most of the excess time going on the insert rather than on the select?

Here’s an enormous clue that this might be a “non-standard” problem – the SQL statement starts like this:


INSERT  /*+ ignore_row_on_dupkey_index(tgt OTC_DAT_TV_PROC_STATUS_UK) */
INTO OTC_DAT_TV_PROC_STATUS TGT (
    {list of columns}
)
SELECT  /*+ parallel(8) */
    ...

See the hint which says “ignore rows if they raise duplicate key errors against index otc_dat_tv_proc_status_uk” ? Think what it might take to implement code that obeys the hint – Oracle can’t know that a row is duplicating a value until it has inserted the row (to get a rowid) then tried to maintain the index and discovered the pre-existing value, at which point it raises an exception, then handles it by undoing the single row insert (I wonder what that does to array inserts, and the logic of triggers), then carries on with the next row. (I suppose the code could check every relevant index before doing the insert – but that would mean a very big, and possibly very resource-intensive, change to all existing “insert a row” code.)

Surely we’re going to see some dramatic effects if a large fraction of our rows result in duplicate values.

Warning – the guess might not be right but it’s much more interesting than all the other guesses you might make at this point and a good excuse for doing a bit of experimentation and learning. So here’s a simple model to test the “excess work” hypothesis – running against 11.2.0.4:

rem
rem     Script:         ignore_dupkey.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017
rem

drop table t2;
drop table t1;

create table t1
nologging
as
select  *
from    all_objects
where   rownum <= 50000 ; -- > comment to protect WordPress formatting

create table t2
nologging
as
select  *
from    all_objects
where   rownum <= 50000 ; -- > comment to protect WordPress formatting

update t1 set object_id = object_id + (select max(object_id) from t1)
;

commit;

create unique index t1_i1 on t1(object_id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

All I’ve done is create two copies of the same data – with an update in place that changes one set of data so that the object_id doesn’t overlap with the object_id in the other set. The update statement is something that I’m going to include in one test but exclude from the second. Now for the statement that tests for the effects of the hint:


execute snap_events.start_snap
execute snap_my_stats.start_snap

insert
        /*+ ignore_row_on_dupkey_index(t1 (object_id)) */
        into t1
select  *
from    t2
;

execute snap_my_stats.end_snap
execute snap_events.end_snap


There are several different things I could do once I’ve got this basic test set up if I want to refine what I’m testing and isolate certain aspects of the mechanism, but all I’ll do for this note is a simple report of some figures from the two tests – one where the object_id values don’t overlap and one where t1 and t2 are exact copies of each other. The two procedures wrapping the statement are just my standard routines for capturing changes in v$mystat and v$session_event for my session.

For the insert where I’ve done the update to avoid any duplicates appearing the insert completed in about 0.3 seconds, generating 10MB of redo and 2MB of undo.

When I removed the update statement the (continuously failing) insert took 35.5 seconds to complete, of which almost all the time was CPU time. The redo jumped to 478MB with 14MB of undo. The extreme level of redo surprised me slightly, especially since the scale of the change was so much greater than that of the undo – I think it may be due to a problem with Oracle needing to unwind one row from an (internal) array insert before retrying. Here, taken from the session stats of the problem run, is a little indication of why the time (and especially the CPU time) increased so much:

Name                                                                     Value
----                                                                     -----
rollback changes - undo records applied                                 95,014
active txn count during cleanout                                        24,627
cleanout - number of ktugct calls                                       24,627
HSC Heap Segment Block Changes                                          97,509
Heap Segment Array Inserts                                              97,509
recursive calls                                                        682,574
recursive cpu usage                                                      2,193
session logical reads                                                1,341,077

Obviously there’s a lot of work done rolling back changes that should not have been made (though why it’s reported as 95,000 rather than 100,000 I don’t know and I’d rather not do a trace of buffer activity to find out) with an associated extra load of data block activity. Most visible, though, is the huge number of recursive calls with, perhaps associated, a very large number of session logical reads. Clearly it’s worth enabling extended tracing to see what’s going on – if you haven’t already guessed what some of those calls are about. Here’s an extract from the top of an appropriate tkprof output:


tkprof test_ora_32298.trc ignore_dupkey sort=execnt

select /*+ rule */ c.name, u.name
from
 con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
  c.owner# = u.user#


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    48753      1.86       2.08          0          0          0           0
Execute  48753      3.75       4.17          0          0          0           0
Fetch    48753      2.86       3.12          0      97506          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   146259      8.47       9.38          0      97506          0           0

select o.name, u.name
from
 obj$ o, user$ u  where o.obj# = :1 and o.owner# = u.user#

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    48753      1.87       1.99          0          0          0           0
Execute  48753      3.60       3.63          0          0          0           0
Fetch    48753      7.35       7.52          0     243765          0       48753
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   146259     12.84      13.14          0     243765          0       48753

The top two statements in the trace file show Oracle first trying to find the name of the constraint that has been breached, then falling back to searching for the name of the unique index that has caused the error to appear. If I had created a unique constraint rather than just a unique index then the second of these two statements would not have appeared in the trace file (and the run would have been a little quicker – hint: constraints are a good thing).
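If you want to repeat the test with that variation, the only change to the setup is to declare the constraint instead of building the bare index – a sketch, using the t1/object_id example from above (t1_uk is my own choice of name):

rem     instead of:  create unique index t1_i1 on t1(object_id);

alter table t1 add constraint t1_uk unique (object_id);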

You’ll notice that the total number of calls from the two statement is roughly 292,500 – far short of the 682,000 reported in the session stats. Unfortunately there was nothing else in the trace files that could be blamed for the outstanding 400,000 missing calls. It’s not really necessary to chase up all the details, though; clearly we can see that this feature is very expensive if lots of duplicates appear – like DML error logging it has probably been created as a way of dealing with occasional errors when handling large volumes of data.

Footnote

Notice that my example uses the “index description” method for specifying the index in the hint; it’s legal with either the name or the description. Interestingly (but, perhaps, unsurprisingly) this is a hint that Oracle has to parse for correctness before optimisation. I made a mistake in my first attempt at writing the update statement leaving me with duplicates in the data so Oracle couldn’t create the unique index; as a consequence of the missing unique index the insert statement reported the Oracle error “ORA-38913: Index specified in the index hint is invalid”.

 

January 30, 2017

ASSM Help

Filed under: 12c,Oracle,Troubleshooting — Jonathan Lewis @ 12:33 pm BST Jan 30,2017

I’ve written a couple of articles in the past about the problems of ASSM spending a lot of time trying to find blocks with usable free space. Without doing a bit of rocket science with some x$ objects, or O/S tracing for the relevant calls, or enabling a couple of nasty events, it’s not easy proving that ASSM might be a significant factor in a performance problem – until you get to 12c Release 2 where a staggering number of related statistics appear in v$sysstat.

I’ve published the full list of statistics (without explanation) at the end of this note, but here’s just a short extract showing the changes in my session’s ASSM stats due to a little PL/SQL loop inserting 10,000 rows, one row at a time into an empty table with a single index:

Name                                  Value
----                                  -----
ASSM gsp:get free block                 185
ASSM cbk:blocks examined                185
ASSM gsp:L1 bitmaps examined            187
ASSM gsp:L2 bitmaps examined              2
ASSM gsp:Search hint                      2
ASSM gsp:good hint                      185

It looks like we’ve checked a couple of “level 2” bitmap blocks (one for the table, one for the index, presumably) to pick a sequence of “level 1” bitmap blocks that have been very good at taking us to a suitable data (table or index) block that can be used.

You might have expected to see numbers more like 10,000 in the output, but remember that PL/SQL has lots of little optimisations built into it and one of those is that it pins a few blocks while the anonymous block is running so it doesn’t have to keep finding blocks for every single row.
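For reference, the load I’ve been describing was of this general shape – a sketch, with table and column names invented for illustration:

create table t1 (id number, padding varchar2(100));
create index t1_i1 on t1(id);

begin
        for i in 1..10000 loop
                insert into t1 values (i, rpad('x',100));
        end loop;
        commit;
end;
/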

In comparison here’s the effect of the same data load when operated as 10,000 separate insert statements called from SQL*Plus:

Name                                  Value
----                                  -----
ASSM gsp:get free block              10,019
ASSM cbk:blocks examined             10,220
ASSM cbk:blocks marked full             201
ASSM gsp:L1 bitmaps examined         10,029
ASSM gsp:L2 bitmaps examined              6
ASSM gsp:L2 bitmap full                   1
ASSM gsp:Search all                       1
ASSM gsp:Search hint                      2
ASSM gsp:Search steal                     1
ASSM gsp:bump HWM                         1
ASSM gsp:good hint                   10,016
ASSM rsv:fill reserve                     1

It’s interesting to note that in this case we see (I assume) a few cases where we’ve done the check for an L1 bitmap block, gone to a data blocks that was apparently free, and discovered that our insert would make to over full – hence the 201 “blocks marked full”.

Critically, of course, this is just another of the many little indications of how “client/server” chatter introduces lots of little bits of extra work when compared to the “Thick DB” approach.

One final set of figures. Going back to an example that first alerted me to the type of performance catastrophes that ASSM could contribute to, I re-ran my test case on 12.2 and checked the ASSM figures reported. The problem was that a switch from a 4KB or 8KB block size to a 16KB block size produced a performance disaster. A version of my test case and some timing results are available on Greg Rahn’s site.

In my test case I have 830,000 rows and do an update that sets column2 to column1, changing it from null to an 8-digit value. With a 16KB block size and PCTFREE set to a highly inappropriate value (in this case the default value of 10) this is what the new ASSM statistics look like:


Name                                  Value
----                                  -----
ASSM gsp:get free block             668,761
ASSM cbk:blocks examined            671,404
ASSM cbk:blocks marked full           2,643
ASSM gsp:L1 bitmaps examined      1,338,185
ASSM gsp:L2 bitmaps examined        672,413
ASSM gsp:Search all                     332
ASSM gsp:Search hint                668,760
ASSM gsp:Search steal                   332
ASSM gsp:bump HWM                       332
ASSM wasted db state change         669,395

I’d love to know what the figures would have looked like if they had been available in the original Oracle 9.2.0.8 case (my guess is that the “blocks examined” statistic would have been in the order of hundreds of millions); they look fairly harmless in this case even though the database (according to some of the other instance activity stats) did roughly 10 times the work you might expect from a perfect configuration.

Even here, though, where the original catastrophic bug has been addressed, the ASSM stats give you an important clue: we’ve been doing a simple update, so why have we even been looking for free space (get free block)? Even stranger, how come we had to examine 1.3M L1 bitmaps when we’ve only updated 830,000 rows – surely the worst case scenario shouldn’t have been worse than 1 to 1? And then there’s that “wasted db state change” – I don’t understand exactly what that last statistic is telling me, but when I’m worried about performance I tend to worry about anything that’s being wasted.

In passing – if you want to insert a single row into an unindexed table you can expect Oracle to examine the segment header, then an L2 bitmap block, then an L1 bitmap block to find a data block for the insert. (In rare cases that might be segment header, L3, L2, L1.) There are then optimisation strategies for pinning blocks – the session will pin the L1 bitmap block briefly because it may have to check several of the data blocks it references, which may turn out to be full even though they are flagged as having space; similarly the session will pin the L2 bitmap block because it may need to mark an L1 bitmap block as full and check another L1 block. The latter mechanism probably explains why we have examined more L1 bitmaps than L2 bitmaps.

Finally, the full monty

Just a list of all the instance statistics that start with “ASSM”:

ASSM bg: segment fix monitor
ASSM bg:create segment fix task
ASSM bg:mark segment for fix
ASSM bg:slave compress block
ASSM bg:slave fix one segment
ASSM bg:slave fix state
ASSM cbk:blocks accepted
ASSM cbk:blocks examined
ASSM cbk:blocks marked full
ASSM cbk:blocks rejected
ASSM fg: submit segment fix task
ASSM gsp:Alignment unavailable in space res
ASSM gsp:L1 bitmaps examined
ASSM gsp:L2 bitmap full
ASSM gsp:L2 bitmaps examined
ASSM gsp:L3 bitmaps examined
ASSM gsp:Optimized data block rejects
ASSM gsp:Optimized index block rejects
ASSM gsp:Optimized reject DB
ASSM gsp:Optimized reject l1
ASSM gsp:Optimized reject l2
ASSM gsp:Search all
ASSM gsp:Search hint
ASSM gsp:Search steal
ASSM gsp:add extent
ASSM gsp:blocks provided by space reservation
ASSM gsp:blocks rejected by access layer callback
ASSM gsp:blocks requested by space reservation
ASSM gsp:bump HWM
ASSM gsp:get free block
ASSM gsp:get free critical block
ASSM gsp:get free data block
ASSM gsp:get free index block
ASSM gsp:get free lob block
ASSM gsp:good hint
ASSM gsp:reject L1
ASSM gsp:reject L2
ASSM gsp:reject L3
ASSM gsp:reject db
ASSM gsp:space reservation success
ASSM gsp:use space reservation
ASSM rsv:alloc from reserve
ASSM rsv:alloc from reserve fail
ASSM rsv:alloc from reserve succ
ASSM rsv:clear reserve
ASSM rsv:fill reserve
ASSM wasted db state change

November 29, 2016

Delete/Insert #2

Filed under: Oracle,Troubleshooting — Jonathan Lewis @ 4:33 pm BST Nov 29,2016

In the previous post I threw out a couple of options for addressing the requirement to transfer data from one table to another (“cut and paste” rather than just “copy”) without running into odd inconsistency errors. This triggered off a wonderful comment trail of alternatives based on how large the volume might be, how relaxed the concurrency requirements might be, and so on.

A comment by SydOracle1 picked up on my failure to get Oracle working with the “as of SCN” syntax because it kept reporting ORA-08187 and suggested a straightforward use of the VERSIONS strategy. I thought it was quite surprising that this could work given that “as of SCN” didn’t, so I whipped up a quick test to check it – adding a couple of little refinements to the supplied sample – and it worked.

create table t1
as
select	object_id, object_name, owner
from	all_objects
;

create table t2
as
select	* from t1
where	rownum = 0
;

alter table t1 add constraint t1_pk primary key(object_id);

execute dbms_stats.gather_table_stats(user,'t1')

execute dbms_lock.sleep(5)

That dbms_lock.sleep() is very important for the purposes of this demonstration; it need only be a few seconds, but without it the references back to earlier SCNs could report the error “ORA-01466: unable to read data – table definition has changed”. This is probably a side effect of the 3 second capture interval that Oracle uses for the mapping recorded in the table smon_scn_time.
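If you want to poke at that mapping for yourself, a quick look at the most recent entries of the table is easy enough – a sketch only (the top-level rows are fairly coarse; the finer-grained 3-second mapping is packed into the raw tim_scn_map column, with num_mappings reporting how many pairs it holds):

select  scn, time_dp, num_mappings
from    (
        select  scn, time_dp, num_mappings
        from    sys.smon_scn_time
        order by
                scn desc
        )
where   rownum <= 5
;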

So now we do the following:

  • find the current SCN,
  • start a transaction,
  • get the transaction id,
  • delete the data from the source table,
  • find the current SCN again,
  • insert into the target table the data that was deleted by our transaction in the interval spanned by the two SCNs.

 


rem
rem     insert_delete_2.sql
rem     Jonathan Lewis
rem     Nov 2016
rem

column current_scn new_value m_start_scn 
select to_char(current_scn,'FM999999999999999999999') current_scn from v$database;

variable m_tx_id varchar2(20)
exec :m_tx_id := dbms_transaction.local_transaction_id(true)

column xid new_value m_xid
select xid from v$transaction where xidusn || '.' || xidslot || '.' || xidsqn = :m_tx_id;

delete from t1 where owner = 'SYSTEM';
commit;

column current_scn new_value m_end_scn format 999999999999999999
select to_char(current_scn,'FM999999999999999999999') current_scn from v$database;
 
insert	into t2
select	r.*
from	t1  versions between scn &m_start_scn and &m_end_scn r
where 
	versions_operation = 'D'
and	versions_xid = '&m_xid'
and	owner = 'SYSTEM'
;

commit;

There are a couple of variants on getting the transaction ID – I decided to use a function call to start a transaction without doing any work rather than doing the delete and then finding the transaction id that the delete initiated (I could have linked v$session for my SID to v$transaction after the delete). Because of the choice I made I have to do a little bit of messing around in the subsequent code – the function call returns the transaction ID in the form 31.16.19111 (that’s undo segment, slot number, sequence) but the VERSIONS mechanism wants a transaction ID in its HEX form which, for the example shown, would be ‘1F001000A74A0000’. I could have converted the three part form to the other using a messy bit of to_char(,’XXXXXXXX’) code, but I was feeling a little lazy.
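In fact utl_raw can do the byte-swapping, avoiding the character fiddling. Here’s a sketch – it depends on my assumption that the hex XID is simply the three components laid out in order as a 2-byte undo segment number, a 2-byte slot number and a 4-byte sequence number, each stored little-endian:

declare
        v_xid   varchar2(16);
begin
        v_xid := rawtohex(
                utl_raw.concat(
                        -- undo segment 31 and slot 16: 2 bytes each, little-endian
                        utl_raw.substr(utl_raw.cast_from_binary_integer(31,    utl_raw.little_endian), 1, 2),
                        utl_raw.substr(utl_raw.cast_from_binary_integer(16,    utl_raw.little_endian), 1, 2),
                        -- sequence 19111: 4 bytes, little-endian
                        utl_raw.cast_from_binary_integer(19111, utl_raw.little_endian)
                )
        );
        dbms_output.put_line(v_xid);    -- 1F001000A74A0000
end;
/

The result matches the hex form quoted above for transaction 31.16.19111.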

To my surprise I didn’t see any ORA-08187 errors – which made me look back at the notes I had jotted down on the couple of tests I’d initially tried to find out what I had been doing wrong. My first attempt did the insert first then tried to do the delete “as of SCN” and failed, so my second attempt tried to do the delete first just in case the problem related to using “as of SCN” in the middle of a transaction:


column current_scn new_value m_scn

prompt  ============
prompt  Insert first
prompt  ============

select to_char(current_scn,'FM999999999999999999') current_scn from V$database;

insert into t2
select  *
from    t1 as of scn &m_scn r
where
        owner = 'SQLTXADMIN'
;

delete from t1 as of scn &m_scn r
where owner = 'SQLTXADMIN'
;

-- ORA-08187: snapshot expression not allowed here

rollback;

prompt  ============
prompt  Delete first
prompt  ============

select to_char(current_scn,'FM999999999999999999') current_scn from V$database;

delete from t1 as of scn &m_scn r
where owner = 'SQLTXADMIN'
;

-- ORA-08187: snapshot expression not allowed here

insert into t2
select  *
from    t1 as of scn &m_scn r
where
        owner = 'SQLTXADMIN'
;

rollback;

Clearly “delete as of scn” is illegal.

Of course, if I’d gone a little further with this idea I might have tried starting with a delete that didn’t use “as of SCN”, and then the code would have succeeded. In fact, though, this wouldn’t be a perfect solution because it would allow a window for error: some other session might delete a relevant row between my call for SCN and my delete, which means my insert would insert a row deleted by another user.

The code could be modified, though, in its choice of SCN. Provided I started my transaction with the delete I could then query v$transaction for the start SCN of the transaction, and use that as the “as of” SCN for the insert:


delete from t1
where owner = 'SQLTXADMIN'
;

select
        to_char(
                start_scnw * power(2,32) + start_scnb,
                'FM999999999999999999'
        )       current_scn
from
        v$transaction
where   ses_addr = (
                select  saddr
                from    v$session
                where   sid = (
                        select  sid
                        from    V$mystat
                        where   rownum = 1
                )
        )
;

insert into t2
select  *
from    t1 as of scn &m_scn r
where
        owner = 'SQLTXADMIN'
;

By using the delete to initiate the transaction and set the SCN I think we block any window of inconsistency and, apart from the messy little bit of code that finds the transaction entry, we have an even simpler piece of code than the example given by SydOracle.

Collaboration or, kicking ideas around, is a wonderful way to learn.

 

October 24, 2016

Anniversary OICA

Filed under: CBO,Oracle,Performance,Statistics,Troubleshooting — Jonathan Lewis @ 1:00 pm BST Oct 24,2016

Happy anniversary to me!

On this day 10 years ago I published the first article in my blog. It was about the parameter optimizer_index_cost_adj (hence OICA), a parameter that has been a source of many performance problems and baffled DBAs over the years and, if you read my first blog posting and follow the links, a parameter that should almost certainly be left untouched.

It seems appropriate to mention it today because I recently found a blog posting (dated 3rd May 2013) on the official Oracle Blogs where the director for Primavera advises setting this parameter to 1 (and the optimizer_index_caching parameter to 90) for the Primavera P6 OLTP (PMDB) database. The recommendation is followed by a fairly typical “don’t blame me” warning, viz: “As with any changes that affect query optimization, it is paramount to TEST, TEST and TEST again. At least these settings are easily adjusted or change back to the original value”.

Here’s a thought, though: setting the optimizer_index_cost_adj to the extreme value 1 is a catastrophic change so don’t suggest it unless you are extremely confident that it’s almost certain to be the right thing to do. If you’re confident that it’s a good idea to reduce the parameter to a much smaller value than the default then suggest a range of values that varies from “ideal if it works, but high risk” to “low risk and mostly helpful”. Maybe a suggestion like: “Primavera P6 OLTP (PMDB) tends to work best with this parameter set to a value in the range of 1 to 15” would be a more appropriate comment from someone in a position of authority.

Here’s another thought: if you work for Oracle you could always contact the optimizer group to present them with your argument for the strategy and see what they think about it. Then you can include their opinion when you offer your suggestion.

For what it’s worth, here’s my opinion: as a general rule you shouldn’t be working around performance issues by fiddling with the optimizer_index_cost_adj; as a specific directive do not set it to 1. If you want to encourage Oracle to be enthusiastic about indexes in general then adjust the system statistics (preferably with a degree of truth). If you need to persuade Oracle that particular indexes are highly desirable than you can use dbms_stats.set_index_stats() to adjust the clustering_factor (and avg_data_blocks_per_key) of those indexes. If you are running 11.2.0.4 or later then you can use dbms_stats.set_table_prefs() to set the “table_cached_blocks” parameter for tables where you think Oracle should be particularly keen on using indexes but isn’t; and if your queries are suffering from bad cardinality estimates because of a pattern of multi-column filter predicates create some column group (extended) statistics.

Why am I so firmly set against setting the optimizer_index_cost_adj to 1 ? Because it doesn’t tell Oracle to “use indexes instead of doing tablescans”, it tells Oracle that every index is just about as good as every other index for almost any query. Here’s a pdf file of an article (formerly published on DBAZine and then on my old website) I wrote over twelve years ago explaining the issue. Various links in the article no longer work, and the data pattern was generated to display the problem in 8i and 9i and you would need to modify the data to display the same effect in newer versions of Oracle – but the principle remains the same.

If you would like to see a slightly newer example of how the parameter causes problems, here’s a thread dated April 2012 from the OTN database forum where a SYS-recursive query caused a performance problem because the parameter was set to 1.

 

October 17, 2016

Fixed Stats

Filed under: Execution plans,Oracle,Statistics,Statspack,Troubleshooting — Jonathan Lewis @ 12:43 pm BST Oct 17,2016

There are quite a lot of systems around the world that aren’t using the AWR (automatic workload repository) and ASH (active session history) tools to help them with trouble-shooting because of the licensing requirement – so I’m still finding plenty of sites that are using Statspack, and I recently came across a little oddity at one of these sites that I hadn’t noticed before: one of the Statspack snapshot statements was appearing fairly regularly in the Statspack report under the “SQL Ordered by Elapsed Time” section – even when the application had been rather busy and had generated lots of other work that was being reported. It was the following statement – the collection of file-level statistics:


select
       ts.name      tsname
     , df.name      filename
     , fs.phyrds
     , fs.phywrts
     , fs.readtim
     , fs.writetim
     , fs.singleblkrds
     , fs.phyblkrd
     , fs.phyblkwrt
     , fs.singleblkrdtim
     , fw.count     wait_count
     , fw.time      time
     , df.file#
  from x$kcbfwait   fw
     , v$filestat   fs
     , v$tablespace ts
     , v$datafile   df
 where ts.ts#    = df.ts#
   and fs.file#  = df.file#
   and fw.indx+1 = df.file#
;

The execution plan didn’t look very friendly, and the volume of I/O it generated (several hundred thousand disk reads) was surprising. The reason why the statement stood out so much in this case was that there was a fairly large number of files in the database (over 1,000) and the default execution plan was showing very bad cardinality estimates that resulted in highly inappropriate cartesian merge joins. At best the statement was taking around 2 minutes to run, at worst it was much, much worse.

This system was running 10g – also something which is still fairly common, though becoming much scarcer – which produced the following execution plan (which I’ve recreated on a much smaller system):

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |     1 |   535 |     2 (100)| 00:00:01 |
|*  1 |  HASH JOIN                    |                 |     1 |   535 |     2 (100)| 00:00:01 |
|   2 |   MERGE JOIN CARTESIAN        |                 |     5 |  1190 |     0   (0)| 00:00:01 |
|   3 |    NESTED LOOPS               |                 |     1 |   199 |     0   (0)| 00:00:01 |
|   4 |     MERGE JOIN CARTESIAN      |                 |     1 |   173 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL         | X$KCCTS         |     1 |    43 |     0   (0)| 00:00:01 |
|   6 |      BUFFER SORT              |                 |     1 |   130 |     0   (0)| 00:00:01 |
|*  7 |       FIXED TABLE FULL        | X$KCFIO         |     1 |   130 |     0   (0)| 00:00:01 |
|*  8 |     FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |
|   9 |    BUFFER SORT                |                 |   100 |  3900 |     0   (0)| 00:00:01 |
|  10 |     FIXED TABLE FULL          | X$KCBFWAIT      |   100 |  3900 |     0   (0)| 00:00:01 |
|* 11 |   VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  12 |    SORT ORDER BY              |                 |     1 |   957 |     1 (100)| 00:00:01 |
|  13 |     NESTED LOOPS              |                 |     1 |   957 |     0   (0)| 00:00:01 |
|  14 |      NESTED LOOPS             |                 |     1 |   647 |     0   (0)| 00:00:01 |
|  15 |       NESTED LOOPS            |                 |     1 |   371 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FULL       | X$KCCFN         |     1 |   323 |     0   (0)| 00:00:01 |
|* 17 |        FIXED TABLE FIXED INDEX| X$KCVFH (ind:1) |     1 |    48 |     0   (0)| 00:00:01 |
|* 18 |       FIXED TABLE FIXED INDEX | X$KCCFE (ind:1) |     1 |   276 |     0   (0)| 00:00:01 |
|* 19 |      FIXED TABLE FULL         | X$KCCFN         |     1 |   310 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#" AND "FILE#"="FW"."INDX"+1)
   5 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   7 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   8 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
  11 - filter("INST_ID"=USERENV('INSTANCE'))
  16 - filter("FN"."FNNAM" IS NOT NULL AND "FN"."FNTYP"=4 AND BITAND("FN"."FNFLG",4)<>4)
  17 - filter("FN"."FNFNO"="FH"."HXFIL")
  18 - filter("FE"."FEDUP"<>0 AND "FN"."FNFNO"="FE"."FENUM" AND
              "FE"."FEFNH"="FN"."FNNUM" AND "FE"."FETSN"<>(-1))
  19 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM"
              OR ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note particularly the two Cartesian merge joins and the very late filter at operation 1.

Note also the number of times the cardinality estimate is 1 – always a bit of a threat when the query gets complicated: “anything goes following a one for Rows”.

The easy (first thought) solution was simply to gather stats on all the fixed objects in this query:


begin
        dbms_stats.gather_table_stats('sys','x$kcbfwait',method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfe',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfn',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccts',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcfio',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcvfh',   method_opt=>'for all columns size 1');
end;
/

The option to gather fixed objects stats individually with a call to dbms_stats.gather_table_stats() is not commonly known, but it does work.
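The better-known alternative is the blanket call that processes every fixed object in one go – thorough, but potentially a lot more work than targeting the handful of X$ structures you actually care about:

begin
        dbms_stats.gather_fixed_objects_stats;
end;
/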

Here’s the plan (again from the small system) after stats collection:

--------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  1 |  HASH JOIN                     |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  2 |   HASH JOIN                    |                 |     1 |   350 |     2 (100)| 00:00:01 |
|   3 |    MERGE JOIN CARTESIAN        |                 |    25 |  1325 |     0   (0)| 00:00:01 |
|   4 |     NESTED LOOPS               |                 |     4 |   148 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL          | X$KCFIO         |   200 |  6200 |     0   (0)| 00:00:01 |
|*  6 |      FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |     6 |     0   (0)| 00:00:01 |
|   7 |     BUFFER SORT                |                 |     7 |   112 |     0   (0)| 00:00:01 |
|*  8 |      FIXED TABLE FULL          | X$KCCTS         |     7 |   112 |     0   (0)| 00:00:01 |
|*  9 |    VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  10 |     SORT ORDER BY              |                 |     1 |   316 |     1 (100)| 00:00:01 |
|  11 |      NESTED LOOPS              |                 |     1 |   316 |     0   (0)| 00:00:01 |
|  12 |       NESTED LOOPS             |                 |     1 |   248 |     0   (0)| 00:00:01 |
|  13 |        NESTED LOOPS            |                 |     1 |   226 |     0   (0)| 00:00:01 |
|* 14 |         FIXED TABLE FULL       | X$KCCFE         |     4 |   612 |     0   (0)| 00:00:01 |
|* 15 |         FIXED TABLE FIXED INDEX| X$KCCFN (ind:1) |     1 |    73 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FIXED INDEX | X$KCVFH (ind:1) |     1 |    22 |     0   (0)| 00:00:01 |
|* 17 |       FIXED TABLE FULL         | X$KCCFN         |     1 |    68 |     0   (0)| 00:00:01 |
|  18 |   FIXED TABLE FULL             | X$KCBFWAIT      |   400 |  3600 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("FILE#"="FW"."INDX"+1)
   2 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#")
   5 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   6 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
   8 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   9 - filter("INST_ID"=USERENV('INSTANCE'))
  14 - filter("FE"."FEDUP"<>0 AND "FE"."FETSN"<>(-1))
  15 - filter("FN"."FNTYP"=4 AND "FN"."FNNAM" IS NOT NULL AND BITAND("FN"."FNFLG",4)<>4
              AND "FN"."FNFNO"="FE"."FENUM" AND "FE"."FEFNH"="FN"."FNNUM")
  16 - filter("FN"."FNFNO"="FH"."HXFIL")
  17 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM" OR
              ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note the changes in cardinality estimates: they now look a little more realistic and we’re down to one cartesian merge join which (if you have a rough idea of what your X$ tables hold) still looks a little surprising at first sight but not completely unreasonable. A change of plan doesn’t necessarily mean much without the data and time behind it, of course, so here are the two sets of results from a 10g database with a handful of datafiles and tablespaces showing the Row Source Operation sections from the tkprof output before and after stats collection:

Before stats collection:

Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=1957860 us)
  16800   MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=33855662 us)
     42    NESTED LOOPS  (cr=0 pr=0 pw=0 time=73795 us)
   1400     MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=21555 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=3204 us)
   1400      BUFFER SORT (cr=0 pr=0 pw=0 time=7233 us)
    200       FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1210 us)
     42     FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=1859092 us)
  16800    BUFFER SORT (cr=0 pr=0 pw=0 time=67643 us)
    400     FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=2008 us)
      6   VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=68087 us)
      6    SORT ORDER BY (cr=0 pr=0 pw=0 time=68065 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=65989 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=56632 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=47217 us)
      6        FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=19830 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=25568 us)
      6       FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=9849 us)
      6      FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=9715 us)

After stats collection:


Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=196576 us)
      6   HASH JOIN  (cr=0 pr=0 pw=0 time=195829 us)
     42    MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=4390 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=7810 us)
    200      FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1224 us)
      6      FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=150150 us)
     42     BUFFER SORT (cr=0 pr=0 pw=0 time=1574 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=1353 us)
      6    VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=41058 us)
      6     SORT ORDER BY (cr=0 pr=0 pw=0 time=41005 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=39399 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=34229 us)
      6        NESTED LOOPS  (cr=0 pr=0 pw=0 time=15583 us)
      6         FIXED TABLE FULL X$KCCFE (cr=0 pr=0 pw=0 time=1124 us)
      6         FIXED TABLE FIXED INDEX X$KCCFN (ind:1) (cr=0 pr=0 pw=0 time=15067 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=18971 us)
      6       FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=5581 us)
    400   FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=1615 us)

The execution time has dropped from about 2 seconds to less than 1/10th of a second – and all I’ve got is 6 or 7 files and tablespaces.  (Some of the “time=” values in the first plan are very odd, but the final time figure is about right.)

Generating an intermediate result set of 16,800 rows for a tiny number of files is not a good thing – just imagine how big that number would get with 1,000 files and a couple of hundred tablespaces.

I have to say that (for a couple of tiny databases) the 11.2.0.4 and 12.1.0.2 systems I checked this query on had no problem and immediately picked a sensible path. It’s possible that the definition of some of the v$ objects has actually changed or that the optimizer features have changed (some assistance from complex view merging, perhaps) – but if you are still running Statspack, even if it’s on 11g or 12c, then it’s worth checking from time to time how much work goes into executing the snapshot and seeing if you need some fixed object stats to make things a little more efficient.
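If you want a quick check on whether the relevant fixed objects have any stats at all, dba_tab_statistics reports fixed tables with their own object_type – a sketch for the six X$ structures above:

select  table_name, num_rows, last_analyzed
from    dba_tab_statistics
where   owner = 'SYS'
and     object_type = 'FIXED TABLE'
and     table_name in (
                'X$KCBFWAIT', 'X$KCCFE', 'X$KCCFN',
                'X$KCCTS',    'X$KCFIO', 'X$KCVFH'
        )
order by
        table_name
;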

Footnote:

Several years ago I wrote a short note about how Statspack actually captured its own execution time (from 10g onwards) and how you could run a report on it to check the run time. It’s worth running that report from time to time. I’ve recently updated that note to add the equivalent query against the AWR.

October 6, 2016

My session workload

Filed under: Oracle,Performance,Troubleshooting — Jonathan Lewis @ 1:19 pm BST Oct 6,2016

My old website (www.jlcomp.demon.co.uk) will be disappearing in a couple of weeks – but there are a couple of timeless articles on it that are worth saving, and although the popularity of this one has probably been surpassed by Tanel Poder’s Snapper script, or other offerings by Tom Kyte or Adrian Billington, it’s still one of those useful little things to have around – it’s a package that takes a snapshot of your session stats.

The package depends on a view created in the SYS schema, and the package itself has to be installed in the SYS schema – which is why other strategies for collecting the information have become more popular; but if you want to have it handy, here are the two scripts:

rem
rem     Script:         c_mystats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001
rem     Purpose:        Put names to v$mystat
rem
rem     Last tested
rem             12.1.0.2        -- naming issue
rem             11.2.0.4
rem             10.2.0.5
rem             10.1.0.4
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     Should be run by SYS - which means it has to be re-run
rem     on a full database export/import
rem
rem     It looks as if it is illegal to create a view with a
rem     name starting with v$ in the sys account as from 12c.
rem     (ORA-00999: invalid view name). Hence the JV$ name.
rem
rem     But you can create a public synonym starting "v$"
rem

create or replace view jv$my_stats
as
select
        /*+
                first_rows
                ordered
        */
        ms.sid,
        sn.statistic#,
        sn.name,
        sn.class,
        ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
;

drop public synonym v$my_stats;
create public synonym v$my_stats for jv$my_stats;
grant select on v$my_stats to public;

rem
rem	Script:		snap_myst.sql
rem	Author:		Jonathan Lewis
rem	Dated:		March 2001
rem	Purpose:	Package to get snapshot start and delta of v$mystat
rem
rem	Last tested
rem		12.1.0.2
rem		11.2.0.4
rem		10.2.0.5
rem		10.1.0.4
rem		 9.2.0.8
rem		 8.1.7.4
rem
rem	Notes
rem	Has to be run by SYS to create the package
rem	Depends on view (j)v$my_stats (see c_mystats.sql)
rem
rem	Usage:
rem		set serveroutput on size 1000000 format wrapped
rem		set linesize 120
rem		set trimspool on
rem		execute snap_my_stats.start_snap
rem		-- do something
rem		execute snap_my_stats.end_snap
rem

create or replace package snap_my_stats as
	procedure start_snap;
	procedure end_snap (i_limit in number default 0);
end;
/

create or replace package body snap_my_stats as

cursor c1 is
	select 
		statistic#, 
		name,
		value
	from 
		v$my_stats
	where
		value != 0
	;


	type w_type is table of c1%rowtype index by binary_integer;
	w_list		w_type;
	empty_list	w_type;

	m_start_time	date;
	m_start_flag	char(1);
	m_end_time	date;

procedure start_snap is
begin

	m_start_time := sysdate;
	m_start_flag := 'U';
	w_list := empty_list;

	for r in c1 loop
		w_list(r.statistic#).value := r.value;
	end loop;

end start_snap;


procedure end_snap (i_limit in number default 0) 
is
begin

	m_end_time := sysdate;

	dbms_output.put_line('---------------------------------');

	dbms_output.put_line('Session stats - ' ||
				to_char(m_end_time,'dd-Mon hh24:mi:ss')
	);

	if m_start_flag = 'U' then
		dbms_output.put_line('Interval:-  '  || 
				trunc(86400 * (m_end_time - m_start_time)) ||
				' seconds'
		);
	else
		dbms_output.put_line('Since Startup:- ' || 
				to_char(m_start_time,'dd-Mon hh24:mi:ss')
		);
	end if;

	if (i_limit != 0) then
		dbms_output.put_line('Lower limit:-  '  || i_limit);
	end if;

	dbms_output.put_line('---------------------------------');

	dbms_output.put_line(
		rpad('Name',60) ||
		lpad('Value',18)
	);

	dbms_output.put_line(
		rpad('----',60) ||
		lpad('-----',18)
	);

	for r in c1 loop
		if (not w_list.exists(r.statistic#)) then
		    w_list(r.statistic#).value := 0;
		end if;

		if (
		       (r.value > w_list(r.statistic#).value + i_limit)
		) then
			dbms_output.put(rpad(r.name,60));
			dbms_output.put(to_char(
				r.value - w_list(r.statistic#).value,
					'9,999,999,999,990')
			);
			dbms_output.new_line;
		end if;
	end loop;

end end_snap;

begin
	select
		logon_time, 'S'
	into
		m_start_time, m_start_flag
	from
		v$session
	where
		sid = 	(
				select /*+ no_unnest */ sid 
				from v$mystat 
				where rownum = 1
			);

end snap_my_stats;
/

drop public synonym snap_my_stats;
create public synonym snap_my_stats for snap_my_stats;
grant execute on snap_my_stats to public;

One point to be cautious about with this package: do not embed it inside anonymous pl/sql blocks, e.g.

begin

        snap_my_stats.start_snap;

        -- some other code

        snap_my_stats.end_snap;

end;
/

There are some statistics in v$my_stats (v$mystat / v$sesstat) which are not updated until the end of a database call – and calling an anonymous pl/sql block counts as a single database call, so some of your statistics (for example “CPU used by this session”) will report misleading values.
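The safe pattern is the one shown in the header comments of the package script – two separate top-level calls with the work you want to measure between them:

execute snap_my_stats.start_snap
-- run the workload to be measured as ordinary top-level calls
execute snap_my_stats.end_snap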

October 3, 2016

Kill CPU

Filed under: Oracle,Performance,Troubleshooting — Jonathan Lewis @ 8:58 am BST Oct 3,2016

My old website (www.jlcomp.demon.co.uk) will be disappearing in a couple of weeks – but there are a couple of timeless articles on it that are worth saving, and a method for soaking up all the CPU on your system with a simple SQL statement against a small data set is, surely, one of them. Here, then, is a little script that I wrote (or, at least, formalised) 15 years ago to stress out a CPU:


rem
rem     Script:         kill_cpu.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001
rem     Purpose:        Exercise CPU and latches
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem             11.1.0.7
rem             10.2.0.5
rem             10.1.0.4
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     The count(*) will return power(2,n-1)
rem     To run from Oracle 9 and later we have to set parameter
rem             _old_connect_by_enabled = true;
rem
rem     Base calculation (historical):
rem     ==============================
rem     Rule of thumb - ca. 10,000 logical I/Os per sec per 100 MHz of CPU.
rem     (Modern versions of Oracle on modern CPUs - about twice that, maybe
rem     due to the introduction of the "fastpath" consistent gets with the
rem     elimination of some logging activity that used to exist.)
rem
rem     With the value of 23 shown we do 6M buffer visits of which 4M
rem     are "consistent read gets", and 2M are "buffer is pinned count".
rem     (That's power(2,23-1) and power(2,23-2) respectively). For each
rem     row you add to the kill_cpu table you double the run-time.
rem
rem      This is an example of SQL that can take MUCH longer when run
rem      with rowsource_execution_statistics enabled. Mostly spent on 
rem      CPU calling the O/S timer. (On my last test, using a 12c VM
rem      the time jumped from 6 seconds - 23 rows - to 75 seconds when
rem      I set statistics_level to all; but half would be the effect of
rem      running through the VM.)
rem

drop table kill_cpu;

begin

        begin           execute immediate 'purge recyclebin';
        exception       when others then null;
        end;

        begin           execute immediate 'alter session set "_old_connect_by_enabled"=true';
        exception       when others then null;
        end;

end;
/

create table kill_cpu(n, primary key(n))
organization index
as
select  rownum n
from    all_objects
where   rownum <= 23
;
execute snap_my_stats.start_snap 

set timing on 
set serveroutput off

-- alter session set statistics_level = all;

spool kill_cpu

select  count(*) X
from    kill_cpu 
connect by 
        n > prior n
start with 
        n = 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

set serveroutput on
execute snap_my_stats.end_snap

spool off

set timing off
alter session set statistics_level = typical;

The calls to snap_my_stats use a package (owned by sys) that I wrote a long time ago for taking a snapshot of v$mystat; many people use Tanel Poder’s “Snapper” script or Tom Kyte’s script instead.

September 16, 2016

Stats collection time

Filed under: Oracle,Statistics,Troubleshooting — Jonathan Lewis @ 12:58 pm BST Sep 16,2016

Yesterday I posted a note about querying dba_optstat_operations to get a quick report of how long calls to dbms_stats had been taking but said I had another script that helped to fill some of the gaps it left. One of my readers pointed out fairly promptly that 12c enhances the feature considerably, with a view dba_optstat_operation_tasks that (for example) lists all the tables processed during a single call to gather_schema_stats.
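I haven’t gone back to rework my script around it, but a first sketch of a 12c query against the new view might look like this – treat the column list as my assumption, to be checked against your version of the view:

select  opid, target, target_type, status,
        start_time, end_time
from    dba_optstat_operation_tasks
where   opid = &m_opid
order by
        start_time
;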

Well, I wrote my script years (if not decades) before 12c came out, so I’m going to publish it anyway.

It definitely falls into the quick and dirty category. It’s not trying to be clever, makes a critical assumption about the recent past, and isn’t a generally useful tool, but I’ve found it a quick way to highlight a possible problem with expensive stats collection routines (especially those written by 3rd parties who have disabled Oracle’s automatic job).

The concept is very simple: list all the tables in the order they were last analyzed and assume that, for any “obviously continuous” stretch of time, the start of stats collection for one table coincided with the end of stats collection for the previous one. A little fiddling with the lag() function then allows you to report the time spent on gathering the stats.

rem
rem     stats_check.sql
rem
rem     Quick and dirty to get an idea of
rem     where the time went on the most
rem     recent stats collection job.
rem
rem     Weakness: It assumes that the start
rem     of stats collection of a table is the
rem     end of stats collection for the previous
rem     table.  This may be far from true.
rem

define m_schema = '&1'

set linesize 144
set pagesize 60
set trimspool on

column a_date format a20

spool stats_check

select
        table_name,
        blocks, num_rows, sample_size,
        round(100 * sample_size/nullif(num_rows,0),2)   sample_pct,
        to_char(last_analyzed,'dd-mon-yyyy hh24:mi:ss') a_date,
        to_char(
                round(1440 *
                        (
                        last_analyzed -
                        lag(last_analyzed, 1) over (order by last_analyzed)
                        ), 2
                ),'9,999.00'
        )                        a_time
from
        dba_tables
where
        owner = upper('&m_schema')
order by
        last_analyzed
;

spool off

It’s a very simple script – one detail that’s quite useful is the presence of the data about table size and sample size. But that’s really only the starting point. If you’re gathering stats on a table then most of the time might be spent on gathering stats for histograms or indexes, so there’s plenty of scope to add bits to the script to report number of histograms per table, number of indexes per table, and so on. Personally I tend to do “incremental” trouble-shooting and keep my scripts (and their run time) short, so I’d only worry about specific details if I saw a headline figure that seemed worth a little more effort.

Here’s some (camouflaged) output – which happens to highlight one of the weaknesses of the approach:

TABLE_NAME              BLOCKS   NUM_ROWS SAMPLE_SIZE SAMPLE_PCT A_DATE                  A_TIME
--------------------- --------  --------- ----------- ---------- --------------------  --------
TABLE_AAA                    0          0           0            14-jul-2016 01:27:55       .00
TABLE_BBB              6320666  540356865   108071373         20 14-jul-2016 02:30:45     62.83
TABLE_CCC              9587372  930770535   186154107         20 14-jul-2016 03:36:13     65.47
TABLE_DDD              2272319  104698080    20939616         20 14-jul-2016 04:19:45     43.53
TABLE_EEE              4413696  258746720    51749344         20 14-jul-2016 04:49:13     29.47
TABLE_FFF                    0          0           0            14-jul-2016 05:02:28     13.25
TABLE_GGG              4377656  258740995    51748199         20 14-jul-2016 05:02:28       .00
TABLE_HHH                    0          0           0            14-jul-2016 05:02:28       .00
TABLE_III                   60         52          52        100 14-jul-2016 05:02:29       .02
TABLE_JJJ                   60        170         170        100 14-jul-2016 05:02:30       .02
TABLE_KKK                   60        100         100        100 14-jul-2016 05:02:30       .00
TABLE_LLL                   60       5548        5021       90.5 14-jul-2016 05:02:31       .02
TABLE_MMM                 4660     686575      137315         20 14-jul-2016 05:02:32       .02
...
TABLE_TTT                    0          0           0            14-jul-2016 05:02:38       .00
TABLE_UUU                   60        659         659        100 14-jul-2016 05:02:38       .00
TABLE_VVV               325324   18618685     3723737         20 14-jul-2016 05:04:14      1.60
TABLE_WWW                   60       1106        1106        100 14-jul-2016 05:06:01      1.78
TABLE_XXX                24001    2534810      506962         20 14-jul-2016 05:06:13       .20
TABLE_YYY                    0          0           0            14-jul-2016 05:06:20       .12
TABLE_ZZZ                    0          0           0            14-jul-2016 05:06:20       .00

You’ll notice that the zero row FFF table is reported to have taken 13.25 minutes for stats gathering, while the 4.3M row GGG table table took no time at all. That’s the problem of sorting by the last_analyzed time when it’s accurate only to the second and some tables take less than a second to gather stats. Clearly these two time estimates are the wrong way round. (WWW and XXX look fairly suspect too, but their timestamps are different so there must be some other explanation – but the time is too small to worry about at the moment.)

Tables BBB and CCC are also an example of how variable the results are when you compare cost with time. Although the table and sample had twice the rows, table CCC has fewer histograms and indexes. The same variation shows up clearly between EEE and GGG (after allowing for the order error), and slightly less immediately between DDD and all the other large tables.

That brings me to one more threat of mis-interpretation (and a reason why I’m not terribly keen on publishing scripts that I’ve hacked together). If you see Oracle claiming (through this script) that it took no more than a couple of minutes to gather stats on a 5 billion row table it may be true – but maybe that stats collection was only related to gathering stats for a single, fairly new, partition in a partitioned table. There are, no doubt, several boundary cases that a simple script like this will have overlooked – and when I run it I’ll be at a client site with knowledge of the client database and I’ll spot the anomalies and omissions.

Update

One simple enhancement to the script above – indexes take time to analyze as well, and if you query only for tables the time spent analyzing the indexes on a table will be attributed to the NEXT table in the output. All we have to do is change the reference to dba_tables to be an inline view referencing dba_tables and dba_indexes in a UNION ALL:


select
        table_name, index_name,
        blocks, num_rows, sample_size,
        round(100 * sample_size/nullif(num_rows,0),2)   sample_pct,
        to_char(last_analyzed,'dd-mon-yyyy hh24:mi:ss') a_date,
        to_char(
                round(1440 *
                        (
                        last_analyzed -
                        lag(last_analyzed, 1) over (order by last_analyzed)
                        ), 2
                ),'9,999.99'
        )                        a_time
from
        (
        select
                owner, table_name, null as index_name,
                blocks, num_rows, sample_size, last_analyzed
        from
                dba_tables
        union all
        select
                -- the view inherits its column names from the first branch,
                -- so leaf_blocks will appear under the alias blocks
                table_owner, table_name, index_name,
                leaf_blocks, num_rows, sample_size, last_analyzed
        from
                dba_indexes
        )
where
        owner = upper('&m_schema')
order by
        last_analyzed
;

You’ll notice that I’ve left the ‘&m_schema’ substitution variable outside the view so that I only have to supply a value once; the optimizer is smart enough to push this predicate inside the union all view so that the plan shows the two separate subqueries being operated only for the given schema.
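If you want to see the predicate push for yourself, a cut-down version of the query is enough (the exact plan shape will vary with version, so treat this as a sketch): the owner filter should appear as a predicate inside each branch of the UNION ALL rather than as a filter applied after the view.


explain plan for
select  table_name, index_name, last_analyzed
from    (
        select owner, table_name, null as index_name, last_analyzed
        from   dba_tables
        union all
        select table_owner, table_name, index_name, last_analyzed
        from   dba_indexes
        )
where   owner = upper('&m_schema')
;

select * from table(dbms_xplan.display(null, null, 'basic +predicate'));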

Footnote

A reader has emailed me to point out that one of the cases where this script won’t help (or, at the least, would produce highly misleading results) is when the CONCURRENT preference has been set to true, allowing multiple jobs to run and gather stats on different objects at the same time.
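If you’re not sure whether that could be happening on your system, a quick check of the preference (available from 11.2; note that later versions may report values like MANUAL, AUTOMATIC, ALL or OFF rather than a simple TRUE/FALSE) is:


select dbms_stats.get_prefs('CONCURRENT') from dual;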


September 15, 2016

Stats time

Filed under: Oracle,Statistics,Troubleshooting — Jonathan Lewis @ 9:41 am BST Sep 15,2016

I don’t really remember how long it’s been since Oracle created an automatic log of how long a call to the dbms_stats package took, though it was probably some time in the 10g time-line. It wasn’t until it had been around for several years, though, that I wrote a little script (possibly prompted by a comment from Martin Widlake) that I’ve used occasionally since to see what’s been going on in the past, how variable stats collection times have been, and what unexpected dbms_stats calls an application may have been making. Here’s what it currently looks like:

rem
rem     Script:         optimizer_operations.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2010
rem     Purpose:
rem
rem     Last tested
rem             11.2.0.4
rem     Not tested
rem             12.1.0.2
rem             10.2.0.5
rem
rem     Notes:
rem     Simple script to list calls to dbms_stats showing
rem     what they did and how long it took.
rem
rem     Notice that gather_database_stats and gather_schema_stats
rem     don't tell you anything about individual tables.
rem

spool optimizer_operations

column start_time       format a35
column operation        format a28
column target           format a28

column duration         format a12
column max_duration     format a12

column utc_start_time   format a28
column first_utc_time   format a28

select
        operation,
        cast(sys_extract_utc(min(start_time)) as timestamp(0))                  first_utc_time,
        --
        to_char(extract(hour from max(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(extract(minute from max(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(round(extract(second from max(end_time - start_time)),2),'FM00.00')     max_duration,
        --
        count(*)        Frequency
from
        dba_optstat_operations
group by
        operation
order by
        operation
;

select
        operation,
        target,
        -- start_time,
        cast(sys_extract_utc(start_time) as timestamp(0))                               utc_start_time,
        to_char(extract(hour from (end_time - start_time)),'FM00') ||
        ':' ||
        to_char(extract(minute from (end_time - start_time)),'FM00') ||
        ':' ||
        to_char(round(extract(second from (end_time - start_time)),2),'FM00.00')        duration
from
        dba_optstat_operations
where
        end_time - start_time >= numtodsinterval(2,'SECOND')
and     target is not null
order by
        start_time
;

spool off

This version of the script first prints a summary showing how many calls to each operation have occurred, the worst-case duration for each, and when the first one happened – in case a recent change has introduced a problem. You might find it useful to include other bits of information, such as the min(duration) in the first section; it’s not difficult to modify the SQL. The second part of the report is then just a list of the individual calls, with a minimum interesting time included as a filter.
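For example, a min_duration column is just the max_duration expression with min() substituted for max() (plus a matching column format) – something like:


column min_duration     format a12

select
        operation,
        cast(sys_extract_utc(min(start_time)) as timestamp(0))                  first_utc_time,
        --
        to_char(extract(hour from min(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(extract(minute from min(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(round(extract(second from min(end_time - start_time)),2),'FM00.00')     min_duration,
        --
        to_char(extract(hour from max(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(extract(minute from max(end_time - start_time)),'FM00') ||
        ':' ||
        to_char(round(extract(second from max(end_time - start_time)),2),'FM00.00')     max_duration,
        --
        count(*)        Frequency
from
        dba_optstat_operations
group by
        operation
order by
        operation
;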

And here’s some (camouflaged) output:


OPERATION                    FIRST_UTC_TIME               MAX_DURATION  FREQUENCY
---------------------------- ---------------------------- ------------ ----------
copy_table_stats             14-JUL-16 11.12.55 PM        00:00:03.43         320
gather_database_stats(auto)  14-JUL-16 06.00.01 AM        01:13:17.25          67
gather_schema_stats          14-JUL-16 04.00.02 AM        05:16:01.22          35
gather_table_stats           14-JUL-16 01.15.59 AM        00:04:07.98         153


OPERATION                    TARGET                             UTC_START_TIME               DURATION
---------------------------- ---------------------------------- ---------------------------- ------------
gather_schema_stats          MY_SCHEMA                          01-AUG-16 04.00.02 AM        04:40:33.03
copy_table_stats             SYS.WRH$_SQLSTAT.WRH$_SQLSTA_24706 01-AUG-16 11.18.15 PM        00:00:02.04
                             69947_32443

gather_schema_stats          MY_SCHEMA                          02-AUG-16 04.00.02 AM        04:57:39.61
gather_schema_stats          MY_SCHEMA                          03-AUG-16 04.00.02 AM        04:40:10.88
gather_schema_stats          MY_SCHEMA                          04-AUG-16 04.00.02 AM        05:00:48.05
gather_schema_stats          MY_SCHEMA                          05-AUG-16 04.00.03 AM        05:07:42.33
gather_schema_stats          MY_SCHEMA                          06-AUG-16 04.00.02 AM        04:50:39.93
gather_table_stats           SYS.WRI$_ADV_ACTIONS               06-AUG-16 12.42.38 PM        00:00:03.01
gather_table_stats           SYS.WRI$_ADV_FINDINGS              06-AUG-16 12.42.41 PM        00:00:02.94
gather_table_stats           SYS.WRI$_ADV_RECOMMENDATIONS       06-AUG-16 12.42.46 PM        00:00:02.03
gather_table_stats           SYS.WRI$_ADV_SQLT_PLANS            06-AUG-16 12.42.49 PM        00:00:04.45
gather_schema_stats          MY_SCHEMA                          07-AUG-16 04.00.02 AM        04:48:57.23
gather_schema_stats          MY_SCHEMA                          08-AUG-16 04.00.01 AM        04:40:57.81
gather_table_stats           MY_SCHEMA.TABLE_A                  09-AUG-16 01.16.21 AM        00:00:03.61
gather_schema_stats          MY_SCHEMA                          09-AUG-16 04.00.03 AM        04:51:14.48
gather_table_stats           MY_SCHEMA.TABLE_A                  10-AUG-16 01.16.44 AM        00:00:02.07
gather_schema_stats          MY_SCHEMA                          10-AUG-16 04.00.02 AM        04:36:19.35
gather_schema_stats          MY_SCHEMA                          11-AUG-16 04.00.02 AM        04:43:30.77
copy_table_stats             SYS.WRH$_FILESTATXS.WRH$_FILEST_24 11-AUG-16 11.24.21 PM        00:00:02.65
                             70669947_32683

gather_schema_stats          MY_SCHEMA                          12-AUG-16 04.00.02 AM        04:42:15.92
gather_schema_stats          MY_SCHEMA                          12-AUG-16 04.00.02 AM        04:42:15.89
gather_schema_stats          MY_SCHEMA                          13-AUG-16 04.00.02 AM        04:41:06.68
gather_table_stats           SYS.WRI$_ADV_PARAMETERS            13-AUG-16 10.31.35 AM        00:00:03.88
gather_table_stats           SYS.WRI$_ADV_RATIONALE             13-AUG-16 10.31.39 AM        00:00:03.03
gather_table_stats           MY_SCHEMA.TABLE_B                  13-AUG-16 01.24.33 PM        00:04:07.98
gather_table_stats           MY_SCHEMA.TABLE_C.SYS_P12423       13-AUG-16 02.06.34 PM        00:03:22.33
copy_table_stats             SYS.WRH$_ACTIVE_SESSION_HISTORY.WR 13-AUG-16 11.03.35 PM        00:00:02.11
                             H$_ACTIVE_2470669947_32731


There is an important limitation with this script – it’s all very well seeing that gather_schema_stats has been running at a fairly uniform 4.5 to 5 hours, but it doesn’t tell us where the time went inside that call and it’s a call that could have been covering a lot of tables. There’s not a lot you can do about that from this set of data, but there’s another script I wrote a long time ago (before I wrote this one) that tried to do something about that question, and I’ll be publishing that tomorrow.

