Oracle Scratchpad

July 20, 2015

12c Downgrade

Filed under: 12c,Bugs,CBO,Oracle — Jonathan Lewis @ 1:12 pm BST Jul 20,2015

No, not really – but sometimes the optimizer gets better and gives you worse performance as a side effect when you upgrade. Here’s an example where 11.2.0.4 recognised (with a few hints) the case for a nested loop semi-join and 12c went a bit further and recognised the opportunity for doing a cunning “semi_to_inner” transformation … which just happened to do more work than the 11g plan.

Here’s a data set to get things going: I’ve got “parent” and “child” tables, but in this particular demonstration I won’t be invoking referential integrity:


create table chi
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1                              id,
        trunc((rownum-1)/10)                    n1,
        trunc(dbms_random.value(0,1000))        n2,
        rpad('x',1000)                          padding
from
        generator
;

create table par
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1      id,
        rpad('x',1000)  padding
from
        generator
where
        rownum <= 1e3
;

alter table par modify id not null;
alter table par add constraint par_pk primary key(id)
-- deferrable
;

-- Now gather stats on the tables.
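-- A possible version of that step (a sketch, not the original script; the
-- choice of method_opt is an assumption):

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'CHI',
                method_opt => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'PAR',
                method_opt => 'for all columns size 1'
        );
end;
/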

The code uses my standard framework that could generate a few million rows even though it’s only generating 1,000 in par and 10,000 in chi. The presence of the commented “deferrable” for the primary key constraint is for a secondary demonstration.

You’ll notice that the 1,000 values that appear in chi.n1 and chi.n2 are matched by the 1,000 rows that appear in the primary key of par – in some other experiment I’ve got two foreign keys from chi to par. Take note that the values in n1 are very well clustered because of the call to trunc() while the values in n2 are evenly scattered because of the call to dbms_random.value() – the data patterns are very different although the data content is very similar (the randomised data will still produce, on average, 10 rows per value).

So here’s the test code:


set serveroutput off
set linesize 156
set trimspool on
set pagesize 60

alter session set statistics_level = all;

prompt  =============================
prompt  Strictly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (par.id))
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

prompt  =============================
prompt  Randomly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (par.id))
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n2
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

set serveroutput on
alter session set statistics_level = typical;

In both cases I’ve hinted the query into running with a nested loop semi-join from chi to par. Since there are 10,000 rows in chi with no filter predicates, you might expect to see the probe into the par table starting 10,000 times, returning (thanks to our perfect data match) one row for each start. Here are the run-time plans with rowsource execution stats from 11.2.0.4:

=============================
Strictly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.14 |    1450 |   1041 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.14 |    1450 |   1041 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.12 |    1450 |   1041 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.07 |    1434 |   1037 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   1000 |   1048 |     0   (0)|   1000 |00:00:00.01 |      16 |      4 |
--------------------------------------------------------------------------------------------------------------

=============================
Randomly ordered driving data
=============================

-----------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.12 |    5544 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.12 |    5544 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.10 |    5544 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.02 |    1434 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   4033 |   1048 |     0   (0)|   4033 |00:00:00.02 |    4110 |
-----------------------------------------------------------------------------------------------------

Notice how we do 1,000 starts of operation 4 when the data is well ordered, and 4,033 starts when the data is randomly ordered. For a semi-join nested loop the run-time engine uses the same caching mechanism as it does for scalar subqueries – a fact you can corroborate by removing the current hints and putting the /*+ no_unnest */ hint into the subquery so that you get a filter subquery plan, in which you will note exactly the same number of starts of the filter subquery.
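In case you want to repeat that check, the filter-subquery version of the test would look something like this (a sketch of the variation described above, with the original hints removed; I haven’t reproduced its plan here):

select
        count(*)
from
        chi
where   exists (
                select
                        /*+ no_unnest */
                        null
                from    par
                where   par.id = chi.n1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));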

As an extra benefit you’ll notice that the index probes for the well-ordered data have managed to take advantage of buffer pinning (statistic “buffer is pinned count”) – keeping the root block and most recent leaf block of the par_pk index pinned almost continually through the query; while the randomised data access unfortunately required Oracle to unpin and repin the index leaf blocks (even though there were only 2 in the index) as the scan of chi progressed.
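If you want to check the pinning for yourself, a query like the following, run from the test session immediately before and after each test statement, will show the relevant figures (a simple sketch, not part of the original test script):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in ('buffer is pinned count', 'session logical reads')
;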

Time to upgrade to 12.1.0.2 and see what happens:

=============================
Strictly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |    1448 |   1456 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |    1448 |   1456 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.20 |    1448 |   1456 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.06 |      16 |     27 |
--------------------------------------------------------------------------------------------------------------

=============================
Randomly ordered driving data
=============================

--------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |   11588 |   1429 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |   11588 |   1429 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.19 |   11588 |   1429 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.07 |   10156 |      0 |
--------------------------------------------------------------------------------------------------------------

Take a close look at operation 2 – it’s no longer a NESTED LOOPS SEMI: the optimizer has got so smart (recognising the nature of the primary key on par) that it’s done a “semi_to_inner” transformation. But a side effect of the transformation is that the scalar subquery caching mechanism no longer applies, so we probe the par table 10,000 times. When the driving data is well-ordered this hasn’t made much difference to the buffer gets (and related latch activity), but when the data is randomised the extra probes ramp the buffer gets up even further.

The timings (A-time) on all these experiments are not particularly trustworthy – the differences between cached reads and direct path reads introduced more variation than the difference in Starts and Buffers, and the total CPU load is pretty small anyway – and I suspect that this difference won’t make much difference to most people most of the time. No doubt, though, there will be a few cases where a small change like this could have a noticeable effect on some important queries.

Footnote

There is a hint /*+ no_semi_to_inner(@queryblock object_alias) */ that I thought might persuade the optimizer to stick with the semi-join, but it didn’t have any effect. Since the “semi to inner” transformation (and the associated hints) are available in 11.2.0.4 I was a little puzzled that (a) I didn’t see the same transformation in the 11g test, and (b) I couldn’t hint the transformation. This makes me wonder if there’s a defect in 11g that might be fixed in a future patch.
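For reference, this is the shape of the attempt I’m describing (a sketch only, reusing the query block and alias names from the outline of the earlier tests, though I can’t be certain that’s the right target for the hint):

select
        /*+
                no_semi_to_inner(@sel$5da710d3 par@sel$2)
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where par.id = chi.n1
        )
;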

It’s also nice to think that the scalar subquery caching optimisation used in semi-joins might also become available to standard joins (in cases such as “join to parent”, perhaps).

September 9, 2014

Quiz Night

Filed under: Execution plans,Indexing,Oracle — Jonathan Lewis @ 6:46 pm BST Sep 9,2014

I have a table with several indexes on it, and I have two versions of a query that I might run against that table. Examine them carefully, then come up with some plausible reason why it’s possible (with no intervening DDL, DML, stats collection, parameter fiddling etc., etc., etc.) for the second form of the query to be inherently more efficient than the first.


select
        bit_1, id, small_vc, rowid
from
        bit_tab
where
        bit_1 between 1 and 3
;

prompt  ===========
prompt  Split query
prompt  ===========

select
        bit_1, id, small_vc, rowid
from
        bit_tab
where
        bit_1 = 1
or      bit_1 > 1 and bit_1 <= 3
;

Update / Answers

I avoided giving any details about the data and indexes in this example as I wanted to allow free rein to readers’ imagination – and I haven’t been disappointed with the resulting suggestions. The general principles of allowing more options to the optimizer, effects of partitioning, and effects of skew are all worth considering when the optimizer CAN’T use an execution path that you think makes sense. (Note: I didn’t make it clear in my original question, but I wasn’t looking for cases where you could get a better path by hinting (or profiling); I was after cases where Oracle literally could not do what you wanted.)

The specific strategy I was thinking of when I posed the question was based on a follow-up to some experiments I had done with the cluster_by_rowid() hint, and (there was a little hint in the “several indexes” and, more particularly, the column name “bit_1”) I was looking at a data warehouse table with a number of bitmap indexes. So, after a sketch of the kind of table I was looking at, here’s the execution plan for the first version of the query when there’s a simple bitmap index on bit_1.
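The original table-creation script didn’t make it into the note, so this is a guess at something that behaves in much the same way (the column definitions, padding and volumes are my assumptions, chosen only to give a few hundred relevant rows at roughly two or three per block):

create table bit_tab
as
select
        rownum                  id,
        mod(rownum - 1, 10)     bit_1,
        to_char(rownum)         small_vc,
        rpad('x',700)           padding
from
        dual
connect by
        level <= 2000
;

create bitmap index bt1 on bit_tab(bit_1);

begin
        dbms_stats.gather_table_stats(user,'BIT_TAB');
end;
/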


------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |   600 | 18000 |    96 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BIT_TAB |   600 | 18000 |    96 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |       |
|*  3 |    BITMAP INDEX RANGE SCAN   | BT1     |       |       |       |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("BIT_1">=1 AND "BIT_1"<=3)

And here’s the plan for the second query:


------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |   560 | 16800 |    91 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BIT_TAB |   560 | 16800 |    91 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |       |
|   3 |    BITMAP OR                 |         |       |       |       |
|*  4 |     BITMAP INDEX SINGLE VALUE| BT1     |       |       |       |
|   5 |     BITMAP MERGE             |         |       |       |       |
|*  6 |      BITMAP INDEX RANGE SCAN | BT1     |       |       |       |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("BIT_1"=1)
   6 - access("BIT_1">1 AND "BIT_1"<=3)

Clearly the second plan is more complex than the first – moreover the added complexity has resulted in the optimizer getting a different cardinality estimate – but, with my data set, there’s a potential efficiency gain. Notice how lines 5 and 6 show a bitmap range scan followed by a bitmap merge: to do the merge Oracle has to “superimpose” the bitmaps for the different key values in the range scan to produce a single bitmap that it can then OR with the bitmap for bit_1 = 1 (“bitmap merge” is effectively the same as “bitmap or” except that all the bitmaps come from the same index). The result of this is that when we convert to rowids the rowids are in table order. You can see the consequences in the ordering of the result set or, more importantly for my demo, in the autotrace statistics:


For the original query:
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        604  consistent gets
          0  physical reads
          0  redo size
      27153  bytes sent via SQL*Net to client
        777  bytes received via SQL*Net from client
         25  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        600  rows processed


For the modified query
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        218  consistent gets
          0  physical reads
          0  redo size
      26714  bytes sent via SQL*Net to client
        777  bytes received via SQL*Net from client
         25  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        600  rows processed

Note, particularly, the change in the number of consistent gets. Each table block I visited held two or three rows that I needed: in the first query I visit the data in order of (bit_1, rowid) and get each table block three times; in the second case I visit the data in order of rowid and only get each table block once (with a “buffer is pinned count” for subsequent rows from the same block).

Here’s the starting output from each query; I’ve added the rowid to the original select statements so that you can see the block ordering:


Original query
     BIT_1         ID SMALL_VC   ROWID
---------- ---------- ---------- ------------------
         1          2 2          AAAmeCAAFAAAAEBAAB
         1         12 12         AAAmeCAAFAAAAECAAB
         1         22 22         AAAmeCAAFAAAAEDAAB
         1         32 32         AAAmeCAAFAAAAEEAAB
         1         42 42         AAAmeCAAFAAAAEFAAB
         1         52 52         AAAmeCAAFAAAAEGAAB

Modified query
     BIT_1         ID SMALL_VC   ROWID
---------- ---------- ---------- ------------------
         1          2 2          AAAmeCAAFAAAAEBAAB
         2          3 3          AAAmeCAAFAAAAEBAAC
         3          4 4          AAAmeCAAFAAAAEBAAD
         1         12 12         AAAmeCAAFAAAAECAAB
         2         13 13         AAAmeCAAFAAAAECAAC
         3         14 14         AAAmeCAAFAAAAECAAD
         1         22 22         AAAmeCAAFAAAAEDAAB
         2         23 23         AAAmeCAAFAAAAEDAAC
         3         24 24         AAAmeCAAFAAAAEDAAD

By rewriting the query I’ve managed to force a “cluster by rowid” on the data access. Of course, the simpler solution would be to add the /*+ cluster_by_rowid() */ hint to the original query – but it doesn’t work for bitmap indexes, and when I found that it worked for B-tree indexes the next test I did was to try a single bitmap index, which resulted in my writing this note.
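Just to show the shape of the hint I mean, this is how it would be added to the original query (a sketch – I’ve introduced a table alias for the hint; as I said, it has no effect here because the only relevant index is a bitmap index):

select
        /*+ cluster_by_rowid(b) */
        bit_1, id, small_vc, rowid
from
        bit_tab b
where
        bit_1 between 1 and 3
;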

Footnote: I don’t really expect Oracle Corp. to modify their code to make the hint work with bitmaps; after all, it’s only relevant in the special case of using a bitmap index with a range scan and no subsequent bitmap AND/OR/MINUS operations, which is the only place it would be needed – and you’re not really expected to use a single bitmap index to access a table; we engineer bitmaps to take advantage of combinations.

April 23, 2014

NL History

Filed under: Execution plans,Oracle — Jonathan Lewis @ 6:43 pm BST Apr 23,2014

Even the simplest things change – here’s a brief history of nested loop joins, starting from 8i, based on the following query (with some hints):

select
	t2.n1, t1.n2
from
	t2,t1
where
	t2.n2 = 45
and	t2.n1 = t1.n1
;

There’s an index to support the join from t2 to t1, and I’ve forced an (unsuitable) index scan for the predicate on t2.
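The full table-creation script isn’t reproduced here, but something along the following lines is consistent with the plans below (a sketch: the column definitions, data volumes and padding are my assumptions, chosen to give 3,000 rows per table and 15 rows per value for t1.n1 and t2.n2, matching the figures in the plans):

create table t1
as
select
        trunc((rownum - 1)/15) + 1      n1,
        rownum                          n2,
        rpad('x',100)                   padding
from
        dual
connect by
        level <= 3000
;

create table t2
as
select
        mod(rownum,200) + 1             n1,
        trunc((rownum - 1)/15) + 1      n2,
        rpad('x',100)                   padding
from
        dual
connect by
        level <= 3000
;

create index t1_i1 on t1(n1);
create index t2_i1 on t2(n1);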

Basic plan for 8i (8.1.7.4)

As reported by $ORACLE_HOME/rdbms/admin/utlxpls.sql.
Note the absence of a Predicate Information section.

Plan Table
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |   225 |    3K|   3038 |       |       |
|  NESTED LOOPS             |          |   225 |    3K|   3038 |       |       |
|   TABLE ACCESS BY INDEX RO|T2        |    15 |  120 |   3008 |       |       |
|    INDEX FULL SCAN        |T2_I1     |    15 |      |      8 |       |       |
|   TABLE ACCESS BY INDEX RO|T1        |     3K|   23K|      2 |       |       |
|    INDEX RANGE SCAN       |T1_I1     |     3K|      |      1 |       |       |
--------------------------------------------------------------------------------

Basic plan for 9i (9.2.0.8)

As reported by a call to a home-grown version of dbms_xplan.display_cursor() with statistics_level set to all.

Note the “prefetch” shape of the body of the plan; the inconsistency in the numbers reported for Rows, Bytes, and Cost seems to be the result of the “traditional” 8i values being transposed to match the new arrangement of the operations. There’s also a little oddity in the A-Rows column in line 2, which looks as if it is the sum of its children plus 1 when the size of the rowsource is (presumably) the 225 rowids used to access the table.

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     |  Name       | Rows  | Bytes | Cost  | Starts  | A-Rows  | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |   225 |  3600 |  3038 |         |         |         |
|   1 |  TABLE ACCESS BY INDEX ROWID  | T1          |    15 |   120 |     2 |     1   |    225  |   3061  |
|   2 |   NESTED LOOPS                |             |   225 |  3600 |  3038 |     1   |    241  |   3051  |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T2          |    15 |   120 |  3008 |     1   |     15  |   3017  |
|   4 |     INDEX FULL SCAN           | T2_I1       |  3000 |       |     8 |     1   |   3000  |     17  |
|*  5 |    INDEX RANGE SCAN           | T1_I1       |    15 |       |     1 |    15   |    225  |     34  |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

Basic plan for 10g (10.2.0.5)

As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.

No change from 9i.

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.03 |    3061 |
|   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.03 |    3051 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.03 |    3017 |
|   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
|*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      34 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

Basic plan for 11g (11.2.0.4)

As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.

Note how the nested loop has now turned into two NESTED LOOP operations – potentially opening the way for a complete decoupling of index access and table access. This has an interesting effect on the number of starts of the table access by rowid for t1, of course. The number of buffer gets for this operation looks surprisingly low (given that it started 225 times) but can be explained by the pattern of the data distribution – and cross-checked by looking at the “buffer is pinned count” statistic which accounts for most of the table visits.


-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |    225 |00:00:00.01 |    3048 |
|   1 |  NESTED LOOPS                 |       |      1 |    225 |    225 |00:00:00.01 |    3048 |
|   2 |   NESTED LOOPS                |       |      1 |    225 |    225 |00:00:00.01 |    3038 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |    3013 |
|   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      13 |
|*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T1    |    225 |     15 |    225 |00:00:00.01 |      10 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

There is, however, a second possible plan for 11g. The one above is the “NLJ Batching” plan, but I could have hinted the “NLJ prefetch” strategy, which takes us back to the 9i execution plan (with a very small variation in buffer visits).
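The hint would be applied something like this (a sketch: the supporting hints are my guesses at the “some hints” mentioned at the top of this note, and no_nlj_batching(t1) should have the same effect as nlj_prefetch(t1)):

select
        /*+
                leading(t2 t1)
                index(t2 t2_i1)
                use_nl(t1)
                index(t1 t1_i1)
                nlj_prefetch(t1)
        */
        t2.n1, t1.n2
from
        t2, t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;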

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.01 |    3052 |
|   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.01 |    3042 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
|   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
|*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

Basic plan for 12c (12.1.0.1)

As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.
Note that the table access to t2 in line 3 is described as “batched” (a feature that can be disabled by the /*+ no_batch_table_access_by_rowid(alias) */ hint); otherwise the plan matches the 11g plan.


---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |    225 |00:00:00.01 |    3052 |
|   1 |  NESTED LOOPS                         |       |      1 |        |    225 |00:00:00.01 |    3052 |
|   2 |   NESTED LOOPS                        |       |      1 |    225 |    225 |00:00:00.01 |    3042 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
|   4 |     INDEX FULL SCAN                   | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
|*  5 |    INDEX RANGE SCAN                   | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
|   6 |   TABLE ACCESS BY INDEX ROWID         | T1    |    225 |     15 |    225 |00:00:00.01 |      10 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

Of course 12c also has the “prefetch” version of the plan available; and again “batched” access appears – for both tables in this case – and again the feature can be disabled individually by hints addressed at the tables:


---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1    |      1 |     15 |    225 |00:00:00.01 |    3052 |
|   2 |   NESTED LOOPS                        |       |      1 |    225 |    225 |00:00:00.01 |    3042 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
|   4 |     INDEX FULL SCAN                   | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
|*  5 |    INDEX RANGE SCAN                   | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

In these examples the difference in work done by the different variations and versions is negligible, but there may be cases where the pattern of data distribution changes the pattern of logical I/Os and buffer pins – which may affect the physical I/O. In this light it’s interesting to note the hint /*+ cluster_by_rowid(alias) */, which was introduced in 11.2.0.4 but seemed to have disappeared by 12c [Ed: wrong, it wasn’t listed in v$sql_hints in the beta, but is in the production version]; it changes the 11g plan as follows (I’ve sketched the hinted query after the plan):


----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |       |       |          |
|   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.01 |     134 |       |       |          |
|   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.01 |     124 |       |       |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |      99 |       |       |          |
|   4 |     SORT CLUSTER BY ROWID     |       |      1 |   3000 |   3000 |00:00:00.01 |       8 |   142K|   142K|  126K (0)|
|   5 |      INDEX FULL SCAN          | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |       8 |       |       |          |
|*  6 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   6 - access("T2"."N1"="T1"."N1")
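For reference, the hinted query behind the plan above would look something like this (a sketch: I’m assuming the hint has to be addressed at t2, since it’s the t2 rowids that get sorted at line 4, and the supporting hints are the same guesses as before):

select
        /*+
                leading(t2 t1)
                index(t2 t2_i1)
                use_nl(t1)
                index(t1 t1_i1)
                cluster_by_rowid(t2)
        */
        t2.n1, t1.n2
from
        t2, t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;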

Note the sort operation that has appeared at line 4 – and the extraordinary effect it has on the buffer visits (so significant that I did a follow-up check on v$mystat to see if the figures were consistent). This type of rowid sorting is, of course, an important fix for an Exadata issue I described some time ago, and I had assumed that the “batched” concept in the 12c plan was in some way enabling it – although the 12c rowsource execution stats don’t seem to bear that idea out.

Footnote:

You may also want to read the following note by Timur Akhmadeev of Pythian on the 12c batched rowid.

February 12, 2014

Caution – hints

Filed under: Hints,Oracle — Jonathan Lewis @ 6:57 pm BST Feb 12,2014

Here’s a little example of why you should be very cautious about implementing undocumented discoveries. If you take a look at the view v$sql_hints in 11.2.0.4 you’ll discover a hint (no_)cluster_by_rowid; and if you look in v$parameter you’ll discover two new parameters _optimizer_cluster_by_rowid and _optimizer_cluster_by_rowid_control.
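A quick way to spot the hint and its inverse (a sketch; I’ve limited the column list to the ones I normally look at):

select
        name, inverse, version
from
        v$sql_hints
where
        name like '%CLUSTER_BY_ROWID%'
;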

It doesn’t take much imagination to guess that the parameters and hint have something to do with the costs of accessing compressed data by rowid on an Exadata system (see, for example, this posting) and it’s very easy to check what the hint does:

(more…)

June 20, 2011

Optimisation

Filed under: Oracle,Performance,Tuning — Jonathan Lewis @ 6:20 pm BST Jun 20,2011

A question came up on Oracle-L recently about the difference in work done by the following two queries:

SELECT /*+ RULE */
	DOM_NAME
FROM
	DOMAINS,
	TABLE(CAST(:B1 AS DOMAIN_LIST)) DL
WHERE
	DOM_NAME = DL.COLUMN_VALUE
;

SELECT
	DOM_NAME
FROM
	DOMAINS
WHERE
	DOM_NAME IN (
		SELECT	COLUMN_VALUE
		FROM	TABLE(CAST(:B1 AS  DOMAIN_LIST))
	)
;

Before saying anything else, I should point out that these two queries are NOT logically equivalent unless you can guarantee that the table() operator returns a unique set of values – and Oracle doesn’t allow uniqueness to be enforced on collections.
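Here’s a minimal demonstration of the difference (my own sketch, not from the original thread; I’ve assumed a simple scalar collection type since the original DOMAIN_LIST definition isn’t shown):

create type domain_list as table of varchar2(30);
/

create table domains (dom_name varchar2(30));
insert into domains values ('EXAMPLE.COM');
commit;

-- Join form: one row per matching element of the collection,
-- so a duplicate in the collection produces a duplicate row.
select  d.dom_name
from    domains d,
        table(domain_list('EXAMPLE.COM','EXAMPLE.COM')) dl
where   d.dom_name = dl.column_value
;

-- IN (subquery) form: semi-join semantics, so duplicates in the
-- collection are irrelevant and the row appears only once.
select  d.dom_name
from    domains d
where   d.dom_name in (
                select column_value
                from   table(domain_list('EXAMPLE.COM','EXAMPLE.COM'))
        )
;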

(more…)

April 19, 2011

More CR

Filed under: Infrastructure,Oracle,Performance,Read Consistency,Troubleshooting,undo — Jonathan Lewis @ 6:32 pm BST Apr 19,2011

Following on from yesterday’s post on consistent reads, I thought I’d make the point that the way you work can make an enormous difference to the amount of work you do. Here’s a silly little demo (in 10.2.0.3):
(more…)

January 21, 2011

Quiz Night

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 7:04 pm BST Jan 21,2011

Here’s an execution plan pulled from v$sql_plan in 10.2.0.3 with the “gather_plan_statistics” option enabled:

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |        |      1 |      0 |00:00:00.01 |     608 |
|   2 |   TABLE ACCESS BY INDEX ROWID| PARENT |      1 |    200 |00:00:00.01 |       6 |
|   3 |    INDEX RANGE SCAN          | PAR_I1 |      1 |    200 |00:00:00.01 |       2 |
|   4 |   TABLE ACCESS BY INDEX ROWID| CHILD  |    200 |      0 |00:00:00.01 |     602 |
|   5 |    INDEX UNIQUE SCAN         | CHI_I1 |    200 |    200 |00:00:00.01 |     402 |
----------------------------------------------------------------------------------------

(more…)

June 12, 2009

Consistent Gets – 2

Filed under: Infrastructure — Jonathan Lewis @ 8:45 pm BST Jun 12,2009

How would you describe “db block gets” and “consistent gets” in a couple of short paragraphs? Having asked the question, I suppose I ought to offer my version of the answer up for scrutiny and dissection.

Before supplying my descriptions, I’d like to point out that there are (at least) two other mechanisms that Oracle uses to visit data blocks in the buffer cache: re-visiting pinned buffers, and doing a fast cleanout on commit. These buffer visits can be seen in the statistics: “buffer is pinned count” and “commit cleanouts successfully completed”. (The statistic “commit cleanouts” tells you how many times Oracle tried to do a commit cleanout and there are various statistics to explain the failures.)

(more…)

January 16, 2009

Concurrency

Filed under: Infrastructure — Jonathan Lewis @ 7:00 pm BST Jan 16,2009

When you get into big, busy, systems one of the final barriers you have to overcome is the concurrency issue; and after you’ve designed and fiddled and tweaked everything else it’s latch acquisition that is often the final barrier to extreme levels of concurrency.

(more…)

October 14, 2008

Going too fast

Filed under: CBO,Execution plans,Performance,Statistics,Troubleshooting,Tuning — Jonathan Lewis @ 6:10 pm BST Oct 14,2008

I received an email a little while ago with an unusual problem. It said:

“One of the jobs which used to take more than one hour to complete is now completing in less than 10 minutes. Neither the application developer nor we (the DBA’s) made *any* changes in the environment/code/database. I can’t work out why it’s got better!”


It’s not often that “going faster” is a problem – but there’s a very good reason for being worried about jobs that go faster for no apparent reason – one day your luck is going to run out and the jobs are going to go slower again – and people really notice when things slow down.

(more…)

February 9, 2008

Index Rebuild 10g

Filed under: Infrastructure,Performance,Troubleshooting — Jonathan Lewis @ 9:12 am BST Feb 9,2008

I’ve written a few notes about the pros and cons of index rebuilds in the past. A comment on Richard Foote’s blog describes a bug in 10g (reminiscent of an old index root block bug in early versions of 8i) which adds a little extra twist to the issue.

(more…)
