Oracle Scratchpad

August 14, 2017

Join Elimination Bug

Filed under: Bugs,CBO,Oracle — Jonathan Lewis @ 11:59 am BST Aug 14,2017

A few years ago a bug relating to join elimination showed up in a comment to a post I’d done about the need to keep on testing and learning. The bug was visible in version 11.2.0.2 and, with a script to replay it, I’d found that it had disappeared by 11.2.0.4.

Today I had a reason to rediscover the script, and decided to test it against 12.2.0.1 – and found that the bug was still present.

Here’s the model:


rem     Script:         join_eliminate_bug_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2012

drop table child purge;
drop table parent purge;

create table parent (
        id      number(4),
        name    varchar2(10),
        constraint par_pk primary key (id)
        deferrable initially immediate
)
;

create table child(
        id_p    number(4)       
                constraint chi_fk_par
                references parent,
        id      number(4),
        name    varchar2(10),
        constraint chi_pk primary key (id_p, id)
)
;

insert into parent values (1,'Smith');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Jack');
insert into child values(2,2,'Jill');

commit;

begin
        dbms_stats.gather_table_stats(user,'child');
        dbms_stats.gather_table_stats(user,'parent');
end;
/

set serveroutput off

select
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);

The setup is just to show you the correct results with join elimination taking place. Here’s the output from the query and the actual execution plan:

      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill

4 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  1whubydgj8w0s, child number 0
-------------------------------------
select  chi.* from  child chi,  parent par where  par.id = chi.id_p

Plan hash value: 2406669797

-----------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |    11 |
|   1 |  TABLE ACCESS FULL| CHILD |     4 |    48 |    11 |
-----------------------------------------------------------

On a single column join, with referential integrity in place, and no columns other than the primary key involved, the optimizer eliminates table parent from the query. But if I now defer the primary key constraint on parent and duplicate every row (which ought to duplicate the query result), watch what happens with the query:


set constraint par_pk deferred;

insert into parent (id,name) values (1,'Smith');
insert into parent (id,name) values (2,'Jones');

alter system flush shared_pool;

select
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);


      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill

4 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  1whubydgj8w0s, child number 0
-------------------------------------
select  chi.* from  child chi,  parent par where  par.id = chi.id_p

Plan hash value: 2406669797

-----------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |    11 |
|   1 |  TABLE ACCESS FULL| CHILD |     4 |    48 |    11 |
-----------------------------------------------------------

I get the same plan, so I get the same results – which are now wrong, of course, since every parent row appears twice and the join ought to return 8 rows. Notice that I flushed the shared pool before repeating the query, so I haven’t fooled Oracle into reusing the wrong plan by accident – it’s a whole new freshly optimized plan.

Just to show what ought to happen, here’s the last bit of the test case:


select  /*+ no_eliminate_join(@sel$1 par@sel$1) */
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);


      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill
         2          1 Jack
         2          2 Jill

8 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  5p8sp7k8b0fgq, child number 0
-------------------------------------
select /*+ no_eliminate_join(@sel$1 par@sel$1) */  chi.* from  child
chi,  parent par where  par.id = chi.id_p

Plan hash value: 65982890

-----------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |       |       |     5 |
|   1 |  NESTED LOOPS                |        |     4 |    60 |     5 |
|   2 |   NESTED LOOPS               |        |     4 |    60 |     5 |
|   3 |    INDEX FULL SCAN           | PAR_PK |     2 |     6 |     1 |
|*  4 |    INDEX RANGE SCAN          | CHI_PK |     2 |       |     1 |
|   5 |   TABLE ACCESS BY INDEX ROWID| CHILD  |     2 |    24 |     2 |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PAR"."ID"="CHI"."ID_P")


I ran this test on 11.2.0.4 – and then repeated it on 12.2.0.1: the bug is still present (although I thought I’d seen a MoS note saying it had been fixed in 12.1).

It’s always a little dangerous playing around with deferrable constraints – my view is that you should keep the interval of deferment as short as possible and not try to use it for anything other than correcting known data errors. At present, if you have code that defers constraints and runs non-trivial queries afterwards, you might want that code to start with an “alter session” that sets the hidden parameter _optimizer_join_elimination_enabled to false (after checking with Oracle support, of course).
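
As a minimal sketch of that session-level workaround (underscore parameters have to be double-quoted, and the setting should be confirmed with Oracle support before you use it):

alter session set "_optimizer_join_elimination_enabled" = false;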

August 3, 2017

Rebuilding Indexes

Filed under: Indexing,Infrastructure,Oracle,Partitioning,Troubleshooting — Jonathan Lewis @ 1:00 pm BST Aug 3,2017

One of the special events that can make it necessary to rebuild an index is the case of the “massive DML”, typically a bulk delete that purges old data from a table. You may even find cases where it’s a good idea to mark a couple of your indexes as unusable before doing a massive delete and then rebuild them after the delete.
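
As a sketch of that “mark unusable, delete, rebuild” pattern (all object names are hypothetical, and it assumes skip_unusable_indexes is at its default value of true and that the index isn’t enforcing a unique or primary key constraint):

alter index t1_i1 unusable;

delete from t1 where date_created < add_months(sysdate, -36);
commit;

alter index t1_i1 rebuild;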

Despite the fact that a massive delete is an obvious special case it’s still not necessary in many cases to worry about a rebuild afterwards because the space made free by the delete will be smoothly reused over time with very little variation in performance. There is, however, one particular feature that increases the probability of a rebuild becoming necessary – global (or globally partitioned) indexes on partitioned tables. The problem (and the absence of problem in non-partitioned tables) is in the nature of the rowid.

For non-partitioned tables, and partitioned tables with local indexes, the rowid stored in an index is (assuming we’re thinking only of heap tables) stored as a sequence of 6 bytes consisting, in order, of: (tablespace relative file number, block number within file, row number within block). If the table is non-partitioned, or if this is an index segment from a locally partitioned index, all the index entries will be pointing to the same table segment and Oracle knows which segment that is from the data dictionary information – so Oracle can derive the data_object_id of the table segment and convert the tablespace relative file number into the absolute file number to navigate to the right row in the table.

When the index is global or globally partitioned any index entry may point to any of the table’s segments, so the rowid that is stored in the index is expanded by a further 4 bytes to hold the data_object_id of the table segment it is pointing to – and the data_object_id is the leading component: (data_object_id, tablespace relative file number, block number within file, row number within block). Think about what this means when you start to drop “old” partitions and add new partitions. Compare this with what happens when you simply delete a large volume of old data from a table and starting inserting new data. There’s an important difference to the way in which indexes will evolve.
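
If you want to see those components for yourself, the dbms_rowid package will break an (extended) rowid apart – a quick sketch, run against any convenient heap table (t1 here is just a placeholder):

select
        dbms_rowid.rowid_object(rowid)          data_object_id,
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        dbms_rowid.rowid_row_number(rowid)      row_no
from
        t1
where
        rownum <= 5
;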

Purging data

When you delete a large volume of data from a (simple, heap) table you will create a lot of empty space in a lot of existing table blocks. If that delete is supposed to get rid of “old” data (and to keep the description simple we’ll assume it’s the first time you’ve done this) it’s very likely that the delete will result in lots of empty blocks near the start of the table – rows that were inserted at the same time will tend to be physically close to each other in the table. This means that future inserts will soon start to reuse those table blocks. Think about what this means for index entries – especially for non-unique keys.

Assume you have 100 rows with value ‘XXX’ for an indexed column. Breaking the rowid into its component parts, the index entries will be (‘XXX’,{file_id, block_id, row_number}).  Now assume you delete the oldest 10 rows then, over time, insert 10 more rows with the same key value. You’ve deleted the 10 index entries with the lowest values for (file_id, block_id), but the space that’s become available in the table will be in and around exactly that range of blocks – so the new index entries will probably end up looking very similar to the deleted index entries and will be inserted in and around the existing index entries for value ‘XXX’; over time, then, the index is unlikely to allocate much new space.

Now think about what happens when your table is partitioned but the index is global; your index entries are (‘XXX’,{data_object_id, file_id, block_id, row_number}). When you drop the oldest partition you will probably[1] delete all the index entries with the lowest data_object_id. When you start inserting new rows for ‘XXX’ the new table partition will have a data_object_id that is going to be higher than any previous data_object_id – which means you’re going to be inserting rows into the right-hand (high-value) edge of this section of the index. In some cases – typically those where you have a couple of leaf blocks per key value – the index may end up growing significantly because the insertion point for rows in the new partition isn’t in the leaf block with the available space, and it won’t be until you’ve done a few more bulk deletes and the leaf blocks relating to the oldest table partitions become completely empty that the space can be reused.

An example of this type of behaviour probably appeared on the OTN database forum quite recently.  Of course, there are various reasons why indexes can become inefficient, and the degree of inefficiency may only become noticeable over a fairly long period of time; moreover there are various reasons why global indexes are a little problematic, and various reasons why a bulk delete (which is what executing “alter table drop partition” does to a global index) has unpleasant side effects dependent somewhat on the number (and size) of the partitions and on how many you try to drop in one go.

There’s not  a lot you can do about this quirk of global indexes, but it’s always worth taking extra care with partitioned tables and focusing even more carefully on a strategic review of indexes:

  • Does this index really need to exist at all?
  • Could this index be replaced by a selective function-based index?
  • Does this index really need to be global / globally partitioned?
  • How big is this index compared to the size it ought to be?
  • Should this index be (basic) compressed?
  • Is this index likely to be disrupted by a historic purge – or is there another reason for its undesirable behaviour?

 

[1] “probably delete all the index entries with the lowest data_object_id” – I have to say this because if you’ve executed a “move partition” at any time a new data_object_id will have been generated for the partition, so the oldest partition could, in principle, have the highest data_object_id. The issue of changing data_object_ids brings a whole new level of complexity to global indexes – but only in a few special cases, fortunately.

 

 

July 25, 2017

Redo OP Codes:

Filed under: Infrastructure,Oracle,redo — Jonathan Lewis @ 6:17 pm BST Jul 25,2017

This posting was prompted by a tweet from Kamil Stawiarski in response to a question about how he’d discovered the meaning of Redo Op Codes 5.1 and 11.6 – and credited me and Julian Dyke with “the hardest part”.

Over the years I’ve accumulated (from Julian Dyke, or odd MoS notes, etc.) and let dribble out the occasional interpretation of a few op codes – typically in response to a question on the OTN database forum or the Oracle-L listserver, and sometimes as a throwaway comment in a blog post, but I’ve never published the full set of codes that I’ve acquired (or guessed) to date.

It’s been some time since I’ve looked closely at a redo stream, and there are many features of Oracle that I’ve never had to look at at the level of the redo so there are plenty of gaps in the list – and maybe a few people will use the comments to help fill the gaps.

It’s possible that I may be able to add more op codes over the next few days – I know that somewhere I have some op codes relating to space management, and a batch relating to LOB handling, but it looks like I forgot to add them to the master list – so here’s what I can offer so far:


1	Transaction Control

2	Transaction read

3	Transaction update

4	Block cleanout
		4.1	Block cleanout record
		4.2	Physical cleanout
		4.3	Single array change
		4.4	Multiple array changes
		4.5	Format block
		4.6	ktbcc redo -  Commit Time Block Cleanout Change (?RAC, ?recursive, ?SYS objects)

5	Transaction undo management
		5.1	Update undo block
		5.2	Get undo header
		5.3	Rollout a transaction begin
		5.4	On a rollback or commit
		5.5	Create rollback segment
		5.6	On a rollback of an insert
		5.7	In the ktubl for 'dbms_transaction.local_transaction_id'
			(begin transaction) - also arrives for incoming distributed
			tx, no data change but TT slot acquired. Also for recursive
			transaction (e.g. truncate). txn start scn:  0xffff.ffffffff
		5.8	Mark transaction as dead
		5.9	Rollback extension of rollback seg
		5.10	Rollback segment header change for extension of rollback seg
		5.11	Mark undo as applied during rollback
		5.19	Transaction audit record - first
		5.20	Transaction audit record - subsequent
		5.23	ktudbr redo: disable block level recovery (reports XID)
		5.24	ktfbhundo - File Space Header Undo

6	Control file

10	Index
		10.1	SQL load index block
		10.2	Insert Leaf Row
		10.3	Purge Leaf Row
		10.4	Delete Leaf Row
		10.5	Restore Leaf during rollback
		10.6	(kdxlok) Lock block (pre-split?)
		10.7	(kdxulo) unlock block during undo
		10.8	(kdxlne) initialize leaf block being split
		10.9	(kdxair) apply XAT do to ITL 1	-- related to leaf block split 
		10.10	Set leaf block next pointer
		10.11	(kdxlpr) (UNDO) set kdxleprv (previous pointer)
		10.12 	Initialize root block after split
		10.13	index redo (kdxlem): (REDO) make leaf block empty,
		10.14	Restore block before image
		10.15	(kdxbin) Insert branch block row	
		10.16	Purge branch row
		10.17	Initialize new branch block
		10.18	Update key data in row -- index redo (kdxlup): update keydata
		10.19	Clear split flag
		10.20	Set split flag
		10.21	Undo branch operation
		10.22	Undo leaf operation
		10.23	restore block to tree
		10.24	Shrink ITL
		10.25	format root block
		10.26	format root block (undo)
		10.27	format root block (redo)
		10.28	Migrating block (undo)
		10.29	Migrating block (redo)
		10.30	Update nonkey value
		10.31	index root block redo (kdxdlr):  create/load index
		10.34 	make branch block empty
		10.35	index redo (kdxlcnu): update nonkey
		10.37	undo index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.38	index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.39	index redo (kdxbur) :  branch block update range
		10.40	index redo (kdxbdu) :  branch block DBA update,

11	Table
		11.1  undo row operation 
		11.2  insert row  piece
		11.3  delete row piece 
		11.4  lock row piece
		11.5  update row piece
		11.6  overwrite row piece
		11.7  manipulate first column
		11.8  change forwarding address - migration
		11.9  change cluster key index
		11.10 Set Cluster key pointers
		11.11 Insert multiple rows
		11.12 Delete multiple rows
		11.13 toggle block header flags
		11.17 Update multiple rows
		11.19 Array update ?
		11.20 SHK (mark as shrunk?)
		11.24 HCC update rowid map ?

12	Cluster

13	Segment management
		13.1	ktsfm redo: -- allocate space ??
		13.5	KTSFRBFMT (block format) redo
		13.6	(block link modify) (? index )  (options: lock clear, lock set)
		13.7	KTSFRGRP (fgb/shdr modify freelist) redo: (options unlink block, move HWM)
		13.13	ktsbbu undo - undo operation on bitmap block
		13.14	ktsbbu undo - undo operation on bitmap block
		13.17	ktsphfredo - Format Pagetable Segment Header
		13.18	ktspffredo - Format Level1 Bitmap Block
		13.19	ktspsfredo - Format Level2 Bitmap Block
		13.21	ktspbfredo - Format Pagetable Datablock
		13.22	State change on level 1 bitmap block
		13.23	Undo on level 1 bitmap block
		13.24	Bitmap block (BMB) state change (level 2 ?)
		13.25	Undo on level 2 bitmap block 
		13.26	?? Level 3 bitmap block state change ??
		13.27	?? Level 3 bitmap block undo ??
		13.28	Update LHWM and HHWM on segment header
		13.29	Undo on segment header
		13.31	Segment shrink redo for L1 bitmap block
		13.32	Segment shrink redo for segment header block

14	Extent management
		14.1	ktecush redo: clear extent control lock
		14.2	ktelk redo - lock extent (map)
		14.3	Extent de-allocate
		14.4	kteop redo - redo operation on extent map
		14.5	kteopu undo - undo operation on extent map
		14.8	kteoputrn - undo operation for flush for truncate

15	Tablespace

16	Row cache

17	Recovery management
		17.1	End backup mode marker
		17.3	Crash Recovery at scn:  0x0000.02429111
		17.28	STANDBY METADATA CACHE INVALIDATION
	
18	Block image (hot backups)
		18.1	Block image
		18.3	Reuse redo entry 
				   (Range reuse: tsn=1 base=8388753 nblks=8)
				or (Object reuse: tsn=2 objd=76515)

19	Direct loader
		19.1	Direct load block record
		19.2	Nologging invalidate block range
			Direct Loader invalidate block range redo entry

20	Compatibility segment

21	LOB segment 
		21.1	kdlop (Long Feild) redo:  [sic]
				(insert basicfile clob)

22	Locally managed tablespace
		22.2	ktfbhredo - File Space Header Redo:
		22.3	ktfbhundo - File Space Header Undo:
		22.5	ktfbbredo - File BitMap Block Redo:
		22.16	File Property Map Block (FPM)

23	Block writes
		23.1	Block written record
		23.2	Block read record (BRR) -- reference in Doc ID: 12423475.8

24	DDL statements
		24.1	DDL
		24.2	Direct load block end mark
		24.4	?? Media recovery marker
		24.10	??
		24.11	??

(E & O.E) – you’ll notice that some of the descriptions include question marks – those are guesses – and some are little more than the raw text extracted from a redo change vector with no interpretation of what they might mean.
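
For anyone who wants to generate their own dumps to compare against the list: the op codes appear as “OP:nn.nn” entries in the change vectors of a redo log dump, which you can produce with something like the following (the file name is purely illustrative):

alter system dump logfile '/u01/app/oracle/oradata/orcl/redo01.log';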

Update

It didn’t take long for someone to email me a much longer list that has been published elsewhere on the Internet. The results don’t have the hierarchical style display I have above, so I may copy the extra entries into the list above when I get a little time.

July 24, 2017

Fast Now, Fast Later

Filed under: Infrastructure,Oracle,Statspack,Troubleshooting — Jonathan Lewis @ 1:39 pm BST Jul 24,2017

The following is the text of an article I published in the UKOUG magazine several years ago (2010), but I came across it recently while writing up some notes for a presentation and thought it would be worth repeating here.

Fast Now, Fast Later

The title of this piece came from a presentation by Cary Millsap and captures an important point about trouble-shooting as a very memorable aphorism. Your solution to a problem may look good for you right now but is it a solution that will still be appropriate when the database has grown in volume and has more users?

I was actually prompted to write this article by a question on the OTN database forum that demonstrated the need for the basic combination of problem solving and forward planning. Someone had a problem with a fairly sudden change in performance of his system from November to December, and he had some samples from trace files and Statspack of a particular query that demonstrated the problem.

The query was very simple:

select  *
from    tph
where   pol_num = :b0
order by
        pm_dt, snum

When the query was operating fast enough the trace file from a sample run showed the following (edited) tkprof output, with the optimizer taking advantage of the primary key of (pol_num, pm_dt, snum) on table TPH to avoid a sort for the order by clause. (Note that the heading on the plan is “Row Source Operation” – which means it’s the execution plan that really was used)

call    count    cpu  elapsed  disk  query  current  rows
---------------------------------------------------------
Parse       1   0.01     0.13     0    106        0     0
Execute     1   0.03     0.03     0      0        0     0
Fetch       4   0.01     0.22    46     49        0    43
---------------------------------------------------------
total       6   0.06     0.39    46    155        0    43

Rows Row Source Operation
---- --------------------
  43 TABLE ACCESS BY INDEX ROWID TPH (cr=49 pr=46 pw=0 time=226115 us)
  43   INDEX RANGE SCAN TPH_PK (cr=6 pr=3 pw=0 time=20079 us)(object id 152978)

Elapsed times include waiting on following events:
Event waited on               Times  Max. Wait Total Waited
                             Waited 
-----------------------------------------------------------
db file sequential read          46       0.01         0.21

When the query was running less efficiently the change in the trace didn’t immediately suggest any fundamental problems:


call    count    cpu  elapsed  disk  query  current  rows
---------------------------------------------------------
Parse       1   0.00     0.00     0     51        0     0
Execute     1   0.01     0.01     0      0        0     0
Fetch       4   0.00     0.59    47     51        0    45
---------------------------------------------------------
total       6   0.01     0.61    47    102        0    45

Rows Row Source Operation
---- --------------------
  45 TABLE ACCESS BY INDEX ROWID TPH (cr=51 pr=47 pw=0 time=593441 us)
  45   INDEX RANGE SCAN TPH_PK (cr=6 pr=2 pw=0 time=33470 us)(object id 152978)

Elapsed times include waiting on following events:
Event waited on               Times  Max. Wait Total Waited
                             Waited 
-----------------------------------------------------------
db file sequential read          47       0.03         0.58

The plan is the same, the number of rows returned is roughly the same, and the number of disc reads and buffer gets has hardly changed. Clearly the overall change in performance comes from the slower average disk read times (a total of 0.21 seconds with a maximum of one hundredth of a second, compared to a total of 0.58 seconds with a maximum of 3 hundredths), but why has the disk I/O time changed?

The figures give us a couple of preliminary ideas. An average read time of 4.5 milliseconds ( 0.21 seconds / 46 reads) is pretty good for a “small” random read of a reasonably loaded disc subject to a degree of concurrent access [ed: bearing in mind this was 2010], so the waits for “db file sequential read” in the first tkprof output are probably getting some help from a cache somewhere – possibly a SAN cache at the end of a fibre link or maybe from a local file system buffer (we might get a better idea if we could see the complete list of individual read times).

In the second case an average of 12.3 milliseconds ( 0.58 seconds / 47 reads) looks much more like a reasonable amount of genuine disc I/O is taking place – and the maximum of 30 milliseconds suggests that the disc(s) in question are subject to an undesirable level of concurrent access: our session is spending some of its time in a disk queue. Again, it would be nice to see the wait times for all the reads, but at this point it’s not really necessary.

There are a couple more clues about what’s going on – one is the text of the query itself (and I’ll be coming back to that later) and the other is in the detail of the disk I/Os. If you check the “Row Source Operation” details you’ll see that in the first case the sample query selected 43 rows from the table and requested 43 (46 – 3) physical reads (pr) of the table to do so. In the second case it was 45 rows and 45 (47 – 2) physical reads. Is this simply a case of the same query needing a little more data and having to do a little more work as time passes?

So now we come to the Statspack data. Based on my observations (or guesses) about the nature of the query and the work going on, I asked if we could see some summary information for a couple of comparative intervals, and also to see if this particular query appeared in the “SQL ordered by reads” section of the Statspack reports. Here are the results, first for a snapshot taken in October:


Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                      3,816,939      58,549     15   79.4
CPU time                                                     7,789          10.6
db file parallel write                         371,865       2,005      5    2.7
log file parallel write                         75,554       1,552     21    2.1
log file sync                                   17,198       1,228     71    1.7

                                                     CPU      Elapsd     Old
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
        775,166       43,228           17.9   24.3   212.58  12449.98  1505833981
Module: javaw.exe
SELECT * FROM TPH WHERE POL_NUM = :B1 ORDER BY PM_DT ,SNUM FOR UPDATE NOWAIT

You might notice that the critical query is actually a ‘select for update’ rather than the simple select that we had originally been told about; this doesn’t affect the execution plan, but is going to have some significance as far as undo and redo are concerned.

Now look at the corresponding figures for an interval in December:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                      7,000,428      92,970     13   89.8
CPU time                                                     6,780           6.5
db file parallel write                         549,286       1,450      3    1.4
db file scattered read                          84,127         720      9     .7
log file parallel write                         41,197         439     11     .4


                                                     CPU      Elapsd     Old
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
      2,444,437       43,363           56.4   25.2   221.31  23376.07 1505833981
Module: javaw.exe
SELECT * FROM TPH WHERE POL_NUM = :B1 ORDER BY PM_DT ,SNUM FOR UPDATE NOWAIT

You’ll see in both cases that a huge fraction of the total database time is spent in single block reads (the “db file sequential read” time), but in December the number of reads has gone up by about 3.2 million. You can also see that about 1.7 million of the “extra” reads could be attributed to the critical query even though the number of executions of that query has hardly changed. The average number of reads per execution has gone up from 18 to 56. (I did ask if I could see the section of Statspack titled “SQL ordered by Executions” as this includes the average number of rows per execution and it would have been nice to know whether this average had gone up just a little bit, or whether it had gone up in line with the average reads per execution. Unfortunately the request was ignored, so I am going to proceed as if the change in the average result set was small.)

This, perhaps, tells us exactly what the problem is (and even if it doesn’t, the figures are symptomatic of one of the common examples of non-scalable queries).

Look at the query again – are we reporting all the rows for a “policy number”, ordered by “payment date”? If so, the number of payments recorded is bound to increase with time, and inevitably there will be lots of payments for policies belonging to other people between each pair of payments I make on my policy – and that would put each of my payments in a different table block (if I use a normal heap table).

Initially the payments table may be sufficiently small that a significant fraction of it stays in Oracle’s data cache or even in the file-system or SAN cache; but as time passes and the table grows the probability of me finding most of my blocks cached decreases – moreover, as time passes, I want increasing numbers of blocks which means that as I read my blocks I’m more likely to knock your blocks out of the cache. Given the constantly increasing numbers of competing reads, it is also no surprise that eventually the average single block read time also increases.

In scenarios like this it is inevitable that performance will degrade over time; in fact it is reasonably likely that the performance profile will degrade slowly to start with and then show an increasingly dramatic plunge. The only question really is how much damage limitation you can do.

One strategy, of course, is to increase the memory available for the critical object(s). This may mean assigning the table to a generously sized KEEP cache. (The cache need not be the same size as the table to improve things, but the bigger the better – for this query, at least). But such a strategy is only postponing the inevitable – you really need to find an approach which is less susceptible to the passage of time.
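
For completeness, a sketch of what that looks like (the pool size is just a placeholder, and db_keep_cache_size has to be carved out of the buffer cache at the instance level):

alter system set db_keep_cache_size = 2G;

alter table tph storage (buffer_pool keep);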

In this case, there are a few options to consider. First – note that we are selecting all the rows for a policy: do we really need to, or could we select the rows within a given date range, thus setting an upper limit on the average volume of data we need to acquire for any one policy. If we do that, we may want to think about strategies for summarizing and deleting older data, or using partitioning to isolate older data in separate segments.
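
In other words the critical query might become something like the following, where the 12-month window is purely illustrative and the real limit would be a business decision:

select  *
from    tph
where   pol_num = :b0
and     pm_dt  >= add_months(sysdate, -12)
order by
        pm_dt, snum
;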

If we can’t deal with the problem by changing the code (and, in this case, the apparent business requirement) can we avoid the need to visit so many data blocks for a single policy? There are two obvious options to consider here – we could create the table as an “index cluster” clustered on the policy number; in this way we pay a penalty each time we insert a new row for a policy because we have to find the correct block in the cluster but when we run a query against that policy we will only need to read one or two blocks (probably) to get all the data. Alternatively we could consider setting the table up as an index-organized table (IOT) – again we do more work inserting data into the correct leaf block in the index but again we reap the benefit as we query the data because all the rows we want are in the same two or three leaf blocks (and stored in the order we want them).
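
As a sketch of the IOT option – the primary key columns come from the description above, while everything else about the table is hypothetical:

create table tph (
        pol_num         number(10),
        pm_dt           date,
        snum            number(4),
        payment_amount  number(10,2),           -- hypothetical non-key column(s)
        constraint tph_pk primary key (pol_num, pm_dt, snum)
)
organization index
;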

Of course we are still subject to the same basic problem of the result set increasing in size as time passes, but at least we have managed to reduce (dramatically) the number of blocks we have to visit and the rate of growth of the number of blocks per query, thereby improving the scalability of the queries significantly.

Introducing new structures to an existing system is difficult, of course, and we may have to work out variations on this theme (like creating an index that includes all the table columns if we can’t switch to an IOT!). The key point is this, though: if we can look at our data and the critical queries and recognize that the volume of data we have to process (even if we don’t return it, as we did in this example) is always going to increase over time, then we need to consider ways of minimizing the volume of data, or improving the packing of data, so that the work we do doesn’t change (much) over time. Don’t just think ‘fast now’, think ‘will it still be fast later’.

 

July 7, 2017

OFE

Filed under: CBO,Oracle,Upgrades — Jonathan Lewis @ 1:14 pm BST Jul 7,2017

The title is a well-known shorthand for parameter optimizer_features_enable and it has been the topic of a recent blog post by Mike Dietrich in which he decries the practice of switching the parameter back to an older version on an upgrade (even though, as he points out, Oracle support has been known to recommend it and the manuals describe – though not with 100% accuracy – why you might do so).

I am one of the people who will suggest that on the upgrade a client should consider setting the optimizer_features_enable to the version just left behind as a strategy for getting to a newer version of the base code while minimising the threat of plan instability, so I’m going to play devil’s advocate in this case even though, as we shall see, I am nearly 100% in favour of Mike’s complaint.

The first point, of course, is full disclosure to the client.  Eventually they will have to set the parameter to the current database version; all they’re doing is trying to spread out the workload of addressing a perceived threat. Moreover, they are only minimising the threat, not eliminating it. Setting the parameter has the effect of changing the state of a long list of parameters and “fix controls” – but there’s no guarantee that it will reverse out all the code changes between the two versions. One hopes it won’t reverse out a bug-fix (though Mike quotes a MoS note where exactly that problem appears); more significantly it might not reverse out a clever code optimisation that (in a few unlucky cases) happens to make the SQL run more slowly even when a new transformation is not involved. What you’re hoping for when you set this parameter is that the number of places in your application where you get an unlucky change in performance is much smaller than it would be if you didn’t set the parameter.

The second point is that you really want to have the minimum impact possible while expending as little human effort as possible. To this end it’s better to think in terms of setting the parameter for specific users (via a logon trigger), or specific sessions (e.g. batch runs), or specific statements (through a hint or SQL Patch). It may take a couple of test runs to spot the critical classes of statements that point you at the right granularity of implementation, but the more of your SQL that runs at the newer optimizer level the better.
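
A couple of sketches of those narrower options (the version is just an example):

-- session level, e.g. from a logon trigger for one application account
alter session set optimizer_features_enable = '11.2.0.4';

-- statement level, through a hint (or the equivalent SQL Patch)
select  /*+ optimizer_features_enable('11.2.0.4') */
        count(*)
from    dual
;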

If you’re going to aim for minimum impact, though, and if you’ve got the time to do some broad-brush testing it’s worth going back to my comment that this parameter is a big switch for a number of parameters and fix controls. Perhaps you will be able to spot which new feature, or which fix control is the one thing that needs to be changed – in the short-term – for your system.  Again, statement level is preferable to session level, which is preferable to user level, which is preferable to system level.
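
And if testing does point at a single optimizer fix as the troublemaker, the change can be equally targeted – the bug number below is picked from the list at the end of this note purely as an example, and fix controls shouldn’t be changed without Oracle support’s agreement:

alter session set "_fix_control" = '9898249:off';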

The thought of adding a controlling parameter as a hint to a statement will probably have some people thinking about creating SQL baselines rather than adding hints to code – and if it’s 3rd party code then an SQL Baseline may be the necessary strategy. Bear in mind that a common advisory for upgrades is “create SQL Baselines for all your SQL first” – it wouldn’t have been me that said it, though!  So here’s something to consider in the light of the whole yes/no argument about optimizer_features_enable: what does a baseline look like? Here, taken from an 11g database, is the critical content of a baseline for “select user from dual”:


IGNORE_OPTIM_EMBEDDED_HINTS
ALL_ROWS
DB_VERSION('11.2.0.4')
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
OUTLINE_LEAF(@"SEL$1")

It’s a set of hints – including the optimizer_features_enable() hint. Using SQL Baselines to stabilise your code on the upgrade leaves you exposed (in principle) to exactly the problem in the MoS notes that Mike cited as his example of the parameter undoing a bug-fix and producing wrong results. That, by the way, is why I’m not worried by Mike’s example: if the parameter re-introduces a bug then you would have been running with the bug – or probably a workaround to the bug – anyway. (Unless, say, you upgraded from 11.2.0.1 to 11.2.0.4 and decided to set the parameter to 11.2.0.3.1 – but that’s not a strategy compatible with the idea of using the parameter for stability with minimum short-term change.)

The second MoS note that Mike cites is really the one that states – emphasis is mine – a realistic view of the parameter (though I’d view the restriction to Oracle Global Support as a legal cop-out rather than a pure technology requirement):

Modifying the OPTIMIZER_FEATURES_ENABLE parameter generally is strongly discouraged and should only be used as a short term measure at the suggestion of Oracle Global Support.

The follow-up comment is, to my mind, a bit hand-wavy:

By reducing the OPTIMIZER_FEATURES_ENABLE level, new optimizer features are disabled. This has serious potential for negatively affecting performance generally by eliminating the possibility of choosing better plans that are only available with features enabled within the higher revision levels.

Arguing against setting the parameter because of the potential for affecting performance negatively – when you’re setting it precisely so that nothing changes – is about as valid as arguing for setting it because of the potential for new features to affect performance negatively in the tiny percentage of plans where using them turns out to be a bad idea.

Bottom line: whether or not you set the parameter you’re likely to hit a few edge cases where performance suffers; the less time you have for proper testing in advance the more likely you are to feel the need to set the parameter – but if you start heading in that direction think about using the time you do have available to minimise the scope, or even getting down to the detail of which ACTUAL feature is the problem feature that needs to be disabled for your system.

Footnote

If you want to check which parameters and fix controls change as you set the optimizer_features_enable you could mess around with the dynamic performance views. Alternatively you could take advantage of the optimizer trace – it’s one of the easy things that the 10053 trace event offers (there’s a minimal sketch of enabling it after the excerpt below). Enable the trace, optimize a simple statement, then check the trace file for the bit about optimizer parameters – the section you need from a 12c trace will be as follows:


***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************

... Some 1,700 lines

***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************
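
A minimal sketch of enabling and disabling the event from SQL*Plus (the trace file appears in the session’s diagnostic trace directory):

alter session set events '10053 trace name context forever, level 1';

select 1 from dual;

alter session set events '10053 trace name context off';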

In my case I connected to SQL*Plus, enabled the trace and executed “select 1 from dual”; then I reconnected, set the trace again, set the optimizer_features_enable back to 11.2.0.4 (I was on 12.1.0.2 at the time) and executed “select 2 from dual”. Then I deleted everything but the relevant section from the two trace files. One of the joys of Unix is that you can then run commands like the following:

sdiff  -s  or32_ora_2098.trc  or32_ora_2110.trc  |  expand  >ofe_diff.txt

That’s “side by side comparison, showing only the differences, expand tab marks out to spaces”. Here’s the result (with one blank line inserted between the parameters and the fix controls):

                                                              > optimizer_features_enable           = 11.2.0.4
                                                              > _fix_control_key                    = -1750344682
optimizer_features_enable           = 12.1.0.2                <
_optimizer_undo_cost_change         = 12.1.0.2                | _optimizer_undo_cost_change         = 11.2.0.4
_fix_control_key                    = 0                       <
_optimizer_cube_join_enabled        = true                    | _optimizer_cube_join_enabled        = false
_optimizer_hybrid_fpwj_enabled      = true                    | _optimizer_hybrid_fpwj_enabled      = false
_px_replication_enabled             = true                    | _px_replication_enabled             = false
_optimizer_partial_join_eval        = true                    | _optimizer_partial_join_eval        = false
_px_concurrent                      = true                    | _px_concurrent                      = false
_px_object_sampling_enabled         = true                    | _px_object_sampling_enabled         = false
_optimizer_unnest_scalar_sq         = true                    | _optimizer_unnest_scalar_sq         = false
_px_filter_parallelized             = true                    | _px_filter_parallelized             = false
_px_filter_skew_handling            = true                    | _px_filter_skew_handling            = false
_optimizer_multi_table_outerjoin    = true                    | _optimizer_multi_table_outerjoin    = false
_px_groupby_pushdown                = force                   | _px_groupby_pushdown                = choose
_optimizer_ansi_join_lateral_enhance = true                   | _optimizer_ansi_join_lateral_enhance = false
_px_parallelize_expression          = true                    | _px_parallelize_expression          = false
_optimizer_ansi_rearchitecture      = true                    | _optimizer_ansi_rearchitecture      = false
_optimizer_gather_stats_on_load     = true                    | _optimizer_gather_stats_on_load     = false
_px_adaptive_dist_method            = choose                  | _px_adaptive_dist_method            = off
_optimizer_batch_table_access_by_rowid = true                 | _optimizer_batch_table_access_by_rowid = false
_px_wif_dfo_declumping              = choose                  | _px_wif_dfo_declumping              = off
_px_wif_extend_distribution_keys    = true                    | _px_wif_extend_distribution_keys    = false
_px_join_skew_handling              = true                    | _px_join_skew_handling              = false
_px_partial_rollup_pushdown         = adaptive                | _px_partial_rollup_pushdown         = off
_px_single_server_enabled           = true                    | _px_single_server_enabled           = false
_optimizer_dsdir_usage_control      = 126                     | _optimizer_dsdir_usage_control      = 0
_px_cpu_autodop_enabled             = true                    | _px_cpu_autodop_enabled             = false
_optimizer_use_gtt_session_stats    = true                    | _optimizer_use_gtt_session_stats    = false
_optimizer_adaptive_plans           = true                    | _optimizer_adaptive_plans           = false
_optimizer_proc_rate_level          = basic                   | _optimizer_proc_rate_level          = off
_adaptive_window_consolidator_enabled = true                  | _adaptive_window_consolidator_enabled = false
_optimizer_strans_adaptive_pruning  = true                    | _optimizer_strans_adaptive_pruning  = false
_optimizer_null_accepting_semijoin  = true                    | _optimizer_null_accepting_semijoin  = false
_optimizer_cluster_by_rowid         = true                    | _optimizer_cluster_by_rowid         = false
_optimizer_cluster_by_rowid_control = 129                     | _optimizer_cluster_by_rowid_control = 3
_distinct_agg_optimization_gsets    = choose                  | _distinct_agg_optimization_gsets    = off
_gby_vector_aggregation_enabled     = true                    | _gby_vector_aggregation_enabled     = false
_optimizer_vector_transformation    = true                    | _optimizer_vector_transformation    = false
_optimizer_aggr_groupby_elim        = true                    | _optimizer_aggr_groupby_elim        = false
_optimizer_reduce_groupby_key       = true                    | _optimizer_reduce_groupby_key       = false
_optimizer_cluster_by_rowid_batched = true                    | _optimizer_cluster_by_rowid_batched = false
_optimizer_inmemory_table_expansion = true                    | _optimizer_inmemory_table_expansion = false
_optimizer_inmemory_gen_pushable_preds = true                 | _optimizer_inmemory_gen_pushable_preds = false
_optimizer_inmemory_autodop         = true                    | _optimizer_inmemory_autodop         = false
_optimizer_inmemory_access_path     = true                    | _optimizer_inmemory_access_path     = false
_optimizer_inmemory_bloom_filter    = true                    | _optimizer_inmemory_bloom_filter    = false
_optimizer_nlj_hj_adaptive_join     = true                    | _optimizer_nlj_hj_adaptive_join     = false
_px_external_table_default_stats    = true                    | _px_external_table_default_stats    = false
_optimizer_inmemory_minmax_pruning  = true                    | _optimizer_inmemory_minmax_pruning  = false
_optimizer_inmemory_cluster_aware_dop = true                  | _optimizer_inmemory_cluster_aware_dop = false

    fix  9898249 = enabled                                    |     fix  9898249 = disabled
    fix 10004943 = enabled                                    |     fix 10004943 = disabled
    fix  9554026 = enabled                                    |     fix  9554026 = disabled
    fix  9593547 = enabled                                    |     fix  9593547 = disabled
    fix  9833381 = enabled                                    |     fix  9833381 = disabled
    fix 10106423 = enabled                                    |     fix 10106423 = disabled
    fix 10175079 = enabled                                    |     fix 10175079 = disabled
    fix 10236566 = enabled                                    |     fix 10236566 = disabled
    fix  9721228 = enabled                                    |     fix  9721228 = disabled
    fix  9929609 = enabled                                    |     fix  9929609 = disabled
    fix 10182672 = enabled                                    |     fix 10182672 = disabled
    fix  9832338 = enabled                                    |     fix  9832338 = disabled
    fix 11668189 = enabled                                    |     fix 11668189 = disabled
    fix 11940126 = enabled                                    |     fix 11940126 = disabled
    fix 12390139 = enabled                                    |     fix 12390139 = disabled
    fix 11744016 = enabled                                    |     fix 11744016 = disabled
    fix 10216738 = enabled                                    |     fix 10216738 = disabled
    fix 12563419 = enabled                                    |     fix 12563419 = disabled
    fix 12535474 = enabled                                    |     fix 12535474 = disabled
    fix 12561635 = enabled                                    |     fix 12561635 = disabled
    fix 12569245 = enabled                                    |     fix 12569245 = disabled
    fix 12569300 = enabled                                    |     fix 12569300 = disabled
    fix 12569316 = enabled                                    |     fix 12569316 = disabled
    fix 12569321 = enabled                                    |     fix 12569321 = disabled
    fix  9002958 = enabled                                    |     fix  9002958 = disabled
    fix 12810427 = enabled                                    |     fix 12810427 = disabled
    fix 12914055 = enabled                                    |     fix 12914055 = disabled
    fix 12978495 = enabled                                    |     fix 12978495 = disabled
    fix 13110511 = enabled                                    |     fix 13110511 = disabled
    fix 13345888 = enabled                                    |     fix 13345888 = disabled
    fix 13396096 = enabled                                    |     fix 13396096 = disabled
    fix 12999577 = enabled                                    |     fix 12999577 = disabled
    fix 12954320 = enabled                                    |     fix 12954320 = disabled
    fix 13036910 = enabled                                    |     fix 13036910 = disabled
    fix 12648629 = enabled                                    |     fix 12648629 = disabled
    fix 13704977 = enabled                                    |     fix 13704977 = disabled
    fix 11843466 = enabled                                    |     fix 11843466 = disabled
    fix 13909909 = enabled                                    |     fix 13909909 = disabled
    fix 12856200 = enabled                                    |     fix 12856200 = disabled
    fix  9852856 = enabled                                    |     fix  9852856 = disabled
    fix 14033181 = enabled                                    |     fix 14033181 = disabled
    fix 13836796 = enabled                                    |     fix 13836796 = disabled
    fix 13699643 = enabled                                    |     fix 13699643 = disabled
    fix 13735304 = enabled                                    |     fix 13735304 = disabled
    fix 14464068 = enabled                                    |     fix 14464068 = disabled
    fix 13448445 = enabled                                    |     fix 13448445 = disabled
    fix  9114915 = enabled                                    |     fix  9114915 = disabled
    fix 13109345 = enabled                                    |     fix 13109345 = disabled
    fix 14605040 = enabled                                    |     fix 14605040 = disabled
    fix 14633570 = enabled                                    |     fix 14633570 = disabled
    fix 13573073 = enabled                                    |     fix 13573073 = disabled
    fix 16237969 = enabled                                    |     fix 16237969 = disabled
    fix 13994546 = enabled                                    |     fix 13994546 = disabled
    fix 14750443 = enabled                                    |     fix 14750443 = disabled
    fix 14552075 = enabled                                    |     fix 14552075 = disabled
    fix 16324844 = enabled                                    |     fix 16324844 = disabled
    fix 13583529 = enabled                                    |     fix 13583529 = disabled
    fix 14565911 = enabled                                    |     fix 14565911 = disabled
    fix 16368002 = enabled                                    |     fix 16368002 = disabled
    fix 16077770 = enabled                                    |     fix 16077770 = disabled
    fix 11814337 = enabled                                    |     fix 11814337 = disabled
    fix 14764840 = enabled                                    |     fix 14764840 = disabled
    fix 16555865 = enabled                                    |     fix 16555865 = disabled
    fix 16625151 = enabled                                    |     fix 16625151 = disabled
    fix 16609749 = enabled                                    |     fix 16609749 = disabled
    fix 16751246 = enabled                                    |     fix 16751246 = disabled
    fix 16749025 = enabled                                    |     fix 16749025 = disabled
    fix 16750067 = enabled                                    |     fix 16750067 = disabled
    fix 15899648 = enabled                                    |     fix 15899648 = disabled
    fix 16690013 = enabled                                    |     fix 16690013 = disabled
    fix 16544878 = enabled                                    |     fix 16544878 = disabled
    fix 16725982 = enabled                                    |     fix 16725982 = disabled
    fix 14648222 = enabled                                    |     fix 14648222 = disabled
    fix 16507317 = enabled                                    |     fix 16507317 = disabled
    fix 16837274 = enabled                                    |     fix 16837274 = disabled
    fix 14085520 = enabled                                    |     fix 14085520 = disabled
    fix 16713081 = enabled                                    |     fix 16713081 = disabled
    fix 14703295 = enabled                                    |     fix 14703295 = disabled
    fix 16908409 = enabled                                    |     fix 16908409 = disabled
    fix 16212250 = enabled                                    |     fix 16212250 = disabled
    fix 17087729 = enabled                                    |     fix 17087729 = disabled
    fix 17088819 = enabled                                    |     fix 17088819 = disabled
    fix 13848786 = enabled                                    |     fix 13848786 = disabled
    fix 13522189 = enabled                                    |     fix 13522189 = disabled
    fix 16796185 = enabled                                    |     fix 16796185 = disabled
    fix 15950252 = enabled                                    |     fix 15950252 = disabled
    fix 16976121 = enabled                                    |     fix 16976121 = disabled
    fix 16582322 = enabled                                    |     fix 16582322 = disabled
    fix 16712213 = enabled                                    |     fix 16712213 = disabled
    fix 17382690 = enabled                                    |     fix 17382690 = disabled
    fix 14846352 = enabled                                    |     fix 14846352 = disabled
    fix 16516751 = enabled                                    |     fix 16516751 = disabled
    fix  8611462 = enabled                                    |     fix  8611462 = disabled
    fix 14062749 = enabled                                    |     fix 14062749 = disabled
    fix 16346018 = enabled                                    |     fix 16346018 = disabled
    fix 12977599 = enabled                                    |     fix 12977599 = disabled
    fix 14191778 = enabled                                    |     fix 14191778 = disabled
    fix 15939321 = enabled                                    |     fix 15939321 = disabled
    fix 17543180 = enabled                                    |     fix 17543180 = disabled
    fix 17301564 = enabled                                    |     fix 17301564 = disabled
    fix 12373708 = enabled                                    |     fix 12373708 = disabled
    fix 17397506 = enabled                                    |     fix 17397506 = disabled
    fix 14558315 = enabled                                    |     fix 14558315 = disabled
    fix 16615686 = enabled                                    |     fix 16615686 = disabled
    fix 16622801 = enabled                                    |     fix 16622801 = disabled
    fix 16954950 = enabled                                    |     fix 16954950 = disabled
    fix 17728161 = enabled                                    |     fix 17728161 = disabled
    fix 17760375 = enabled                                    |     fix 17760375 = disabled
    fix 17640863 = enabled                                    |     fix 17640863 = disabled
    fix 17716301 = enabled                                    |     fix 17716301 = disabled
    fix 17597748 = enabled                                    |     fix 17597748 = disabled
    fix 17303359 = enabled                                    |     fix 17303359 = disabled
    fix 16673868 = enabled                                    |     fix 16673868 = disabled
    fix 17800514 = enabled                                    |     fix 17800514 = disabled
    fix 14826303 = enabled                                    |     fix 14826303 = disabled
    fix 17663076 = enabled                                    |     fix 17663076 = disabled
    fix 17760755 = enabled                                    |     fix 17760755 = disabled
    fix 17997159 = enabled                                    |     fix 17997159 = disabled
    fix 14733442 = enabled                                    |     fix 14733442 = disabled
    fix 17781659 = enabled                                    |     fix 17781659 = disabled
    fix 17526569 = enabled                                    |     fix 17526569 = disabled
    fix 17760686 = enabled                                    |     fix 17760686 = disabled
    fix 17696414 = enabled                                    |     fix 17696414 = disabled
    fix 18116777 = enabled                                    |     fix 18116777 = disabled
    fix 16052625 = enabled                                    |     fix 16052625 = disabled
    fix 18091750 = enabled                                    |     fix 18091750 = disabled
    fix 17572606 = enabled                                    |     fix 17572606 = disabled
    fix 18196576 = enabled                                    |     fix 18196576 = disabled
    fix 17736165 = enabled                                    |     fix 17736165 = disabled
    fix 16434021 = enabled                                    |     fix 16434021 = disabled
    fix 18035463 = enabled                                    |     fix 18035463 = disabled
    fix 18011820 = enabled                                    |     fix 18011820 = disabled
    fix 16405740 = enabled                                    |     fix 16405740 = disabled
    fix 18365267 = enabled                                    |     fix 18365267 = disabled
    fix 17863980 = enabled                                    |     fix 17863980 = disabled
    fix 18398980 = enabled                                    |     fix 18398980 = disabled
    fix 18304693 = enabled                                    |     fix 18304693 = disabled
    fix 18508675 = enabled                                    |     fix 18508675 = disabled
    fix 18456944 = enabled                                    |     fix 18456944 = disabled
    fix 17908541 = enabled                                    |     fix 17908541 = disabled
    fix 18467455 = enabled                                    |     fix 18467455 = disabled
    fix 16033838 = enabled                                    |     fix 16033838 = disabled
    fix 16809786 = enabled                                    |     fix 16809786 = disabled
    fix 18425876 = enabled                                    |     fix 18425876 = disabled
    fix 18461984 = enabled                                    |     fix 18461984 = disabled
    fix 17023040 = enabled                                    |     fix 17023040 = disabled
    fix 14776289 = enabled                                    |     fix 14776289 = disabled

That’s 50 parameter differences, and 147 fix controls. Quite a lot of fixes between the two versions.

If you’re coming to the upgrade a couple of years late then you might want to consider using the new version number and the list of parameters you generate as the criteria for a search for bugs in MoS. You might even find that simply running your eye down the list of parameters gives you a clue about a type of execution plan that you’ve never seen in the older version.
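
As a starting point, the fix controls (with their bug numbers and short descriptions) are visible in v$system_fix_control, so a simple query gives you a list of bug numbers to feed into an MoS search. Here’s a sketch of the sort of thing I mean – check the column list against your own version before relying on it:

select
        bugno, value, optimizer_feature_enable, description
from
        v$system_fix_control
order by
        optimizer_feature_enable, bugno
;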

 

June 14, 2017

Unpivot

Filed under: Execution plans,Oracle,Performance — Jonathan Lewis @ 3:46 pm BST Jun 14,2017

An interesting observation appeared recently as a side-channel on a question on the OTN database forum – how does Oracle execute an unpivot() operation? Here’s an example of such a query:

rem
rem     Script:         unpivot_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem

create table t1( id, col1, col2, col3, col4, col5, padding )
cache
pctfree 95 pctused 5
-- compress for query low
as
select
        1, 100 , 200 , 300 , 400 , 500,rpad('x',100)
from
        all_objects
where
        rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

select 
        /*+ gather_plan_statistics find this */
        id, max(val) as high_val
from
        t1
unpivot include nulls (
        val for source in (col1, col2, col3, col4, col5)
)
group by id
order by id
; 

I’ve created a table with 50,000 rows (all_objects is quite large in 12.1.0.2 and 12.2.0.1), but with lots of free space per block so that I get three rows per block for a total of roughly 16,667 blocks which is going to make it fairly easy to spot any interesting session statistics. Then I’ve used an unpivot() call that has the effect of turning one row with five columns into five rows with one column.

Here’s the basic execution plan for the query (as pulled from memory):


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        | 10695 (100)|      1 |00:00:00.18 |   16671 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |      1 | 10695   (2)|      1 |00:00:00.18 |   16671 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K| 10653   (2)|    250K|00:00:01.41 |   16671 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.52 |   16671 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  2131   (2)|  50000 |00:00:00.12 |   16671 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

According to the plan Oracle has done one full tablescan on the data (Starts = 1), but the unpivot operation has used this to produce 250,000 rows of output from 50,000 rows of input (see the A-Rows column). Note, however, that the cost of the view operation is 5 times the cost of the tablescan but, on the other hand, the number of buffers visited is 16,671 (which matches the size of the table). So have we done 5 tablescans with a union all, or have we done one tablescan?

The next step is to look at the 10053 (optimizer) trace file, specifically for the “unparsed query” which (despite the final plan table showing the plan we’ve just seen above) looked like this – after applying a few purely cosmetic changes:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "from$_subquery$_002"."ID" "ID",MAX("from$_subquery$_002"."VAL") "HIGH_VAL" 
FROM    ( 
                   (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL1' "SOURCE","T1"."COL1" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL2' "SOURCE","T1"."COL2" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL3' "SOURCE","T1"."COL3" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL4' "SOURCE","T1"."COL4" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL5' "SOURCE","T1"."COL5" "VAL" FROM "TEST_USER"."T1" "T1")
        ) "from$_subquery$_002" 
GROUP BY "from$_subquery$_002"."ID" 
ORDER BY "from$_subquery$_002"."ID"
;

And then there’s the outline (which I can see in the 10053 trace, or in the plan pulled from memory by a call to dbms_xplan.display_cursor()).


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$6")
      NO_ACCESS(@"SEL$6" "from$_subquery$_002"@"SEL$6")
      FULL(@"SEL$5" "T1"@"SEL$5")
      FULL(@"SEL$4" "T1"@"SEL$4")
      FULL(@"SEL$3" "T1"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

And if I also pull the alias (query block and fully qualified table name) information from memory:


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$6
   2 - SET$1 / from$_subquery$_002@SEL$6
   3 - SET$1 
   4 - SEL$5 / T1@SEL$5

Then there’s the alternative (though not necessarily correct) way to find an “unparsed” version of the query – dbms_sql2.expand_sql_text() in 11g, dbms_utility.expand_sql_text() in 12c:


variable m_sql_out clob

declare
        m_sql_in    clob :=
                'select /*+ gather_plan_statistics find this */ id, max(val) as high_val
                from    t1
                        unpivot include nulls ( val for source in (col1, col2, col3, col4, col5) )
                group by id
                order by id
                '
        ;
begin

--      dbms_sql2.expand_sql_text(        -- 11g
        dbms_utility.expand_sql_text(     -- 12c
                m_sql_in,
                :m_sql_out
        );

end;
/

set long 20000
print m_sql_out

M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  ( (SELECT "A3"."ID" "ID",
"A3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1"
 "A3") UNION ALL  (SELECT "A4"."ID" "ID","A4"."PADDING" "PADDING",'COL2' "SOURCE
","A4"."COL2" "VAL" FROM "TEST_USER"."T1" "A4") UNION ALL  (SELECT "A5"."ID" "ID
","A5"."PADDING" "PADDING",'COL3' "SOURCE","A5"."COL3" "VAL" FROM "TEST_USER"."T
1" "A5") UNION ALL  (SELECT "A6"."ID" "ID","A6"."PADDING" "PADDING",'COL4' "SOUR
CE","A6"."COL4" "VAL" FROM "TEST_USER"."T1" "A6") UNION ALL  (SELECT "A7"."ID" "
ID","A7"."PADDING" "PADDING",'COL5' "SOURCE","A7"."COL5" "VAL" FROM "TEST_USER".
"T1" "A7")) "A1" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

So at some level the optimizer does think it’s doing 5 tablescans – and the t1 reported in the unpivot plan is from the fifth (i.e. last) copy of the table in the union all. But the execution statistics and the session statistics say I’ve done just one tablescan – and this persists even when I make the table so large that it (a) won’t fit in the buffer cache, and/or (b) uses direct path reads, and/or (c) runs under Exadata with Hybrid columnar compression.
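
For reference, the session-statistic check is just a before/after snapshot of the “table scan” statistics in v$mystat; here’s a sketch of the snapshot query (the list of statistic names is illustrative rather than exhaustive):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'table scans (short tables)',
                'table scans (long tables)',
                'table scan blocks gotten',
                'table scan rows gotten'
        )
order by
        sn.name
;

Run this before and after the unpivot query and take the difference – a single tablescan of t1 should account for roughly 16,667 blocks gotten, while five genuinely separate tablescans would show roughly five times that.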

So what’s (probably) happening ?

I think Oracle has a code path that says it’s doing a union all of tablescans (at least for this example) but also tells it that the union all is there as an expansion of an unpivot. So (sticking with an example that does a tablescan into the buffer cache) Oracle reads the number of blocks dictated by the current multiblock read count into the cache, pins that one batch of blocks, scans the batch (or, possibly, each block in turn) 5 times (or as required), unpins the batch and then reads the next batch. In effect the session does five tablescans but does them in a way that lets you see only one tablescan in the statistics.

Footnote

There was a change in the results when I tested this on 12.2.0.1; the unparsed query reported only a two-part union all subquery, and the table alias information in the plan identified the referenced table as the copy of the table from the second (i.e. still last) subquery in the union all. More significantly the cost of the VIEW operation was a close match to the cost of a single tablescan, rather than being a multiple thereof:


select * from table(dbms_xplan.display_cursor('1k077bzp0t6mn',null,'outline alias cost allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |  1117 (100)|      1 |00:00:00.13 |    8350 |   8334 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |    250K|  1117   (6)|      1 |00:00:00.13 |    8350 |   8334 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K|  1076   (2)|    250K|00:00:00.13 |    8350 |   8334 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.09 |    8350 |   8334 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  1074   (2)|  50000 |00:00:00.07 |    8350 |   8334 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$3
   2 - SET$1 / from$_subquery$_002@SEL$3
   3 - SET$1
   4 - SEL$2 / T1@SEL$2

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$3")
      NO_ACCESS(@"SEL$3" "from$_subquery$_002"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      END_OUTLINE_DATA
  */


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  (SELECT "A3"."ID" "ID","A
3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1" "
A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL2' "SOURCE","A3
"."COL2" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."
PADDING" "PADDING",'COL3' "SOURCE","A3"."COL3" "VAL" FROM "TEST_USER"."T1" "A3"
UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL4' "SOURCE","A3"."C
OL4" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADD
ING" "PADDING",'COL5' "SOURCE","A3"."COL5" "VAL" FROM "TEST_USER"."T1" "A3") "A1
" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

It used to be hard enough working out what the run-time engine was doing when the optimizer and the run-time engine claimed that they were working on the same (transformed version of the) SQL; but now we can get the impression that the optimizer is directing the run-time engine to execute a plan that couldn’t possibly match the (unparsed) query that the optimizer had been considering.

June 12, 2017

dbms_sqldiag

Filed under: 12c,Execution plans,Hints,Oracle,Upgrades — Jonathan Lewis @ 12:48 pm BST Jun 12,2017

If you’re familiar with SQL Profiles and SQL Baselines you may also know about SQL Patches – a feature that allows you to construct hints that you can attach to SQL statements at run-time without changing the code. Oracle 12c Release 2 introduces a couple of important changes to this feature:

  • It’s now official – the feature has been copied from package dbms_sqldiag_internal to package dbms_sqldiag.
  • The limitation of 500 characters has been removed from the hint text – it’s now a CLOB column.

H/T to Nigel Bayliss for including this detail in his presentation to the UKOUG last week, and pointing out that it’s also available for Standard Edition.

There are a couple of other little changes, as you can see from the two extracts below of the 12.2 declarations of dbms_sqldiag and dbms_sqldiag_internal:


dbms_sqldiag
------------
FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

dbms_sqldiag_internal
---------------------
FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

  • The function name changes from i_create_patch to create_sql_patch when it is exposed in dbms_sqldiag.
  • There are two versions of the function – one that requires you to supply the exact SQL text, and a new version that allows you to supply an SQL ID.
  • The internal function also adds a creator to the existing parameter list – and it doesn’t have a default so if you’ve already got some code to use the internal version it’s not going to work on an upgrade to 12.2 until you change it.
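
Putting the 12.2 version to work takes just one function call. Here’s a sketch using the sql_id overload – the sql_id, hint text, and patch name are purely illustrative placeholders, not values from any of my test cases:

declare
        v_patch_name    varchar2(128);
begin
        v_patch_name := dbms_sqldiag.create_sql_patch(
                sql_id      => 'aaaaaaaaaaaaa',         -- hypothetical sql_id
                hint_text   => 'full(@"SEL$1" "T1"@"SEL$1")',
                name        => 'demo_patch',
                description => 'Illustration only - force a tablescan of t1'
        );
        dbms_output.put_line('Patch created: ' || v_patch_name);
end;
/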

I was prompted to write this note by a tweet asking me if there’s any SQL available to see the contents of an SQL Profile in 11g and 12c. (I published some simple code several years ago for 10g – before accepting in the body of the blog, after accepting in the linked comment – but Oracle changed the base tables in 11g.) The answer is yes, probably on the Internet somewhere, but here’s some code I wrote a couple of years ago to report profiles in the more recent versions of Oracle:

rem
rem     sql_profile_baseline_11g.sql
rem     J.P.Lewis
rem     July 2010
rem

set pagesize 60
set linesize 132
set trimspool on

column hint format a70 wrap word
column signature format 999,999,999,999,999,999,999

break on signature skip 1 on opt_type skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                'Other'
        )       opt_type,
        prf.plan_id,
        extractvalue(value(tab),'.')    hint
from
        (
        select
                /*+ no_eliminate_oby */
                *
        from
                sqlobj$data
        where
                comp_data is not null
        order by
                signature, obj_type, plan_id
        )       prf,
        table(
                xmlsequence(
                        extract(xmltype(prf.comp_data),'/outline_data/hint')
                )
        )       tab
;

This will report the hints associated with SQL Baselines, SQL Profiles, and SQL Patches – all three store the data in the same base table. As a minor variation I also have a query that will report a named profile/baseline/patch, but this requires a join to the sqlobj$ table. As you can see from the substitution variable near the end of the text, the script will prompt you for an object name.


set pagesize 60
set linesize 180
set trimspool on

column  plan_name format a32
column  signature format 999,999,999,999,999,999,999
column  category  format a10
column  hint format a70 wrap word

break on plan_name skip 1 on signature skip 1 on opt_type skip 1 on category skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.plan_name,
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                  'Other'
        )       opt_type,
        prf.category,
        prf.plan_id,
        extractvalue(value(hnt),'.') hint
from
        (
        select
                /*+ no_eliminate_oby */
                so.name         plan_name,
                so.signature,
                so.category,
                so.obj_type,
                so.plan_id,
                sod.comp_data
                from
                        sqlobj$         so,
                        sqlobj$data     sod
                where
                        so.name = '&m_plan_name'
                and     sod.signature = so.signature
                and     sod.category = so.category
                and     sod.obj_type = so.obj_type
                and     sod.plan_id = so.plan_id
                order by
                        signature, obj_type, plan_id
        )       prf,
        table (
                select
                        xmlsequence(
                                extract(xmltype(prf.comp_data),'/outline_data/hint')
                        )
                from
                        dual
        )       hnt
;

Lagniappe:

One of the enhancements that appeared in 12c for SQL Baselines was that the plan the baseline was supposed to produce was stored in the database so that the DBA could see what the baseline had been producing before Oracle stopped using it (not, as you might first assume, so that Oracle could check that the baseline would still reproduce the expected plan before applying it). (Currently Oracle stores the plan’s hash value, and stops using the baseline if it starts to produce a different hash value. Storing the plan as well gives the DBA a chance of working out how to reproduce the correct plan and create a new baseline to get to it.)

These plans (also generated for Profiles and Patches) are stored in the table sqlobj$plan, and the dbms_xplan package has been enhanced with three new functions to report them:


FUNCTION DISPLAY_SQL_PATCH_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PLAN_BASELINE RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_HANDLE                     VARCHAR2                IN     DEFAULT
 PLAN_NAME                      VARCHAR2                IN     DEFAULT
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PROFILE_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

e.g.
SQL> select * from table(dbms_xplan.display_sql_profile_plan('SYS_SQLPROF_015c9bd3bceb0000'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL text: select        t1.id, t2.id from       t1, t2 where    t1.id between 10000 and
          20000 and     t2.n1 = t1.n1 and       t2.n1 = t2.v2
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL Profile Name: SYS_SQLPROF_015c9bd3bceb0000
Status:           ENABLED
Plan rows:        From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3683239666

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 10501 |   287K|   248   (4)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |     0   (0)|          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10002 | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED   |          | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   4 |     PX RECEIVE          |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|   5 |      PX SEND HASH       | :TQ10000 | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       PX BLOCK ITERATOR |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL| T1       | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |     PX RECEIVE          |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   9 |      PX SEND HASH       | :TQ10001 |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  10 |       PX BLOCK ITERATOR |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWC |            |
|* 11 |        TABLE ACCESS FULL| T2       |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."N1"="T1"."N1")
   7 - filter("T1"."ID"<=20000 AND "T1"."ID">=10000)
  11 - filter("T2"."N1"=TO_NUMBER("T2"."V2"))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Disclaimer – I’ve checked only the SQL_PROFILE function call on 12.2, after creating a profile to check that my old 11g report still worked in 12c.

Update Aug 2017

I have just rediscovered a note I made (though I don’t have a reference to the source) that Patch 17203284 backports the visibility of create_sql_patch to dbms_sqldiag in 12.1.0.2. The description for the patch is: Enhancements for dbms_sqldiag_internal.i_create_patch but the “Bugs resolved by this patch” link on the patch details screen leads to the “Requested bug could not be displayed” page.

 

June 9, 2017

12.2 Partitions

Filed under: 12c,Indexing,Oracle,Partitioning,Upgrades — Jonathan Lewis @ 10:13 am BST Jun 9,2017

At the end of my presentation to the UKOUG Database SIG yesterday I summed up (most of) the points I’d made with a slide making the claim:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

 

Last night I decided I ought to demonstrate the claim – so here’s a little code, first creating a simple heap table:


rem
rem     Script:         122_features.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1(
        date_start      not null,
        date_end        not null,
        id              not null,
        client_id,
        resort_code,
        uk_flag,
        v1,
        padding,
        constraint t1_range_ck check ((date_end - date_start) in (7, 14, 21))
)
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(sysdate,'yyyy') + 7 *  mod(rownum, 8)                                     date_start,
        trunc(sysdate,'yyyy') + 7 * (mod(rownum, 8) + trunc(dbms_random.value(1,4)))    date_end,
        rownum                                          id,
        trunc(dbms_random.value(1e5,2e5))               client_id,
        trunc(dbms_random.value(1e4,2e4))               resort_code,
        case when mod(rownum,275) = 0 then 1 end        uk_flag,
        lpad(rownum,10,'0')                             v1,
        lpad('x',100,'x')                               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > "GT" inserted to avoid WordPress formatting issue
;

create index t1_client_idx on t1(client_id);
create index t1_resort_idx on t1(resort_code);
create index t1_ukflag_idx on t1(uk_flag);

alter table t1 add constraint t1_pk primary key(id);

I’ve got a table which models a travel company that arranges holidays lasting one, two, or three weeks, all of which (for convenience) start on the same day of the week. So I generate a start and end date for each row, making sure the start date is a multiple of seven days from a base date while the end date is 7, 14, or 21 days later. I’ve got a few indexes on the data, and a primary key constraint. There’s a special flag column on the table for holidays in the UK, which make up a small percentage of the holidays booked.

Eventually, when the data gets too big, I decide that I want to partition this data, and the obvious idea that springs to mind is to partition it so that holidays with the same start date and duration are all in the same partition, with each partition holding a single start/duration combination.

I’ve also decided that I’m going to make old data read-only, and I’m not interested in the UK holidays once they’ve gone into history so I’m going to get rid of some of them.

The index protecting the primary key will have to be global since it won’t contain the partition key; since the index on uk_flag covers a small amount of data I’m going to keep that global as well, but I want the other two indexes to be local – except that, for the older data, I’m not really interested in keeping the index on client_id.

And I don’t want to stop the application while I’m restructuring the data.

So here’s my one SQL statement:


alter table t1 modify 
partition by list (date_start, date_end) automatic (
        partition p11 values (to_date('01-Jan-2017'),to_date('08-Jan-2017')) indexing off read only,
        partition p12 values (to_date('01-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p13 values (to_date('01-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p21 values (to_date('08-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p22 values (to_date('08-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p23 values (to_date('08-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p31 values (to_date('15-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p32 values (to_date('15-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p33 values (to_date('15-Jan-2017'),to_date('05-Feb-2017')) indexing off read only
)
including rows where uk_flag is null or (date_start > to_date('01-feb-2017','dd-mon-yyyy'))
online
update indexes (
        t1_client_idx local indexing partial,
        t1_resort_idx local,
        t1_ukflag_idx indexing partial
)
;

Key Points

  • partition by list (date_start, date_end) — partitioned by a multi-column list
  • automatic — if data arrives for which there is no existing partition a new one will be created
  • indexing off — some of my partitions (the pre-defined (oldest) ones) will be subject to partial indexing
  • read only — some of my partitions (the pre-defined (oldest) ones) will be made read only
  • including rows where — some of my rows will disappear during copying [1]
  • online — Oracle will be journalling changes to the data while I copy, and will apply the journal at the end
  • update indexes – specify some details about indexes [2]
  • local — some of the rebuilt indexes will be local
  • indexing partial — some of the rebuilt indexes will not hold data (viz: for the partitions declared “indexing off”)

I’ve footnoted a couple of the entries:

[1] – the copy is done read-consistently, so data inserted while the copy takes place will still appear in the final table, even if it looks as if it should have failed the including rows clause.

[2] – indexes which include the partition key will automatically be created as local indexes (and you can declare them here as global, or globally partitioned, if you want to). The manual has an error on this point; it suggests that prefixed indexes will be created as local indexes but then defines “prefixed” to mean “contains the partition key” rather than the usual “starts with the partition key”.

Job done – except for the exhaustive tests to confirm that it’s been done correctly, the load test to see how it behaves when lots of new holidays are being booked and current ones modified, and a little bit of clearing up of “surprise” partitions that shouldn’t be there and changing some of the automatically generated table partitions to “indexing off” (if and when necessary).
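
That last clean-up step is just a couple of one-liners per partition; here’s a sketch, assuming (hypothetically) that SYS_P1528 is one of the automatically generated partitions that has become old enough to freeze:

alter table t1 modify partition sys_p1528 indexing off;
alter table t1 modify partition sys_p1528 read only;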

Here are a few queries – with results – showing the effects this one statement had:


select count(*) from t1;

/*
  COUNT(*)
----------
     99773

-- some rows (old UK) have disappeared from the original 100,000
*/


select
        index_name, partitioned, status, leaf_blocks, num_rows , indexing, orphaned_entries
from
        user_indexes
where   table_name = 'T1'
order by
        partitioned, index_name
;

/*
INDEX_NAME           PAR STATUS   LEAF_BLOCKS   NUM_ROWS INDEXIN ORP
-------------------- --- -------- ----------- ---------- ------- ---
T1_PK                NO  VALID            263      99773 FULL    NO
T1_UKFLAG_IDX        NO  VALID              1        136 PARTIAL NO
T1_CLIENT_IDX        YES N/A              149      62409 PARTIAL NO
T1_RESORT_IDX        YES N/A              239      99773 FULL    NO

-- Indexes: Local or global, full or partial.
*/

select
        segment_type, segment_name, count(*)
from
        user_segments
group by
        segment_type, segment_name
order by
        segment_type desc, segment_name
;

/*
SEGMENT_TYPE       SEGMENT_NAME                COUNT(*)
------------------ ------------------------- ----------
TABLE PARTITION    T1                                24
INDEX PARTITION    T1_CLIENT_IDX                     15
INDEX PARTITION    T1_RESORT_IDX                     24
INDEX              T1_PK                              1
INDEX              T1_UKFLAG_IDX                      1

-- One local index has fewer segments than the other
*/

set linesize 180
set trimspool on

column high_value format a85
break on index_name skip 1
set pagesize 200

select
        index_name, status, leaf_blocks, num_rows, partition_name, high_value
from
        user_ind_partitions
where
        index_name = 'T1_CLIENT_IDX'
--      index_name like 'T1%'
order by
        index_name, partition_position
;

/*
INDEX_NAME           STATUS   LEAF_BLOCKS   NUM_ROWS PARTITION_NAME         HIGH_VALUE
-------------------- -------- ----------- ---------- ---------------------- -------------------------------------------------------------------------------------
T1_CLIENT_IDX        UNUSABLE           0          0 P11                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P12                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P13                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P21                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P22                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P23                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P31                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P32                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P33                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     USABLE            10       4126 SYS_P1528              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4198 SYS_P1529              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4211 SYS_P1530              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4214 SYS_P1531              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4195 SYS_P1532              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1533              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE             9       4027 SYS_P1534              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4217 SYS_P1535              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4167 SYS_P1536              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4230 SYS_P1537              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1538              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4069 SYS_P1539              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4215 SYS_P1540              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4138 SYS_P1541              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4176 SYS_P1542              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )


*/

I’ve limited the index partition output to the index with partial indexing enabled to show that it’s the pre-defined partitions that are marked as unusable and, as you can infer from the segment summary, those unusable index partitions don’t have any segment space allocated to them.
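
If you want a quick cross-check at the table level the indexing and read-only attributes are easy to report – a sketch, not part of the original output, and note that the read_only column of user_tab_partitions appears only in 12.2:

select
        partition_name, indexing, read_only, num_rows
from
        user_tab_partitions
where
        table_name = 'T1'
order by
        partition_position
;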

Stress tests are left to the interested reader.

June 1, 2017

Histogram Upgrade – 2

Filed under: 12c,Histograms,Oracle,Statistics — Jonathan Lewis @ 6:00 pm BST Jun 1,2017

While reading a blog post by Maria Colgan a couple of weeks ago I came across an observation about histograms that I’d not noticed before; worse still, it was a feature that seemed to make some “damage-limitation” advice I’d been giving for years a really bad idea! The threat appeared in these paragraphs:

Setting SIZE REPEAT ensures a histogram will only be created for any column that already has one. If the table is a partitioned table, repeat ensures a histogram will be created for a column that already has one on the global level.

What’s the down side to doing this?

The current number of buckets used in each histogram becomes the limit on the maximum number of buckets used for any histogram created in the future.

Unfortunately I’ve been saying for a very long time that you have to be very careful with histograms, and should probably create them through PL/SQL code; but if you have some frequency histograms that you’re sure are going to be well-behaved then using “for all columns size repeat” to gather the histogram is probably okay. After making the claim above, though, Maria’s posting went on to demonstrate its truth – and the demonstration showed some highly undesirable consequences.
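
For clarity, the table-level gather I’m talking about is the one below (a sketch – the table name is irrelevant, it’s the method_opt that matters):

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 't1',
                method_opt => 'for all columns size repeat'
        );
end;
/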

So imagine this: you create a frequency histogram which happens to produce 26 buckets on a particular column; from then on every time you run the gather with size repeat Oracle tries to generate 26 buckets. One day the data looks a little different and, temporarily, there are only 25 distinct values, so on the next gather you get just 25 buckets – which means that when the “missing” value re-appears 12c will give you a Top-N histogram or even a hybrid histogram (11g would have to give you a height-balanced histogram if it noticed all 26 values). It is not safe to use size repeat if the number of distinct values that actually exist can vary from day to day.

I have to say that I was fairly shocked that I’d not come across this threat before – so obviously I created a simple model to check how nasty things could get. I had a copy of 11.2.0.4 handy and created a couple of tables cloning the data from all_objects because that’s got a couple of columns that are good for producing frequency histograms.


rem     Script:         histogram_repeat.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017

drop table t2;
drop table t1;

create table t1 as select * from all_objects;
create table t2 as select * from t1;

delete from t1 where object_type = 'EDITION';
delete from t1 where object_type = 'EVALUATION CONTEXT';
commit;

pause ================  Baseline =======================================

select  count(distinct object_type), count(distinct owner) from t1;

execute dbms_stats.gather_table_stats(user,'t1',method_opt =>'for columns object_type owner')

select  column_name, count(*)
from    user_tab_histograms
where   table_name = 'T1'
and     column_name in ( 'OBJECT_TYPE','OWNER')
group by column_name
order by column_name
;

select  column_name, num_buckets, histogram
from    user_tab_columns
where   table_name = 'T1'
and     column_name in ( 'OBJECT_TYPE','OWNER')
order by column_name
;

insert into t1 select * from t2 where object_type = 'EDITION';
insert into t1 select * from t2 where object_type = 'EVALUATION CONTEXT';
commit;

After creating the data I’ve deleted a few rows from t1, reported the number of distinct values in t1 for owner and object_type, then gathered stats on just those two columns using the default size. I’ve then reported the number of histogram buckets in two ways, by counting them in user_tab_histograms and by reporting them directly (with histogram type) from user_tab_columns. Then I’ve finished off by re-inserting (copying from t2) the rows I previously deleted, giving me a couple more object_type values in the table. Here are the results of the queries:


================  Baseline =======================================

COUNT(DISTINCTOBJECT_TYPE) COUNT(DISTINCTOWNER)
-------------------------- --------------------
                        23                   11

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  17
OWNER                         7

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  17 FREQUENCY
OWNER                         7 FREQUENCY

I’m running on 11.2.0.4 – and I have two frequency histograms that have missed a few of the distinct values. But that’s because on the default settings 11g uses sampling (typically about 5,500 rows for smaller data sets) when creating histograms. So, if the claim about size repeat were correct, re-running the gather shouldn’t allow the number of buckets to grow. Here’s what I got when I re-ran the gather (with size repeat) and the two queries a further three times:


method_opt =>'for columns object_type size repeat owner size repeat'

================  Repeat 1 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  16
OWNER                         9

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  16 FREQUENCY
OWNER                         9 FREQUENCY
================  Repeat 2 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  18
OWNER                         8

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  18 FREQUENCY
OWNER                         8 FREQUENCY
================  Repeat 3 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  13
OWNER                         9

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  13 FREQUENCY
OWNER                         9 FREQUENCY

On the first repeat I got even fewer buckets; but on the second repeat the number of buckets bounced back up and even exceeded the original count; then on the third repeat the number of buckets dropped significantly. If you run the test your results will probably vary, but that’s the effect of the random selection of rows used to generate the histogram. Key point, though: the number of buckets generated by the gather is not limited by the current number of buckets.

But…

What happens with 12.1.0.2? Here are the results. Remember I deleted two sets of object_type rows before I gathered the first set of stats, then put them back in before doing the repeat gathers. (The number of distinct object_types in 12c is larger than I had in 11g.)


================  Baseline =======================================

COUNT(DISTINCTOBJECT_TYPE) COUNT(DISTINCTOWNER)
-------------------------- --------------------
                        27                   25

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 1 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 2 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 3 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

The number of distinct values for object_type was initially 27; after gathering stats the first time I added back the two deleted object_type values, but the subsequent gathers stuck with 27 buckets rather than extending to 29 – so the histogram changed from frequency to Top-N. If you check Maria’s blog again you’ll see that this can make a big difference, particularly if the two new values happen to be the lowest and highest values for the column.

The number of buckets on a REPEAT is fixed by the number of existing buckets in 12c. That, to me, is a major change in behaviour and one you’ll have to watch out for on the upgrade. In 11g if the number of actual values stored dropped briefly the situation was self-correcting; if some new values were introduced the situation was self-correcting – although in both cases the histogram wasn’t necessarily telling the truth the way you’d like it. In 12c the situation doesn’t self-correct, and may introduce a massive change in the arithmetic (as shown in Maria’s example).

The big difference, of course, is that 12c is gathering on a 100% sample using the variation of the approximate_ndv mechanism – so it will always find the right number of values if a frequency histogram is appropriate: presumably this is what was supposed to make it okay to reproduce the number of buckets previously used. In 11g with its small sample size the number of buckets created couldn’t be guaranteed to match the number of distinct values, so I guess the code in 11g wasn’t written to be so rigorous in its assumption about the number of buckets to use next time.

tl;dr

When you upgrade from 11g to 12c think very carefully about whether or not you can still use a “table-level” size repeat to gather histograms – the upgrade may force you to identify specifically the columns that need histograms so that you can name them, with an explicit (large enough) size, in the gather command.
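
For example, instead of relying on a “table-level” size repeat you might gather with something like the following – a sketch only, assuming the test table is called t1 and using the object_type and owner columns from the example above:

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1 for columns object_type size 254 for columns owner size 254'
        );
end;
/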

 

May 25, 2017

Parallelism

Filed under: 12c,CBO,Hints,Ignoring Hints,Oracle — Jonathan Lewis @ 3:48 pm BST May 25,2017

Headline – if you don’t want to read the note – the /*+ parallel(N) */ hint doesn’t mean a query will use parallel execution, even if there are enough parallel execution server processes to make it possible. The parallel(N) hint tells the optimizer to consider the cost of using parallel execution for each path that it examines, but ultimately the optimizer will still take the lowest cost path (bar the odd few special cases) and that path could turn out to be a serial path.

The likelihood of parallelism appearing for a given query changes across versions of Oracle, so you can be fooled into thinking you’re seeing bugs as you test new versions, but it’s (almost certainly) the same old rule being applied in different circumstances. Here’s an example – which I’ll start off on 11.2.0.4:


create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create index t1_i1 on t1(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

set autotrace traceonly explain

select
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) */
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) full(t1) */
        count(v1)
from    t1
where   id = 10
;

set autotrace off

I haven’t declared the index to be unique, but it clearly could be; and it’s obvious that with 1M rows and about 120MB of table a parallel full scan is probably a bad idea for acquiring one row (even if you’re running Exadata!). So what do we get for the three plans – I’ll skip the predicate sections – when we want to collect one row?


Base plan - unhinted
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4)
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4) and full(t1)
----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 |    16 |   606   (2)| 00:00:02 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |    16 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |    16 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| T1       |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

In 11.2.0.4 the optimizer did consider the parallel hint when it appeared on its own – but it compared the parallel(4) full scan cost of 606 with the serial index cost of 4 and chose the indexed access path. This is not a case of ignoring the hint, it’s an example of being fooled if you don’t know how the hint is really supposed to work.

But here’s an interesting change that appeared in 12.2 – this time just the plan with the parallel(4) hint on its own:


---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |          |     1 |    16 |     4   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE                         |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR                        |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)                  | :TQ10001 |     1 |    16 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                      |          |     1 |    16 |            |          |  Q1,01 | PCWP |            |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |    16 |     4   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE                        |          |     1 |       |     3   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   7 |        PX SEND HASH (BLOCK ADDRESS)     | :TQ10000 |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | S->P | HASH (BLOCK|
|   8 |         PX SELECTOR                     |          |       |       |            |          |  Q1,00 | SCWC |            |
|*  9 |          INDEX RANGE SCAN               | T1_I1    |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | SCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

You get a parallel execution plan – although it starts with a serial index range scan which is operated by the new (12c) PX SELECTOR operator that allocates a serial operation to one of the parallel execution slaves rather than running it through the query coordinator (QC) – which, approximately, is why the indexed access cost doesn’t change in this example. The serial range scan then does a hash distribution (hashed by block address of the rowids it finds) to avoid collisions between parallel execution slaves as they do their table accesses.

This is just one cute little trick that makes it worth looking at the upgrade to 12c – this new path is likely to be of benefit to people who had to create global (as opposed to globally partitioned) indexes on partitioned tables.

This note was prompted by a recent twitter comment by Timur Akhmadeev followed in short order by an OTN posting that added further confusion to the problem by running Siebel – which is just one of several 3rd party products that love to configure optimizer parameters with non-standard values like: optimizer_index_cost_adj = 1, or optimizer_mode = first_rows_10. (At the last update I’ve seen on the thread, there seemed to be some other reason why parallelism was being blocked.)

Footnote

In a follow-up tweet, Timur directed me to the 11.2 SQL Language Reference manual – specifically a section on the Parallel Hint, asking if this was an example of a documentation bug.

The trouble with the manuals is that sometimes they are obviously wrong, sometimes they are wrong but it’s not obvious they are wrong, sometimes they omit important information, and sometimes they are badly written and, most specifically, the writing can be ambiguous.

Here’s an extract we could consider:

For PARALLEL, if you specify integer, then that degree of parallelism will be used for the statement.

But my example above shows a “parallel({integer})” hint where we didn’t use that degree of parallelism for the statement.

However the next two sentences read as follows:

If you omit integer, then the database computes the degree of parallelism. All the access paths that can use parallelism will use the specified or computed degree of parallelism.

So what if the optimizer uses the degree of parallelism while calculating the lowest cost plan and ends up with a serial plan? How comfortable would you feel saying that Oracle has “used the degree of parallelism for the statement”? Or would you say that the first sentence means Oracle isn’t allowed to use a serial plan even if it finds one when doing the arithmetic with the appropriate degree of parallelism?

My call is that this is one of those ambiguous cases – the manual should say something more like:

For PARALLEL, if you specify integer, then that degree of parallelism will be used by the optimizer while calculating the best execution  plan for the statement.

Even then I’m not sure that that’s a complete statement of how the hint works because when you have a full set of system statistics, or have used the dbms_resource_manager.calibrate_io mechanism to tell Oracle about the I/O capacity of the system, the optimizer may do some working that says something like: “the hint says degree 64, but the stats say the maximum effective degree will be 38, so I’ll calculate using 38”. (This type of thing happens with the older usage of the parallel hint with manual parallelism – I haven’t examined what happens with an automatic policy and the newer option for the hint.)
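
For reference, here’s roughly what the calibration call looks like – treat it as a sketch (the input values are placeholders, the run generates a significant I/O workload, and the results end up in dba_rsrc_io_calibrate):

set serveroutput on

declare
        m_max_iops      pls_integer;
        m_max_mbps      pls_integer;
        m_latency       pls_integer;
begin
        dbms_resource_manager.calibrate_io(
                num_physical_disks => 8,        -- rough estimate of the number of devices
                max_latency        => 20,       -- acceptable latency in milliseconds
                max_iops           => m_max_iops,
                max_mbps           => m_max_mbps,
                actual_latency     => m_latency
        );
        dbms_output.put_line('max_iops       = ' || m_max_iops);
        dbms_output.put_line('max_mbps       = ' || m_max_mbps);
        dbms_output.put_line('actual_latency = ' || m_latency);
end;
/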

 

May 23, 2017

255 Again!

Filed under: 12c,Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 1:10 pm BST May 23,2017

There are so many things that can go wrong when you start using tables with more than 255 columns – here’s one I discovered partly because I was thinking about a client requirement, partly because I had a vague memory of a change in behaviour in 12c, and Stefan Koehler pointed me to a blog note by Sayan Malakshinov when I asked the Oak Table if anyone remembered seeing the relevant note. Enough of the roundabout route; I’m going to start with a bit of code to create a table, stick a row in it, then update that row:

rem
rem     Script: wide_table_4.sql
rem     Author: Jonathan Lewis
rem     Dated:  May 2017
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

set pagesize 0
set feedback off

spool temp.sql

prompt create table t1(

select
        'col' || to_char(rownum,'fm0000') || '  varchar2(10),'
from
        all_objects
where   rownum <= 320
;

prompt col0321 varchar2(10)
prompt )
prompt /

spool off

@temp

set pagesize 40
set feedback on

insert into t1 (col0010, col0280) values ('0010','0280');
commit;

update t1 set col0320 ='0320';
commit;

column file_no  new_value m_file_no
column block_no new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        dbms_rowid.rowid_row_number(rowid)      row_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;

So I’ve written one of those horrible scripts that write a script and then run it. The script creates a table with 320 columns and inserts a row that populates columns 10 and 280. That gets me two row pieces, one consisting of the 255 columns from columns 26 to 280 that goes in as row piece 0, the other consisting of the first 25 columns that goes in as row piece 1; the remaining 40 columns are not populated so Oracle “forgets” about them (“trailing nulls take no space”). The script then updates the row by setting column 320 to a non-null value.

For convenience I’ve then generated the file and block number (and row number, just to show its head piece went in as row 1 rather than row 0) of the row and done a symbolic block dump. The question is: what am I going to see in that block dump?

Answers (part 1)

Here’s an extract from the block dump from 11.2.0.4 (12.1.0.2 is similar) – though I’ve cut out a lot of lines reporting the NULL columns:


ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1e54
avsp=0x1e3e
tosp=0x1f13
0xe:pti[0]      nrow=2  offs=0
0x12:pri[0]     offs=0x1e7a
0x14:pri[1]     offs=0x1e54
block_row_dump:
tab 0, row 0, @0x1e7a
tl: 49 fb: -------- lb: 0x2  cc: 40
nrid:  0x014000a7.0
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 37: *NULL*
col 38: *NULL*
col 39: *NULL*
tab 0, row 1, @0x1e54
tl: 38 fb: --H-F--- lb: 0x2  cc: 25
nrid:  0x014000a3.0
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 22: *NULL*
col 23: *NULL*
col 24: *NULL*
end_of_block_dump

The block holds two row pieces, and the piece stored as “row 1” is the starting row piece (the H in the flag byte (fb) tells us this). This row piece consists of 25 columns. The next row piece (identified by nrid:) is row zero in block 0x014000a3 – that’s block 163 of file 5 – which is the same block as the first row piece. When we look at row zero we see that it holds 40 columns, all null; it’s pointing to a third row piece at row zero in block 0x014000a7 (file 5, block 167), and as corroboration we can also see that its flag byte has no bits set, which tells us that this is just a boring “somewhere in the middle” piece. So it looks as if we have to follow the pointer to find the last 255 columns of the row. So let’s take a look at the dump of file 5 block 167:


fsbo=0x14
fseo=0x1e76
avsp=0x1e62
tosp=0x1e62
0xe:pti[0]      nrow=1  offs=0
0x12:pri[0]     offs=0x1e76
block_row_dump:
tab 0, row 0, @0x1e76
tl: 266 fb: -----L-- lb: 0x1  cc: 255
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 251: *NULL*
col 252: *NULL*
col 253: *NULL*
col 254: [ 4]  30 33 32 30
end_of_block_dump

Take note of the L in the flag byte – that tells us that we’re looking at the last row piece of a multi-piece row, and it’s the last 255 columns we were looking for. The mechanics have worked as follows:

  • On the simple insert Oracle split the used 280 columns into (25, 255)
  • On the update we grew the used column count from 280 to 320, adding 40 columns. Oracle extended the 255 column row piece to 295 columns, then split it into (40, 255), leaving 40 columns in the original block and migrating the 255 to a new block.

So a row that could be two pieces in one block is now three pieces spread over two blocks; and there’s worse to come. Go back to the original block dump and check the used space. A good first approximation would be to check the “tl:” (total length) value for each row – this gives you: 49 + 38 bytes; add on a couple of hundred for the general block overhead and stuff like the transaction table and you find you’ve used less than 300 bytes in the block. But I’ve got a little procedure (I published this version of it some time ago) to check for free and used space – and this is what it said about the (ASSM) segment that holds this table:


Unformatted                   :           44 /          360,448
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            0 /                0
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :           15 /          122,880
Full                          :            1 /            8,192

Take particular note of the “Full” block at the end of the report – that’s the block where we’ve used up rather less than 300 bytes. In fact if you look again at the first block dump you’ll see the avsp (available space) and tosp (total space) figures of 0x1e3e and 0x1f13 bytes (7,742 and 7,955 bytes). There’s loads of space in the block – but the block is marked in the bitmap space management map as full. That’s really bad news.
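
If you don’t have a copy of that procedure, the underlying call for an ASSM segment is dbms_space.space_usage – here’s a stripped-down sketch of the idea (it assumes the table is in your own schema and that serveroutput is on):

declare
        m_unf_blocks    number; m_unf_bytes     number;
        m_fs1_blocks    number; m_fs1_bytes     number;        -- 0 - 25% free
        m_fs2_blocks    number; m_fs2_bytes     number;        -- 25 - 50% free
        m_fs3_blocks    number; m_fs3_bytes     number;        -- 50 - 75% free
        m_fs4_blocks    number; m_fs4_bytes     number;        -- 75 - 100% free
        m_full_blocks   number; m_full_bytes    number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'T1',
                segment_type            => 'TABLE',
                unformatted_blocks      => m_unf_blocks,
                unformatted_bytes       => m_unf_bytes,
                fs1_blocks              => m_fs1_blocks,  fs1_bytes => m_fs1_bytes,
                fs2_blocks              => m_fs2_blocks,  fs2_bytes => m_fs2_bytes,
                fs3_blocks              => m_fs3_blocks,  fs3_bytes => m_fs3_bytes,
                fs4_blocks              => m_fs4_blocks,  fs4_bytes => m_fs4_bytes,
                full_blocks             => m_full_blocks, full_bytes => m_full_bytes
        );
        dbms_output.put_line('Full blocks               : ' || m_full_blocks);
        dbms_output.put_line('Freespace 4 (75-100% free): ' || m_fs4_blocks);
end;
/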

On the plus side 12.2 behaves differently, as noted by Sayan in his blog note. We still get the third row piece, but it’s in the same block as the first two and the block isn’t marked as full in the bitmap.

And there’s still more to come – but it will have to wait a little longer.

 

May 19, 2017

255 columns

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 5:49 pm BST May 19,2017

This is one of my “black hole” articles – I drafted it six months ago, but forgot to publish it.

A recent post on OTN highlighted some of the interesting oddities that appear when you create tables with more than 255 columns. In fact this was a more subtle case than usual because it reminded us that it’s possible to have a partitioned table which appears to have fewer than the critical 255 columns while actually having more than 255 columns thanks to the anomaly of how Oracle handles dropping columns in a partitioned table. (For a useful insight see this note from Dominic Brooks – and for a nice thought about preparing simple tables for an exchange with such a partitioned table look at the 12.2 feature of “create table for exchange” in Maria Colgan’s recent article.)

The thread took me down the path of trying to recreate some notes I wrote a long time ago and can no longer find; the OTN problem wasn’t the basic one I had assumed anyway, but I thought I’d publish a bit of the work I had done so that you can see another of the funny effects that appear when your table definition has too many columns (and you use them).

The OP told us about a table with more than 350 columns, so here’s a little script I wrote to generate a table with 365 columns and some data. (It turned out that the OP had more than 390 columns in the table, but 30+ had been “dropped”.)


rem
rem	Script:		wide_table_2.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Nov 2016
rem	Purpose:
rem
rem	Last tested
rem		11.2.0.4
rem

create sequence s1;

declare
	m_statement_1	varchar2(32767) :=
		'create table t1(col0001 varchar2(10),';
	m_statement_2	varchar2(32767) :=
		'insert into t1 values(lpad(s1.nextval,10),';
begin
	for i in 2 .. 365 loop
		m_statement_1 := m_statement_1 ||
			'col' || to_char(i,'FM0000') || ' varchar2(100),'
		;
		m_statement_2 := m_statement_2 ||
			case when i in (2,3,4)
			-- case when i in (122,123,124)
			-- case when i in (262,263,264)
				then 'dbms_random.string(''U'',ceil(dbms_random.value(0,100))),'
			     when i = 365
				then 'lpad(s1.currval,7))'
				else '''COL' || to_char(i,'FM0000') || ''','
			end
		;
	end loop;

	m_statement_1 := substr(m_statement_1, 1, length(m_statement_1) - 1);
	m_statement_1 := m_statement_1 || ') pctfree 25';

	execute immediate m_statement_1;

	for i in 1..10000 loop
		execute immediate m_statement_2;
	end loop;

end;
/

I’ve taken a fairly simple approach to building a string that creates a table – and it’s easy to adjust the number of columns – and a string to insert some values into that table. The insert statement will insert a row number into the first and last columns of the table and generate a random length string for a few of the columns. I’ve picked three possible sets of three columns for the random length string; one set is definitely going to be in the first row piece, one set is definitely going to be in the last row piece, and (since the row will split 110/255) one will be somewhere inside whichever is the larger row piece.

If I wanted to do something more sophisticated I’d probably have to switch to a PL/SQL array for the two statement strings – 32,767 characters doesn’t give me much freedom to play if I wanted to test a table with 1,000 columns.
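
For what it’s worth, here’s a sketch of the array-based approach using the varchar2a overload of dbms_sql.parse() – purely illustrative (hypothetical table name, no error handling), but it avoids the 32,767 byte limit because the statement is supplied as a collection of fragments and, being DDL, is executed at parse time:

declare
        c               integer := dbms_sql.open_cursor;
        sql_frags       dbms_sql.varchar2a;             -- PL/SQL table of varchar2(32767)
begin
        sql_frags(1) := 'create table t_wide(col0001 varchar2(10)';
        for i in 2 .. 1000 loop
                sql_frags(i) := ',col' || to_char(i,'FM0000') || ' varchar2(100)';
        end loop;
        sql_frags(sql_frags.count + 1) := ')';

        --      lb and ub bracket the array elements to use; false means don't add linefeeds between fragments
        dbms_sql.parse(c, sql_frags, 1, sql_frags.count, false, dbms_sql.native);
        dbms_sql.close_cursor(c);
end;
/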

Having created and populated my table, I performed the following three tests on it:


analyze table t1 compute statistics;

prompt	====
prompt	CTAS
prompt	====

create table t1a pctfree 25 as select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	======
prompt	Insert
prompt	======

truncate table t1a;
insert into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	=============
prompt	Insert append
prompt	=============

truncate table t1a;
insert /*+ append */ into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

The first test creates a new table (t1a, at pctfree 25, matching the original) copying the original table with a simple “create as select”.

The second test truncates this table and does a basic “insert as select” to repopulate it.

Third test truncates the table again and does an “insert as select” with the /*+ append */ hint to repopulate it.

In all three cases (and with three variations of where the longer random strings went) I used the analyze command to gather stats on the tables so that I could get a count of the number of chained rows; and I dumped a couple of blocks from the tables to see what the inserted rows looked like.

Here’s a summary of the results from 11.2.0.4 when the random-length columns are near the start of the row (the position didn’t really affect the outcome, and the results for 12.1.0.2 and 12.2.0.1 were very similar):

====
CTAS
====

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

======
Insert
======

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       6676       3287

=============
Insert append
=============

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

As you can see we get two significantly different results: the CTAS and the “insert append” produce tables reporting 9,504 blocks and 237 chained rows, while the original table (single row inserts) and the regular “insert as select” produce tables with 6,676 blocks and roughly 3,300 chained rows (3,313 and 3,287 respectively). It seems that the CTAS (which would also cover “alter table move”) and direct path insert have minimised the number of chained rows at the cost of a dramatically increased number of blocks. (The scale of the difference happens to be particularly extreme in this case – I didn’t do this deliberately, it was simply a consequence of the way I happened to generate the data and the length of the rows.)

We know, of course, that every row in this table will consist of two row pieces, one of 110 columns and one of 255 columns; so every row is in some respects chained due to the potential for intra-block chaining of those two pieces, but the analyze command reports only inter-block chaining i.e. only those rows that start in one block and end in another block – intra-block chaining doesn’t count as “proper” chaining (at least in this version of Oracle).

There are two questions to address in these results: the first is “What’s happening?”, the second, which we ask when we get the answer to the first, is “How come the direct path method still gives us some chained rows?”

I believe the answer to the first question is that the direct path method attempts to avoid inter-block chaining of the (unavoidable) multiple row pieces. Even if it means leaving a huge amount of empty space in a block, Oracle starts a new row in a new block if there isn’t enough space for both of the anticipated row pieces to fit in the current block. I think this may be a feature to help Exadata and its use of direct path reads for smart scans, where even a relatively small number of chained rows (which might be outside the current Exadata storage unit – and even in the disk space managed by another cell server) could have a catastrophic impact on performance because the system would have to do a single block read to pick up the extra piece.

So why do some rows still see chaining under this strategy? I think it’s because there’s a small error in the arithmetic somewhere (possibly visible only in ASSM tablespaces, perhaps related to row-piece headers) where Oracle thinks there’s enough space for both row pieces but there isn’t quite, so it tries to insert both pieces and has to chain the second one. (I’ll corroborate this hypothesis with some analysis of block dumps in a moment.)

So you have a choice – lots of wasted space and a little row-chaining, or maximum packing of data and (potentially) lots of row-chaining. But there’s more: I’ve said we get one row piece of 110 columns and one of 255 columns for each row, but the point at which the split occurs and the order in which the pieces are inserted depend on the method used.

  • Single row inserts (initial table creation, typical OLTP processing): The split occurred at column 111 – so the leading 110 columns are in one row piece and the trailing 255 columns are in the other – and the row piece with the trailing columns is inserted first.
  • Array inserts (normal): Exactly the same as the single row inserts.
  • Direct path inserts / CTAS: The split occurred at column 256, with the leading 255 column row-piece inserted first and the trailing 110 column row-piece inserted second.

I’m not sure that this particular detail matters very much in normal circumstances when you consider the dramatic difference in size that can appear in the comparison between direct path and normal inserts, but maybe there’s someone who will notice a performance (or even space) side effect because of this inconsistency. We will see in a later post, though, that this difference can have an enormous impact if you subsequently add columns to the table and populate them.

I said I’d come back to the row-chaining anomaly. One of the little details that I didn’t include in my code listing was the call to “analyze table … list chained rows into chained_rows” that I did (after executing $ORACLE_HOME/rdbms/admin/utlchain.sql to create the chained_rows table) to list the head rowids of the chained rows. After doing this I ran a simple pl/sql loop to dump all the relevant blocks to the trace file:

begin
	for r in (
		select
			dbms_rowid.rowid_relative_fno(head_rowid) file#,
			dbms_rowid.rowid_block_number(head_rowid) block#
			from	chained_rows
		) loop
			execute immediate 'alter system dump datafile ' || r.file# || ' block ' || r.block#;
	end loop;
end;
/

Here’s a little extract from the resulting trace file showing you what the start of a row piece looks like when dumped:

tab 0, row 0, @0x1765
tl: 2075 fb: --H-F--- lb: 0x0  cc: 255
nrid:  0x01401bc4.1
col  0: [10]  20 20 20 20 20 20 20 33 36 32
col  1: [19]  41 52 43 4a 4a 42 4e 55 46 4b 48 4c 45 47 4c 58 4c 4e 56
col  2: [ 8]  59 4b 51 46 4a 50 53 55
col  3: [17]  53 58 59 4e 4a 49 54 4a 41 5a 5a 51 44 44 4b 58 4d
col  4: [ 7]  43 4f 4c 30 30 30 35
col  5: [ 7]  43 4f 4c 30 30 30 36

A convenient thing to check is the cc: (“column count”) entry (end of 2nd line). You can see that this row piece has 255 columns, and if you look at the first six columns dumped you can see that it’s the row numbered 362 (33 36 32), then there are three columns of different length strings, then two columns with the values ‘COL0005’ and ‘COL0006’ respectively. It’s the “cc:” entry that’s useful though, so I’m going to do a bit of simple unix hackery:

grep " cc: " test_ora_24398.trc | sed "s/^.*cc: //"  | sort | uniq -c | sort -n
      1 1
      5 2
    112 108
    125 109
    237 110
    474 255

In my 237 blocks with chained rows I had 474 row pieces of 255 columns and 237 row pieces of 110 columns; then I had 125 row pieces that had lost (and therefore chained) one column and 112 row pieces that had lost (and therefore chained) two columns. I also had a couple of small “tail-end” pieces from earlier blocks scattered in these blocks. These figures suggest that there’s a small error (actually no more than about 20 bytes) in the calculation Oracle does to decide whether it can fit a whole row into the current block or whether it has to move on to the next empty block.

Conclusions

When copying a table defined with more than 255 columns there’s the potential for a huge variation in the space usage and chain count depending on whether you do a CTAS (or insert /*+ append */) or a simple insert. You have to decide which option is the biggest threat to your available resources.

There is a little anomaly with the way in which rows are split that is also dependent on the method used for copying – this may also have some effect, though perhaps small enough to be ignored when compared with the space/chaining difference as far as ordinary OLTP processing is concerned. But there are  some important side effects we will consider in a later post.

Even though CTAS/direct path insert can eliminate a lot of row chaining it is still possible to find some row chaining in the resulting data. This may be the result of a calculation error (or possibly a deliberate space saving compromise).

Note that any comments about using CTAS to copy a table also apply to “alter table move” and to using expdp/impdp.

 

May 10, 2017

Quantum Space

Filed under: humour,Oracle,Troubleshooting — Jonathan Lewis @ 1:31 pm BST May 10,2017

Here’s a not very serious note that makes a serious point.  I’ve got a small tablespace made up of 4 files, and here’s a little report I can run against the data dictionary for that tablespace:


select 'File space' What, nvl(sum(user_bytes)/1048576,0) MB from dba_data_files where tablespace_name = 'LOB_TEST'
union all
select 'Free space',      nvl(sum(bytes/1048576),0)         from dba_free_space where tablespace_name = 'LOB_TEST'
union all
select 'Extents',         nvl(sum(bytes/1048576),0)         from dba_extents    where tablespace_name = 'LOB_TEST'
union all
select 'Segments',        nvl(sum(bytes/1048576),0)         from dba_segments   where tablespace_name = 'LOB_TEST'
;

The name of the tablespace isn’t significant – it happens to be a tablespace I created to do some tests relating to space allocation with securefile LOBs, and it’s been hanging around ever since.

The query first reports the “user” space defined by the files – remember that there will be some space for header information and other metadata, and there may be some space near the end of the file which is smaller than the minimum extent allowed for that tablespace.

After the statement of total possible usable space we get: the free space, the space taken up by used extents, and the space taken up by segments.

Here’s the result I get currently:


WHAT               MB
---------- ----------
File space        196
Free space        196
Extents             0
Segments          196

The total usable space is 196MB, of which 196MB are free with no space allocated to extents – except 196MB HAS been allocated to segments. Depending on how you look at it, the tablespace is either full or empty.

If you’re in the know the puzzle resolves itself if you query for the segment names, which look like: “BIN$TrDKUCvcVQbgUwEAAH9K2Q==$0” – I have one segment that is in the recycle bin, so I can recover it from the bin if I need it, but until I do Oracle can reuse the space if it wants it. Like Schrödinger’s cat, the object is both alive and dead until someone decides to peek.
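
A quick way to peek without purging anything is to query the recyclebin views – e.g. (a sketch, using the tablespace from this test):

select  original_name, object_name, type, droptime
from    user_recyclebin
where   ts_name = 'LOB_TEST'
;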

After a call to “purge recyclebin” the result changes to:


WHAT               MB
---------- ----------
File space        196
Free space        196
Extents             0
Segments            0

On a production system you might need to issue “purge dba_recyclebin” (if you have the appropriate privilege) to resolve the apparent contradiction, as the call to “purge recyclebin” applies only to objects in your own schema.

This note was prompted by a question on the OTN database forum about a contradiction between a traditional SQL statement to report free and used space and a screen dump from Enterprise Manager. I don’t think the thread reached a firm conclusion – but apart from the potential for the recyclebin to confuse the issue, there are extra possibilities thanks to auto-extensible data files, and the Enterprise Manager’s scope for querying a completely different set of views such as dba_tablespace_usage_metrics and v$filespace_usage. In fact the EM code clearly had at least one error in it (which makes any of its results suspect) because it managed to report the critical tablespace as 390% used!
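
If you want to see the sort of view that may be feeding such screens, here’s a simple starting point – just a sketch, I can’t claim it’s exactly what EM runs; note that used_space and tablespace_size are reported in blocks, and the size allows for auto-extension:

select  tablespace_name, used_space, tablespace_size, used_percent
from    dba_tablespace_usage_metrics
where   tablespace_name = 'LOB_TEST'
;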

 

 

May 2, 2017

Aliases

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 9:23 am BST May 2,2017

Here’s a performance problem that came up on OTN recently. The following query (reformatted) takes “ages” to run – how do you address the problem:

SELECT
	COUNT(*) 
FROM
	smp_dbuser2.workflow_step_report
WHERE
	report_ID IN (
		SELECT	report_id
		FROM	smp_dbuser2.workflow_report
		WHERE	trunc(start_time) = '28-Apr-2017'
		AND	user_id = 'nbi_ssc'
	)
;


Various pieces of relevant information were supplied (the workflow_report table holds 1.4M rows, the workflow_step_report table holds 740M rows, and some indexes were described), but most significantly we were given the execution plan:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                      |     1 |     6 |    10T  (1)|999:59:59 |       |       |
|   1 |  SORT AGGREGATE       |                      |     1 |     6 |            |          |       |       |
|*  2 |   FILTER              |                      |       |       |            |          |       |       |
|   3 |    PARTITION HASH ALL |                      |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|   4 |     TABLE ACCESS FULL | WORKFLOW_STEP_REPORT |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|*  5 |    FILTER             |                      |       |       |            |          |       |       |
|   6 |     PARTITION HASH ALL|                      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
|*  7 |      TABLE ACCESS FULL| WORKFLOW_REPORT      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( EXISTS (SELECT 0 FROM "SMP_DBUSER2"."WORKFLOW_REPORT" "WORKFLOW_REPORT" WHERE :B1=:B2
              AND "USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))
   5 - filter(:B1=:B2)
   7 - filter("USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

You’ll notice that the optimizer has transformed the IN subquery into an EXISTS subquery – operation 2 is a FILTER operation, and you can see that the filter predicate at operation 2 shows the existence subquery that would be executed.

If you look carefully at the execution plan (all of it), what can you deduce from it? What, then, should be your next step in dealing with this performance problem?

Observations

I did emphasise the need to examine ALL of the execution plan – and the important feature appears not in the body of the plan but in the predicate section.

The body tells us that Oracle has executed the query with a FILTER subquery approach, and we can see that the IN subquery has been transformed into an EXISTS subquery. In many cases Oracle could unnest the subquery and turn it into a join (often a semi-join), but it hasn’t in this case and we might wonder why not. Look closely at the text given for the subquery in the filter predicate section:


SELECT  0 
FROM   "SMP_DBUSER2"."WORKFLOW_REPORT" "WORKFLOW_REPORT" 
WHERE  :B1=:B2
AND    "USER_ID"='nbi_ssc' 
AND    TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28 00:00:00','syyyy-mm-dd hh24:mi:ss')

When an IN subquery is transformed into an EXISTS subquery, then we usually see: “outer.column in (select inner.column …)” turning into a predicate in the existence subquery of the form “inner.column = :B1”, so why do we have “:B1 = :B2” when we expect to see “workflow_report.report_id = :B1”?

The (obvious, if you know your optimizer) answer is that there is no column report_id in table workflow_report, but “column capture” means the optimizer has assumed that report_id in the subquery refers to workflow_step_report.report_id – hence “:B1 = :B2”. The consequence of this strange predicate is that the subquery may execute once for every row in the outer table (though scalar subquery caching may reduce the number of executions), performing a tablescan as it does so.
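
If you want to convince yourself that the original query really is legal SQL, here’s a tiny stand-alone demo (hypothetical tables): t2 has no column x, yet the first query parses and runs because x is “captured” from t1, while the qualified version fails immediately:

create table t1 (x number);
create table t2 (y number);

insert into t1 values (1);
insert into t2 values (2);
commit;

-- valid: x in the subquery resolves to t1.x, so the row survives whenever t2 is non-empty
select * from t1 where x in (select x from t2);

-- fails with ORA-00904: "T2"."X": invalid identifier
select * from t1 where x in (select t2.x from t2);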

The correct next step is to check whether this was a simple typing error – the query as it stands is valid but not very sensible, so what was the intention? It turned out that there was a column workflow_report.id, and that was the column that should have been selected in the subquery. (The OP also changed the trunc(start_time) to use a carefully constructed range-based clause – but that wasn’t really terribly important; and several people noted that some efficiency could be gained through suitable indexes – but that was missing the important point.)

Here’s the new query, with execution plan:


SELECT  COUNT(*) 
FROM    smp_dbuser2.workflow_step_report    wsr
WHERE   wsr.report_ID IN (
                SELECT  wr.id
                FROM    smp_dbuser2.workflow_report    wr
                WHERE   wr.start_time >= to_date( '28-Apr-2017','dd-mon-yyyy') 
                and     wr.start_time <  to_date( '28-Apr-2017','dd-mon-yyyy') + 1
                AND     wr.user_id = 'nbi_ssc'
        )
;

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                                |     1 |    31 | 22760   (1)| 00:04:34 |       |       |
|   1 |  SORT AGGREGATE           |                                |     1 |    31 |            |          |       |       |
|   2 |   NESTED LOOPS            |                                | 34458 |  1043K| 22760   (1)| 00:04:34 |       |       |
|   3 |    PARTITION HASH ALL     |                                |    72 |  1800 | 22759   (1)| 00:04:34 |     1 |    32 |
|*  4 |     TABLE ACCESS FULL     | WORKFLOW_REPORT                |    72 |  1800 | 22759   (1)| 00:04:34 |     1 |    32 |
|   5 |    PARTITION HASH ITERATOR|                                |   477 |  2862 |     1   (0)| 00:00:01 |   KEY |   KEY |
|*  6 |     INDEX RANGE SCAN      | WORKFLOW_STEP_REPORT_REPORT_ID |   477 |  2862 |     1   (0)| 00:00:01 |   KEY |   KEY |
----------------------------------------------------------------------------------------------------------------------------

The modified query completed in 5 seconds – presumably because there were only a few relevant rows in the workflow_report table and the related workflow_step_report rows were well clustered and accessible through a suitable “foreign key” index. (There’s also a hint in the plan costs that the partitioning column for workflow_step_report is report_id.)

The final point to note about the rewritten query is the use of table aliases – both tables have a short alias (wsr and wr), and every column is qualified by its table alias. If this approach had been taken in the original code then the attempt to run it would have resulted in an error like:

ERROR at line 7:
ORA-00904: "WR"."REPORT_ID": invalid identifier

Update – 11th July 2017

Here’s a worse example of the same failure – deleting everything from a table because you didn’t use aliases properly.

April 27, 2017

Quiz Night

Filed under: Oracle,Performance,Troubleshooting — Jonathan Lewis @ 5:29 pm BST Apr 27,2017

If this is the closing section of the tkprof output from the trace file of a single end-user session that has a performance problem, what’s the most obvious deduction you can make about the cause of the problem, and what sort of action would you take next?


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      296      0.06       0.10          0        651          0           0
Execute    440      1.39       2.24          7       4664          0         146
Fetch      345     29.38      48.27          0    1709081          0         346
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1081     30.84      50.63          7    1714396          0         492

Misses in library cache during parse: 5
Misses in library cache during execute: 7

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     497        0.00          0.00
  SQL*Net message from client                   496       27.03         50.35
  direct path read                                1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   128199      6.94      11.46          0       2740          0           0
Execute 2274845    371.25     605.60         30   10031162          0       68200
Fetch   2225314     10.94      18.17          5     879297          0      577755
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4628358    389.14     635.23         35   10913199          0      645955

Misses in library cache during parse: 9701
Misses in library cache during execute: 134

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  direct path read                                1        0.00          0.00
  latch: shared pool                              3        0.00          0.00

12666  user  SQL statements in session.
  495  internal SQL statements in session.
13161  SQL statements in session.
********************************************************************************
Trace file: {concealed file name}.trc
Trace file compatibility: 11.1.0.7
Sort options: default

       1  session in tracefile.
   12666  user  SQL statements in trace file.
     495  internal SQL statements in trace file.
   13161  SQL statements in trace file.
    5266  unique SQL statements in trace file.
 39046623  lines in trace file.
     742  elapsed seconds in trace file.


There’s no absolutely right answer to the last question, by the way – there are a couple of variations in approach that are likely to get to the same result in the same time, and some of those variations might have made looking at the tail-end data the 2nd or 3rd step in the sequence.

Update

One of the skills of using your time effectively when trouble-shooting is the way you balance the time you spend noting the little details that might be relevant, and the time you then spend following up each detail to check for relevance and corroborating evidence. The amateurs might throw themselves into in-depth research on the first detail they notice; or, if they’ve started with a short list of observations, simply spend too much time on the first before moving on to the second.

An exercise like looking at the tail end of this tkprof file and talking about what you spot and what you might do is just a little exercise in how to pace yourself as you tackle a problem. So, from my perspective, here are a couple of obvious starting points:

  • I said it was an end-user session complaining about performance – the last line of the file tells us that the elapsed time was 742 seconds, and the summary of recursive statements tells us there were 2.2 million executions. How can ANYTHING an end-user wants to do “quickly” require 3,000 executions per second for over 12 minutes? I can ask that question because I know that “recursive” executions aren’t necessarily “sys-recursive”; SQL statements executed inside a PL/SQL block are also recursive – on top of that I can see, anyway, that there are 12,666 (probably all different) USER statements in the file – the user (or the user’s code) is doing something it almost certainly shouldn’t be doing. Those 2.2M executions are responsible for 10M buffer visits – does that suggest a lot of single row processing?
  • Associated with the 13,161 statements in the trace file there are 9,700 misses in the library cache during parse – that means “hard” parsing, probably means the user is getting through about 800 “new” statements per minute – but there are only 5,266 unique statements so part of the problem is that some recently used statements (or child cursors, at least) are being flushed from the library cache: again that suggests that they’re doing something wrong.
  • As one of the commentators pointed out – a detail that I hadn’t noted initially – the number of “current” buffers is zero. This isn’t a mini-batch updating the database, it’s just a report (done badly).

What would I do next? If it wasn’t already immediately obvious I’d look for the statements that were responsible for the very large numbers of executions; a call to tkprof with sort=execnt as a parameter would push the high execution counts to the top of the file. (The worst one had over 900,000 executions, the next 600,000.) Then I’d do a grep, sed, sort with uniq -c to find the statements which (I’d guess – and there were 3 of them responsible for a few thousand variants each) were the generated texts using concatenation instead of bind variables. Then I’d go and find the owner of the code and sit down with them to work out how it should be re-engineered.
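
If the system is still live while you’re looking at it, much the same check can be run against the shared pool instead of the trace file – force_matching_signature in v$sql groups statements that differ only in their literal values (again, just a sketch):

select  force_matching_signature, count(*) cursor_count
from    v$sql
where   force_matching_signature != 0
group by
        force_matching_signature
having  count(*) > 100
order by
        count(*) desc
;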

 
