Oracle Scratchpad

December 23, 2020

19c tweak

Filed under: CBO,Hints,Oracle,Upgrades — Jonathan Lewis @ 2:15 pm GMT Dec 23,2020

There are times when an upgrade makes a big difference to performance because an enhancement to the optimizer code path changes the costing of a plan that was always possible, but badly costed. I noticed an example of this while testing the code in the email I mentioned in last month’s posting on the “Incremental Sort” that Postgres 13 can do. Here’s a model to create some data and demonstrate the principle – the code is a modified (reduced) version of the code published by Phil Florent describing the feature.

rem
rem     Script:         fetch_first_postgres.sql
rem     author:         Phil Florent
rem     Dated:          6th Nov 2020
rem
rem     Last tested
rem             19.3.0.0        Uses index descending unhinted at low cost
rem             18.3.0.0        Used index desc efficiently if hinted, but high cost
rem             12.2.0.1        Used index desc efficiently if hinted, but high cost
rem

create table lancers(dtl timestamp, idg integer not null, perf integer);

insert into lancers(dtl, idg, perf)
with serie(i) as (
        select 25e4 from dual
        UNION ALL
        select i - 1 from serie where i > 1
)
select
        current_timestamp - (i / 1440),
        trunc(dbms_random.value * 1e5 + 1),
        case
                when dbms_random.value <= 0.001 then 50000 + trunc(dbms_random.value * 50000 + 1) 
                else trunc(dbms_random.value * 50000 + 1) 
        end
from serie
/

execute dbms_stats.gather_table_stats(user,'lancers',method_opt=>'for all columns size 1')

create index perf_i1 on lancers(perf, dtl);
alter table lancers modify (perf not null, dtl not null);

This is the basic statement I want to execute – but in some versions of Oracle it will have to be hinted to produce the execution plan I want to see.

select  
        idg, perf 
from  
        lancers 
order by
        perf desc  
fetch first 5 rows only
/

If you check the order by clause and the definition of the index perf_i1 you’ll see that Oracle could (in principle) walk the index in descending order, stopping after just 5 rows, to produce the result.

But here are the execution plans from 19.3.0.0, 18.3.0.0, and 12.2.0.1, with the plans pulled from memory and showing the rowsource execution statistics (enabled by the gather_plan_statistics hint) so that you can see what actually happens.
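
For reference, each plan was captured in roughly the following way – a sketch, relying on the gather_plan_statistics hint to collect the rowsource execution statistics:

set serveroutput off

select
        /*+ gather_plan_statistics */
        idg, perf
from
        lancers
order by
        perf desc
fetch first 5 rows only
/

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last +cost'));

Starting with the newest version: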

19.3.0.0: (unhinted)
--------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |     8 (100)|      5 |00:00:00.01 |       9 |
|*  1 |  VIEW                         |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|     8   (0)|      5 |00:00:00.01 |       9 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |      5 |     3   (0)|      5 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

You can see an index_desc() hint in the output, but it has been commented out. The key feature to note is that the optimizer has found the path I was hoping to see, and it’s a low-cost path, although there is one oddity in the plan – the E-rows (cardinality estimate) for the table access doesn’t allow for the stopkey and, since there are no predicates in the query, reports the 250K rows that exist in the table.

For 18.3.0.0 I had to include the hint, and you’ll see why:

18.3.0.0 (hinted with index_desc)
---------------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)



Again we see the plan is possible, but the optimizer’s cardinality estimate for the hinted index scan is 250K rows – the full size of the index, and it has allowed for that in the cost of the query. So the cost of this plan is high and in the absence of the hint the optimizer would have used a full tablescan with sort.

Finally we get down to 12.2.0.1 – and I’ve shown the hinted and unhinted plans.

12.2.0.1 (hinted index_desc)
-----------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)


12.2.0.1 Unhinted
------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 1374242431

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      1 |        |  1102 (100)|      5 |00:00:00.24 |     822 |       |       |          |
|*  1 |  VIEW                    |         |      1 |      5 |  1102  (10)|      5 |00:00:00.24 |     822 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|         |      1 |    250K|  1102  (10)|      5 |00:00:00.24 |     822 |  2048 |  2048 | 2048  (0)|
|   3 |    TABLE ACCESS FULL     | LANCERS |      1 |    250K|   132  (16)|    250K|00:00:00.13 |     822 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

As you can see, 12.2.0.1 and 18.3.0.0 behave exactly the same way when hinted – the path is acceptable, but the cost is high. Consequently, when I remove the hint the optimizer switches to using a full tablescan with sort because its cost is lower (thanks, in part, to the pattern in the data) than the cost of the indexed access path.

Summary

Two thoughts to take away from this note.

  • First, there were two possible execution plans for the same query, and in versions below 19c the optimizer was picking the one that was clearly a bad idea. The presence of alternatives, though, means that the patterns in the data, the index definition and statistics (especially the clustering_factor), the number of rows to fetch, and various other optimizer settings may mean that you find yourself in the unlucky position that the optimizer's arithmetic is on the boundary between the two plans and it switches randomly between them from day to day.
  • Secondly, when you upgrade to 19c the optimizer seems to be more likely to pick the indexed access path for a query like this – and that will probably be a good thing, but in a few cases it might turn out to be a bad thing.

December 21, 2020

Upgrade trivia

Filed under: 18c,Oracle,Upgrades — Jonathan Lewis @ 9:57 am GMT Dec 21,2020

Sometimes it’s the little things that catch you out (perhaps only briefly) on an upgrade. Here’s one that came up on Oracle’s Groundbreakers Developers Community [sic] (who knows what it will be called this time next year and, while we’re at it, who can tell where the apostrophe(s) ought to go).

The problem was with a database trigger that had been doing home-grown auditing to catch any DDL changes to non-SYS objects. The code was quite simple:

create or replace trigger system.audit_ddl_trg 
after ddl on database
begin
        if (ora_sysevent='TRUNCATE') then

                null; -- I do not care about truncate

        elsif ora_dict_obj_owner!='SYS' then

                insert into audit_ddl(d, osuser,current_user,host,terminal,owner,type,name,sysevent)
                values(
                        sysdate,
                        sys_context('USERENV','OS_USER') ,
                        sys_context('USERENV','CURRENT_USER') ,
                        sys_context('USERENV','HOST') , 
                        sys_context('USERENV','TERMINAL') ,
                        ora_dict_obj_owner,
                        ora_dict_obj_type,
                        ora_dict_obj_name,
                        ora_sysevent
                );

        end if;
end;
/

The issue was that after an upgrade from 12c (release not specified) to Oracle 19c the trigger was failing.

Here's the definition of the table used by the trigger as the target of the insert statement – can you see any reason why it might be failing?

create table audit_ddl (
        d               date,
        osuser          varchar2(255 byte),
        current_user    varchar2(255 byte),
        host            varchar2(255 byte),
        terminal        varchar2(255 byte),
        owner           varchar2(30 byte),
        type            varchar2(30 byte),
        name            varchar2(30 byte),
        sysevent        varchar2(30 byte)
)
/

If it’s not immediately obvious it’s probably because you’ve forgotten that object names (and various other identifiers) are allowed to be up to 128 characters in 19c (and a little earlier) – so defining the owner and name as varchar2(30) is an accident waiting to happen.
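
Since widening a varchar2() column is a safe, non-destructive change, the obvious immediate fix (a sketch – adjust the column list to whatever might capture long identifiers in your own audit code) is simply to bring the at-risk columns up to the new 128-byte limit before the upgrade catches you out:

alter table audit_ddl modify (
        owner   varchar2(128 byte),
        type    varchar2(128 byte),
        name    varchar2(128 byte)
);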

It didn’t take the user long to work out why there was a problem but the more interesting part of the issue was why there were now objects in the database with names exceeding the old 30 character limit. The OP supplied an (obfuscated) example: after the upgrade Oracle was reporting object names “using the full path name” like: “/some/path/name/object_name”.

The structure is a clue – for this user it's all about Java classes. Here's a little query against dba_objects, with the results from 11.2.0.4 and 12.2.0.1:

select  object_name 
from    dba_objects 
where   object_type = 'JAVA CLASS' 
and     object_name like '%TimeZoneNamesBundle'
/

OBJECT_NAME (11.2.0.4)
------------------------------
/2ea59ec_TimeZoneNamesBundle

OBJECT_NAME (12.2.0.1)
--------------------------------------
sun/util/resources/TimeZoneNamesBundle

Java is a particularly enthusiastic user of long object names in Oracle – but it's not the only culprit; there are a few others, as we can see with another query against dba_objects – this time from 19c:

select  object_type, count(*)
from    dba_objects 
where   length(object_name) > 30 
group by object_type 
order by count(*)
/

OBJECT_TYPE               COUNT(*)
----------------------- ----------
PROCEDURE                        1
INDEX                            2
JAVA RESOURCE                 1286
SYNONYM                       4337
JAVA CLASS                   31739

If you've made much use of Java in the database before now you're probably familiar with the call to dbms_java.longname(). Since Oracle used to limit identifiers to 30 characters it trimmed the leading edge (and sometimes a bit of the trailing edge) of the long names used by the public java libraries and used a hashing function to create a short prefix. If you look in the sys.javasnm$ table (java short name?) in earlier versions of Oracle you'll see that it has two columns – (short, longdbcs) – and we can see the relationship between them:

select  short, longdbcs, dbms_java.longname(short) long_name 
from    javasnm$ 
where   rownum <= 10
/

SHORT                          LONGDBCS                                           LONG_NAME
------------------------------ -------------------------------------------------- --------------------------------------------------
/2ea59ec_TimeZoneNamesBundle   sun/util/resources/TimeZoneNamesBundle             sun/util/resources/TimeZoneNamesBundle
/8acf0d3a_OpenListResourceBund sun/util/resources/OpenListResourceBundle          sun/util/resources/OpenListResourceBundle
/e3e70b06_LocaleNamesBundle    sun/util/resources/LocaleNamesBundle               sun/util/resources/LocaleNamesBundle
/cc11c9d8_SerialVerFrame       sun/tools/serialver/SerialVerFrame                 sun/tools/serialver/SerialVerFrame
/1f9f2fa_N2AFilter             sun/tools/native2ascii/N2AFilter                   sun/tools/native2ascii/N2AFilter
/b6b3d680_UnsupportedEncodingE java/io/UnsupportedEncodingException               java/io/UnsupportedEncodingException
/7994ade2_CharsetEncoder       java/nio/charset/CharsetEncoder                    java/nio/charset/CharsetEncoder
/73841741_IllegalCharsetNameEx java/nio/charset/IllegalCharsetNameException       java/nio/charset/IllegalCharsetNameException
/f494d94e_UnsupportedCharsetEx java/nio/charset/UnsupportedCharsetException       java/nio/charset/UnsupportedCharsetException
/3092d940_MissingResourceExcep java/util/MissingResourceException                 java/util/MissingResourceException

10 rows selected.

With the appearance of long(er) identifiers in 12.2, Oracle no longer needs to mess about with short names and a conversion function – it simply puts the "fully qualified" name into obj$. I doubt if this will catch anyone out for long, but it might be nice to know about in advance.
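
If you want a crude pre-upgrade check for home-grown tables that might be hit by the same problem, a query like the following would catch any columns that follow the obvious naming convention while being too short for the new limit. This is purely illustrative, of course – it won't spot columns with less conventional names:

select
        owner, table_name, column_name, data_length
from
        dba_tab_columns
where   column_name in ('OWNER', 'OBJECT_NAME', 'OBJECT_TYPE')
and     data_type   = 'VARCHAR2'
and     data_length < 128
and     owner not in ('SYS', 'SYSTEM')
order by
        owner, table_name, column_name
;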

August 6, 2020

Case and Aggregate bug

Filed under: 12c,Bugs,Oracle,Troubleshooting,Upgrades — Jonathan Lewis @ 12:43 pm BST Aug 6,2020

The following description of a bug appeared on the Oracle Developer Community forum a little while ago – on an upgrade from 12c to 19c a query started producing wrong results on a simple call to the avg() function. In fact it turned out to be a bug introduced in 12.2.0.1.

The owner of the thread posted a couple of zip files to build a test case – but I had to do a couple of edits, and change the nls_numeric_characters to ',.', in order to get past a formatting error on a call to the to_timestamp() function. I've stripped the example to a minimum, and translated the column names from German (which was presumably the source of the nls_numeric_characters issue) to make it easier to demonstrate and play with the bug.

First the basic data – you’ll notice that I’ve tested this on 12.1.0.2, 12.2.0.1 and 19.3.0.0 to find out when the bug appeared:

rem
rem     Script:         case_aggregate_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2020
rem     Purpose:        
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table test(
        case_col        varchar2(11), 
        duration        number(*,0), 
        quarter         varchar2(6), 
        q2h_knum_b      varchar2(10)
   )
/

insert into test values('OK',22,'1.2020','AB1234');
insert into test values('OK',39,'1.2020','AB1234');
insert into test values('OK',30,'1.2020','AB1234');
insert into test values('OK',48,'1.2020','AB1234');
commit;

execute dbms_stats.gather_table_stats(user,'test')

create or replace force view v_test
as 
select 
        q2h_knum_b,
        case 
                when b.case_col not like 'err%'
                        then b.duration 
        end     duration,
        case 
                when b.case_col not like 'err%' 
                        then 1 
                        else 0 
        end     status_ok
from
        test b
where
        substr(b.quarter, -4) = 2020
;


break on report
compute avg of duration on report
select * from v_test;

---------------------------------------------

Q2H_KNUM_B   DURATION  STATUS_OK
---------- ---------- ----------
AB1234             22          1
AB1234             39          1
AB1234             30          1
AB1234             48          1
           ----------
avg             34.75


I’ve created a table, loaded some data, gathered stats, then created a view over the table. The view includes a couple of columns that use a simple case expression, and both expressions are based in the same way on the same base column (this may, or may not, be significant in what’s coming). I’ve then run off a simple query with a couple of SQL*Plus commands to report the actual content of the view with the average of the duration column – which is 34.75.

So now we run a couple of queries against the view which aggregate the data down to a single row – including the avg() of the duration – using the coalesce() function – rather than the older nvl() function – to convert any nulls to zero.


select
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg,
        coalesce(sum(status_ok), 0)     ok_count
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

DURATION_COUNT DURATION_MED DURATION_AVG   OK_COUNT
-------------- ------------ ------------ ----------
             4         34.5            0          4

You'll notice that the duration_avg is reported as zero (it would be the same if I used nvl(), and would be null if I omitted the coalesce()). This is clearly incorrect. This was the output from 19.3; 12.2 gives the same result, while 12.1.0.2 reports the average correctly as 34.75.

There are several ways in which you can modify this query to get the right average – here's one: just put the ok_count column first in the select list:


select
        coalesce(sum(status_ok), 0)     ok_count,
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

  OK_COUNT DURATION_COUNT DURATION_MED DURATION_AVG
---------- -------------- ------------ ------------
         4              4         34.5        34.75


There's no obvious reason why the error should occur, but there's a little hint about what may be happening in the Column Projection Information from the execution plan. The basic plan is the same in both cases, so I'm only showing it once; it's followed by two versions of the projection information (restricted to operation 1) which I've formatted to improve readability:

Plan hash value: 2603667166

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     2 (100)|          |
|   1 |  SORT GROUP BY     |      |     1 |    20 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST |     1 |    20 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter((INSTR('AB1234',"Q2H_KNUM_B")>0 AND
              TO_NUMBER(SUBSTR("B"."QUARTER",(-4)))=2020))

Column Projection Information (Operation 1 only):  (Wrong result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22]



Column Projection Information (Operation 1 only):  (Right result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22]

As you can see, to report avg() Oracle has projected sum() and count().

When we get the right result the sum() for duration appears immediately after the count().

When we get the wrong result the sum() for ok_count comes between the count() and sum() for duration.

This makes me wonder whether Oracle is somehow just losing track of the sum() for duration and therefore dividing null by the count().

This is purely conjecture, of course, and may simply be a coincidence – particularly since 12.1.0.2 gets the right result and shows exactly the same projection information.

Readers are left to experiment with other variations to see if they can spot other suggestive patterns.
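
One obvious starting point – given that the projection shows avg() being expanded to sum() and count() – would be to do that expansion by hand and see whether an explicit sum()/count() dodges the error. I haven't run this variant, so treat it purely as a suggested experiment:

select
        coalesce(count(duration), 0)                             duration_count,
        coalesce(median(duration), 0)                            duration_med,
        coalesce(sum(duration) / nullif(count(duration), 0), 0)  duration_avg,
        coalesce(sum(status_ok), 0)                              ok_count
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/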

Update (Aug 2020)

This is now logged as Bug 31732779 – WRONG RESULT WITH CASE STATEMENT AGGREGATION – though it's not yet publicly visible.

July 12, 2020

Massive Deletes

Filed under: 12c,Infrastructure,Oracle,Upgrades — Jonathan Lewis @ 7:36 pm BST Jul 12,2020

One of the recurrent questions on the Oracle Developer Community forum is:

What’s the best way to delete millions of rows from a table?

There are an enormous number of relevant details that you need to know before you can give the “right” answer to this question, e.g.

  • Which version of Oracle
  • Standard or Enterprise Edition
  • Is “millions” a tiny percentage of the table or a large percentage
  • Are there any referential integrity constraints in place
  • Does the system have to keep running while the deletion takes place
  • Is the table compressed
  • How many indexes are there – and can you drop some of them
  • How much space do you have to do this job
  • How much time do you have to do this job

One of the most important ones, of course, is “Which version of Oracle?” because it can make an enormous difference to the range of possible strategies. I’m writing this particular note because the question came up a little while ago where the user wanted to delete all the data from 2008 through to the end of 2018, keeping only the last 18 months of data.

That sounds like the volume of data to be deleted (11 years) is very much larger than the volume of data to be kept (1.5 years) – but we can’t be sure of that since businesses tend to grow over time so that last 18 months of data might actually be just as big as the previous 11 years.

As usually happens in response to this question there were suggestions to “create a new table selecting the data you want to keep”, “use dbms_parallel_execute to delete by rowid ranges in parallel”, and a relatively new one “convert to a partitioned table so that the data you want to keep is in its own partition and drop the other partition(s)”. 

I wrote a note a few years ago giving an example of converting a simple heap table to a partitioned table while eliminating the data you don't want to keep – so there's no need to waste resources copying redundant data – maintaining indexes (choosing between local and global), and doing the whole job online. So, after learning that the OP was running 12.2 Enterprise Edition with the Partitioning option, I suggested that (s)he convert the table into a hash partitioned table with a single partition – which should (for purposes of optimisation) behave just like a simple heap table – using the "including rows" clause to copy only the last 18 months of data.
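
To give you an idea of the command involved, here's a sketch of the sort of thing I was suggesting – assuming a table called big_t with a suitable hash key id and a date column date_col. I haven't reproduced my original test case here, so check the syntax diagrams (in particular the clause ordering) before depending on it:

alter table big_t modify
        partition by hash (id) partitions 1
        including rows where date_col >= add_months(sysdate, -18)
        online
;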

I pointed out that their version of Oracle (EE + PO) gave them only the second-best option – because I knew that in 19c you could simply do something like:

rem
rem     Script:         122_move.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1
as
select  *
from    all_objects
where   rownum <= 50000 -- > comment to avoid wordpress format issue
;

alter table t1 add constraint t1_pk primary key(object_id);

alter table t1 move
        including rows where owner != 'SYS'
        online
;

It wasn't until a little later that a tiny nagging doubt crept into my mind: maybe this enhancement to the basic move command had appeared at the same time as the modify partition enhancement – in other words in 12.2.0.1. So I ran the test above and found that it did actually work. (And I haven't yet found any bugs on MOS suggesting that it shouldn't be used.)

Having discovered that the command was available I thought that I’d better check whether it was also documented, and found that it was in the 12.2 SQL Reference Manual (though not the 12.1 reference manual – for the obvious reason) under Alter Table. Page down to the “tram-tracks” for the Alter Table command and follow the link for the “move_table_clause”, and from there follow the link for “filter_condition”.

Note:

This option is not available in 12.1, and if you run the test using that version Oracle will raise the error "ORA-25184: column name expected" at the point where the word "rows" appears. This may look somewhat counter-intuitive, but for a very long time a command like "alter table TabX move including ColY online" has been how you would rebuild an index organized table (IOT) with all columns up to ColY in the "IOT_TOP" segment.

Update [The following morning]

Once you’ve got the framework of a test in place it really doesn’t take very long to start running through “what if” cases or potential boundary conditions.  So this morning I added one very obvious test – what happens if you have referential integrity declared between two tables and try to move both of them including a subset of rows from each that ensures that the referential integrity is still in place:


rem
rem     Script:         122_move_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0        Parent can't move
rem             12.2.0.1        Parent can't move
rem

create table parent
as
select  *
from    all_objects
where   rownum <= 50000 -- > comment to avoid wordpress format issue
;

alter table parent add constraint par_pk primary key(object_id);

create table child
as
select  *
from    parent
order by
        dbms_random.value
;

alter table child add constraint chi_pk primary key(object_id);
alter table child add constraint chi_fk_par foreign key(object_id) references parent;

I’ve created the child table from the parent data, with random ordering. Now I’m going to delete all the child rows where owner = ‘PUBLIC’ using an online move, then I’ll try and do the same for the parent.


alter table child move
        including rows where owner != 'PUBLIC'
        online
;

-- Child move succeeds (of course)

alter table parent move
        including rows where owner != 'PUBLIC'
        online
;

--
-- Trying to do the matching move on the parent results in:
-- ORA-02266: unique/primary keys in table referenced by enabled foreign keys
--

So there’s a gap in the functionality that makes it less desirable than the simplest case suggests. The referential integrity constraint has to be disabled before the parent table can be restructured.

But something that merits a little further investigation is the option to set the foreign key to “disable validate” (which is sufficient to allow the parent move to take place) and then to set the constraint back to “enable”. When I tried this I had expected Oracle to do a lot of work to revalidate the constraint before enabling it, but I couldn’t find any indication that any such work had taken place.
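
For reference, the sequence in question – based on the tables above – looks like this:

alter table child modify constraint chi_fk_par disable validate;

alter table parent move
        including rows where owner != 'PUBLIC'
        online
;

alter table child modify constraint chi_fk_par enable;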

Update (Nov 2020 – following comment #3 below about IOTs)

It's important to check the manuals before getting too stuck into experiments. The "problem" with foreign key constraints is one of the specified restrictions, as given in the 19c SQL Language Reference manual under Alter Table:

Restrictions on Filter Conditions

The following restrictions apply to the filter_condition clause:

    • Filter conditions are supported only for heap-organized tables.
    • Filter conditions can refer only to columns in the table being altered. Filter conditions cannot contain operations, such as joins or subqueries, that reference other database objects.
    • Filter conditions are unsupported for tables with primary or unique keys that are referenced by enabled foreign keys.

July 10, 2020

Recursive WITH upgrade

Filed under: ANSI Standard,CBO,Execution plans,Oracle,Subquery Factoring,Upgrades — Jonathan Lewis @ 4:19 pm BST Jul 10,2020

There’s a notable change in the way the optimizer does cost and cardinality calculations for recursive subquery factoring that may make some of your execution plans change – with a massive impact on performance – as you upgrade to any version of Oracle from 12.2.0.1 onwards. The problem appeared in a question on the Oracle Developer Community forum a little while ago, with a demonstration script to model the issue.

I’ve copied the script – with a little editing – and reproduced the change in execution plan described by the OP. Here’s my copy of the script, with the insert statements that generate the data (all 1,580 of them) removed.

rem
rem     Script:         recursive_with_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem
rem     Notes:
rem     https://community.oracle.com/thread/4338248
rem
rem     The upgrade to 12.2.0.1 made this query much slower (on 15,000 rows)
rem     Setting OFE to 12.1.0.1 is a first possible fix for the issue.
rem     The scale is too small to see much difference in this case
rem

drop table test_folder purge;

create table test_folder(
        fldr_key                number(16,0)            not null        enable,                 
        fldr_id                 varchar2(255 byte)      not null        enable,                 
        fldr_desc_tx            varchar2(255 byte),                     
        par_fldr_key            number(16,0),                   
        seus_key                number(16,0)            not null        enable,                 
        fldr_private_flg        varchar2(1 byte)        not null        enable,                 
        last_updt_dt            date                    not null        enable,                 
        last_upby_seus_key      number(16,0)            not null        enable,                 
        lock_seq_nbr            number(9,0) default 0   not null        enable,                 
        content_guid            raw(16),                
        constraint test_folder_pk primary key (fldr_key)                
)       
;              

-- list of insert statements

alter table test_folder add constraint test_folder_fk  
        foreign key (par_fldr_key) references test_folder(fldr_key)
;  
  
create or replace force editionable view test_folder_vw (fldr_key) as   
with rec_path(fldr_key)  as (
        select  tf.fldr_key  
        from    test_folder tf where tf.par_fldr_key is null  
        union all  
        select  tf.fldr_key  
        from    test_folder tf, rec_path  
        where   rec_path.fldr_key = tf.par_fldr_key
)  
select fldr_key  
from rec_path   
; 

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'TEST_FOLDER',
                method_opt  => 'for all columns size 1'
        );
end;
/


select * from test_folder_vw where fldr_key = -41;  

I've run the test 3 times. First in 12.2.0.1 with no tweaking; then in 12.2.0.1 with the hint /*+ optimizer_features_enable('12.1.0.2') */; and finally in a genuine 12.1.0.2 environment. In all three cases I enabled rowsource execution stats ("alter session set statistics_level = all") and pulled the plans from memory – with the following results:

First, the base result from 12.1.0.2

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.03 |     604 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.03 |     604 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.03 |     604 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      68 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.02 |     536 |  1696K|  1696K| 1488K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|*  6 |     TABLE ACCESS FULL                     | TEST_FOLDER |      8 |   1419 |  11352 |00:00:00.01 |     536 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

Two points to note, in particular: first, that the hash join has the recursive with pump as its first (build table) child and the table access full of test_folder as its second (probe table) child; secondly, that there is no value given for E-Rows for the recursive with pump.

Now the 12.2.0.1 plan:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |   2524K|      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |   2524K|   1419 |00:00:00.01 |      23 |  1743K|  1743K| 1632K (0)|
|   5 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  6 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
|   7 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

There are three changes to notice in this plan – which (for the OP) was much slower than the 12.1.0.2 plan. First, the order of the hash join has changed: the recursive with pump is now the second child (probe table) in the join (and again shows no value for E-Rows); secondly, Oracle has introduced an extra operation – the buffer sort (reuse), populated by the table access full – as the build table; thirdly (and presumably the point of the buffer sort (reuse) operation) the number of buffer visits has dropped from a few hundred to a couple of dozen.

Finally let's check what happens if we set the OFE (optimizer_features_enable) to 12.1.0.2 while running 12.2.0.1:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.01 |      23 |  1797K|  1797K| 1573K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|   6 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  7 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   7 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

In these conditions the recursive with pump has gone back to being the build table (first child); but it's worth noting that the 12.2 buffer sort (reuse) is still in place – saving us a few hundred buffer gets (and, for a bigger table, possibly a number of disc reads). Downgrading the optimizer_features_enable has given us the plan we needed, but this time we've got an example showing that hacking the parameter isn't a guarantee that we will get exactly the path we used to get in the older version.

The story so far.

It seems that we can address the performance problem that the OP had by setting optimizer_features_enable back to the older version – possibly through a hint embedded in the SQL, perhaps through an SQL Baseline or SQL Patch. Maybe we'll have to have a logon trigger that sets the parameter for particular users or, worst case scenario, maybe we'll have to set the parameter at the system level. Given how undesirable the last option could be it would be nice to know exactly what is causing the change in plan.

As a basic clue – if the order of tables in a hash join reverses itself this usually means that the byte (not row) estimates have changed. The optimizer will use the table with the lower byte count as the build table in a hash join. So the recursive with pump – whose row and byte estimates don’t appear – must have produced larger numbers in 12.2.0.1.

A change in the 12.2 plan that I haven’t yet mentioned is the E-rows for the hash join; it’s gone up from 640 (12.1.0.2) to 2.5 million! So let’s repeat the tests with the CBO (10053) trace file enabled and see if we can find a number like 2524K appearing as a join estimate in the trace file. Having created the two trace files (in 12.2.0.1, one with the OFE set backwards) I executed the following grep command against the trace files:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc

I’d set the tracefile_identifier to ‘ofe’ and ‘base’ respectively for the 12.1.0.2 and 12.2.0.1 tests, and here are the results:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
1166:Join Card - Rounded: 640 Computed: 639.941176

grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc
1195:Join Card - Rounded: 640 Computed: 639.941176
1391:Join Card - Rounded: 2544 Computed: 2543.865546
1576:Join Card - Rounded: 10112 Computed: 10111.865546
1737:Join Card - Rounded: 40193 Computed: 40193.075630
1898:Join Card - Rounded: 159759 Computed: 159758.731092
2059:Join Card - Rounded: 635008 Computed: 635008.462185
2220:Join Card - Rounded: 2524023 Computed: 2524023.394958
2269:Join Card - Rounded: 2524023 Computed: 2524023.394958

That's an interesting clue. Something seems to be calculating a larger and larger value in the 12.2.0.1 trace, starting with the hash join cardinality that appeared in 12.1.0.2, growing by a factor of nearly 4 each time, and ending with the hash join cardinality we saw in the 12.2.0.1 plan.

Taking a closer look at the content of the 12.2.0.1 trace file it turned out that every stage in that escalation was Oracle recalculating the cost and cardinality of joining test_folder (the table) and rec_path (the “with” subquery) using the figures from the previous join calculation as the new base figures for rec_path. In effect the optimizer was calculating the cost of following the recursive subquery down to its 7th level of recursion.

Side note: in agreement with my comment about the smaller (in bytes) rowsource being used as the build table, the initial join order started as (test_folder, rec_path) in the first two iterations, but switched to (rec_path, test_folder) from the 3rd iteration onwards.

So we've identified the mechanics that cause the change in plan; the question now is: why 7 iterations to the final cost? (Briefly I did a quick check to see how many circles of hell there were in Dante's Inferno – but it's 9, or 10 depending how you count.) A quick check of v$parameter (and the x$ tables for the hidden parameters) revealed the following:

Name                                     Value
------------------------------------ ---------
_recursive_with_branch_iterations            7
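
For reference, the standard way of checking hidden parameters is to connect as SYS and run something like the following query (a generic sketch, not necessarily the exact query I used):

select
        x.ksppinm       name,
        y.ksppstvl      value
from
        x$ksppi         x,
        x$ksppcv        y
where
        y.indx = x.indx
and     x.ksppinm like '%recursive_with%'
order by
        x.ksppinm
;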

Setting this parameter to 1 in the session, or adding the hint /*+ opt_param('_recursive_with_branch_iterations' 1) */ to the query, resulted in the 12.1.0.2 plan appearing in 12.2.0.1 – and this is a much less intrusive way of getting the plan we're interested in than setting the entire OFE back to 12.1.0.2. One might even set the parameter in the spfile (after receiving approval from Oracle Corp., of course) given how precisely targeted it is – and knowing that it doesn't switch off that nice little buffering trick.
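
In other words the least intrusive fix for the OP could be as simple as running the query in the following form (hypothetical, of course, since I can't test it against the OP's system):

select  /*+ opt_param('_recursive_with_branch_iterations' 1) */
        *
from    test_folder_vw
where   fldr_key = -41
;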

Summary

From 12.2 onwards the optimizer does recursive recosting of recursive “with” subqueries. This means the cost and cardinality estimates of a plan may change and the impact may cause a significant change in performance – it certainly did for the OP.

The change seems to be driven by the hidden parameter _recursive_with_branch_iterations, which was introduced in 12.2.0.0 with a default value of 7. Setting this parameter to 1 reverts the optimizer to the pre-12.2 behaviour. If recursive "with" subqueries change plans and cause performance problems as you upgrade from pre-12.2 to a newer version of Oracle then this parameter is worth investigating as the least intrusive way of reverting to the original plans.

Update (July 2020)

I've just been prompted to check MOS for any references to the hidden parameter – and discovered a note that was published in September 2018, updated in Feb 2019. It's amazing how easy it can be to find an answer on MOS when you already know what the answer is ;) "Oracle 12.2.0.1 CBO calculating high cost/CPU for queries with recursive sub-query" (Doc ID 2443466.1).

This gives two workarounds to the problem of a change in cost in 12.2 – set the optimizer_features_enable to 12.1.0.2, or set the hidden parameter to 1. It references two bugs (one a duplicate of the other, both apparently unpublished):

  • Bug 23515289 : PERFORMANCE REGRESSION OBSERVED WITH RECURSIVE WITH SERIAL PLAN
  • Bug 24566985 : UPG: QUERY PERFORMANCE ON ALL_TSTZ_TABLES 160 TIMES SLOWER THAN 11.2.0.4

and the Permanent Fix for the problem is to install the patch for Bug 24566985 on 12.2.0.1.


January 14, 2020

Drop Column bug

Filed under: 18c,Bugs,Oracle,Upgrades — Jonathan Lewis @ 1:22 pm GMT Jan 14,2020


When I was a child I could get lost for hours in an encyclopedia because I’d be looking for one topic, and something in it would make me want to read another, and another, and …

The same thing happens with MOS (My Oracle Support) – I search for something and the search result throws up a completely irrelevant item that looks much more interesting, so I follow a hyperlink, which mentions a couple of other notes, and a couple of hours later I can't remember what I had started looking for.

Today’s note is a side effect of that process. A comment made yesterday about count(*)/count(1) referenced Oracle bug “19450314: UNNECESSARY INVALIDATIONS IN 12C”, and when I searched MOS for more information on this bug I discovered bug 30404639 : TRIGGER DOES NOT WORK CORRECTLY AFTER ALTER TABLE DROP UNUSED COLUMN. The impact of this bug is easy to demonstrate, and the ramifications are as follows:

Exercise extreme care with the “alter table drop column” command in 18c and above.

The problem is easy to work around, but the impact of not knowing about it could be catastrophic if your pre-production testing wasn’t quite good enough. Here’s a little demonstration script – the bug note says the problem appeared in 18.3 but I ran this test against 19.3. The script is a modified version of the SQL in the bug note:


create table t1 (c0 varchar2(30), c1 varchar2(30), c2 varchar2(30), c3 varchar2(30), c4 varchar2(30));
create table t2 (c_log varchar2(30));

create or replace trigger t1_ariu
after insert or update on t1
for each row
begin
        IF :new.c3 is not null then
                insert into t2 values (:new.c3);
        end if;
end;
/

spool drop_col_bug_18c.lst

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;

prompt  ===================================
prompt  Drop some columns in two steps then
prompt  truncate t2 and repeat the test
prompt  ===================================
 
alter table t1 set unused (c1, c2);
alter table t1 drop unused columns;

truncate table t2;

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;
 

The code is very simple. It creates a couple of tables and an "after row" trigger on one of them to copy one column value across to the other table on an insert or update, provided the new column value is not null.

To check that the trigger is (at least in part) behaving as intended the code does two inserts – one which should copy a value and one which should not – and we see that the copy takes place as expected.

Now comes the critical part. We mark two of the columns in the table as unused, then drop all unused columns, truncate the second table and repeat the inserts.

If you run the test on 12.2.0.1 then you should find that the second run behaves just like the first run. If you’re running 18c or 19c be prepared for the following:


insert into t1(c3) values ('Inserting c3 - should log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

no rows selected

insert into t1(c4) values ('Inserting c4 - should not log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

This is not good – but it gets worse. If your application starts consistently breaking with an ORA-00600 error that’s going to annoy a lot of users for (we hope) a brief interval, but if your application keeps running and corrupting your data that’s a much bigger problem. Re-run the whole script (dropping the two tables first) but change it to mark just one of the two columns as unused, and you’ll get results for the second pass that look like this:


Table truncated.


1 row created.


no rows selected


1 row created.


C_LOG
------------------------------
Inserting c4 - should not log

1 row selected.

The trigger seems to “lose count” of the columns in the table (presumably it’s compiled to refer to something like “column_position = 3” and doesn’t adjust on the “drop column” – the linked bug notes on MOS refer to the problem being associated with the project to increase fine-grained dependencies) so it manages to survive with one column dropped because there’s still a “column 3” which happens now to be the column that used to be “column 4”.

Workaround

There is a simple workaround if you run into this problem after modifying your production system (and before you've corrupted a huge amount of data): recompile the trigger manually immediately after the drop completes – "alter trigger t1_ariu compile".
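
And if a table has several triggers you could simply loop through all of them – a quick sketch, assuming you own the triggers and none of the names needs quoting:

begin
        for r in (
                select  trigger_name
                from    user_triggers
                where   table_name = 'T1'
        ) loop
                execute immediate
                        'alter trigger ' || r.trigger_name || ' compile';
        end loop;
end;
/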

Refinement

The problem seems to appear only if the following two conditions are true:

  • you use a two-step approach to dropping a column, viz: set unused then drop. If you simply issue “alter table t1 drop column c1” (with or without a “checkpoint NNN”) then the problem does not appear. It’s a great shame that in the past I’ve given advice that setting columns unused and dropping them later is a better option than doing an immediate drop.
  • you drop columns that appear earlier in the table than the highest position column mentioned in the trigger. But this isn't something you should gamble on, particularly since the workaround is so easy to implement, because the order the columns appear in the table declaration isn't necessarily the internal column ordering, so you might get it wrong (not that I've tried to test for that threat) – and what about any selective materialized view logs where you don't explicitly create triggers and might forget to cater for the problem?

I don’t expect anyone to be dropping columns in production systems with any great frequency, and you would expect such a significant operation to be tested quite carefully, but it’s easy to envisage a scenario where the testing might be split into two pieces viz:

  1. test the application on a pre-prod version of the database where a table has been created as a subset of the production data without the column that’s due to be dropped
  2. test how long it takes to execute the actual drop on a (minimal) recovered backup of production, but don’t test the new production code on the resulting table.

Sometimes it’s easy to overlook things that “obviously” won’t need testing, especially when it’s something that has always worked in the past with no special treatment required.

Footnote

If you try running this model on LiveSQL you’ll find that the code stops and the web page reports “Error: Internal Server Error” so you can’t tell that the problem is exactly the same there – but it seems quite likely that it is.

Given how easy it is to bypass the problem I haven't bothered to do any further research on the issue – is it only related to insert and update triggers, do they have to be "after row" for the update, and what about "before row" delete triggers (with materialized view logs in mind)?

Update Aug 2020

I've just rediscovered this note and done a quick check on the bug. It's now reported as fixed in 21.1, and there are a few patches for earlier versions of Oracle, including 18.10 and 19.0.

August 27, 2019

AW-argh

Filed under: AWR,Oracle,Troubleshooting,Upgrades — Jonathan Lewis @ 3:59 pm BST Aug 27,2019

This is another of the blog notes that have been sitting around for several years – in this case since May 2014, based on a script I wrote a year earlier. It makes an important point about “inconsistency” of timing in the way that Oracle records statistics of work done. As a consequence of being first drafted in May 2014 the original examples showed AWR results from 10.2.0.5 and 11.2.0.4 – I’ve just run the same test on 19.3.0.0 to see if anything has changed.

 

[Originally drafted May 2014]: I had to post this as a reminder of how easy it is to forget things – especially when there are small but significant changes between versions. It’s based loosely on a conversation from Oracle-L, but I’m going to work the issue in the opposite order by running some code and showing you the ongoing performance statistics rather than the usual AWR approach of reading the performance stats and trying to guess what happened.

The demonstration needs two sessions to run; it’s based on one session running some CPU-intensive SQL inside an anonymous PL/SQL block while a second session launches AWR snapshots at carefully timed moments. Here’s the code for the working session:

rem
rem     Script:         awr_timing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2013
rem

alter session set "_old_connect_by_enabled"=true;

create table kill_cpu(n, primary key(n))
organization index
as
select  rownum n
from    all_objects
where   rownum <= 26 -- > comment to avoid wordpress format issue
;

execute dbms_stats.gather_table_stats(user,'kill_cpu')

pause Take an AWR snapshot from another session and when it has completed  press return

declare
        m_ct    number;
begin

        select  count(*) X
        into    m_ct
        from    kill_cpu
        connect by
                n > prior n
        start with
                n = 1
        ;

        dbms_lock.sleep(30);

end;
/

You may recognise an old piece of SQL that I’ve often used as a way of stressing a CPU and seeing how fast Oracle can run. The “alter session” at the top of the code is necessary to use the pre-10g method of running a “connect by” query so that the SQL does a huge number of buffer gets (and “buffer is pinned count” visits). On my current laptop the query takes about 45 seconds (all CPU) to complete. I’ve wrapped this query inside a pl/sql block that then sleeps for 30 seconds.

From the second session you need to launch an AWR snapshot 4 times – once in the pause shown above, then (approximately) every 30 seconds thereafter. The second one should execute while the SQL statement is still running, the third one should execute while the dbms_lock.sleep(30) is taking place, and the fourth one should execute after the pl/sql block has ended and the SQL*Plus prompt is visible.
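If you don’t have a favourite mechanism for taking the snapshots, the easy option from the second session is a direct call to the dbms_workload_repository package (then use $ORACLE_HOME/rdbms/admin/awrrpt.sql to generate the reports afterwards) – a minimal sketch:

execute dbms_workload_repository.create_snapshot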

Once you’ve got 4 snapshots you can generate 3 AWR reports. The question to ask then is: “what do the reports say about CPU usage?” Here are a few (paraphrased) numbers – starting with 10.2.0.5 and comparing the “Top 5 timed events”, “Time Model”, “Instance Activity” and “Top SQL by CPU”. There are three sets of figures: the first reported while the SQL was still running, the second reported after the SQL statement had completed and while the dbms_lock.sleep() call was executing, the last reported after the PL/SQL block had completed. There are some little oddities in the numbers due to background “noise” – but the key points are still clearly visible:

While the SQL was executing

Top 5 Timed events
------------------
CPU Time                       26 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     26.9        100.0
DB CPU                                       26.2         97.6

Instance Activity
-----------------
CPU used by this session         0.65 seconds
recursive cpu usage              0.67 seconds

SQL ordered by CPU
------------------
31 seconds reported for both the SQL and PLSQL

During the sleep()

Top 5 Timed events
------------------
CPU Time                        19 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     19.0        100.0
DB CPU                                       18.6         98.1

Instance Activity
-----------------
CPU used by this session         0.66 seconds
recursive cpu usage             44.82 seconds

SQL ordered by CPU
------------------
14 seconds reported for both the SQL and PLSQL

After the PL/SQL block ended

Top 5 Timed events
------------------
CPU Time                         1 second

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.4         99.9
DB CPU                                        1.4         99.7

Instance Activity
-----------------
CPU used by this session        44.68 seconds
recursive cpu usage              0.50 seconds

SQL ordered by CPU
------------------
1 second reported for the PLSQL, but the SQL was not reported

Points to notice:

While the SQL was executing (and it had been executing for about 26 seconds) the Time Model mechanism was recording the work done by the SQL, and the Top N information echoed the Time Model CPU figure. At the same time the “CPU used …” Instance Activity statistics had not recorded any CPU time for the session – and they won’t until the SQL statement completes. Despite this, the “SQL ordered by …” reports double-count in real time, showing the SQL and the PL/SQL cursors as each consuming (with rounding errors, presumably) 31 seconds.

After the SQL execution was over and the session was sleeping, the Time Model (hence the Top 5) had recorded a further 19 seconds of work. The Instance Activity, however, had now accumulated 44 seconds of CPU, but only as “recursive cpu usage” (recursive because our SQL was called from within a PL/SQL block), with no “CPU used by this session”. The “SQL ordered by …” figures have recorded the amount of CPU used in the interval (i.e. 14 seconds – which is a little off) against both the SQL and the PL/SQL.

After the PL/SQL block has completed, the Time Model and the Top 5 report both say that nothing much happened in the interval, but the Instance Activity suddenly reports 44.68 seconds of “CPU used by this session” – which (roughly speaking) is truish, as the PL/SQL block ended and assigned the accumulated recursive CPU usage to the session CPU figure. Finally, when we get down to the “SQL ordered by CPU”, the SQL was not reported at all – it did no work in the interval – while the PL/SQL block was still reported, but with a generous 1 second of CPU since all it did in the interval was finish the sleep call and tidy up the stack before ending.

Now the same sets of figures for 11.2.0.4 – there’s a lot of similarity, but one significant difference:

While the SQL was executing


Top 5 Timed events
------------------
CPU Time                        26.6 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     27.0        100.0
DB CPU                                       26.6         98.5

Instance Activity
-----------------
CPU used by this session         1.09 seconds
recursive cpu usage              1.07 seconds

SQL ordered by CPU
------------------
25.6 seconds reported for both the SQL and PLSQL

During the sleep()

Top 5 Timed events
------------------
CPU Time                        15.1 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     15.3         99.8
DB CPU                                       15.1         98.2

Instance Activity
-----------------
CPU used by this session        41.09 seconds
recursive cpu usage             41.03 seconds

SQL ordered by CPU
------------------
14.3 seconds reported for the SQL
13.9 seconds reported for the PLSQL

After the PL/SQL block ended

Top 5 Timed events
------------------
CPU Time                         1.4 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.5         99.6
DB CPU                                        1.4         95.4

Instance Activity
-----------------
CPU used by this session         1.02 seconds
recursive cpu usage              0.95 seconds

SQL ordered by CPU
------------------
0.5 seconds reported for the PLSQL, and no sign of the SQL

Spot the one difference in the pattern – during the dbms_lock.sleep() the Instance Activity Statistic “CPU used by this session” is recording the full CPU time for the complete query, whereas the time for the query appeared only in the “recursive cpu” in the 10.2.0.5 report.

I frequently point out that for proper understanding of the content of an AWR report you need to cross-check different ways in which Oracle reports “the same” information. This is often to warn you about checking underlying figures before jumping to conclusions about “hit ratios”; sometimes it’s to remind you that while the Top 5 Timed Events might say some average looks okay, the Event Histogram may say that what you’re looking at is mostly excellent with an occasional disaster thrown in. In this blog note I just want to remind you that if you only ever look at one set of figures about CPU usage there are a few special effects (particularly relating to long-running PL/SQL / Java / SQL) where you may have to work out a pattern of behaviour to explain unexpectedly large (or small) and apparently contradictory figures. The key to the problem is recognising that different statistics may be updated at different stages in a complex process.

Footnote

I doubt if many people still run 11.2.0.4, so I also re-ran the test on 19.3.0.0 before publishing. The behaviour hasn’t changed since 11.2.0.4 although the query ran a little faster, perhaps due to changes in the mechanisms for this type of “connect by pump”.

11.2.0.4 stats


Name                                            Value
----                                            -----
session logical reads                      33,554,435
consistent gets                            33,554,435
consistent gets from cache                 33,554,435
consistent gets from cache (fastpath)      33,554,431
no work - consistent read gets             33,554,431
index scans kdiixs1                        33,554,433
buffer is not pinned count                 16,777,219


19.3.0.0 stats

Name                                            Value
----                                            -----
session logical reads                      16,843,299
consistent gets                            16,843,299
consistent gets from cache                 16,843,299
consistent gets pin                        16,843,298
consistent gets pin (fastpath)             16,843,298
no work - consistent read gets             16,790,166
index range scans                          33,554,433
buffer is not pinned count                 16,790,169

Some changes are trivial (like the change of name for “index scans kdiixs1”), some are interesting (like some gets not being labelled as “pin” and “pin (fastpath)”), and some are baffling (like how you can manage 33M index range scans while doing only 16M buffer gets!).

May 2, 2019

LOB length

Filed under: Infrastructure,LOBs,Oracle,Upgrades — Jonathan Lewis @ 11:32 am BST May 2,2019

This note is a reminder combined with a warning about unexpected changes as you move from version to version. Since it involves LOBs (large objects) it may not be relevant for most people but since there’s a significant change in the default character set for the database as you move up to 18.3 (or maybe even as you move to 12.2) anyone using character LOBs may get a surprise.

Here’s a simple script that I’m first going to run on an instance of 11.2.0.4:

rem
rem     Script:         lob_length_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2019
rem

create table t1(
        n1      number,
        version varchar2(8),
        v1      varchar2(4000),
        c1      clob,
        nc1     nclob
)
lob (c1) store as securefile c1_lob(
        enable storage in row
        chunk 8K
        cache logging
)
lob (nc1) store as securefile nc1_lob(
        enable storage in row
        chunk 8K
        cache logging
)
;

insert into t1 values( 0,'General',  null,               empty_clob(),       empty_clob()      ) ;

insert into t1 values( 1,'11.2.0.4', rpad('x',1951,'x'), rpad('x',1951,'x'), rpad('x',1951,'x')) ;
insert into t1 values( 2,'11.2.0.4', rpad('x',1952,'x'), rpad('x',1952,'x'), rpad('x',1952,'x')) ;

insert into t1 values( 3,'12.1.0.2', rpad('x',1951,'x'), rpad('x',1951,'x'), rpad('x',1951,'x')) ;
insert into t1 values( 4,'12.1.0.2', rpad('x',1952,'x'), rpad('x',1952,'x'), rpad('x',1952,'x')) ;

insert into t1 values( 5,'12.2.0.1', rpad('x',1937,'x'), rpad('x',1937,'x'), rpad('x',1937,'x')) ;
insert into t1 values( 6,'12.2.0.1', rpad('x',1938,'x'), rpad('x',1938,'x'), rpad('x',1938,'x')) ;

insert into t1 values( 7,'18.3.0.0', rpad('x',1984,'x'), rpad('x',1984,'x'), rpad('x',1984,'x')) ;
insert into t1 values( 8,'18.3.0.0', rpad('x',1985,'x'), rpad('x',1985,'x'), rpad('x',1985,'x')) ;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

break on version skip 1 on report

compute avg of sys_op_opnsize(c1)  on report
compute avg of sys_op_opnsize(nc1) on report

select
        version, n1,
        length(v1), length(c1), length(nc1),
        sys_op_opnsize(v1), sys_op_opnsize(c1), sys_op_opnsize(nc1)
from
        t1
order by
        n1
;

select  avg_row_len
from    user_tables
where   table_name = 'T1'
;

select  column_name, avg_col_len
from    user_tab_cols
where   table_name = 'T1'
order by
        column_id
;

I’ve created a table holding a varchar2(), a CLOB, and an NCLOB; then I’ve inserted some rows into that table, labelling the rows in pairs with what appear to be Oracle version numbers, with one extra row labelled “General” that holds the special “empty LOB” value (note: a NULL and an empty clob behave very differently). Then I’ve reported the lengths of the LOB columns in two different ways, once using the standard length() function (I could have used the dbms_lob.getlength() function) and once using the internal sys_op_opnsize() function that Oracle uses in its queries to gather table stats.

Here’s the output from the script. Since this first run is on 11.2.0.4 I want you to note, particularly, the numbers in the rows labelled 11.2.0.4

VERSION          N1 LENGTH(V1) LENGTH(C1) LENGTH(NC1) SYS_OP_OPNSIZE(V1) SYS_OP_OPNSIZE(C1) SYS_OP_OPNSIZE(NC1)
-------- ---------- ---------- ---------- ----------- ------------------ ------------------ -------------------
General           0                     0           0                                    96                  96

11.2.0.4          1       1951       1951        1951               1951               2048                3999
                  2       1952       1952        1952               1952               2049                  86

12.1.0.2          3       1951       1951        1951               1951               2048                3999
                  4       1952       1952        1952               1952               2049                  86

12.2.0.1          5       1937       1937        1937               1937               2034                3971
                  6       1938       1938        1938               1938               2035                3973

18.3.0.0          7       1984       1984        1984               1984               2081                  86
                  8       1985       1985        1985               1985               2082                  86

********                                                                 ------------------ -------------------
avg                                                                              1835.77778          1820.22222


AVG_ROW_LEN
-----------
       5410

COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
N1                             3
VERSION                        9
V1                          1740
C1                          1837
NC1                         1822

The reminders:

  • A LOB that uses a multi-byte character set (and possibly, but I have not checked, an NCLOB that is using a single-byte character set) stores its content using a two-byte fixed width character set. So when I insert a character string of 1,951 (multibyte) characters the internal representation uses double the number of bytes as there are characters – so the data content will be 3,902 bytes.
  • The length() and dbms_lob.getlength() functions report the number of characters, not the number of bytes. (There is a lengthb() function to report the number of bytes, but it doesn’t work with multibyte character set LOBs, raising the error: “ORA-22998: CLOB or NCLOB in multibyte character set not supported” – see the sketch after this list.) Even though our NCLOB needs 3,902 bytes for its content it is reported – just like the CLOB – with a length of 1,951.
  • The sys_op_opnsize() internal function tells you about the number of bytes a column takes up in the row – though not always totally accurately, as we shall soon see – so it will tell you about the 3,902 bytes stored for the data plus the “structural” data that helps to describe the LOB. So for the row holding strings of length 1,951 we can see that the CLOB seems to take 1,951 + 97 = 2,048 bytes while the NCLOB seems to take 2 * 1951 + 97 = 3,999 bytes.
  • When the total storage for a LOB column exceeds 4,000 bytes the LOB is stored “out of line” even when it has been declared (as mine have) with “enable storage in row”. So when we insert strings of length 1,952 the CLOB column stays in line and reports a size of 1,952 + 97 = 2,049 bytes, while the NCLOB exceeds the limit (2 * 1,952 + 97 = 4,001) and goes out of row, reporting an “in-row” size of 86 bytes which is (we assume) the LOB metadata.
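If you want to see that lengthb() limitation for yourself, here’s a quick sketch against the table above – the varchar2() call is legal, but the NCLOB call fails because the national character set is multi-byte:

select lengthb(v1)  from t1 where n1 = 1;       -- reports 1951
select lengthb(nc1) from t1 where n1 = 1;       -- raises ORA-22998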

You might note, by the way, that the avg_col_len is the average (which I’ve reported for the CLOB and NCLOB columns) of the sys_op_opnsize() values rounded up plus 1 (for the “column count” byte); and – because the number of columns in the table is small – the avg_row_len is very similar to the sum of the avg_col_len.
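As a quick sanity check of that arithmetic against the figures above (ceil() for the “rounded up”, plus one for the column count byte):

select
        ceil(avg(sys_op_opnsize(c1)))  + 1      c1_avg_col_len,
        ceil(avg(sys_op_opnsize(nc1))) + 1      nc1_avg_col_len
from
        t1
;

-- 1835.77778 gives 1837, and 1820.22222 gives 1822 - matching user_tab_cols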

You should also note that the storage size of an “empty” LOB is (or seems to be) 96 bytes in this version of Oracle. That’s quite a lot of space to say that there’s no data  – but using an empty_[n]clob() to represent “nothing” does mean that you don’t have to code for “is null or is empty” and it may help you to avoid incorrect results as a consequence. In fact a block dump shows that the actual space usage for the empty_[n]clob() is only 30 bytes – so it’s not quite as bad as it seems. The error is probably based around the older code using the maximum possible length of a lob locator as an assumption rather than checking the actual size in the table.

The warning

It’s a minor variation of the standard warning: “odd little things change when you upgrade and some of them may have a big side effect”. Here are the results from 18.3 (which, amongst other details, defaults to a multi-byte character set – which I’ll check before I show you the LOB results).

SQL> select * from v$nls_parameters where parameter like '%CHARACTERSET%';

PARAMETER                        VALUE
-------------------------------- --------------------
NLS_CHARACTERSET                 AL32UTF8
NLS_NCHAR_CHARACTERSET           AL16UTF16

This affects the storage of CLOBs if you’ve previously been using a default single-byte character set – you’re suddenly going to find your LOB segments are twice as big as they used to be – even though any report of “length()” will be unchanged. But there are other changes. Here’s the output from the same script running on 18.3.0.0:



VERSION          N1 LENGTH(V1) LENGTH(C1) LENGTH(NC1) SYS_OP_OPNSIZE(V1) SYS_OP_OPNSIZE(C1) SYS_OP_OPNSIZE(NC1)
-------- ---------- ---------- ---------- ----------- ------------------ ------------------ -------------------
General           0                     0           0                                    30                  30

11.2.0.4          1       1951       1951        1951               1951               3933                3933
                  2       1952       1952        1952               1952               3935                3935

12.1.0.2          3       1951       1951        1951               1951               3933                3933
                  4       1952       1952        1952               1952               3935                3935

12.2.0.1          5       1937       1937        1937               1937               3905                3905
                  6       1938       1938        1938               1938               3907                3907

18.3.0.0          7       1984       1984        1984               1984               3999                3999
                  8       1985       1985        1985               1985                132                 132
********                                                                 ------------------ -------------------
avg                                                                              3078.77778          3078.77778

AVG_ROW_LEN
-----------
       7912

COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
N1                             3
VERSION                        9
V1                          1740
C1                          3080
NC1                         3080

Looking at the details for “General” and “18.3.0.0” what can we see that’s changed?

First, the length of an empty_[n]clob is now reported correctly at 30 bytes. This might (if the value is also used internally) explain why a LOB can now be 33 characters (66 bytes) longer before it’s moved out of line – it’s a small difference, but you might be at a boundary condition where it makes a big difference (for good, or for bad – who can tell) on your system.

Secondly, the LOB metadata for an out of line LOB seems to have jumped from 86 bytes to 132 bytes. Like the empty_[n]clob() issue, this is an error. The actual space usage in row was 38 bytes – consisting of the basic 30 bytes of “empty” metadata, 4 extra bytes of overhead, and a 4-byte block address linking to the stored LOB value. (For larger LOBs you will probably see that the “tail” of the in-row data grows to accommodate a short list of block addresses and chunk information.)

Finally, you’re allowed to guess that the rows labelled 12.2.0.1 are there to give you a hint that in that version the NCLOB moves out of line at 1,938 characters – but you’ll probably want to run the test on 12.2 to confirm that claim. This does mean, though, that an upgrade to 12.2 from a lower version might be a bigger threat than any upgrade to 18.3. You’ll also find that in 12.2 the empty_[n]clob() is reported with a length of 124. (Again, this may explain the change in the break-point for going out of line: 124 – 96 = 28, and 1,952 – 28/2 = 1,938. The arithmetic is consistent with the incorrect reporting of the size of the empty LOB metadata.)

So the bottom line is this – if you do a lot of work with LOBs, do some careful checking of how much space you are using, how much space you will use on the next install, how changes in character sets can affect numbers, how the statistics might change – even when collected in exactly the same way – and what this might do to execution plans.

Footnote

This testing was all done using securefile LOBs – which has been the default for some time if you don’t specify the type of LOB you want to use. The numbers, particularly the break from in-row to out-of-row, are a little different if you switch to basicfile LOBs.

A quick test on LiveSQL (which was running 19.2 at the time) suggests that nothing has changed since 18.3.

For further reading on quirky details of LOBs – here’s the article that has a footnote reporting another interesting change in LOB handling in 18.3; and a very old note about analysing the contents of a LOB segment with a view to assessing the impact of deleting old LOBs and potentially moving the LOB segment to a tablespace of a different size.

October 28, 2018

Upgrades – again

Filed under: 12c,Histograms,Oracle,Statistics,Upgrades — Jonathan Lewis @ 12:39 pm GMT Oct 28,2018

I’ve got a data set which I’ve recreated in 11.2.0.4 and 12.2.0.1.

I’ve generated stats on the data set, and the stats are identical.

I don’t have any indexes or extended stats, or SQL Plan directives or SQL Plan Profiles, or SQL Plan Baselines, or SQL Patches to worry about.

I’m joining two tables, and the join column on one table has a frequency histogram while the join column on the other table has a height-balanced histogram. The histograms were created with estimate_percent => 100 (which explains why I’ve got a height-balanced histogram in 12c rather than a hybrid histogram).
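I haven’t reproduced the data set here, but for reference the gather calls would have been of the following form (a sketch – the key details are the explicit 100 percent sample and the histogram request):

begin
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T1',
                estimate_percent => 100,
                method_opt       => 'for columns j1 size 254'
        );
end;
/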

Here are the two execution plans, 11.2.0.4 first, pulled from memory by dbms_xplan.display_cursor():


SQL_ID  f8wj7karu0hhs, child number 0
-------------------------------------
select         count(*) from         t1, t2 where         t1.j1 = t2.j2

Plan hash value: 906334482

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      12 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      12 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1855 |   1327 |00:00:00.01 |      12 |  2440K|  2440K| 1357K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       6 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       6 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."J1"="T2"."J2")



SQL_ID	f8wj7karu0hhs, child number 0
-------------------------------------
select	       count(*) from	     t1, t2 where	  t1.j1 = t2.j2

Plan hash value: 906334482

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      41 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      41 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1893 |   1327 |00:00:00.01 |      41 |  2545K|  2545K| 1367K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       7 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       7 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."J1"="T2"."J2")

The key point I want to draw your attention to is the difference between the two cardinality estimates. Why has that appeared, and what might the optimizer do in a more complex plan when a cardinality estimate like this changes?

The difference is only 2%, but that was on a couple of data sets I just happened to run up to check something completely different – I wasn’t trying to break anything, so who knows how big the variation can get. Of course if you’re switching from 11g to 12c then Oracle (Corp.) expects you to be using auto_sample_size anyway, so you shouldn’t be producing height-balanced histograms – and that might be relevant.

So does this difference really matter? Maybe not, but if (like many sites I’ve seen) you are still using fixed percentage sample sizes and are generating histograms it’s another reason (on top of the usual instability effects of height-balanced and hybrid histograms) why you might see plans change as you upgrade from 11g to 12c.

Footnote

It looks as if the difference comes mostly from a coding error in 11g that has been fixed in 12c – I couldn’t find an official bug or fix_control that matched, though. More on that later on this week.

Update

Chinar Aliyev has pointed out that there are three fix-controls that may be associated with this (and other) changes. From v$system_fix_control these are:

14033181 1 QKSFM_CARDINALITY_14033181   correct ndv for non-popular values in join cardinality comp.         (12.1.0.1)
19230097 1 QKSFM_CARDINALITY_19230097   correct join card when popular value compared to non popular         (12.2.0.1)
22159570 1 QKSFM_CARDINALITY_22159570   correct non-popular region cardinality for hybrid histogram          (12.2.0.1)

I haven’t tested them yet, but with the code easily available in the article it won’t take long to see what the effects are when I have a few minutes. The first fix may also be why I had a final small discrepancy between 11g and 12c on the join on two columns with frequency histograms.
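For anyone who wants to experiment before I do, fix-controls can be switched at session level – a sketch (not something to leave set outside a test system):

alter session set "_fix_control" = '14033181:0';

-- re-run the join, check the cardinality estimate, then restore:

alter session set "_fix_control" = '14033181:1';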

October 23, 2018

Upgrade threat

Filed under: 12c,18c,Histograms,Oracle,Statistics,Upgrades — Jonathan Lewis @ 7:50 pm BST Oct 23,2018

Here’s one I’ve just discovered while trying to build a reproducible test case – that didn’t reproduce because an internal algorithm has changed.

If you upgrade from 12c to 18c and have a number of hybrid histograms in place you may find that some execution plans change because of a change in the algorithm for producing hybrid histograms (and that’s not just if you happen to get the patch that fixes the top-frequency/hybrid bug relating to high values).

Here’s a little test to demonstrate how I wasted a couple of hours trying to solve the wrong problem – first a simple data set:


rem
rem     Script:         18c_histogram_upgrade.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem 

drop table t2 purge;

execute dbms_random.seed(0)

create table t2(
        id              number(8,0),
        n20             number(6,0),
        n30             number(6,0),
        n50             number(6,0),
        j2              number(6,0)
)
;

insert into t2
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   20) + 1                   n20,
        mod(rownum,   30) + 1                   n30,
        mod(rownum,   50) + 1                   n50,
        28 - round(abs(7*dbms_random.normal))        j2
from
        generator       v1
where
        rownum <= 800 -- > comment to avoid WordPress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1 for columns j2 size 13'
        );
end;
/

I’ve created a skewed data set which (we will see) has 22 distinct values and created a histogram of 13 buckets on it. This will be a hybrid histogram – but different versions of Oracle will produce different histograms (even though the data set is the same for both versions):


select
        j2, count(*)
from
        t2
group by
        j2
order by
        j2
;

select
        endpoint_value                                                            value,
        endpoint_number,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) bucket_size,
        endpoint_repeat_count
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'J2'
order by
        endpoint_value
;

Here’s the data set from 12.2.0.1 and 18.3.0.0:


        J2   COUNT(*)
---------- ----------
         1          1
         8          3
         9          1
        10          5
        11          4
        12          8
        13         14
        14          9
        15         11
        16         22
        17         34
        18         31
        19         36
        20         57
        21         44
        22         45
        23         72
        24         70
        25         87
        26        109
        27         96
        28         41

22 rows selected.



And here are the histograms - 12.2.0.1 then 18.3.0.0:



     VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT
---------- --------------- ----------- ---------------------
         1               1           1                     1
        15              56          55                    11
        17             112          56                    34
        18             143          31                    31
        19             179          36                    36
        20             236          57                    57
        21             280          44                    44
        22             325          45                    45
        23             397          72                    72
        24             467          70                    70
        25             554          87                    87
        26             663         109                   109
        28             800         137                    41

13 rows selected.

     VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT
---------- --------------- ----------- ---------------------
         1               1           1                     1
        15              56          55                    11
        17             112          56                    34
        19             179          67                    36
        20             236          57                    57
        21             280          44                    44
        22             325          45                    45
        23             397          72                    72
        24             467          70                    70
        25             554          87                    87
        26             663         109                   109
        27             759          96                    96
        28             800          41                    41

13 rows selected.

Both histograms have 13 buckets as requested; both are hybrid histograms as expected.

But why does 12c have the value 18 when 18c doesn’t, and why does 18c have the value 27 when 12c doesn’t?

That’s the second time in two weeks I’ve had reproducible test cases not reproducing – thanks to an 18c upgrade.

Update (See comments)

I had completely forgotten that a previous defect in the construction of hybrid (and Top-N) histograms had been addressed in 18.3 but needed a fix in 12.2 and a backport patch in 12.1.0.2.

Since the defect could “lose” a popular value in order to ensure that both the low and high values were captured in the histogram, it’s not surprising that a fix could result in one of the popular values in a histogram disappearing (after the upgrade) even when the gather had used a 100% sample. Quite possibly the algorithm used to ensure the presence of the high value has had a cascading effect down the histogram that can affect which popular values get into the histogram with repeat counts.

I think I’m going to have to grit my teeth and patch a 12.1.0.2, or update a 12.2.0.1 with exactly the right patch-set to find out.

[It has now been confirmed by Nigel Bayliss that this is a side effect of the fix to the bug 25994960]

October 8, 2018

Random Upgrade

Filed under: 18c,Oracle,Upgrades — Jonathan Lewis @ 1:36 pm BST Oct 8,2018

Here’s a problem that (probably) won’t affect the day to day running of most systems – but it could be a pain in the backside for people who write programs to generate repeatable test data. I’m not going to say much about the problem, just leave you with a test script.


rem
rem     Script  random_upgrade.sql
rem     Author: Jonathan Lewis
rem     Dated:  Oct 2018
rem
rem     Last tested
rem             19.3.0.0
rem             18.3.0.0
rem             12.2.0.1
rem

drop table t4 purge;
drop table t3 purge;
drop table t2 purge;
drop table t1 purge;
drop table t0 purge;

set feedback off

create table t0 as
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
;


execute dbms_random.seed(0);

create table t1
as
select dbms_random.normal
from
        t0
;

execute dbms_random.seed(0);

create table t2
as
with g1 as (
        select rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        dbms_random.normal
from
        g1
;

prompt  =================
prompt  Diff the two CTAS
prompt  =================

select count(*)
from (
select * from t1
minus
select * from t2
union all
select * from t2
minus
select * from t1
)
;


create table t3 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

create table t4 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

execute dbms_random.seed(0)

insert into t3
select dbms_random.normal
from
        t0
;

execute dbms_random.seed(0)

insert into t4
with g1 as (
        select rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        dbms_random.normal
from
        g1
;


prompt  ===================
prompt  Diff the two Insert
prompt  ===================

select count(*)
from (
select * from t3
minus
select * from t4
union all
select * from t4
minus
select * from t3
)
;


prompt  ===========
prompt  Sum of CTAS
prompt  ===========

select sum(normal) from t1;

prompt  =============
prompt  Sum of Insert
prompt  =============

select sum(normal) from t3;


execute dbms_random.seed(0)

prompt  =============
prompt  Sum of select
prompt  =============

with g1 as (
        select rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select sum(n) from (
select
        dbms_random.normal n
from
        g1
)
;


I’m repeatedly using dbms_random.seed(0) to reset the random number generator and trying to generate 10,000 normally distributed numbers. (I’ve chosen the normal distribution because that happened to be the function in a script I sent someone with the comment that “this will recreate the data for the demonstration” – and they wrote back to say that it didn’t.)

I’ve got two “create as select”, and two “insert as select”. One of each pair selects from a real existing table to get 10,000 rows, the other uses the “select dual connect by” trick to generate rows. I’ve written SQL that shows whether or not the two pairs of tables end up with the same data (they do, pairwise), then I’ve summed one table from each pair to see if the different mechanisms produce the same data – and that depends on the version of Oracle you’re using. Finally I’ve reset the random number generator and summed across a pure select to see what that produces.

If you run this code on 12.2.0.1 or earlier you’ll see that the “diffs” report zeros and the “sums” report -160.39249. If you upgrade to 18.3 the diffs will still report zeros and some of the sums will still report -160.39249 but the sum of the CTAS will report -91.352172.

Bottom Line

If you’ve got code that you wrote to create reproducible test cases and the code uses: “create table … as select … dbms_random …” then it won’t produce the same data when you upgrade to 18.3. You’ll have to modify the code to do “create table (); insert as select …”.
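Using the tables from the script above, the rewrite looks something like this (a sketch – the column name simply echoes the function call in the original CTAS):

create table t1 (normal number);

execute dbms_random.seed(0)

insert into t1
select  dbms_random.normal
from    t0
;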

As of this afternoon I have 1,209 test scripts on my laptop that use the dbms_random package to model data distribution patterns. It is almost certain that I will end up modifying every single one of them eventually.

There are words to express how I feel about this – but not ones that I would consider publishing.

Update (March 2020)

I’ve just run this test on 19.3 and things have changed again. The simplest summary I can give is a little chart of the results I got for the three (four) summations across three versions of Oracle. There are three sum() statements in the code above, but four results below because I ran the insert test with and without the /*+ append */ hint, so I’ve got numbers for:

  • select …
  • create as select …
  • insert as select …
  • insert /*+ append */ as select ….

I’ve also tested the code with parameter deferred_segment_creation set to true and then set to false, but that had no effect on the results.

Select          CTAS            Insert           Append          Version
 -160.39249     -160.39249      -160.39249       -160.39249      12.2.0.1
 -160.39249     -91.352172      -160.39249       -91.352172      18.3.0.0
 -160.39249     -91.352172      (not repeatable) -91.352172      19.3.0.0

So, if you’ve gone to the trouble of rewriting all your test code to change “create as select …” into “create table; insert as select” – you’ll be feeling just a little irritated because while it kept 18.3 producing the same results as 12.2 (and every earlier version) it will produce different results every time you run it in 19.3.

I hesitate to suggest creating a pl/sql pipelined function using a simple select to return a set of random numbers. At present it might get you back to the 12.2 results irrespective of how you use that stream – but who knows how things will change in 20c. (I might get around to testing it, one day.)
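For what it’s worth, here’s a minimal sketch of the sort of thing I mean – untested against the version-dependent behaviour, and the type and function names are mine:

create or replace type number_tt as table of number;
/

create or replace function random_normals(i_count in number)
return number_tt
pipelined
as
begin
        -- generate the stream procedurally so that the client
        -- statement is just a simple select from the function
        for i in 1..i_count loop
                pipe row (dbms_random.normal);
        end loop;
        return;
end;
/

execute dbms_random.seed(0)

insert into t3
select  column_value
from    table(random_normals(1e4))
;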

June 29, 2018

Truncate upgrade

Filed under: 12c,Infrastructure,Oracle,Upgrades — Jonathan Lewis @ 8:22 am BST Jun 29,2018

Connor McDonald produced a tweet yesterday linking to a short video he’d created about an enhancement to the truncate command in 12c. If you have referential integrity declared between a parent and child table then in 12c you can truncate the parent table and Oracle will truncate the child table for you – rather than raising an error. The feature requires the foreign key constraint to be declared “on delete cascade” – which is an option that I don’t see used very often. Unfortunately if you try to change an existing foreign key constraint to meet this requirement you’ll find that you can’t (yet) use the “alter table modify constraint” to make the necessary change. As Connor pointed out, you’ll have to drop and recreate the constraint – which leaves you open to bad data getting into the system or an outage while you do the drop and recreate.

If you don’t want to stop the system but don’t want to leave even a tiny window for bad data to arrive here’s a way to do it. In summary:

  1. Add a virtual column to the child table “cloning” the original foreign key column
  2. Create an index on the  virtual column (if you have concerns about “foreign key locking”)
  3. Add a foreign key constraint based on the virtual column
  4. Drop the old foreign key constraint
  5. Recreate the old foreign key constraint “on delete cascade”
  6. Drop the virtual column

Here’s some sample SQL:


rem
rem	Script:		122_truncate_workaround.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Jun 2018
rem	Purpose:	
rem
rem	Last tested 
rem		18.1.0.0	via LiveSQL
rem		12.2.0.1
rem		12.1.0.2

drop table child;
drop table parent;

create table parent (
	p number,
	constraint p_pk primary key(p)
);

create table child (
	c	number,
	p	number,
	constraint c_pk primary key(c),
	constraint c_fk_p foreign key (p) references parent
);

create index c_fk_p on child(p);

insert into parent values(1);
insert into child values(1,1);

commit;

prompt	==========================================================================
prompt	Truncate  should fail with
prompt	ORA-02266: unique/primary keys in table referenced by enabled foreign keys
prompt	==========================================================================

truncate table parent;

alter table child add (
	pv generated always as (p+0) virtual
)
;

create index c_ipv on child(pv) online;

alter table child add constraint c_fk_pv
	foreign key (pv)
	references parent(p)
	on delete cascade
	enable novalidate
;
alter table child modify constraint c_fk_pv validate;

alter table child drop constraint c_fk_p;

prompt	===================================================================================
prompt	Illegal insert (first 1) should fail with
prompt	ORA-02291: integrity constraint (TEST_USER.C_FK_PV) violated - parent key not found
prompt	===================================================================================

insert into child (c,p) values(2,2);
insert into child (c,p) values(2,1);
commit;

alter table child add constraint c_fk_p
	foreign key (p)
	references parent(p)
	on delete cascade
	enable novalidate
;

alter table child modify constraint c_fk_p validate;

prompt	===================================================
prompt	Dropping the virtual column results in Oracle
prompt	dropping the index and constraint at the same time
prompt	===================================================

alter table child drop column pv;

The overhead of this strategy is significant – I’ve created an index (which you may not need, or want, to do) in anticipation of a possible “foreign key locking” issue – and I’ve used the online option to avoid locking the table while the index is created which means Oracle has to use a tablescan to acquire the data. I’ve enabled a new constraint without validation (which takes a brief lock on the table) then validated it (which doesn’t lock the table but could do a lot of work). Then I’ve dropped the old constraint and recreated it using the same novalidate/validate method to minimise locking time. If I were prepared simply to drop and recreate the original foreign key I wouldn’t need to create that index and I’d only do one validation pass rather than two.


May 30, 2018

Upgrades

Filed under: 12c,Bugs,Function based indexes,Indexing,Oracle,Upgrades — Jonathan Lewis @ 10:08 am BST May 30,2018

One of my maxims for Oracle performance is: “Don’t try to be too clever”. Apart from the obvious reason that no-one else may be able to understand how to modify your code if the requirements change at a future date, there’s always the possibility that an Oracle upgrade will mean some clever trick you implemented will simply stop working.

While searching for information about a possible Oracle bug recently I noticed the following fix control (v$system_fix_control) in 12.2.0.1:


     BUGNO OPTIMIZE SQL_FEATURE                        DESCRIPTION                                                             VALUE
---------- -------- ---------------------------------- ---------------------------------------------------------------- ------------
  18385778          QKSFM_CARDINALITY_18385778         avoid virtual col usage if FI is unusable or invisible 

Maybe that’s just invalidated an idea I published 12 years ago.

I haven’t researched the bug or any underlying SR, but I can think of valid arguments both for and against the fix as described.
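To make the description a little more concrete, here’s my guess (a sketch with made-up object names, not a tested reproduction) at the sort of scenario the fix addresses – a function-based index creates a hidden virtual column, and historically the optimizer could still use that column and its statistics even when the index itself had become unusable or invisible:

create index t1_fbi on t1(upper(v1));

-- gather stats so the hidden virtual column underpinning
-- the index acquires its own column statistics
execute dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all hidden columns size 1')

alter index t1_fbi unusable;

-- the predicate matches the virtual column definition; with the fix
-- enabled the optimizer should stop using the virtual column here
select count(*) from t1 where upper(v1) = 'SMITH';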

March 13, 2018

Deferred Invalidation

Filed under: 12c,CBO,Infrastructure,Oracle,Troubleshooting,Upgrades — Jonathan Lewis @ 6:30 pm GMT Mar 13,2018

I was going to write an article on the way 12.2 has introduced the option for “deferred invalidation” for a number of DDL operations but I did a quick google search before I started writing and found that both Franck Pachot and Richard Foote (yes, rebuild index is one of the operations) had got there long ago, so here are a couple of links – as much for my own benefit as anything else:

Richard Foote:

Franck Pachot:

Franck’s 2nd example could be particularly relevant to some clients of mine who were having problems with SQL queries that were crashing (slowly and randomly) instead of running very efficiently because they were running against one subpartition of a table while another subpartition of the same table was subject to exchange. With a little bad luck in the timing an exchange that took place between a parse and an execute would cause a query to have its cursor invalidated and re-parsed in a way that failed to do (sub-)partition elimination because the local indexes were temporarily in an indeterminate state.
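For reference, the 12.2 syntax is just an extra clause on the DDL – a sketch with made-up object names:

alter table part_tab
        exchange partition p_2018 with table staging_tab
        deferred invalidation
;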

Update Aug 2020

I’ve just been reminded about deferred invalidation, and got very excited because I’d forgotten that I already knew about it – but again I did a couple of searches and found my own reference note plus a few more references, so here’s a little update and expansion.

Entries in Reference manuals for 12.2

Nigel Bayliss has an article with echoes back to dbms_stats and its rolling invalidation; it comments on the new states and supporting columns in v$sql, and has a link to the SQL Tuning Guide (12.2).

Maria Colgan has an article about deferring invalidation for partition operations, highlighting the fact that it won’t work for dropping (sub)partitions, but you can more-or-less work around the limitation by using truncate instead of drop. The issue relates to the fact that “drop (sub-)partition” results in renumbering of partitions (in the views if nowhere else). She also points out that “The fix in 12cR2 that will not invalidate cursors for EXCHANGE PARTITION, TRUNCATE PARTITION or MOVE PARTITION is fully backportable to 11.2.0.4 or 12c R1. All you need to do is request a back port for Bug ID 24817834.”

Footnote

There is a documentation error in the SQL Tuning reference page that Nigel Bayliss links to – it gives an example of the syntax as “immediate validation” when it should be “immediate INvalidation”.

October 6, 2017

12c Parse

Filed under: 12c,Oracle,Upgrades — Jonathan Lewis @ 9:07 am BST Oct 6,2017

Following on from a comment to a recent posting of mine about “bad” SQL ending up in the shared pool and the specific detail that too much bad SQL could cause contention problems while staying virtually invisible, there’s a related note today on the ODC (formerly OTN) forum of a little change in 12.2 that alerts you to the problem.

Try executing the following anonymous block (on a non-production system):

rem
rem     Script:         parse_fail_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2017
rem

declare
        m1 number;
begin
        for i in 1..10000 loop
        begin
                execute immediate 'select count(*) frm dual' into m1;
                dbms_output.put_line(m1);
        exception
                when others then null;
        end;
        end loop;
end;
/

Then check your alert log (if you want to be a little cautious, change the 10,000 in the loop to something like 200). If you’re running 12.2.0.1 you’ll find something like the following:


ORCL(3):WARNING: too many parse errors, count=100 SQL hash=0x19a22496
ORCL(3):PARSE ERROR: ospid=4577, error=923 for statement:
2017-10-06T03:46:15.842431-04:00
ORCL(3):select count(*) frm dual
ORCL(3):Additional information: hd=0x7673c258 phd=0x765151a8 flg=0x28 cisid=135 sid=135 ciuid=135 uid=135
2017-10-06T03:46:15.842577-04:00
ORCL(3):----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x76734f18         5  anonymous block
ORCL(3):WARNING: too many parse errors, count=200 SQL hash=0x19a22496
ORCL(3):PARSE ERROR: ospid=4577, error=923 for statement:
2017-10-06T03:46:15.909523-04:00
ORCL(3):select count(*) frm dual
ORCL(3):Additional information: hd=0x7673c258 phd=0x765151a8 flg=0x28 cisid=135 sid=135 ciuid=135 uid=135
2017-10-06T03:46:15.909955-04:00
ORCL(3):----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x76734f18         5  anonymous block

The warning will be repeated every hundred occurrences. As you can see the guilty (ORA-00923: missing FROM) SQL appears in the report so you know what you’re looking for. In my particular case, with the silly PL/SQL block, the address of the calling anonymous pl/sql block was also reported:


select sql_text from V$sql where child_address = '0000000076734F18';

SQL_TEXT
--------------------------------------------------------------------------------
declare m1 number; begin  for i in 1..10000 loop  begin   execute immediate 'sel
ect count(*) frm dual' into m1;   dbms_output.put_line(m1);  exception	 when ot
hers then null;  end;  end loop; end;

The additional information line reports the v$sql.address (phd) and v$sql.child_address (hd) of the cursor – except you won’t see them because cursors that raise parse errors are hidden from the view. Instead you can query x$kglob (if you have sys privileges), after converting to upper case and left padding the values to 8 bytes with zeroes:


select  *
from    x$kglob
where   kglhdadr = hextoraw('000000007673C258')
and     kglhdpar = hextoraw('00000000765151A8')
;

In the case of the OP on ODC the SQL reported in the alert log was simply: “SELECT 1”. As Billy Verreynne suggested in the thread, this looks like the sort of code that would be sent to the database by some of the connection pooling clients to check that the database is up. Unfortunately (apart from the waste of effort) this particular setup seems to think it’s talking to some database other than Oracle!

Footnote:

This is a feature of 12.2 – 11g and 12.1 don’t write such warnings to the alert log.

Lagniappe

A tweet from Mohamed Houri reminds me that parse failures like these, of course, show up in the instance activity stats, in particular:


Name                               Value
----                               -----
opened cursors cumulative         10,006
enqueue requests                  10,002
enqueue releases                  10,002
sql area purged                   10,000
sql area evicted                  10,000
parse count (total)               10,008
parse count (hard)                10,002
parse count (failures)            10,000

The enqueue requests are for the ‘CU’ (cursor) enqueue which, I think, appeared in 10g – they’re acquired (and released) on every hard parse.

Most of the figures that my session reports here are likely to be highly camouflaged by the rest of the activity on a normal system, so the most important number is the “parse count (failures)” – and it’s useful to know that you can subtract that number from the other statistics to get an idea of the impact that would be eliminated if you could locate and stop the thing generating the failing statements.
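If you want to watch the numbers for yourself, here’s a sketch of the obvious query – take a snapshot of the values before and after a test run and report the differences:

select  name, value
from    v$sysstat
where   name in (
                'parse count (total)',
                'parse count (hard)',
                'parse count (failures)',
                'sql area purged',
                'sql area evicted',
                'enqueue requests',
                'enqueue releases'
        )
order by
        name
;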

Update

Patrick Joliffe (see pingback below) has published an article pointing out that in earlier versions of Oracle you can set event 10035 to get the same information dumped into the alert log on every parse failure.
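If you want to try that on an older version, the usual event syntax applies – a sketch (be cautious on a busy system, since every parse failure will be reported):

alter system set events '10035 trace name context forever, level 1';

-- and to switch it off again:

alter system set events '10035 trace name context off';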

Update Feb 2020

In 18.1 the information in the flood control dump (and the 10035 dump referenced by Patrick Joliffe) has been enhanced slightly to report the current user name, application (actually module) and action – the latter being set by calls to dbms_application_info.set_module(), so a typical dump now has two extra lines – compare the following with the above:

ORCLPDB(3):PARSE ERROR: ospid=21736, error=923 for statement:
2020-02-17T18:32:10.112969+00:00
ORCLPDB(3):select count(*) frm dual
ORCLPDB(3):Additional information: hd=0x846b21b8 phd=0x819d8038 flg=0x28 cisid=107 sid=107 ciuid=107 uid=107 sqlid=bfqc8ascu494q
ORCLPDB(3):...Current username=TEST_USER
ORCLPDB(3):...Application: My module Action: Special action
ORCLPDB(3):----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x79680930         7  anonymous block

Note the contents of lines 5 and 6. In particular I forced a parse error after setting event 10035 and then executing a call to

execute dbms_application_info.set_module('My module', 'Special action')

For reference, see MOS Doc ID: 25754281.8
