Oracle Scratchpad

December 10, 2018

Case Study

Filed under: Execution plans,Oracle,Statistics — Jonathan Lewis @ 1:10 pm GMT Dec 10,2018

A recent thread on the ODC database forum highlighted a case where the optimizer was estimating 83,000 for a particular index full scan when the SQL Monitor output for the operation showed that it was returning 11,000,000 rows.

Apart from the minor detail that the OP didn’t specifically ask a question, the information supplied was pretty good. The OP had given us a list of bind variables, with values, and the SQL statement, followed by the text output of the Monitor’ed SQL and, to get the predicate section of the plan, the output from a call to dbms_xplan. This was followed by the DDL for the critical index and a list of the stats for all the columns in the index.

Here’s the critical line of the plan (from the SQL Monitor report) followed by its predicate section (from the dbms_xplan output, but cosmetically enhanced) and some details of the columns used in the predicate:

SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
=================================================================================================================================================================================================================================

  11 - filter(
        (    TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)>=ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)) 
         AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)<=TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10)
        )

COLUMN_NAME                    DATA_TYPE       NUM_DISTINCT  DENSITY  NUM_NULLS LAST_ANALYZED       HISTOGRAM
------------------------------ --------------- ------------ -------- ---------- ------------------- ---------------
YEAR                           NUMBER                     5        0          0 2018-12-02 13:19:10 FREQUENCY
MONTH                          NUMBER                    12        0          0 2018-12-02 13:19:10 FREQUENCY

I’ve included the full Monitor output at the end of the posting, or you could visit the ODC page if you want to see it, but if we look at just this line we can see that the index full scan starts running in the first second of the query (‘Start Active’), runs once (‘Execs’) and, as the OP said, retrieved 11M rows in that one scan compared to an estimated 83,917.

When we examine the predicate section we can understand why the optimizer could make such a large error – the SQL requires Oracle to combine two columns from the table with various bits of bind variables to construct a date which is then compared with a couple of constant dates derived from several input bind variables using range-based comparisons.

This is an example of Oracle using a fixed estimate of 5% for the selectivity of “unknown range-based comparison” – but with two comparisons the selectivity becomes 5% of 5% = 0.25% (i.e. 1/400).

If we look at the column definitions and stats we see that we seem to have 5 possible years and 12 possible months (which could mean a range as small as 3 years and 2 months) – so a selectivity of 1/400 would be in the right ballpark if we were querying for a date range of roughly 4.5 days. Working the figures the other way around – if 83,917 is 1/400 of the data then there are about 33.5M rows in the table and we are querying for something more like 1/3 of the table.
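Just to check that ballpark arithmetic (a sanity check in SQL only, nothing to do with the OP’s data):

select
        0.05 * 0.05                                     selectivity,
        round(83917 / (0.05 * 0.05))                    implied_rows_in_table,
        round(11000000 / (83917 / (0.05 * 0.05)), 2)    fraction_actually_needed
from
        dual
;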

Observations

I find it curious that the optimizer used an “index full scan” to fetch a huge amount of data from the index when there is no requirement for sorting (there is a subsequent “hash unique”, rather than “sort unique nosort”). I would have expected an “index fast full scan” so I am curious to know if some optimizer parameters have been fiddled with to get the optimizer to bypass the fast full scan. Possibly a change in parameter settings would result in a very different plan.
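As a simple first test (nothing more than a sketch – I’m guessing that the table behind PK_HOUSEHOLD_GDC is called household_gdc, since we haven’t seen the full query text) a hinted count would show whether the optimizer is prepared to use a fast full scan of that index at all:

select  /*+ index_ffs(hh pk_household_gdc) */
        count(*)
from
        household_gdc   hh
;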

The names of the bind variables are of the form “SYS_B_nn” – which means that the original query has been subject to the effects of forced cursor sharing. Since we are apparently expecting to identify and manipulate millions of rows this looks like the type of query where you don’t want to use cursor sharing. If the session can set “cursor_sharing=exact” before running the query, or inject the hint /*+ cursor_sharing_exact */ into the query, then perhaps we’d get a better estimate of rows (and a better plan). If hinting or setting session parameters is possible then setting optimizer_dynamic_sampling to level 3, or possibly 4, might be sufficient.
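As a sketch of the options (the right choice depends on whether you can edit the SQL or only the session environment):

-- Session level: stop the effects of cursor_sharing = force for this session
alter session set cursor_sharing = exact;

-- Session level: increase the dynamic sampling level for this session
alter session set optimizer_dynamic_sampling = 4;

-- Statement level: hints that could be injected into the query itself
--      /*+ cursor_sharing_exact */
--      /*+ dynamic_sampling(4) */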

The messy expression combining month and year is a crippling handicap to the optimizer – so fixing the query to make the literals visible isn’t actually going to help. This is Oracle 12c, though – so we could add a virtual date column (declared as invisible to avoid the threat of inserts that don’t specify column lists) and gather stats on it. The combination of virtual column and literal values might give the optimizer the information it really needs. Here’s a little script to demonstrate:


rem
rem     Script:         virtual_study.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.2

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        sysdate - (5 * 365) + rownum / 550      d1,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'MM'
                )
        )                                       month,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'YYYY'
                )
        )                                       year,
        lpad(rownum,10,'0')                     v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns month size 12 for columns year size 6'
        );
end;
/

I’ve created a table with a million rows with data going back roughly 5 years from current date, which means I need roughly 550 rows per day. I’ve then created histograms on the month and year columns to match the original posting. Now I’ll set up the bind variables and values specified by the OP and run a simple query to show the date information that the bind variables give, and the 1/400 selectivity of the OP’s predicate:


var SYS_B_00 varchar2(32);
var SYS_B_01 varchar2(32);
var SYS_B_02 varchar2(32);
var SYS_B_03 varchar2(32);
var SYS_B_04 varchar2(32);
var SYS_B_05 varchar2(32);
var SYS_B_06 number;
var SYS_B_07 varchar2(32);
var SYS_B_08 varchar2(32);
var SYS_B_09 varchar2(32);
var SYS_B_10 number;

exec :SYS_B_00:='01/';
exec :SYS_B_01:='/';
exec :SYS_B_02:='dd/MM/yyyy';
exec :SYS_B_03:='10/04/2018';
exec :SYS_B_04:='MM/dd/yyyy';
exec :SYS_B_05:='q';
exec :SYS_B_06:=12;
exec :SYS_B_07:='10/04/2018';
exec :SYS_B_08:='MM/dd/yyyy';
exec :SYS_B_09:='q';
exec :SYS_B_10:=1;

select
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d1, 
        add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06))  c1,
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d2,
        trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10                 c2
from
        t1
where
        rownum = 1
;

set serveroutput off
alter session set statistics_level = all;

select  count(*)
from    t1
where
        (    to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) >= add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06)) 
         and to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) <= trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

===========================================

D1        C1        D2        C2
--------- --------- --------- ---------
01-DEC-13 01-OCT-17 01-DEC-13 30-SEP-18


  COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:07.39 |    4980 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:07.39 |    4980 |
|*  2 |   FILTER            |      |      1 |        |    200K|00:00:06.42 |    4980 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |   2500 |    200K|00:00:04.59 |    4980 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10 .ge. ADD_MON
              THS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)))
   3 - filter((TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR")
              ,:SYS_B_02) .ge. ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B
              _06)) AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS
              _B_02) .le. TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10))


Note: in this and subsequent execution plans I’ve had to use .ge. to represent “greater than or equal to” and .le. to represent “less than or equal to”.

This shows us that the first row in my table has a date component of 1st Dec 2013, while the date range required by the OP was one year’s worth of data between 1st Oct 2017 and 30th Sept 2018. The optimizer’s estimate of 2,500 rows out of 1M is the 1/400 we expect.

Let’s test the effect of running the query using literals (i.e. in the OP’s environment stop the “cursor_sharing = force” effect):


select
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'))

========================================================

 COUNT(*)
----------
    200750


--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   892 (100)|      1 |00:00:05.17 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:05.17 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |   2500 |   892  (30)|    200K|00:00:04.30 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))


We can see that the literals have echoed through the plan to the predicate section, but the optimizer hasn’t changed its estimate. Let’s create the virtual column, gather stats on it, and try again:


alter table t1 add v_date invisible generated always as (
        to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy')
) virtual
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns v_date size 1')

select  /* virtual column */
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

 select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

=======================================================================

 COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   950 (100)|      1 |00:00:06.27 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:06.27 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    236K|   950  (34)|    200K|00:00:04.78 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))



The optimizer sees that the expression involving month and year matches the virtual column definition, evaluates the two date expressions to produce simple constants, and gives us a cardinality estimate in the right ballpark.
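If you want to confirm that the virtual column is in place and has statistics (this is my demo table t1, of course), a quick query will show it – I’ve used user_tab_cols because the invisible column may not be reported by user_tab_columns:

select
        column_name, data_type, num_distinct, num_nulls,
        histogram, hidden_column, virtual_column
from
        user_tab_cols
where
        table_name = 'T1'
order by
        internal_column_id
;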

Conclusion

Cursor sharing and “big” queries don’t mix. If you have queries that have to manipulate large volumes of data then the overhead of optimising each one separately is likely to be insignificant, and the threat of cardinality errors introduced by bind variables being re-used could be significant.

If you have to make use of an existing (bad) table definition, and can’t manage to write predicates that allow the optimizer to use existing column statistics, remember that you might be able to create a virtual (and invisible) column that captures the necessary definition, thereby allowing you to give Oracle some statistics about the necessary predicate.

Footnote

In case you didn’t want to scan through the ODC page, here’s the full SQL Monitor output for the original query:


Global Stats
==============================================================================================
| Elapsed |   Cpu   |    IO    | Cluster  |  Other   | Buffer | Read | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes | Reqs  | Bytes |
==============================================================================================
|     320 |      76 |      140 |       39 |       66 |     8M | 257K |   2GB |  1528 | 306MB |
==============================================================================================
 
 
SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|  -> 0 | SELECT STATEMENT                |                         |         |       |       180 |   +142 |     1 |        0 |       |       |       |       |      |      |          |                             |          |
|  -> 1 |   SORT UNIQUE                   |                         |    1093 | 52574 |       180 |   +142 |     1 |        0 |       |       |   534 | 107MB |   2M | 113M |     0.94 | Cpu (3)                     |          |
|  -> 2 |    NESTED LOOPS                 |                         |    1093 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 3 |     NESTED LOOPS                |                         |    1118 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 4 |      HASH JOIN RIGHT SEMI       |                         |    1118 | 52238 |       189 |   +133 |     1 |       3M |       |       |       |       | 153M |      |     1.57 | Cpu (5)                     |          |
|     5 |       VIEW                      |                         |    157K | 31145 |         9 |   +134 |     1 |       2M |       |       |       |       |      |      |          |                             |          |
|     6 |        WINDOW SORT              |                         |    157K | 31145 |        57 |    +86 |     1 |       4M |  3777 | 199MB |   994 | 199MB |      |      |     3.14 | Cpu (5)                     |     100% |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | direct path read temp (5)   |          |
|     7 |         HASH JOIN               |                         |    157K | 29653 |        50 |    +85 |     1 |       4M |       |       |       |       |      |      |     1.26 | Cpu (4)                     |          |
|     8 |          VIEW                   |                         |   81771 | 23273 |         1 |    +86 |     1 |       1M |       |       |       |       |      |      |          |                             |          |
|     9 |           HASH UNIQUE           |                         |   81771 | 23273 |        75 |    +12 |     1 |       1M |       |       |       |       |      |      |     1.89 | Cpu (6)                     |          |
|    10 |            FILTER               |                         |         |       |        78 |     +9 |     1 |      11M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
|    12 |          INDEX FULL SCAN        | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |        49 |    +86 |     1 |       8M |       |       |       |       |      |      |    12.58 | gc cr block 2-way (37)      |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | gc current block 2-way (3)  |          |
| -> 13 |       INDEX FULL SCAN           | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |       180 |   +142 |     1 |       7M |       |       |       |       |      |      |     0.63 | Cpu (2)                     |          |
| -> 14 |      INDEX RANGE SCAN           | IDX4_LPL_BETA_CUST_RLTN |       1 |     1 |       181 |   +141 |    3M |       3M | 75759 | 592MB |       |       |      |      |    23.27 | gc current grant 2-way (1)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | Cpu (21)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (52)  |          |
| -> 15 |     TABLE ACCESS BY INDEX ROWID | IMPL_LPL_BETA_CUST_RLTN |       1 |     1 |       180 |   +142 |    3M |       3M |  177K |   1GB |       |       |      |      |    29.56 | Cpu (12)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (81)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file sequential read (1) |          |
=================================================================================================================================================================================================================================

December 7, 2018

Plans and Trees

Filed under: Uncategorized — Jonathan Lewis @ 5:58 pm GMT Dec 7,2018

Prompted by a question on the ODC database forum – and also because I failed to get to the “Bonus slides” in my presentation on basic execution plans at both the DOAG and UKOUG conferences – here’s a small set of slides demonstrating how to convert a text execution plan into a tree that you can read using the mechanism described in Oracle’s white paper by the phrase: “start from the bottom left and work across and then up”.

The file is a Microsoft Powerpoint file (early version).
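If you can’t open the file, the mechanical part of the conversion is easy to describe: each operation’s parent is the nearest operation above it that is indented one step less. So a simple (made-up) plan like this:

-------------------------------------------------
| Id  | Operation                     | Name    |
-------------------------------------------------
|   0 | SELECT STATEMENT              |         |
|   1 |  NESTED LOOPS                 |         |
|   2 |   TABLE ACCESS FULL           | DEPT    |
|   3 |   TABLE ACCESS BY INDEX ROWID | EMP     |
|   4 |    INDEX RANGE SCAN           | EMP_I1  |
-------------------------------------------------

becomes the following tree:

                 SELECT STATEMENT (0)
                          |
                   NESTED LOOPS (1)
                  /                \
   TABLE ACCESS FULL          TABLE ACCESS BY INDEX ROWID
        DEPT (2)                       EMP (3)
                                          |
                                  INDEX RANGE SCAN
                                     EMP_I1 (4)

Drawing it this way makes it easy to see that the nested loop takes its driving rowsource from the full tablescan of dept and, for each row supplied, uses the index range scan to drive the access by rowid into emp.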

 

Misdirection

Filed under: Uncategorized — Jonathan Lewis @ 11:48 am GMT Dec 7,2018

A recent post on the ODC database forum prompted me to write a short note about a trap that catches everyone from time to time. The trap is following the obvious; and it’s a trap because it’s only previous experience that lets you decide what’s obvious, and the similarity between what you’re looking at and your previous experience may be purely coincidental.

The question on OTN (paraphrased) was as follows:

When I run the first query below Oracle doesn’t use the index on column AF and is slow, but when I run the second query Oracle uses the index and it’s fast. So when the input starts with ‘\\’ the indexes are not used. What’s going on ?


SELECT * FROM T WHERE AF = '\\domain\test\1123.pdf';
SELECT * FROM T WHERE AF = 'a\\domain\test\1123.pdf';

Looking at the two queries my first thought was that it’s obvious what’s (probably) happening, and my second thought was the more interesting question: “why does this person think that the ‘\\’ is significant ?”

The cause of the difference in behaviour is probably related to the way that Oracle stores statistics (specifically histograms) about character columns, and the way in which the cardinality calculations can go wrong. If two character strings match over the first few characters the numeric representation of those strings that Oracle uses in a histogram is identical, and if they are long enough even the “actual value” stored would be identical. It looks as if this person is storing URLs, and it’s quite likely that there are a lot of long URLs that start with the same (long) string of characters – it’s a very old problem – and it’s an example of a column where you probably want to be absolutely sure that you don’t gather a histogram.
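If you want to see the underlying issue for yourself, here’s a small sketch (table name and data invented for the purpose): two values that agree on their leading characters end up with the same numeric endpoint_value in the histogram, and Oracle has to fall back on the endpoint_actual_value (which is itself limited to the first few dozen characters) to tell them apart:

create table t_urls (af varchar2(200));

insert into t_urls values ('\\domain\test\1123.pdf');
insert into t_urls values ('\\domain\test\9999.pdf');
commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T_URLS',
                method_opt  => 'for columns af size 254'
        );
end;
/

select
        endpoint_number, endpoint_value, endpoint_actual_value
from
        user_tab_histograms
where
        table_name  = 'T_URLS'
and     column_name = 'AF'
order by
        endpoint_number
;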

But why did the OP decide that the ‘\\’ was the significant bit ? I don’t know, of course, but  how about this:

  • No contrary tests: Perhaps every single time the query misbehaved the value started with ‘\\’ and it never went wrong for any other starting values. And maybe the OP tested several different domain names – it would be much easier to see the ‘\\’ as the common denominator rather than “repetitive leading character string” if you tested with values that spanned different domains.

combined with

  • An easily available “justification”: In many programming languages (including SQL) ‘\’ is an escape character – if you don’t really know much about how the optimizer works you might believe that that could be enough to confuse the optimizer.

It can be very difficult when you spot an obvious pattern to pause long enough to consider whether you’ve identified the whole pattern, or whether you’re looking at a special case that’s going to take you in the wrong direction.

 

December 3, 2018

Row Migration

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 4:27 pm GMT Dec 3,2018

There’s a little detail of row migration that’s been bugging me for a long time – and I’ve finally found a comment on MoS explaining why it happens. Before saying anything else, though, I’m going to give you a little script (that I’ve run on 12.2.0.1 with an 8KB block size in a tablespace using ASSM and system allocated extents) to demonstrate the anomaly.


rem
rem     Script:         migration_itl.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1

create table t1 (v1 varchar2(4000))
segment creation immediate
tablespace test_8k
pctfree 0
;

insert into t1
select  null from dual connect by level <= 734 -- > comment to avoid wordpress format issue
;

commit;

spool migration_itl.lst

column rel_file_no new_value m_file
column block_no    new_value m_block

select 
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no, 
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from 
        t1
group by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid) 
order by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid)
;

update t1 set v1 = rpad('x',10);
commit;

alter system flush buffer_cache;

alter system dump datafile &m_file block &m_block;

column tracefile new_value m_tracefile

select
        tracefile 
from 
        v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from v$mystat where rownum = 1
                )
        )
;

-- host grep nrid &m_tracefile

spool off

The script creates a single column table with pctfree set to zero, then populates it with 734 rows where every row has a null for its single column. The query using the calls to the dbms_rowid package will show you that all 734 rows are in the same block. In fact the block will be full (leaving a handful of bytes of free space) because even though each row will require only 5 bytes (2 bytes row directory entry, 3 bytes row overhead, no bytes for data) Oracle’s arithmetic will allow for the 11 bytes that is the minimum needed for a row that has migrated – the extra 6 bytes being the pointer to where the migrated row now lives. So 734 rows * 11 bytes = 8,074 bytes which, together with roughly 110 bytes of block and transaction layer overhead, accounts for virtually all of the 8KB block.

After populating and reporting the table the script then updates every row to grow it by a few bytes, and since there’s no free space every row will migrate to a new location. By dumping the block (flushing the buffer cache first) I can check where each row has migrated to. (If you’re running a UNIX flavour and have access to the trace directory then the commented grep command will give you what you want to see.) Here’s a small extract from the dump on a recent run:

nrid:  0x05c00082.0
nrid:  0x05c00082.1
nrid:  0x05c00082.2
nrid:  0x05c00082.3
...
nrid:  0x05c00082.a4
nrid:  0x05c00082.a5
nrid:  0x05c00082.a6
nrid:  0x05c00083.0
nrid:  0x05c00083.1
nrid:  0x05c00083.2
nrid:  0x05c00083.3
...
nrid:  0x05c00085.a4
nrid:  0x05c00085.a5
nrid:  0x05c00085.a6
nrid:  0x05c00086.0
nrid:  0x05c00086.1
nrid:  0x05c00086.2
...
nrid:  0x05c00086.3e
nrid:  0x05c00086.3f
nrid:  0x05c00086.40
nrid:  0x05c00086.41

My 734 rows have migrated to fill the next four blocks (23,130) to (23,133) of the table and taken up some of the space in the one after that (23,134). The first four blocks have used up row directory entries 0x00 to 0xa6 (0 to 166), and the last block has used up row directory entries 0x00 to 0x41 (0 to 65) – giving us the expected total: 167 * 4 + 66 = 734 rows. Let’s dump one of the full blocks – and extract the interesting bits:

alter system dump datafile 23 block 130;
Block header dump:  0x05c00082
 Object id on Block? Y
 seg/obj: 0x1ba1e  csc:  0x0000000001e0aff3  itc: 169  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0006.00f.000042c9  0x0240242d.08f3.14  --U-  167  fsc 0x0000.01e0affb
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x04   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x05   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x06   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
...
0xa6   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa7   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa8   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa9   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000

nrow=167
frre=-1
fsbo=0x160
fseo=0x2ec
avsp=0x18c
tosp=0x18c

tab 0, row 0, @0xfe4
tl: 20 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x05c00081.0
col  0: [10]  78 20 20 20 20 20 20 20 20 20
tab 0, row 1, @0xfd0
tl: 20 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x05c00081.1

This block has 169 (0xa9) ITL entries – that’s one for each row migrated into the block (nrow = 167) plus a couple spare. The block still has some free space (avsp = tosp = 0x18c: available space = total space = 396 bytes), but it can’t be used for any more incoming migration because Oracle is unable to create any more ITL entries – it’s reached the ITL limit for 8KB blocks.
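If you don’t want to read the whole block dump, a couple of greps against the trace file holding the dump (still assuming a UNIX-like platform) will pull out the two interesting figures – the ITL count (“itc:”) from the block header and the number of incoming migrated row pieces, each of which reports an “hrid:” line:

host grep    "itc:"  &m_tracefile
host grep -c "hrid:" &m_tracefile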

So finally we come to the question that’s been bugging me for years – why does Oracle want an extra ITL slot for every row that has migrated into a block? The answer appeared in this sentence from MoS Doc ID 2420831.1: “Errors Noted in 12.2 and Above During DML on Compressed Tables”:

“It is a requirement during processing of parallel transactions that each data block row that does not have a header have a block ITL available.”

Rows that have migrated into a block do not have a row header – check the flag byte (fb) for the two rows I’ve listed, it’s “----FL--”, there is no ‘H’ for header. We have the First and Last row pieces of the row in this block and that’s it. So my original “why” question now becomes “What’s the significance of parallel DML?”

Imagine the general case where we have multiple processes updating rows at random from multiple blocks, and many different processes forced rows to migrate at the same time into the same block. The next parallel DML statement would dispatch multiple parallel execution slaves, which would all be locking rows in their own separate block ranges – but multiple slaves could find that they wanted to lock rows which had all migrated into the same block – so every slave MUST be able to get an ITL entry in that block at the same time; for example, if we have 8 rows that had migrated into a specific block from 8 different places, and 8 parallel execution slaves each followed a pointer from the region they were scanning to update a row that had migrated into this particular block then all 8 slaves would need an ITL entry in the block (and if there were a ninth slave scanning this region of the table we’d need a 9th ITL entry). If we didn’t have enough ITL entries in the block for every single migrated row to be locked by a different process at the same time then (in principle, at least) parallel execution slaves could deadlock each other because they were visiting blocks in a different order to lock the migrated rows. For example:

  1. PQ00 visits and locks a row that migrated to block (23,131)
  2. PQ01 visits and locks a row that migrated to block (23,132)
  3. PQ00 visits and tries to lock a row that migrated to block (23,132) — but if there were no “extra” ITL slots available, it would wait
  4. PQ01 visits and tries to lock a row that migrated to block (23,131) — but there were no “extra” ITL slots available so it would wait, and we’d be in a deadlock.

Oracle’s solution to this threat: when migrating a row to a block add a new ITL if the number of migrated rows exceeds the number of ITL slots + 2 (the presence of the +2 is a working hypothesis, it might be “+initrans of table”).

Footnote 1

The note was about problems with compression for OLTP, but the underlying message was about 4 Oracle errors of type ORA-00600 and ORA-00700, which report the discovery and potential threat of blocks where the number of ITL entries isn’t large enough compared to the number of inward migrated rows. Specifically:

  • ORA-00600 [PITL1]
  • ORA-00600 [kdt_bseg_srch_cbk PITL1]
  • ORA-00700: soft internal error, arguments: [PITL6]
  • ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5]

 

Footnote 2

While drafting the SQL script above, I decided to check to see how many other scripts I had already written about migrated rows and itl slots: there were 12 of the former and 10 of the latter, and reading through the notes I found that one of the scripts (itl_chain.sql), dated December 2002, included the following note:

According to a comment that came from Oracle support via Steve Adams, the reason for the extra ITLs is to avoid a risk of parallel DML causing an internal deadlock.

So it looks like I knew what the ITLs were for about 16 years ago, but managed to forget sometime since then.

 

 

November 30, 2018

Index rebuild bug

Filed under: Bugs,Indexing,Oracle — Jonathan Lewis @ 1:02 pm GMT Nov 30,2018

I tweeted a reference yesterday to a 9 year old article about index rebuilds, and this led me on to look for an item that I thought I’d written on a related topic. I hadn’t written it (so there’s another item on my todo list) but I did discover a draft I’d written a few years ago about an unpleasant side effect relating to rebuilding subpartitions of local indexes on composite partitioned tables. It’s probably the case that no-one will notice they’re suffering from it because it’s a bit of an edge case – but you might want to review the things your system does.

Here’s the scenario: you have a large table that is composite partitioned with roughly 180 daily partitions and 512 subpartitions (per partition). For some strange reason you have a couple of local indexes on the table that have been declared unusable – hoping, perhaps, that no-one ever does anything that makes Oracle decide to rebuild all the unusable bits.

One day you decide to rebuild just one subpartition of one of the indexes that isn’t marked as unusable. You might be planning to rebuild every single subpartition of that index overnight, but you’re going to start with just one to see how long it takes. Something very strange happens – and here’s a simple model to demonstrate:

rem
rem     Script:         index_rebuild_pt_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2015
rem
rem     Last tested
rem             18.3.0.0
rem             12.1.0.1
rem             11.2.0.4
rem

create table interval_hash (
        n1 number,
        n2 number,
        n3 number
)
segment creation immediate
partition by range(n1) interval (1)
subpartition by hash (n2) subpartitions 16 (
        partition p1 values less than (2) 
)
;


begin
        for i in 1 .. 16 loop
                for j in 1..64 loop
                        insert into interval_hash (n1, n2, n3) values (i, j, j + 64*(i-1));
                end loop;
        end loop;

        commit;
end;
/

create index ih_i1 on interval_hash(n1) local;
create index ih_i2 on interval_hash(n2) local;
create index ih_i3 on interval_hash(n3) local;

alter index ih_i1 unusable;
alter index ih_i2 unusable;

The code creates a table which extends as data arrives to have 16 partitions with 16 subpartitions each – for a total of 256 data segments. After loading the data I’ve created 3 local indexes on the table and made two of them unusable.

After setting up the table and indexes I’ve identified one subpartition of the table by name, enabled tracing, and rebuilt the corresponding subpartition of the index which is currently usable (the same effect appears if I rebuild a partition of one of the unusable indexes, but the phenomenon is slightly more surprising if you rebuild a usable subpartition). Here’s the code for the rebuild:


column max_subp new_value m_subp

select
        max(partition_name) max_subp
from
        user_segments
where
        segment_name = 'INTERVAL_HASH'
;

alter session set events '10046 trace name context forever, level 4';

alter index ih_i3 rebuild subpartition &m_subp;

alter session set events '10046 trace name context off';

Would you expect to find anything interesting in the trace file after running it through tkprof ? Here’s the most frequently executed SQL statement I found when running this test on an instance of Oracle 18.3:


tkprof or18_ora_24939.trc temp sort=execnt

SQL ID: 0yn07bvqs30qj Plan Hash: 866645418

select pctfree_stg, pctused_stg, size_stg,initial_stg, next_stg, minext_stg,
  maxext_stg, maxsiz_stg, lobret_stg,mintim_stg, pctinc_stg, initra_stg,
  maxtra_stg, optimal_stg, maxins_stg,frlins_stg, flags_stg, bfp_stg, enc_stg,
   cmpflag_stg, cmplvl_stg,imcflag_stg, ccflag_stg, flags2_stg
from
 deferred_stg$  where obj# =:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    512      0.02       0.03          0          0          0           0
Fetch      512      0.00       0.00          0       1536          0         512
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1025      0.03       0.03          0       1536          0         512

This query runs once for every single subpartition of the two unusable indexes. (There’s another statement that runs once for every partition of the two unusable indexes to provide the object numbers of the subpartitions and that shouldn’t be forgotten). In my example the impact and time to run doesn’t look too bad – but when the numbers climb to a couple of hundred thousand executions before you start to rebuild the first subpartition you might start to worry. Depending on the state of your data dictionary, and how you got to the point where you had so many unusable segments, the time to execute could become large, and you might do most of it all over again for the next subpartition!
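If you want to get an idea of the scale of the threat before you start rebuilding on a production system, a quick count of unusable index partitions and subpartitions tells you how many times those recursive statements are likely to run (switch to the dba_ views and add an owner predicate if necessary):

select  index_name, count(*)    unusable_subparts
from    user_ind_subpartitions
where   status = 'UNUSABLE'
group by
        index_name
;

select  index_name, count(*)    unusable_parts
from    user_ind_partitions
where   status = 'UNUSABLE'
group by
        index_name
;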

You might wonder why anyone would have a couple of unusable indexes. First, many years ago (in Practical Oracle 8i) I pointed out that if you wanted to create a new locally partitioned index you might want to create it unusable and then rebuild each partition in turn – that might not be a good idea any more. (The book also pointed out the requirement to think about sizing the dictionary cache (rowcache)).

Secondly, before the introduction of partial indexing it was possible to emulate the feature manually for local indexes by setting partitions and subpartitions unusable and allowing the optimizer to use table expansion to pick the best plan for partitions that had different index partitions still usable.

Finally if you are using the new partial indexing feature of 12.2 where you can set the default characteristic of a partitioned table to “indexing off”, and the default characteristic of an index to “indexing partial”, the partitions of any local index that are not created are deemed to be deferred – but you won’t see the effect in my example unless you modify it to include partial indexes and include an “alter system flush shared pool” just before the rebuild.
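For reference, here’s a sketch of the partial indexing syntax (a standalone example, not an extension of the interval_hash test above):

create table pt_range (
        id      number,
        v1      varchar2(10)
)
indexing off                            -- table-level default: partitions are not indexed
partition by range (id) (
        partition p200 values less than (200) indexing on,
        partition p400 values less than (400)                  -- inherits "indexing off"
)
;

create index pt_i1 on pt_range(v1) local indexing partial;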

Footnote

I’ve found this bug (or something very similar) on MoS: Bug 17335646 : ALTER INDEX IDX REBUILD SUBPARTITION SP VISITS EVERY INDEX AND SUBPARTION. However the bug was recorded against 11.2.0.3 and its status is: “31 – Could Not Reproduce. To Filer”. It does seem to be terribly easy to reproduce, though, provided you have a large number of unusable subpartitions in your indexes – so it’s possible the original bug appeared even when there were no unusable subpartitions (the customer comments about the bug don’t give any suggestion that there might be unusable indexes in place – and it seems unlikely that the 22 indexes mentioned were all unusable).

 

 

 

 

 

November 27, 2018

Counting Rows

Filed under: Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 9:08 pm GMT Nov 27,2018

Here’s another little utility I use from time to time (usually for small tables) to check how many rows there are in each block of the table, and which blocks are used. It doesn’t do anything clever, it just calls routines in the dbms_rowid package for each rowid in the table:


rem
rem     Rowid_count.sql
rem     Generic code to count rows per block in a table
rem     Ordered by file and block
rem

define m_table = '&1'

spool rowid_count

select 
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no, 
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from 
        &m_table        t1
group by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid) 
order by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid)
;


select
        rows_starting_in_block,
        count(*)        blocks
from
        (
        select 
                dbms_rowid.rowid_relative_fno(rowid), 
                dbms_rowid.rowid_block_number(rowid),
                count(*)                                rows_starting_in_block
        from 
                &m_table        t1
        group by 
                dbms_rowid.rowid_relative_fno(rowid), 
                dbms_rowid.rowid_block_number(rowid) 
        )
group by
        rows_starting_in_block
order by
        rows_starting_in_block
;

spool off


And here’s a sample of the output:


REL_FILE_NO   BLOCK_NO ROWS_STARTING_IN_BLOCK
----------- ---------- ----------------------
         22        131                    199
         22        132                    199
         22        133                    199
         22        134                    199
         22        135                     88
         22        138                    111

6 rows selected.


ROWS_STARTING_IN_BLOCK     BLOCKS
---------------------- ----------
                    88          1
                   111          1
                   199          4

3 rows selected.


Obviously it could take quite a lot of I/O and CPU to run the two queries against a large table – generally I use it when I want to pick a block to dump afterwards.
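For example, using the figures from the sample output above, I might pick the block holding just 88 rows and dump it with:

alter system dump datafile 22 block 135;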

Dump logfile

Filed under: Infrastructure,Oracle,redo — Jonathan Lewis @ 9:24 am GMT Nov 27,2018

Here’s a little procedure I’ve been using since Oracle 8i to dump the contents of the current log file – I’ve mentioned it several times in the past but never published it, so I’ll be checking for references to it and linking to it.

The code hasn’t changed in a long time, although I did add a query to get the full tracefile name from v$process when that became available. There’s also an (optional) call to dbms_support.mysid to pick up the SID of the current session that slid into the code when that package became available.


rem
rem     Script:         c_dump_log.sql
rem     Author:         Jonathan Lewis
rem     Dated:          December 2002
rem     Purpose:        Create procedure to dump the current online redo log file.
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             11.1.0.7
rem             11.2.0.6
rem             10.2.0.5
rem             10.1.0.4
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     Must be run as a DBA
rem     Very simple minded - no error trapping
rem     

create or replace procedure dump_log
as
        m_log_name      varchar2(255);
        m_process       varchar2(255);
        m_trace_name    varchar2(255);

begin
        select 
                lf.member
        into
                m_log_name
        from
                V$log           lo,
                v$logfile       lf
        where 
                lo.status = 'CURRENT'
        and     lf.group# = lo.group#
        and     rownum = 1
        ;

        execute immediate
        'alter system dump logfile ''' || m_log_name || '''';

        select
                spid
        into
                m_process
        from
                v$session       se,
                v$process       pr
        where
                se.sid = --dbms_support.mysid
                        (select sid from v$mystat where rownum = 1)
        and     pr.addr = se.paddr
        ;

        select
                tracefile
        into
                m_trace_name
        from
                v$session       se,
                v$process       pr
        where
                se.sid = --dbms_support.mysid
                        (select sid from v$mystat where rownum = 1)
        and     pr.addr = se.paddr
        ;

        dbms_output.put_line('Trace file is: ' || m_trace_name);
        dbms_output.put_line('Log file name is: ' || m_log_name);
        dbms_output.put_line('Trace file name includes: ' || m_process);


end;
/

show errors

drop public synonym dump_log;
create public synonym dump_log for dump_log;
grant execute on dump_log to public;

I don’t use the procedure often but if I want to find out what redo is generated during a test I usually follow the sequence:

  • alter system switch logfile;
  • do the experiment
  • execute dump_log

If you’re running in a PDB there’s an extra step needed as you can’t “switch logfile” inside a PDB so I’ll either do a log file switch before I start the test or (if there are steps in the test script that could generate a lot of log file I don’t want to see) I include a “pause” in the test script and use another session to do the logfile switch – in both cases the second session has to be connected to the CDB.
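So, as a sketch, the PDB version of the sequence looks something like this (the switch being issued from a second session connected to the CDB root):

-- Session 2, connected to the CDB root:
alter system switch logfile;

-- Session 1, in the PDB - run the experiment, then:
execute dump_log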

You will have noticed the creation of the public synonym and granting of the execute privilege to public. In my own sandbox database that’s a convenience – you may want to be a little more protective in your development and test systems.

The “dump logfile” command has a number of options for selective dumping – I have a note in my file commenting on these options, but I haven’t checked if there are any new ones (or changes to existing ones) for a long time:


alter system dump logfile '{filename}'
        scn min {first SCN to dump}
        scn max {last SCN to dump}
        time min {seconds since midnight at the end of 1st Sept 1987}
        time max {see redo_time_calc.sql}
        layer {integer} opcode {integer} e.g.:
                layer 23        Block Written Records
                layer 5         Undo handling in general
                layer 5 opcode 4        Undo Seg header on commit; or rollback;
                layer 9999 opcode 9999  Trick to validate the whole log file structure
        xid {usn} {slot} {sequence}     -- 10g only, may break on IMU redo (see below)
        objno {object_id}               -- 10g only, may break on IMU redo (see below)
        dba min {datafile no} . {blockno} -- with spaces either side of the dot.
        dba max {datafile no} . {blockno} -- with spaces either side of the dot.
        rba min {log file seq no} . {blockno} -- with spaces either side of the dot.
        rba max {log file seq no} . {blockno} -- with spaces either side of the dot.
(The dots in the last four options become invalid syntax in 10g).

The introduction to this note references back to a presentation I did in the year 2000, but the closing comment suggests that I probably haven’t checked the list since some time in the 10g timeline.

The reference to redo_time_calc.sql points to the following script, that expresses the time as the number of seconds since Jan 1988, with the unfortunate simplification that Oracle thinks there are 31 days in every month of the year:


rem
rem     Script:         redo_time_calc3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2012
rem     Purpose:
rem

select 
        86400 * (
                31 *
                        months_between(
                                trunc(sysdate,'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        ) +
                sysdate - trunc(sysdate,'MM')
        )       redo_now
from 
        dual
;



select 
        86400 * (
                (sysdate - 10/1440) - trunc((sysdate-10/1440),'MM') + 
                31 * 
                        months_between(
                                trunc((sysdate - 10/1440),'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        )
                )               ten_minutes_ago,
        86400 * (
                sysdate - trunc(sysdate,'MM') + 
                31 * 
                        months_between(
                                trunc(sysdate,'MM'),
                                to_date('01-Jan-1988','dd-mon-yyyy')
                        )
                )               redo_now,
        to_char(sysdate,'mm/dd/yyyy hh24:mi:ss')        now
from 
        dual
;

This isn’t a piece of code I use much – the original version (which I published in Oracle Core, p.241) was something I wrote in 2003 and had to adjust by hand each time I used it without realising that I’d got it wrong. Luckily someone pointed out my error and gave me the corrected code a little while after I’d published the book. (It was one of those “why didn’t I think of that” moments – it seemed so obvious after he’d told me the right answer.)

November 26, 2018

Shrink Space

Filed under: dbms_xplan,Execution plans,Oracle,Performance,subqueries — Jonathan Lewis @ 4:37 pm GMT Nov 26,2018

I have never been keen on the option to “shrink space” for a table because of the negative impact it can have on performance.

I don’t seem to have written about it in the blog but I think there’s something in one of my books pointing out that the command moves data from the “end” of the table (high extent ids) to the “start” of the table (low extent ids) by scanning the table backwards to find data that can be moved and scanning forwards to find space to put it. This strategy can have the effect of increasing the scattering of the data that you’re interested in querying if most of your queries are about “recent” data, and you have a pattern of slowly deleting aging data. (You may end up doing a range scan through a couple of hundred table blocks for data at the start of the table that was once packed into a few blocks near the end of the table.)
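For reference, the typical command sequence (sketched here for a generic table t) is:

alter table t enable row movement;

alter table t shrink space compact;     -- move rows towards the start of the segment, leave the HWM alone
alter table t shrink space;             -- move rows (if necessary) and lower the high water mark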

In a discussion with a member of the audience at the recent DOAG conference (we were talking about execution plans for queries that included filter subqueries) I suddenly thought of another reason why (for an unlucky person) the shrink space command could be a disaster – here’s a little fragment of code and output to demonstrate the point.


rem
rem     Script:         shrink_scalar_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2018
rem     Purpose:
rem
rem     Versions tested
rem             12.2.0.1
rem

select
        /*+ gather_plan_statistics pre-shrink */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from
                emp outer
        where
                outer.sal > (
                        select  /*+ no_unnest */
                                avg(inner.sal)
                        from
                                emp inner
                        where
                                inner.dept_no = outer.dept_no
                )
        )
;

alter table emp enable row movement;
alter table emp shrink space compact;

select
        /*+ gather_plan_statistics post-shrink  */
        count(*)
from    (
        select  /*+ no_merge */
                outer.*
        from emp outer
        where outer.sal >
                (
                        select /*+ no_unnest */ avg(inner.sal)
                        from emp inner
                        where inner.dept_no = outer.dept_no
                )
        )
;

The two queries are the same and the execution plans are the same (the shrink command doesn’t change the object statistics, after all), but the execution time jumped from 0.05 seconds to 9.43 seconds – and the difference in timing wasn’t about delayed block cleanout or other exotic side effects.


  COUNT(*)
----------
      9498

Elapsed: 00:00:00.05


  COUNT(*)
----------
      9498

Elapsed: 00:00:09.43

The query is engineered to have a problem, of course, and enabling rowsource execution statistics exaggerates the anomaly – but the threat is genuine. You may have seen my posting (now 12 years old) about the effects of scalar subquery caching – this is another example of the wrong item of data appearing in the wrong place making us lose the caching benefit. The emp table I’ve used here is (nearly) the same emp table I used in the 2006 posting, but the difference between this case and the previous case is that I updated a carefully selected row to an unlucky value in 2006, but here in 2018 the side effects of a call to shrink space moved a row from the end of the table (where it was doing no harm) to the start of the table (where it had a disastrous impact).

Here are the two execution plans – before and after the shrink space – showing the rowsource execution stats. Note particularly the number of times the filter subquery ran – jumping from 7 to 3172 – the impact this has on the buffer gets, and the change in time recorded:

----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:00.03 |    1880 |
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:00.03 |    1880 |
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:00.03 |    1880 |
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:00.03 |    1880 |
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |      7 |      1 |      7 |00:00:00.02 |    1645 |
|*  6 |      TABLE ACCESS FULL| EMP  |      7 |   2714 |  19001 |00:00:00.02 |    1645 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


----------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        |      1 |00:00:09.42 |     745K|
|   1 |  SORT AGGREGATE       |      |      1 |      1 |      1 |00:00:09.42 |     745K|
|   2 |   VIEW                |      |      1 |    136 |   9498 |00:00:11.71 |     745K|
|*  3 |    FILTER             |      |      1 |        |   9498 |00:00:11.70 |     745K|
|   4 |     TABLE ACCESS FULL | EMP  |      1 |  19001 |  19001 |00:00:00.01 |     235 |
|   5 |     SORT AGGREGATE    |      |   3172 |      1 |   3172 |00:00:09.40 |     745K|
|*  6 |      TABLE ACCESS FULL| EMP  |   3172 |   2714 |     10M|00:00:04.33 |     745K|
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("OUTER"."SAL">)
   6 - filter("INNER"."DEPT_NO"=:B1)


Footnote:

For completeness, here’s the code to generate the emp table. It’s sitting in a tablespace using system managed extents and automatic segment space management.


create table emp(
        dept_no         not null,
        sal,
        emp_no          not null,
        padding,
        constraint e_pk primary key(emp_no)
)
as
with generator as (
        select  null
        from    dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',60)
from
        generator       v1,
        generator       v2
where
        rownum <= 2e4 -- > comment to avoid wordpress format issue
;


-- Add one extreme row (a dept_no of its own, and the highest sal) at the end of the table,
-- then delete the first 1,000 rows to leave free space at the start of the table.
insert into emp values(432, 20001, 20001, rpad('x',60));
delete /*+ full(emp) */ from emp where emp_no <= 1000;      -- > comment to avoid wordpress format issue
commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'EMP',
                method_opt       => 'for all columns size 1'
        );
end;
/
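A side note on the footnote’s comment about the tablespace: shrink space requires the segment to be in an ASSM tablespace (as well as row movement being enabled, which is why the script includes the “alter table emp enable row movement”). A tablespace with the stated characteristics would look something like the sketch below – the name, datafile and size are mine, not part of the original test:

create tablespace test_8k_assm
        datafile '/u01/app/oracle/oradata/TEST/test_8k_assm.dbf' size 1g
        extent management local autoallocate
        segment space management auto
;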



 

November 24, 2018

Bait and Switch

Filed under: humour,Non-technical — Jonathan Lewis @ 9:56 am GMT Nov 24,2018

Just what you need after a long hike in the Lake District:

November 19, 2018

Table order

Filed under: ANSI Standard,Execution plans,Oracle,Tuning — Jonathan Lewis @ 1:30 pm GMT Nov 19,2018

Over the last few days I’ve highlighted on Twitter a couple of older posts showing how a change in the order that tables appear in the from clause could affect the execution plan of a query. In one case the note was purely theoretical, describing a feature of the way the optimizer works with simple query blocks; in the other case the note was about an anomaly with table elimination that could appear with both “ANSI” and “traditional” Oracle syntax.

Here’s another note that might be more generally useful – an example of an odd side effect of ordering and “ANSI” syntax, with a suggestion for a pattern for writing ANSI SQL. It’s based on a test I wrote to play around with a problem that showed up on the Oracle database forum more than six years ago and shows a strange inconsistency. The setup is a little long-winded as the example involves 4 tables, so I’ll leave the script to create, load and index the tables to the end of the note. Here’s the query that introduced the problem; it’s a fairly straightforward 4 table join with two (left) outer joins:


select
        episode.episode_id , episode.cross_ref_id , episode.date_required ,
        product.number_required,
        request.site_id
from
        episode
left join
        request
on      episode.cross_ref_id = request.cross_ref_id
join
        product
ON      episode.episode_id = product.episode_id
left join
        product_sub_type
ON      product.prod_sub_type_id = product_sub_type.prod_sub_type_id
where
        episode.department_id = 2
and     product.status = 'I'
order by
        episode.date_required
;

And here’s the execution plan:


----------------------------------------------------------------------------------------
| Id  | Operation            | Name    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |         | 33333 |  1725K|       | 17135   (4)| 00:00:01 |
|   1 |  SORT ORDER BY       |         | 33333 |  1725K|  2112K| 17135   (4)| 00:00:01 |
|*  2 |   HASH JOIN OUTER    |         | 33333 |  1725K|  1632K| 16742   (4)| 00:00:01 |
|*  3 |    HASH JOIN         |         | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| PRODUCT | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL| EPISODE |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    TABLE ACCESS FULL | REQUEST |  4000K|    57M|       | 13542   (3)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID"(+))
   3 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

The first thing you’ll notice, of course, is that the plan reports a three table join. Thanks to various referential integrity constraints, the absence of the table in the final select list, and the nature of the join to that table, the optimizer has determined that the product_sub_type table could be eliminated from the join without changing the result set.

What you can’t tell from the plan is that there’s an index on the request table that holds all the columns needed to satisfy the query, and an index fast full scan on the index would be significantly more efficient than the tablescan that appears at operation 6.

Having noticed from the plan that product_sub_type is redundant, the obvious thing to do before investigating further is to rewrite the statement to remove the table. Here’s the resulting query, with its execution plan:
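(The query below is reconstructed from the original simply by dropping the left join to product_sub_type; everything else is unchanged.)

select
        episode.episode_id , episode.cross_ref_id , episode.date_required ,
        product.number_required,
        request.site_id
from
        episode
left join
        request
on      episode.cross_ref_id = request.cross_ref_id
join
        product
ON      episode.episode_id = product.episode_id
where
        episode.department_id = 2
and     product.status = 'I'
order by
        episode.date_required
;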

----------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             | 33333 |  1725K|       |  5525   (6)| 00:00:01 |
|   1 |  SORT ORDER BY         |             | 33333 |  1725K|  2112K|  5525   (6)| 00:00:01 |
|*  2 |   HASH JOIN OUTER      |             | 33333 |  1725K|  1632K|  5132   (7)| 00:00:01 |
|*  3 |    HASH JOIN           |             | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | PRODUCT     | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL  | EPISODE     |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    INDEX FAST FULL SCAN| IX4_REQUEST |  4000K|    57M|       |  1932   (7)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EPISODE"."CROSS_REF_ID"="REQUEST"."CROSS_REF_ID"(+))
   3 - access("EPISODE"."EPISODE_ID"="PRODUCT"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

So when the optimizer removes product_sub_type from the query the plan reports a tablescan of request, but when we remove product_sub_type ourselves the plan reports an index fast full scan of an appropriate index – which, judging by the relative costs (1,932 vs. 13,542), appears to be roughly one seventh of the size of the table. It’s a little surprising that the optimizer didn’t get it right by itself – but “ANSI” style SQL often displays quirky little side effects because of the way the optimizer transforms it into traditional Oracle style.

We could stop at that point, of course, but then you’d wonder about the significance of the title of the post. So let’s play around with the join order of the original query, without removing the product_sub_type table.

As a general strategy (though not an absolute rule) I tend to arrange code so that outer joins don’t appear before “inner” joins. In this example that means I would have written the original statement as follows:


select
        episode.episode_id, episode.cross_ref_id, episode.date_required,
        product.number_required,
        request.site_id
from
        episode
join
        product
ON      product.episode_id = episode.episode_id
left join
        product_sub_type
ON      product_sub_type.prod_sub_type_id = product.prod_sub_type_id
left join
        request
on      request.cross_ref_id = episode.cross_ref_id
where
        episode.department_id = 2
and     product.status        = 'I'
order by
        episode.date_required
;

All I’ve done is move the join between episode and product up the SQL, following it with the outer join to product_sub_type, finally closing with the outer join between episode and request. Here’s the execution plan – which you might expect to look exactly like the original plan:


----------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             | 33333 |  1725K|       |  5525   (6)| 00:00:01 |
|   1 |  SORT ORDER BY         |             | 33333 |  1725K|  2112K|  5525   (6)| 00:00:01 |
|*  2 |   HASH JOIN OUTER      |             | 33333 |  1725K|  1632K|  5132   (7)| 00:00:01 |
|*  3 |    HASH JOIN           |             | 33333 |  1236K|       |   436   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | PRODUCT     | 33333 |   325K|       |    54  (12)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL  | EPISODE     |   300K|  8203K|       |   375   (6)| 00:00:01 |
|   6 |    INDEX FAST FULL SCAN| IX4_REQUEST |  4000K|    57M|       |  1932   (7)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("REQUEST"."CROSS_REF_ID"(+)="EPISODE"."CROSS_REF_ID")
   3 - access("PRODUCT"."EPISODE_ID"="EPISODE"."EPISODE_ID")
   4 - filter("PRODUCT"."STATUS"='I')
   5 - filter("EPISODE"."DEPARTMENT_ID"=2)

The product_sub_type table has been eliminated and we’re doing an index fast full scan of the ix4_request index instead of a tablescan of the much larger request table.

tl;dr

Changing the order of the tables in an ANSI join – especially when there are outer joins involved – could make a significant difference to the way the query is transformed and optimised. While it is nice to write the table ordering so that “chains” of joins are easily visible, bear in mind that re-ordering the join to postpone outer joins may be enough to help the optimizer produce a better execution plan.

Footnote

If you want to play around with the example, here’s the code to create and load the tables. The code doesn’t follow my usual style as most of it is cut-n-pasted from the Oracle forum thread:


rem
rem     script:         Ansi_outer_5.sql
rem     Dated:          July 2012
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             18.3.0.0        iffs still not used by default
rem             12.2.0.1        iffs still not used by default
rem

create table episode (
        episode_id number (*,0),
        department_id number (*,0),
        date_required date,
        cross_ref_id varchar2 (11),
        padding varchar2 (80),
        constraint pk_episode primary key (episode_id)
)
;

create table product_sub_type (
        prod_sub_type_id number (*,0),
        sub_type_name varchar2 (20),
        units varchar2 (20),
        padding varchar2 (80),
        constraint pk_product_sub_type primary key (prod_sub_type_id)
)
;

create table product (
        product_id number (*,0),
        prod_type_id number (*,0),
        prod_sub_type_id number (*,0),
        episode_id number (*,0),
        status varchar2 (1),
        number_required number (*,0),
        padding varchar2 (80),
        constraint pk_product primary key (product_id),
        constraint nn_product_episode check (episode_id is not null) 
)
;

alter table product add constraint fk_product 
        foreign key (episode_id) references episode (episode_id)
;

alter table product add constraint fk_prod_sub_type
        foreign key (prod_sub_type_id) references product_sub_type (prod_sub_type_id)
;

create table request (
        request_id number (*,0),
        department_id number (*,0),
        site_id number (*,0),
        cross_ref_id varchar2 (11),
        padding varchar2 (80),
        padding2 varchar2 (80),
        constraint pk_request primary key (request_id),
        constraint nn_request_department check (department_id is not null),
        constraint nn_request_site_id check (site_id is not null)
)
;

prompt  ===================
prompt  Loading episode ...
prompt  ===================

insert /*+ append */ into episode
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, 2,
    sysdate + mod (r, 14),
    to_char (r, '0000000000'),
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ' || to_char (r, '000000')
  from generator g
where g.r <= 3e5
/ 

commit;

prompt  ============================
prompt  Loading product_sub_type ...
prompt  ============================

insert /*+ append */ into product_sub_type
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, 
       to_char (r, '000000'),
       to_char (mod (r, 3), '000000'),
       'ABCDE' || to_char (r, '000000')
  from generator g
where g.r <= 15
/ 

commit;

prompt  ===================
prompt  Loading product ...
prompt  ===================

insert /*+ append */ into product
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e6
       ) 
select r, mod (r, 12) + 1, mod (r, 15) + 1, mod (r, 300000) + 1,
       decode (mod (r, 3), 0, 'I', 1, 'C', 2, 'X', 'U'),
       dbms_random.value (1, 100), NULL
  from generator g
where g.r <= 1e5
/ 

commit;

prompt  ===================
prompt  Loading request ...
prompt  ===================

insert /*+ append */ into request
with generator as 
(select rownum r
          from (select rownum r from dual connect by rownum <= 1000) a,
               (select rownum r from dual connect by rownum <= 1000) b,
               (select rownum r from dual connect by rownum <= 1000) c
         where rownum <= 1e7
       ) 
select 
        r, mod (r, 4) + 1, 1, to_char (r, '0000000000'),
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567890123456789' || to_char (r, '000000'),
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789012345678' || to_char (r, '000000')
  from generator g
where g.r <= 4e6
/ 

commit;

create index ix1_episode_cross_ref on episode (cross_ref_id);

create index ix1_product_episode on product (episode_id);
create index ix2_product_type on product (prod_type_id);

create index ix1_request_site on request (site_id);
create index ix2_request_dept on request (department_id);
create index ix3_request_cross_ref on request (cross_ref_id);
create index ix4_request on request (cross_ref_id, site_id);

exec dbms_stats.gather_schema_stats ('test_user')

Note that there is a call to gather_schema_stats() at the end, rather than a set of 4 calls to gather_table_stats(); you may want to change this. The entire data set, including indexes, will need about 1.5GB of free space.
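If you prefer explicit per-table calls, a sketch of the equivalent (not part of the original script) would be:

begin
        dbms_stats.gather_table_stats(user, 'EPISODE',          cascade => true);
        dbms_stats.gather_table_stats(user, 'PRODUCT',          cascade => true);
        dbms_stats.gather_table_stats(user, 'PRODUCT_SUB_TYPE', cascade => true);
        dbms_stats.gather_table_stats(user, 'REQUEST',          cascade => true);
end;
/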

 

November 15, 2018

num_index_keys

Filed under: 12c,Bugs,CBO,Execution plans,Hints,Oracle — Jonathan Lewis @ 1:13 pm GMT Nov 15,2018

The title is the name of an Oracle hint that came into existence in Oracle 10.2.0.3 and made an appearance recently in a question on the rarely used “My Oracle Support” Community forum (you’ll need a MOS account to be able to read the original). I wouldn’t have found it but the author also emailed me the link asking if I could take a look at it.  (If you want to ask me for help – without paying me, that is – then posting a public question in the Oracle (ODC) General Database or SQL forums and emailing me a private link is the strategy most likely to get an answer, by the way.)

The question was about a very simple query using a straightforward index – with a quirky change of plan after upgrading from 10.2.0.3 to 12.2.0.1. Setting the optimizer_features_enable to ‘10.2.0.3’ in the 12.2.0.1 system re-introduced the 10g execution plan. Here’s the query:


SELECT t1.*
   FROM   DW1.t1
  WHERE   t1.C1 = '0001' 
    AND   t1.C2 IN ('P', 'F', 'C')
    AND   t1.C3 IN (
                    '18110034450001',
                    '18110034450101',
                    '18110034450201',
                    '18110034450301',
                    '18110034450401',
                    '18110034450501'
          );
 

Information supplied: t1 holds about 500 million rows at roughly 20 rows per block, the primary key index is (c1, c2, c3, c4), there are just a few values for each of c1, c2 and c4, while c3 is “nearly unique” (which, for clarity, was expanded to “the number of distinct values of c3 is virtually the same as the number of rows in the table”).

At the moment we don’t have any information about histograms and we don’t know whether or not “nearly unique” might still allow a few values of c3 to have a large number of duplicates, so that’s something we might want to follow up on later.
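The sort of follow-up check would look something like the sketch below – the owner and table name are taken from the original query text, and the second query would be expensive on a 500M row table, so you might want to sample rather than aggregate the whole thing:

select  column_name, num_distinct, num_nulls, histogram
from    dba_tab_col_statistics
where   owner       = 'DW1'
and     table_name  = 'T1'
and     column_name in ('C1','C2','C3','C4')
order by
        column_name
;

-- warning: this one scans the whole (500M row) table
select  *
from    (
        select  c3, count(*)  ct
        from    dw1.t1
        group by
                c3
        order by
                count(*) desc
        )
where
        rownum <= 10
;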

Here are the execution plans – the fast one (from 10g) first, then the slow (12c) plan – and you should look carefully at the predicate section of the two plans:


10g (pulled from memory with rowsource execution statistics enabled)
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   1 |  INLIST ITERATOR             |                  |      1 |        |      6 |00:00:00.01 |      58 |      5 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1               |     18 |      5 |      6 |00:00:00.01 |      58 |      5 |
|*  3 |    INDEX RANGE SCAN          | PK_T1            |     18 |      5 |      6 |00:00:00.01 |      52 |      4 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P')) AND (("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR "C3"='18110034450401' OR
              "C3"='18110034450501')))

 

12c (from explain plan)
---------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                  |     1 |   359 |     7   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |                  |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1               |     1 |   359 |     7   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | PK_T1            |     1 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND ("T1"."C2"='C' OR "T1"."C2"='F' OR
              "T1"."C2"='P'))
       filter("C3"='18110034450001' OR "C3"='18110034450101' OR
              "C3"='18110034450201' OR "C3"='18110034450301' OR
              "C3"='18110034450401' OR "C3"='18110034450501')
  

When comparing plans it’s better, of course, to present the same sources from the two systems; it’s not entirely helpful to have the generated plan from explain plan for one version and a run-time plan with stats for the other – given the choice I’d like to see the run-time plan from both. Despite this, I felt fairly confident that the prediction would match the run-time for 12c and that I could at least guess the “starts” figure for 12c.

The important thing to notice is the way that the access predicate in 10g has split into an access predicate followed by a filter predicate in 12c. So 12c is going to iterate three times (once for each of the values ‘C’, ‘F’, ‘P’) and, on each iteration, walk a potentially huge linked list of index leaf blocks looking for the 6 values of c3, while 10g is going to probe the index 18 times (3 values of c2 x 6 values of c3) to find “nearly unique” rows, which probably means one leaf block per probe.

The 12c plan was taking minutes to run, the 10g plan was taking less than a second. The difference in execution time was probably the effect of the 12c plan ranging through (literally) thousands of index leaf blocks.

There are many bugs and anomalies relating to in-list iteration and index range scans and cardinality calculations – here’s a quick sample of v$system_fix_control in 12.2.0.1:


select optimizer_feature_enable ofe, sql_feature, bugno, description
from v$system_fix_control
where
	optimizer_feature_enable between '10.2.0.4' and '12.2.0.1'
and	(   sql_feature like '%CBO%'
	 or sql_feature like '%CARDINALITY%'
	)
and	(    lower(description) like '%list%'
	 or  lower(description) like '%iterat%'
	 or  lower(description) like '%multi%col%'
	)
order by optimizer_feature_enable, sql_feature, bugno
;

OFE        SQL_FEATURE                      BUGNO DESCRIPTION
---------- --------------------------- ---------- ----------------------------------------------------------------
10.2.0.4   QKSFM_CBO_5259048              5259048 undo unused inlist
           QKSFM_CBO_5634346              5634346 Relax equality operator restrictions for multicolumn inlists

10.2.0.5   QKSFM_CBO_7148689              7148689 Allow fix of bug 2218788 for in-list predicates

11.1.0.6   QKSFM_CBO_5139520              5139520 kkoDMcos: For PWJ on list dimension, use part/subpart bits

11.2.0.1   QKSFM_CBO_6818410              6818410 eliminate redundant inlist predicates

11.2.0.2   QKSFM_CBO_9069046              9069046 amend histogram column tracking for multicolumn stats

11.2.0.3   QKSFM_CARDINALITY_11876260    11876260 use index filter inlists with extended statistics
           QKSFM_CBO_10134677            10134677 No selectivity for transitive inlist predicate from equijoin
           QKSFM_CBO_11834739            11834739 adjust NDV for list partition key column after pruning
           QKSFM_CBO_11853331            11853331 amend index cost compare with inlists as filters
           QKSFM_CBO_12591120            12591120 check inlist out-of-range values with extended statistics

11.2.0.4   QKSFM_CARDINALITY_12828479    12828479 use dynamic sampling cardinality for multi-column join key check
           QKSFM_CARDINALITY_12864791    12864791 adjust for NULLs once for multiple inequalities on nullable colu
           QKSFM_CARDINALITY_13362020    13362020 fix selectivity for skip scan filter with multi column stats
           QKSFM_CARDINALITY_14723910    14723910 limit multi column group selectivity due to NDV of inlist column
           QKSFM_CARDINALITY_6873091      6873091 trim histograms based on in-list predicates
           QKSFM_CBO_13850256            13850256 correct estimates for transitive inlist predicate with equijoin

12.2.0.1   QKSFM_CARDINALITY_19847091    19847091 selectivity caching for inlists
           QKSFM_CARDINALITY_22533539    22533539 multi-column join sanity checks for table functions
           QKSFM_CARDINALITY_23019286    23019286 Fix cdn estimation with multi column stats on fixed data types
           QKSFM_CARDINALITY_23102649    23102649 correction to inlist element counting with constant expressions
           QKSFM_CBO_17973658            17973658 allow partition pruning due to multi-inlist iterator
           QKSFM_CBO_21057343            21057343 order predicate list
           QKSFM_CBO_22272439            22272439 correction to inlist element counting with bind variables

There are also a number of system parameters relating to inlists that are new (or have changed values) in 12.2.0.1 when compared with 10.2.0.3 – but I’m not going to go into those right now.

I was sufficiently curious about this anomaly that I emailed the OP to say I would be happy to take a look at the 10053 trace files for the query – the files probably weren’t going to be very large given that it was only a single table query – but in the end it turned out that I solved the problem before he’d had time to email them. (Warning – don’t email me a 10053 file on spec; if I want one I’ll ask for it.)
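(For anyone who does want to capture a 10053 trace of their own query, a minimal sketch of the traditional event syntax is below – remember the trace is written only when the statement is actually optimised, so you may need to force a hard parse, e.g. by editing the query text slightly.)

alter session set tracefile_identifier = 'num_index_keys';
alter session set events '10053 trace name context forever, level 1';

-- run (and hard-parse) the problem query here

alter session set events '10053 trace name context off';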

Based on the description I created an initial model of the problem – it took about 10 minutes to code:


rem     Tested on 12.2.0.1, 18.3.0.1

drop table t1 purge;

create table t1 (
	c1 varchar2(4) not null,
	c2 varchar2(1) not null,
	c3 varchar2(15) not null,
	c4 varchar2(4)  not null,
	v1 varchar2(250)
)
;

insert into t1
with g as (
	select rownum id 
	from dual
	connect by level <= 1e4 -- > hint to avoid wordpress format issue
)
select
	'0001',
	chr(65 + mod(rownum,11)),
	'18110034'||lpad(1+100*rownum,7,'0'),
	lpad(mod(rownum,9),4,'0'),
	rpad('x',250,'x')
from
	g,g
where
        rownum <= 1e5 -- > hint to avoid wordpress format issue
;


create unique index t1_i1 on t1(c1, c2, c3, c4);

begin
        dbms_stats.gather_table_stats(
                null,
                't1',
                method_opt => 'for all columns size 1'
        );
end;
/

alter session set statistics_level = all;
set serveroutput off

prompt	==========================
prompt	Default optimizer features
prompt	==========================

select
        /*+ optimizer_features_enable('12.2.0.1') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

select 
        /*+ optimizer_features_enable('10.2.0.3') */
	t1.*
FROM	t1
WHERE
	t1.c1 = '0001' 
AND	t1.c2 in ('H', 'F', 'C')
AND	t1.c3 in (
		'18110034450001',
		'18110034450101',
		'18110034450201',
		'18110034450301',
		'18110034450401',
		'18110034450501'
	)
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

alter session set statistics_level = all;
set serveroutput off

The two queries produced the same plan – regardless of the setting for optimizer_features_enable – it was the plan originally used by the OP’s 10g setting:


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |    20 (100)|      0 |00:00:00.01 |      35 |
|   1 |  INLIST ITERATOR             |       |      1 |        |            |      0 |00:00:00.01 |      35 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     18 |      2 |    20   (0)|      0 |00:00:00.01 |      35 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     18 |      2 |    19   (0)|      0 |00:00:00.01 |      35 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')) AND
              (("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501')))

There was one important difference between the 10g and the 12c plans – in 10g the cost of the table access (hence the cost of the total query) was 20; in 12c it jumped to 28 – maybe there’s a change in the arithmetic for costing the iterator, and maybe that’s sufficient to cause a problem.

Before going further it’s worth checking what the costs would look like (and, indeed, whether the plan is possible in both versions) if we force Oracle into the “bad” plan. That’s where we finally get to the hint in the title of this piece. If I add the hint /*+ num_index_keys(t1 t1_i1 2) */ what’s going to happen? (Technically I’ve also included a hint to use the index, and specified the query block name, to make sure Oracle doesn’t decide to switch to a tablescan):


select
        /*+
            optimizer_features_enable('12.2.0.1')
            index_rs_asc(@sel$1 t1@sel$1 (t1.c1 t1.c2 t1.c3 t1.c4))
            num_index_keys(@sel$1 t1@sel$1 t1_i1 2)
        */
        t1.*
FROM        t1
WHERE
        t1.c1 = '0001'
AND        t1.c2 in ('H', 'F', 'C')
AND        t1.c3 in (
                '18110034450001',
                '18110034450101',
                '18110034450201',
                '18110034450301',
                '18110034450401',
                '18110034450501'
        )
;

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |   150 (100)|      0 |00:00:00.01 |     154 |      1 |
|   1 |  INLIST ITERATOR                     |       |      1 |        |            |      0 |00:00:00.01 |     154 |      1 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      3 |     18 |   150   (2)|      0 |00:00:00.01 |     154 |      1 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |      3 |     18 |   142   (3)|      0 |00:00:00.01 |     154 |      1 |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"='0001' AND (("T1"."C2"='C' OR "T1"."C2"='F' OR "T1"."C2"='H')))
       filter(("T1"."C3"='18110034450001' OR "T1"."C3"='18110034450101' OR "T1"."C3"='18110034450201' OR
              "T1"."C3"='18110034450301' OR "T1"."C3"='18110034450401' OR "T1"."C3"='18110034450501'))

This was the plan from 12.2.0.1 – and again the plan for 10.2.0.3 was identical except for costs which became 140 for the index range scan and 141 for the table access. At first sight it looks like 10g may be using the total selectivity of the entire query as the scaling factor for the index clustering_factor to find the table cost while 12c uses the cost of accessing the table for one iteration (rounding up) before multiplying by the number of iterations.

Having observed this detail I thought I’d do a quick test of what happened by default if I requested 145 distinct values of c3. Both versions defaulted to the access/filter path rather than the pure access path – but again there was a difference in costs. The 10g index cost was 140 with a table access cost of 158, while 12c had an index cost of 179 and a table cost of 372. So both versions switch plans at some point – do they switch at the same point? Reader, I could not resist temptation, so I ran a test loop. With my data set the 12c version switched paths at 61 values in the in-list and 10g switched at 53 values.
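I haven’t reproduced the loop here, but a sketch of the idea – using the t1 / t1_i1 model above, building an in-list of N values, explaining the query, then reporting whether the c3 predicates show up as filter predicates on the index range scan – might look like this (the variable names, loop limits and output format are mine):

set serveroutput on

declare
        m_list          varchar2(32767);
        m_sql           varchar2(32767);
        m_filter        varchar2(4000);
begin
        for n in 2 .. 80 loop

                -- build an in-list of n distinct c3 values, matching the data pattern
                m_list := null;
                for i in 1 .. n loop
                        m_list := m_list ||
                                  case when i > 1 then ',' end ||
                                  '''18110034' || lpad(1 + 100 * i, 7, '0') || '''';
                end loop;

                m_sql := 'select t1.* from t1 where t1.c1 = ''0001''' ||
                         ' and t1.c2 in (''H'', ''F'', ''C'')'        ||
                         ' and t1.c3 in (' || m_list || ')';

                delete from plan_table;
                execute immediate 'explain plan for ' || m_sql;

                -- sketch only: assumes the index range scan is chosen;
                -- hint the query if the optimizer switches to a tablescan
                select  max(filter_predicates)
                into    m_filter
                from    plan_table
                where   operation = 'INDEX'
                and     options   = 'RANGE SCAN';

                dbms_output.put_line(
                        lpad(n,3) || ' values of c3: ' ||
                        case
                                when m_filter like '%C3%' then 'access/filter'
                                else                           'pure access'
                        end
                );

        end loop;
end;
/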

Conclusion: there’s been a change in the selectivity calculations for the use of in-list iterators, which leads to a change in costs, which can lead to a change in plans; the OP was just unlucky with his data set and stats. Possibly there’s something about his data or stats that makes the switch appear with a much smaller in-list than mine.

Footnote:

When I responded to the thread on MOSC with the suggestion that the problem was in part due to statistics and might be affected by out of date stats (or a histogram on the (low-frequency) c2 column) the OP noted that stats hadn’t been gathered since some time in August – and found that the 12c path changed to the efficient (10g) one after re-gathering stats on the table.

 

November 13, 2018

Index Splits – 3

Filed under: Indexing,Infrastructure,Oracle,redo — Jonathan Lewis @ 8:06 pm GMT Nov 13,2018

This is stored only for reference, and in case anyone wants to wade through the details. It’s the redo log dump from the 90/10 index leaf block split test in the previous blog posts, running on 11.2.0.4 on Linux. The first part is the full block dump; the second part is an extract of the Record and Change vector headings, with the embedded opcode (opc:) for the undo records in the redo vectors, and a tiny note of what each change vector is doing.
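(If you want to produce a dump like this yourself, the basic mechanism is the “dump logfile” command – a sketch below; the logfile name is the one that appears in the dump header, and in a real test you would pick the online log that was current while the split took place.)

alter session set tracefile_identifier = 'split_90_10';
alter system dump logfile '/u01/app/oracle/oradata/TEST/onlinelog/o1_mf_2_bfjt3tsl_.log';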

Full dump


*** 2018-11-02 19:16:45.844
*** SESSION ID:(244.47) 2018-11-02 19:16:45.844
*** CLIENT ID:() 2018-11-02 19:16:45.844
*** SERVICE NAME:(SYS$USERS) 2018-11-02 19:16:45.844
*** MODULE NAME:(MyModule) 2018-11-02 19:16:45.844
*** ACTION NAME:(MyAction) 2018-11-02 19:16:45.844
 
----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 2, level: 1)
   leaf: 0x140008f 20971663 (-1: nrow: 147 rrow: 147)
   leaf: 0x140008c 20971660 (0: nrow: 145 rrow: 145)
----- end tree dump
----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 3, level: 1)
   leaf: 0x140008f 20971663 (-1: nrow: 147 rrow: 147)
   leaf: 0x140008c 20971660 (0: nrow: 145 rrow: 145)
   leaf: 0x140008d 20971661 (1: nrow: 1 rrow: 1)
----- end tree dump
Initial buffer sizes: read 1024K, overflow 832K, change 805K
 
DUMP OF REDO FROM FILE '/u01/app/oracle/oradata/TEST/onlinelog/o1_mf_2_bfjt3tsl_.log'
 Opcodes *.*
 RBAs: 0x000000.00000000.0000 thru 0xffffffff.ffffffff.ffff
 SCNs: scn: 0x0000.00000000 thru scn: 0xffff.ffffffff
 Times: creation thru eternity
 FILE HEADER:
        Compatibility Vsn = 186647552=0xb200400
        Db ID=2124268929=0x7e9dc581, Db Name='TEST'
        Activation ID=2124229505=0x7e9d2b81
        Control Seq=28320068=0x1b02144, File size=204800=0x32000
        File Number=2, Blksiz=512, File Type=2 LOG
 descrip:"Thread 0001, Seq# 0000012654, SCN 0x0b860d004016-0xffffffffffff"
 thread: 1 nab: 0xffffffff seq: 0x0000316e hws: 0x1 eot: 1 dis: 0
 resetlogs count: 0x313da101 scn: 0x0000.00000001 (1)
 prev resetlogs count: 0x0 scn: 0x0000.00000000
 Low  scn: 0x0b86.0d004016 (12670371643414) 11/02/2018 19:16:45
 Next scn: 0xffff.ffffffff 01/01/1988 00:00:00
 Enabled scn: 0x0000.00000001 (1) 09/14/2013 14:28:50
 Thread closed scn: 0x0b86.0d004016 (12670371643414) 11/02/2018 19:16:45
 Disk cksum: 0xfc42 Calc cksum: 0xfc42
 Terminal recovery stop scn: 0x0000.00000000
 Terminal recovery  01/01/1988 00:00:00
 Most recent redo scn: 0x0000.00000000
 Largest LWN: 0 blocks
 End-of-redo stream : No
 Unprotected mode
 Miscellaneous flags: 0x800000
 Thread internal enable indicator: thr: 0, seq: 0 scn: 0x0000.00000000
 Zero blocks: 0
 Format ID is 2
 redo log key is 16409c48687eb1bbc3e4a4d820a6f7e5
 redo log key flag is d
 Enabled redo threads: 1 
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000002.0010 LEN: 0x0074 VLD: 0x05
SCN: 0x0b86.0d004016 SUBSCN:  1 11/02/2018 19:16:45
(LWN RBA: 0x00316e.00000002.0010 LEN: 0038 NST: 0001 SCN: 0x0b86.0d004016)
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d003f20 SEQ:2 OP:4.1 ENC:0 RBL:0
Block cleanout record, scn:  0x0b86.0d004016 ver: 0x01 opt: 0x02, entries follow...
  itli: 2  flg: 2  scn: 0x0b86.0d003f20
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000002.0084 LEN: 0x0144 VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  2 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d003fe6 SEQ:1 OP:5.2 ENC:0 RBL:0
ktudh redo: slt: 0x000d sqn: 0x00009d48 flg: 0x0412 siz: 136 fbi: 0
            uba: 0x00c05593.2254.1d    pxid:  0x0000.000.00000000
CHANGE #2 TYP:0 CLS:76 AFN:3 DBA:0x00c05593 OBJ:4294967295 SCN:0x0b86.0d003fe5 SEQ:2 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 136 spc: 2818 flg: 0x0012 seq: 0x2254 rec: 0x1d
            xid:  0x001e.00d.00009d48  
ktubl redo: slt: 13 rci: 0 opc: 10.21 [objn: 350367 objd: 350367 tsn: 7]
Undo type:  Regular undo        Begin trans    Last buffer split:  No 
Temp Object:  No 
Tablespace Undo:  No 
             0x00000000  prev ctl uba: 0x00c05593.2254.1b 
prev ctl max cmt scn:  0x0b86.0d003da4  prev tx cmt scn:  0x0b86.0d003da5 
txn start scn:  0xffff.ffffffff  logon user: 62  prev brb: 0  prev bcl: 0 BuExt idx: 0 flg2: 0
index general undo (branch) operations
KTB Redo 
op: 0x04  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: L  itl: xid:  0x0019.01d.00009119 uba: 0x00c1702b.1ea1.02
                      flg: CB--    lkc:  0     scn: 0x0b86.0d003e8f
Dump kdige : block dba :0x0140008c, seghdr dba: 0x0140008a
unlock block
(1):  01
CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.6 ENC:0 RBL:0
index redo (kdxlok):  lock block, count=2
KTB Redo 
op: 0x01  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x001e.00d.00009d48    uba: 0x00c05593.2254.1d
lock itl 1
operation = 0x5, pre-split
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000002.01c8 LEN: 0x20a4 VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  3 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.2 ENC:0 RBL:0
ktudh redo: slt: 0x000d sqn: 0x00000000 flg: 0x000a siz: 8148 fbi: 255
            uba: 0x00c05595.2254.01    pxid:  0x0000.000.00000000
CHANGE #2 TYP:1 CLS:76 AFN:3 DBA:0x00c05595 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 8148 spc: 2680 flg: 0x000a seq: 0x2254 rec: 0x01
            xid:  0x001e.00d.00009d48  
ktubu redo: slt: 13 rci: 0 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
Undo type:  Regular undo       Undo type:  Last buffer split:  No 
Tablespace Undo:  No 
             0x00c05593
index general undo (branch) operations
KTB Redo 
op: 0x05  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: R  itc: 2
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x001e.00d.00009d48  0x00c05593.2254.1d  ----    1  fsc 0x0000.00000000
0x02   0x001d.012.0000a9d7  0x00c04b88.256f.10  C---    0  scn 0x0b86.0d003f20
Dump kdige : block dba :0x0140008c, seghdr dba: 0x0140008a
restore block before image
(8032): 
 00 01 85 02 01 00 00 00 91 00 46 01 5c 01 16 00 00 00 00 00 00 00 00 00 8f
 00 40 01 06 00 00 00 60 1f 00 00 2b 1f f6 1e c1 1e 8c 1e 57 1e 22 1e ed 1d
 b8 1d 83 1d 4e 1d 19 1d e4 1c af 1c 7a 1c 45 1c 10 1c db 1b a6 1b 71 1b 3c
 1b 07 1b d2 1a 9d 1a 68 1a 33 1a fe 19 c9 19 94 19 5f 19 2a 19 f5 18 c0 18
 8b 18 56 18 21 18 ec 17 b7 17 82 17 4d 17 18 17 e3 16 ae 16 79 16 44 16 0f
 16 da 15 a5 15 70 15 3b 15 06 15 d1 14 9c 14 68 14 33 14 fe 13 c9 13 94 13
 5f 13 2a 13 f5 12 c0 12 8b 12 56 12 21 12 ec 11 b7 11 82 11 4d 11 18 11 e3
 10 ae 10 79 10 44 10 0f 10 da 0f a5 0f 70 0f 3b 0f 06 0f d1 0e 9c 0e 67 0e
 32 0e fd 0d c8 0d 93 0d 5e 0d 29 0d f4 0c bf 0c 8a 0c 55 0c 20 0c eb 0b b6
 0b 81 0b 4c 0b 17 0b e2 0a ad 0a 78 0a 43 0a 0e 0a d9 09 a4 09 6f 09 3a 09
 05 09 d0 08 9b 08 66 08 31 08 fc 07 c7 07 92 07 5d 07 28 07 f3 06 be 06 89
 06 54 06 1f 06 ea 05 b5 05 80 05 4b 05 16 05 e1 04 ac 04 77 04 42 04 0d 04
 d8 03 a3 03 6e 03 39 03 04 03 cf 02 9a 02 65 02 30 02 fb 01 c6 01 91 01 5c
 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 01 40 00 85 00 05 03 c2 03 5d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 85 00 04 03 c2 03 5c 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 85 00 03 03 c2 03 5b 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 85 00 02 03 c2 03 5a 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 85 00 01 03 c2 03 59 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 85 00 00 03 c2 03 58
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 46 03
 c2 03 57 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86
 00 45 03 c2 03 56 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 86 00 44 03 c2 03 55 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 86 00 43 03 c2 03 54 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 86 00 42 03 c2 03 53 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 86 00 41 03 c2 03 52 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 40 03 c2 03 51 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 3f 03 c2 03 50 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 3e 03 c2
 03 4f 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00
 3d 03 c2 03 4e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 86 00 3c 03 c2 03 4d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 86 00 3b 03 c2 03 4c 28 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 00 00 01 40 00 86 00 3a 03 c2 03 4b 28 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 00 00 01 40 00 86 00 39 03 c2 03 4a 28 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 38 03 c2 03 49 28 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 37 03 c2 03 48 28 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 36 03 c2 03
 47 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 35
 03 c2 03 46 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00
 86 00 34 03 c2 03 45 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00
 01 40 00 86 00 33 03 c2 03 44 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 86 00 32 03 c2 03 43 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 86 00 31 03 c2 03 42 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 86 00 30 03 c2 03 41 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 2f 03 c2 03 40 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 2e 03 c2 03 3f
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 2d 03
 c2 03 3e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86
 00 2c 03 c2 03 3d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 86 00 2b 03 c2 03 3c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 86 00 2a 03 c2 03 3b 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 86 00 29 03 c2 03 3a 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 86 00 28 03 c2 03 39 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 27 03 c2 03 38 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 26 03 c2 03 37 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 25 03 c2
 03 36 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00
 24 03 c2 03 35 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 86 00 23 03 c2 03 34 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 86 00 22 03 c2 03 33 28 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 00 00 01 40 00 86 00 21 03 c2 03 32 28 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 00 00 01 40 00 86 00 20 03 c2 03 31 28 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 1f 03 c2 03 30 28 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 1e 03 c2 03 2f 28 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 1d 03 c2 03
 2e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 1c
 03 c2 03 2d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00
 86 00 1b 03 c2 03 2c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00
 01 40 00 86 00 1a 03 c2 03 2b 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 86 00 19 03 c2 03 2a 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 86 00 18 03 c2 03 29 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 86 00 17 03 c2 03 28 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 16 03 c2 03 27 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 15 03 c2 03 26
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 14 03
 c2 03 25 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86
 00 13 03 c2 03 24 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 86 00 12 03 c2 03 23 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 86 00 11 03 c2 03 22 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 86 00 10 03 c2 03 21 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 86 00 0f 03 c2 03 20 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 0e 03 c2 03 1f 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 0d 03 c2 03 1e 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 0c 03 c2
 03 1d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00
 0b 03 c2 03 1c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 86 00 0a 03 c2 03 1b 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 86 00 09 03 c2 03 1a 28 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 00 00 01 40 00 86 00 08 03 c2 03 19 28 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 00 00 01 40 00 86 00 07 03 c2 03 18 28 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 06 03 c2 03 17 28 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 05 03 c2 03 16 28 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 04 03 c2 03
 15 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 86 00 03
 03 c2 03 14 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00
 86 00 02 03 c2 03 13 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00
 01 40 00 86 00 01 03 c2 03 12 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 86 00 00 03 c2 03 11 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 84 00 46 03 c2 03 10 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 84 00 45 03 c2 03 0f 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 44 03 c2 03 0e 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 43 03 c2 03 0d
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 42 03
 c2 03 0c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84
 00 41 03 c2 03 0b 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 84 00 40 03 c2 03 0a 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 84 00 3f 03 c2 03 09 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 84 00 3e 03 c2 03 08 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 84 00 3d 03 c2 03 07 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 3c 03 c2 03 06 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 3b 03 c2 03 05 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 3a 03 c2
 03 04 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00
 39 03 c2 03 03 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 84 00 38 03 c2 03 02 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 84 00 37 02 c2 03 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 84 00 36 03 c2 02 64 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 84 00 35 03 c2 02 63 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 84 00 34 03 c2 02 62 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 33 03 c2 02 61 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 32 03 c2 02 60
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 31 03
 c2 02 5f 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84
 00 30 03 c2 02 5e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 84 00 2f 03 c2 02 5d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 84 00 2e 03 c2 02 5c 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 84 00 2d 03 c2 02 5b 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 84 00 2c 03 c2 02 5a 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 2b 03 c2 02 59 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 2a 03 c2 02 58 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 29 03 c2
 02 57 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00
 28 03 c2 02 56 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 84 00 27 03 c2 02 55 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 84 00 26 03 c2 02 54 28 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 00 00 01 40 00 84 00 25 03 c2 02 53 28 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 00 00 01 40 00 84 00 24 03 c2 02 52 28 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 23 03 c2 02 51 28 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 22 03 c2 02 50 28 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 21 03 c2 02
 4f 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 20
 03 c2 02 4e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00
 84 00 1f 03 c2 02 4d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00
 01 40 00 84 00 1e 03 c2 02 4c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 84 00 1d 03 c2 02 4b 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 84 00 1c 03 c2 02 4a 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 00 00 01 40 00 84 00 1b 03 c2 02 49 28 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 1a 03 c2 02 48 28 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 19 03 c2 02 47
 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 18 03
 c2 02 46 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84
 00 17 03 c2 02 45 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01
 40 00 84 00 16 03 c2 02 44 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 00 00 01 40 00 84 00 15 03 c2 02 43 28 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 00 00 01 40 00 84 00 14 03 c2 02 42 28 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 00 00 01 40 00 84 00 13 03 c2 02 41 28 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 12 03 c2 02 40 28 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 11 03 c2 02 3f 28
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 10 03 c2
 02 3e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00
 0f 03 c2 02 3d 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40
 00 84 00 0e 03 c2 02 3c 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00
 00 01 40 00 84 00 0d 03 c2 02 3b 28 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 00 00 01 40 00 84 00 0c 03 c2 02 3a 28 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 00 00 01 40 00 84 00 0b 03 c2 02 39 28 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 0a 03 c2 02 38 28 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 09 03 c2 02 37 28 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 08 03 c2 02
 36 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00 84 00 07
 03 c2 02 35 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00 01 40 00
 84 00 06 03 c2 02 34 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 00 00
 01 40 00 84 00 05 03 c2 02 33 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 00 00 01 40 00 84 00 04 03 c2 02 32 28 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 00 00 01 40 00 84 00 03 03 c2 02 31 28 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78
CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:2 OP:10.9 ENC:0 RBL:0
index redo (kdxair): apply xat do to itl 1 (count=2)
KTB Redo 
op: 0x05  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: R  itc: 1
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x001e.00d.00009d48  0x00c05595.2254.01  -B--    1  fsc 0x0000.00000000
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000013.017c LEN: 0x0044 VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  3 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:8 AFN:5 DBA:0x01400088 OBJ:350367 SCN:0x0b86.0d003e8e SEQ:3 OP:13.22 ENC:0 RBL:0
Redo on Level1 Bitmap Block
Redo for state change
Len: 1 Offset: 4 newstate: 140243567116289
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000013.01c0 LEN: 0x01ac VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  4 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:2 OP:5.2 ENC:0 RBL:0
ktudh redo: slt: 0x000d sqn: 0x00000000 flg: 0x000a siz: 120 fbi: 0
            uba: 0x00c05596.2254.01    pxid:  0x0000.000.00000000
CHANGE #2 TYP:1 CLS:76 AFN:3 DBA:0x00c05596 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 120 spc: 0 flg: 0x000a seq: 0x2254 rec: 0x01
            xid:  0x001e.00d.00009d48  
ktubu redo: slt: 13 rci: 0 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
Undo type:  Regular undo       Undo type:  Last buffer split:  No 
Tablespace Undo:  No 
             0x00c05595
index general undo (branch) operations
KTB Redo 
op: 0x05  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: R  itc: 2
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
Dump kdige : block dba :0x0140008d, seghdr dba: 0x0140008a
make leaf block empty
(2):  01 00
CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008d OBJ:350367 SCN:0x0b86.0d003e8e SEQ:2 OP:10.8 ENC:0 RBL:0
index redo (kdxlne): (count=4) init header of newly allocated leaf block
KTB Redo 
op: 0x05  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: R  itc: 2
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x001e.00d.00009d48  0x00c05596.2254.01  -B--    0  fsc 0x0000.00000000
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
kdxlnitl = 1
kdxlnnco = 2
kdxlndsz = 6
kdxlncol = 246
kdxlnflg = 0
kdxlnnxt = 0x0
kdxlnprv = 0x140008c
new block has 1 rows
dumping row index
Dump of memory from 0x00007F8D3FC10078 to 0x00007F8D3FC1007C
7F8D3FC10070                   1F5D1F26                   [&.].]    
dumping rows
Dump of memory from 0x00007F8D3FC1007C to 0x00007F8D3FC100B1
7F8D3FC10070                            40010101              [...@]
7F8D3FC10080 06008500 5E03C203 78787828 78787878  [.......^(xxxxxxx]
7F8D3FC10090 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC100B0 30303078                             [x000]            
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000014.017c LEN: 0x0048 VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  4 11/02/2018 19:16:45
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008b OBJ:350367 SCN:0x0b86.0d003e8f SEQ:2 OP:4.1 ENC:0 RBL:0
Block cleanout record, scn:  0x0b86.0d004016 ver: 0x01 opt: 0x02, entries follow...
  itli: 1  flg: 2  scn: 0x0b86.0d003e8f
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000014.01c4 LEN: 0x00e0 VLD: 0x01
SCN: 0x0b86.0d004016 SUBSCN:  5 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:76 AFN:3 DBA:0x00c05596 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:2 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 84 spc: 8028 flg: 0x0022 seq: 0x2254 rec: 0x02
            xid:  0x001e.00d.00009d48  
ktubu redo: slt: 13 rci: 1 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
Undo type:  Regular undo       Undo type:  Last buffer split:  No 
Tablespace Undo:  No 
             0x00000000
index general undo (branch) operations
KTB Redo 
op: 0x04  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: L  itl: xid:  0x0019.01d.00009119 uba: 0x00c17029.1ea1.01
                      flg: CB--    lkc:  0     scn: 0x0b86.0d003e8f
Dump kdige : block dba :0x0140008b, seghdr dba: 0x0140008a
branch block row purge
(4):  01 00 01 00
CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008b OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.15 ENC:0 RBL:0
index redo (kdxbin) :  insert branch block row, count=3
KTB Redo 
op: 0x01  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x001e.00d.00009d48    uba: 0x00c05596.2254.02
REDO itl: 1 insert into slot 1, child dba 0x140008d
new key : (5):  03 c2 03 5e fe
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000015.00b4 LEN: 0x1fb0 VLD: 0x01
SCN: 0x0b86.0d004017 SUBSCN:  1 11/02/2018 19:16:45
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:3 OP:5.4 ENC:0 RBL:0
ktucm redo: slt: 0x000d sqn: 0x00009d48 srt: 0 sta: 9 flg: 0x2 ktucf redo: uba: 0x00c05596.2254.02 ext: 14 spc: 7942 fbi: 0 
CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:3 OP:10.8 ENC:0 RBL:0
index redo (kdxlne): (count=4) init leaf block being split
zeroed lock count and free space, kdxlenxt = 0x140008d
new block has 145 rows
dumping row index
Dump of memory from 0x00007F8D3FC10028 to 0x00007F8D3FC1014C
7F8D3FC10020                   00370000 00A5006E          [..7.n...]
7F8D3FC10030 011300DC 0181014A 01EF01B8 025D0226  [....J.......&.].]
7F8D3FC10040 02CB0294 03390302 03A70370 041503DE  [......9.p.......]
7F8D3FC10050 0483044C 04F104BA 055F0528 05CD0596  [L.......(._.....]
7F8D3FC10060 063B0604 06A90672 071706E0 0785074E  [..;.r.......N...]
7F8D3FC10070 07F307BC 0861082A 08CF0898 093D0906  [....*.a.......=.]
7F8D3FC10080 09AB0974 0A1909E2 0A870A50 0AF50ABE  [t.......P.......]
7F8D3FC10090 0B620B2C 0BD00B99 0C3E0C07 0CAC0C75  [,.b.......>.u...]
7F8D3FC100A0 0D1A0CE3 0D880D51 0DF60DBF 0E640E2D  [....Q.......-.d.]
7F8D3FC100B0 0ED20E9B 0F400F09 0FAE0F77 101C0FE5  [......@.w.......]
7F8D3FC100C0 108A1053 10F810C1 1166112F 11D4119D  [S......./.f.....]
7F8D3FC100D0 1242120B 12B01279 131E12E7 138C1355  [..B.y.......U...]
7F8D3FC100E0 13FA13C3 14681431 14D6149F 1544150D  [....1.h.......D.]
7F8D3FC100F0 15B2157B 162015E9 168E1657 16FC16C5  [{..... .W.......]
7F8D3FC10100 176A1733 17D817A1 1846180F 18B4187D  [3.j.......F.}...]
7F8D3FC10110 192218EB 19901959 19FE19C7 1A6C1A35  [..".Y.......5.l.]
7F8D3FC10120 1ADA1AA3 1B481B11 1BB61B7F 1C241BED  [......H.......$.]
7F8D3FC10130 1C921C5B 1D001CC9 1D6E1D37 1DDC1DA5  [[.......7.n.....]
7F8D3FC10140 1E4A1E13 1EB81E81 1F261EEF           [..J.......&.]    
dumping rows
Dump of memory from 0x00007F8D3FC1014C to 0x00007F8D3FC11F50
7F8D3FC10140                            40010000              [...@]
7F8D3FC10150 03008400 3102C203 78787828 78787878  [.......1(xxxxxxx]
7F8D3FC10160 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10180 01000078 00840040 02C20304 78782832  [x...@.......2(xx]
7F8D3FC10190 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC101B0 78787878 00007878 84004001 C2030500  [xxxxxx...@......]
7F8D3FC101C0 78283302 78787878 78787878 78787878  [.3(xxxxxxxxxxxxx]
7F8D3FC101D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC101E0 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC101F0 03060084 283402C2 78787878 78787878  [......4(xxxxxxxx]
7F8D3FC10200 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10220 40010000 07008400 3502C203 78787828  [...@.......5(xxx]
7F8D3FC10230 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10250 78787878 01000078 00840040 02C20308  [xxxxx...@.......]
7F8D3FC10260 78782836 78787878 78787878 78787878  [6(xxxxxxxxxxxxxx]
7F8D3FC10270 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10280 78787878 78787878 00007878 84004001  [xxxxxxxxxx...@..]
7F8D3FC10290 C2030900 78283702 78787878 78787878  [.....7(xxxxxxxxx]
7F8D3FC102A0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC102B0 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC102C0 00400100 030A0084 283802C2 78787878  [..@.......8(xxxx]
7F8D3FC102D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC102F0 78787878 40010000 0B008400 3902C203  [xxxx...@.......9]
7F8D3FC10300 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC10310 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10320 78787878 78787878 01000078 00840040  [xxxxxxxxx...@...]
7F8D3FC10330 02C2030C 7878283A 78787878 78787878  [....:(xxxxxxxxxx]
7F8D3FC10340 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10350 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC10360 84004001 C2030D00 78283B02 78787878  [.@.......;(xxxxx]
7F8D3FC10370 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10390 00787878 00400100 030E0084 283C02C2  [xxx...@.......<(]
7F8D3FC103A0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC103C0 78787878 78787878 40010000 0F008400  [xxxxxxxx...@....]
7F8D3FC103D0 3D02C203 78787828 78787878 78787878  [...=(xxxxxxxxxxx]
7F8D3FC103E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC103F0 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC10400 00840040 02C20310 7878283E 78787878  [@.......>(xxxxxx]
7F8D3FC10410 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10430 00007878 84004001 C2031100 78283F02  [xx...@.......?(x]
7F8D3FC10440 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10460 78787878 00787878 00400100 03120084  [xxxxxxx...@.....]
7F8D3FC10470 284002C2 78787878 78787878 78787878  [..@(xxxxxxxxxxxx]
7F8D3FC10480 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10490 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC104A0 13008400 4102C203 78787828 78787878  [.......A(xxxxxxx]
7F8D3FC104B0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC104D0 01000078 00840040 02C20314 78782842  [x...@.......B(xx]
7F8D3FC104E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10500 78787878 00007878 84004001 C2031500  [xxxxxx...@......]
7F8D3FC10510 78284302 78787878 78787878 78787878  [.C(xxxxxxxxxxxxx]
7F8D3FC10520 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10530 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC10540 03160084 284402C2 78787878 78787878  [......D(xxxxxxxx]
7F8D3FC10550 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10570 40010000 17008400 4502C203 78787828  [...@.......E(xxx]
7F8D3FC10580 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC105A0 78787878 01000078 00840040 02C20318  [xxxxx...@.......]
7F8D3FC105B0 78782846 78787878 78787878 78787878  [F(xxxxxxxxxxxxxx]
7F8D3FC105C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC105D0 78787878 78787878 00007878 84004001  [xxxxxxxxxx...@..]
7F8D3FC105E0 C2031900 78284702 78787878 78787878  [.....G(xxxxxxxxx]
7F8D3FC105F0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10600 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC10610 00400100 031A0084 284802C2 78787878  [..@.......H(xxxx]
7F8D3FC10620 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10640 78787878 40010000 1B008400 4902C203  [xxxx...@.......I]
7F8D3FC10650 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC10660 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10670 78787878 78787878 01000078 00840040  [xxxxxxxxx...@...]
7F8D3FC10680 02C2031C 7878284A 78787878 78787878  [....J(xxxxxxxxxx]
7F8D3FC10690 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC106A0 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC106B0 84004001 C2031D00 78284B02 78787878  [.@.......K(xxxxx]
7F8D3FC106C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC106E0 00787878 00400100 031E0084 284C02C2  [xxx...@.......L(]
7F8D3FC106F0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10710 78787878 78787878 40010000 1F008400  [xxxxxxxx...@....]
7F8D3FC10720 4D02C203 78787828 78787878 78787878  [...M(xxxxxxxxxxx]
7F8D3FC10730 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10740 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC10750 00840040 02C20320 7878284E 78787878  [@... ...N(xxxxxx]
7F8D3FC10760 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10780 00007878 84004001 C2032100 78284F02  [xx...@...!...O(x]
7F8D3FC10790 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC107B0 78787878 00787878 00400100 03220084  [xxxxxxx...@...".]
7F8D3FC107C0 285002C2 78787878 78787878 78787878  [..P(xxxxxxxxxxxx]
7F8D3FC107D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC107E0 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC107F0 23008400 5102C203 78787828 78787878  [...#...Q(xxxxxxx]
7F8D3FC10800 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10820 01000078 00840040 02C20324 78782852  [x...@...$...R(xx]
7F8D3FC10830 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10850 78787878 00007878 84004001 C2032500  [xxxxxx...@...%..]
7F8D3FC10860 78285302 78787878 78787878 78787878  [.S(xxxxxxxxxxxxx]
7F8D3FC10870 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10880 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC10890 03260084 285402C2 78787878 78787878  [..&...T(xxxxxxxx]
7F8D3FC108A0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC108C0 40010000 27008400 5502C203 78787828  [...@...'...U(xxx]
7F8D3FC108D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC108F0 78787878 01000078 00840040 02C20328  [xxxxx...@...(...]
7F8D3FC10900 78782856 78787878 78787878 78787878  [V(xxxxxxxxxxxxxx]
7F8D3FC10910 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10920 78787878 78787878 00007878 84004001  [xxxxxxxxxx...@..]
7F8D3FC10930 C2032900 78285702 78787878 78787878  [.)...W(xxxxxxxxx]
7F8D3FC10940 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10950 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC10960 00400100 032A0084 285802C2 78787878  [..@...*...X(xxxx]
7F8D3FC10970 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10990 78787878 40010000 2B008400 5902C203  [xxxx...@...+...Y]
7F8D3FC109A0 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC109B0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC109C0 78787878 78787878 01000078 00840040  [xxxxxxxxx...@...]
7F8D3FC109D0 02C2032C 7878285A 78787878 78787878  [,...Z(xxxxxxxxxx]
7F8D3FC109E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC109F0 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC10A00 84004001 C2032D00 78285B02 78787878  [.@...-...[(xxxxx]
7F8D3FC10A10 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10A30 00787878 00400100 032E0084 285C02C2  [xxx...@.......\(]
7F8D3FC10A40 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10A60 78787878 78787878 40010000 2F008400  [xxxxxxxx...@.../]
7F8D3FC10A70 5D02C203 78787828 78787878 78787878  [...](xxxxxxxxxxx]
7F8D3FC10A80 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10A90 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC10AA0 00840040 02C20330 7878285E 78787878  [@...0...^(xxxxxx]
7F8D3FC10AB0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10AD0 00007878 84004001 C2033100 78285F02  [xx...@...1..._(x]
7F8D3FC10AE0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10B00 78787878 00787878 00400100 03320084  [xxxxxxx...@...2.]
7F8D3FC10B10 286002C2 78787878 78787878 78787878  [..`(xxxxxxxxxxxx]
7F8D3FC10B20 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10B30 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC10B40 33008400 6102C203 78787828 78787878  [...3...a(xxxxxxx]
7F8D3FC10B50 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10B70 01000078 00840040 02C20334 78782862  [x...@...4...b(xx]
7F8D3FC10B80 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10BA0 78787878 00007878 84004001 C2033500  [xxxxxx...@...5..]
7F8D3FC10BB0 78286302 78787878 78787878 78787878  [.c(xxxxxxxxxxxxx]
7F8D3FC10BC0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10BD0 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC10BE0 03360084 286402C2 78787878 78787878  [..6...d(xxxxxxxx]
7F8D3FC10BF0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10C10 40010000 37008400 2803C202 78787878  [...@...7...(xxxx]
7F8D3FC10C20 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10C40 78787878 40010000 38008400 0203C203  [xxxx...@...8....]
7F8D3FC10C50 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC10C60 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10C70 78787878 78787878 01000078 00840040  [xxxxxxxxx...@...]
7F8D3FC10C80 03C20339 78782803 78787878 78787878  [9....(xxxxxxxxxx]
7F8D3FC10C90 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10CA0 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC10CB0 84004001 C2033A00 78280403 78787878  [.@...:....(xxxxx]
7F8D3FC10CC0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10CE0 00787878 00400100 033B0084 280503C2  [xxx...@...;....(]
7F8D3FC10CF0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10D10 78787878 78787878 40010000 3C008400  [xxxxxxxx...@...<]
7F8D3FC10D20 0603C203 78787828 78787878 78787878  [....(xxxxxxxxxxx]
7F8D3FC10D30 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10D40 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC10D50 00840040 03C2033D 78782807 78787878  [@...=....(xxxxxx]
7F8D3FC10D60 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10D80 00007878 84004001 C2033E00 78280803  [xx...@...>....(x]
7F8D3FC10D90 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10DB0 78787878 00787878 00400100 033F0084  [xxxxxxx...@...?.]
7F8D3FC10DC0 280903C2 78787878 78787878 78787878  [...(xxxxxxxxxxxx]
7F8D3FC10DD0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10DE0 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC10DF0 40008400 0A03C203 78787828 78787878  [...@....(xxxxxxx]
7F8D3FC10E00 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10E20 01000078 00840040 03C20341 7878280B  [x...@...A....(xx]
7F8D3FC10E30 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10E50 78787878 00007878 84004001 C2034200  [xxxxxx...@...B..]
7F8D3FC10E60 78280C03 78787878 78787878 78787878  [..(xxxxxxxxxxxxx]
7F8D3FC10E70 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10E80 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC10E90 03430084 280D03C2 78787878 78787878  [..C....(xxxxxxxx]
7F8D3FC10EA0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10EC0 40010000 44008400 0E03C203 78787828  [...@...D....(xxx]
7F8D3FC10ED0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10EF0 78787878 01000078 00840040 03C20345  [xxxxx...@...E...]
7F8D3FC10F00 7878280F 78787878 78787878 78787878  [.(xxxxxxxxxxxxxx]
7F8D3FC10F10 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10F20 78787878 78787878 00007878 84004001  [xxxxxxxxxx...@..]
7F8D3FC10F30 C2034600 78281003 78787878 78787878  [.F....(xxxxxxxxx]
7F8D3FC10F40 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10F50 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC10F60 00400100 03000086 281103C2 78787878  [..@........(xxxx]
7F8D3FC10F70 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC10F90 78787878 40010000 01008600 1203C203  [xxxx...@........]
7F8D3FC10FA0 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC10FB0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10FC0 78787878 78787878 01000078 00860040  [xxxxxxxxx...@...]
7F8D3FC10FD0 03C20302 78782813 78787878 78787878  [.....(xxxxxxxxxx]
7F8D3FC10FE0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC10FF0 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC11000 86004001 C2030300 78281403 78787878  [.@........(xxxxx]
7F8D3FC11010 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11030 00787878 00400100 03040086 281503C2  [xxx...@........(]
7F8D3FC11040 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11060 78787878 78787878 40010000 05008600  [xxxxxxxx...@....]
7F8D3FC11070 1603C203 78787828 78787878 78787878  [....(xxxxxxxxxxx]
7F8D3FC11080 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11090 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC110A0 00860040 03C20306 78782817 78787878  [@........(xxxxxx]
7F8D3FC110B0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC110D0 00007878 86004001 C2030700 78281803  [xx...@........(x]
7F8D3FC110E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11100 78787878 00787878 00400100 03080086  [xxxxxxx...@.....]
7F8D3FC11110 281903C2 78787878 78787878 78787878  [...(xxxxxxxxxxxx]
7F8D3FC11120 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11130 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC11140 09008600 1A03C203 78787828 78787878  [........(xxxxxxx]
7F8D3FC11150 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11170 01000078 00860040 03C2030A 7878281B  [x...@........(xx]
7F8D3FC11180 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC111A0 78787878 00007878 86004001 C2030B00  [xxxxxx...@......]
7F8D3FC111B0 78281C03 78787878 78787878 78787878  [..(xxxxxxxxxxxxx]
7F8D3FC111C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC111D0 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC111E0 030C0086 281D03C2 78787878 78787878  [.......(xxxxxxxx]
7F8D3FC111F0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11210 40010000 0D008600 1E03C203 78787828  [...@........(xxx]
7F8D3FC11220 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11240 78787878 01000078 00860040 03C2030E  [xxxxx...@.......]
7F8D3FC11250 7878281F 78787878 78787878 78787878  [.(xxxxxxxxxxxxxx]
7F8D3FC11260 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11270 78787878 78787878 00007878 86004001  [xxxxxxxxxx...@..]
7F8D3FC11280 C2030F00 78282003 78787878 78787878  [..... (xxxxxxxxx]
7F8D3FC11290 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC112A0 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC112B0 00400100 03100086 282103C2 78787878  [..@.......!(xxxx]
7F8D3FC112C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC112E0 78787878 40010000 11008600 2203C203  [xxxx...@......."]
7F8D3FC112F0 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC11300 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11310 78787878 78787878 01000078 00860040  [xxxxxxxxx...@...]
7F8D3FC11320 03C20312 78782823 78787878 78787878  [....#(xxxxxxxxxx]
7F8D3FC11330 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11340 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC11350 86004001 C2031300 78282403 78787878  [.@.......$(xxxxx]
7F8D3FC11360 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11380 00787878 00400100 03140086 282503C2  [xxx...@.......%(]
7F8D3FC11390 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC113B0 78787878 78787878 40010000 15008600  [xxxxxxxx...@....]
7F8D3FC113C0 2603C203 78787828 78787878 78787878  [...&(xxxxxxxxxxx]
7F8D3FC113D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC113E0 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC113F0 00860040 03C20316 78782827 78787878  [@.......'(xxxxxx]
7F8D3FC11400 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11420 00007878 86004001 C2031700 78282803  [xx...@.......((x]
7F8D3FC11430 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11450 78787878 00787878 00400100 03180086  [xxxxxxx...@.....]
7F8D3FC11460 282903C2 78787878 78787878 78787878  [..)(xxxxxxxxxxxx]
7F8D3FC11470 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11480 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC11490 19008600 2A03C203 78787828 78787878  [.......*(xxxxxxx]
7F8D3FC114A0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC114C0 01000078 00860040 03C2031A 7878282B  [x...@.......+(xx]
7F8D3FC114D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC114F0 78787878 00007878 86004001 C2031B00  [xxxxxx...@......]
7F8D3FC11500 78282C03 78787878 78787878 78787878  [.,(xxxxxxxxxxxxx]
7F8D3FC11510 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11520 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC11530 031C0086 282D03C2 78787878 78787878  [......-(xxxxxxxx]
7F8D3FC11540 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11560 40010000 1D008600 2E03C203 78787828  [...@........(xxx]
7F8D3FC11570 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11590 78787878 01000078 00860040 03C2031E  [xxxxx...@.......]
7F8D3FC115A0 7878282F 78787878 78787878 78787878  [/(xxxxxxxxxxxxxx]
7F8D3FC115B0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC115C0 78787878 78787878 00007878 86004001  [xxxxxxxxxx...@..]
7F8D3FC115D0 C2031F00 78283003 78787878 78787878  [.....0(xxxxxxxxx]
7F8D3FC115E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC115F0 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC11600 00400100 03200086 283103C2 78787878  [..@... ...1(xxxx]
7F8D3FC11610 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11630 78787878 40010000 21008600 3203C203  [xxxx...@...!...2]
7F8D3FC11640 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC11650 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11660 78787878 78787878 01000078 00860040  [xxxxxxxxx...@...]
7F8D3FC11670 03C20322 78782833 78787878 78787878  ["...3(xxxxxxxxxx]
7F8D3FC11680 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11690 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC116A0 86004001 C2032300 78283403 78787878  [.@...#...4(xxxxx]
7F8D3FC116B0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC116D0 00787878 00400100 03240086 283503C2  [xxx...@...$...5(]
7F8D3FC116E0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11700 78787878 78787878 40010000 25008600  [xxxxxxxx...@...%]
7F8D3FC11710 3603C203 78787828 78787878 78787878  [...6(xxxxxxxxxxx]
7F8D3FC11720 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11730 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC11740 00860040 03C20326 78782837 78787878  [@...&...7(xxxxxx]
7F8D3FC11750 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11770 00007878 86004001 C2032700 78283803  [xx...@...'...8(x]
7F8D3FC11780 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC117A0 78787878 00787878 00400100 03280086  [xxxxxxx...@...(.]
7F8D3FC117B0 283903C2 78787878 78787878 78787878  [..9(xxxxxxxxxxxx]
7F8D3FC117C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC117D0 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC117E0 29008600 3A03C203 78787828 78787878  [...)...:(xxxxxxx]
7F8D3FC117F0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11810 01000078 00860040 03C2032A 7878283B  [x...@...*...;(xx]
7F8D3FC11820 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11840 78787878 00007878 86004001 C2032B00  [xxxxxx...@...+..]
7F8D3FC11850 78283C03 78787878 78787878 78787878  [.<(xxxxxxxxxxxxx]
7F8D3FC11860 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11870 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC11880 032C0086 283D03C2 78787878 78787878  [..,...=(xxxxxxxx]
7F8D3FC11890 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC118B0 40010000 2D008600 3E03C203 78787828  [...@...-...>(xxx]
7F8D3FC118C0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC118E0 78787878 01000078 00860040 03C2032E  [xxxxx...@.......]
7F8D3FC118F0 7878283F 78787878 78787878 78787878  [?(xxxxxxxxxxxxxx]
7F8D3FC11900 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11910 78787878 78787878 00007878 86004001  [xxxxxxxxxx...@..]
7F8D3FC11920 C2032F00 78284003 78787878 78787878  [./...@(xxxxxxxxx]
7F8D3FC11930 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11940 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC11950 00400100 03300086 284103C2 78787878  [..@...0...A(xxxx]
7F8D3FC11960 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11980 78787878 40010000 31008600 4203C203  [xxxx...@...1...B]
7F8D3FC11990 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC119A0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC119B0 78787878 78787878 01000078 00860040  [xxxxxxxxx...@...]
7F8D3FC119C0 03C20332 78782843 78787878 78787878  [2...C(xxxxxxxxxx]
7F8D3FC119D0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC119E0 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC119F0 86004001 C2033300 78284403 78787878  [.@...3...D(xxxxx]
7F8D3FC11A00 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11A20 00787878 00400100 03340086 284503C2  [xxx...@...4...E(]
7F8D3FC11A30 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11A50 78787878 78787878 40010000 35008600  [xxxxxxxx...@...5]
7F8D3FC11A60 4603C203 78787828 78787878 78787878  [...F(xxxxxxxxxxx]
7F8D3FC11A70 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11A80 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC11A90 00860040 03C20336 78782847 78787878  [@...6...G(xxxxxx]
7F8D3FC11AA0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11AC0 00007878 86004001 C2033700 78284803  [xx...@...7...H(x]
7F8D3FC11AD0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11AF0 78787878 00787878 00400100 03380086  [xxxxxxx...@...8.]
7F8D3FC11B00 284903C2 78787878 78787878 78787878  [..I(xxxxxxxxxxxx]
7F8D3FC11B10 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11B20 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC11B30 39008600 4A03C203 78787828 78787878  [...9...J(xxxxxxx]
7F8D3FC11B40 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11B60 01000078 00860040 03C2033A 7878284B  [x...@...:...K(xx]
7F8D3FC11B70 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11B90 78787878 00007878 86004001 C2033B00  [xxxxxx...@...;..]
7F8D3FC11BA0 78284C03 78787878 78787878 78787878  [.L(xxxxxxxxxxxxx]
7F8D3FC11BB0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11BC0 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC11BD0 033C0086 284D03C2 78787878 78787878  [..<...M(xxxxxxxx]
7F8D3FC11BE0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11C00 40010000 3D008600 4E03C203 78787828  [...@...=...N(xxx]
7F8D3FC11C10 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11C30 78787878 01000078 00860040 03C2033E  [xxxxx...@...>...]
7F8D3FC11C40 7878284F 78787878 78787878 78787878  [O(xxxxxxxxxxxxxx]
7F8D3FC11C50 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11C60 78787878 78787878 00007878 86004001  [xxxxxxxxxx...@..]
7F8D3FC11C70 C2033F00 78285003 78787878 78787878  [.?...P(xxxxxxxxx]
7F8D3FC11C80 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11C90 78787878 78787878 78787878 00787878  [xxxxxxxxxxxxxxx.]
7F8D3FC11CA0 00400100 03400086 285103C2 78787878  [..@...@...Q(xxxx]
7F8D3FC11CB0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11CD0 78787878 40010000 41008600 5203C203  [xxxx...@...A...R]
7F8D3FC11CE0 78787828 78787878 78787878 78787878  [(xxxxxxxxxxxxxxx]
7F8D3FC11CF0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11D00 78787878 78787878 01000078 00860040  [xxxxxxxxx...@...]
7F8D3FC11D10 03C20342 78782853 78787878 78787878  [B...S(xxxxxxxxxx]
7F8D3FC11D20 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11D30 78787878 78787878 78787878 00007878  [xxxxxxxxxxxxxx..]
7F8D3FC11D40 86004001 C2034300 78285403 78787878  [.@...C...T(xxxxx]
7F8D3FC11D50 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11D70 00787878 00400100 03440086 285503C2  [xxx...@...D...U(]
7F8D3FC11D80 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11DA0 78787878 78787878 40010000 45008600  [xxxxxxxx...@...E]
7F8D3FC11DB0 5603C203 78787828 78787878 78787878  [...V(xxxxxxxxxxx]
7F8D3FC11DC0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11DD0 78787878 78787878 78787878 01000078  [xxxxxxxxxxxxx...]
7F8D3FC11DE0 00860040 03C20346 78782857 78787878  [@...F...W(xxxxxx]
7F8D3FC11DF0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11E10 00007878 85004001 C2030000 78285803  [xx...@.......X(x]
7F8D3FC11E20 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11E40 78787878 00787878 00400100 03010085  [xxxxxxx...@.....]
7F8D3FC11E50 285903C2 78787878 78787878 78787878  [..Y(xxxxxxxxxxxx]
7F8D3FC11E60 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11E70 78787878 78787878 78787878 40010000  [xxxxxxxxxxxx...@]
7F8D3FC11E80 02008500 5A03C203 78787828 78787878  [.......Z(xxxxxxx]
7F8D3FC11E90 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11EB0 01000078 00850040 03C20303 7878285B  [x...@.......[(xx]
7F8D3FC11EC0 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
        Repeat 1 times
7F8D3FC11EE0 78787878 00007878 85004001 C2030400  [xxxxxx...@......]
7F8D3FC11EF0 78285C03 78787878 78787878 78787878  [.\(xxxxxxxxxxxxx]
7F8D3FC11F00 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
7F8D3FC11F10 78787878 78787878 00787878 00400100  [xxxxxxxxxxx...@.]
7F8D3FC11F20 03050085 285D03C2 78787878 78787878  [......](xxxxxxxx]
7F8D3FC11F30 78787878 78787878 78787878 78787878  [xxxxxxxxxxxxxxxx]
  Repeat 1 times
 
REDO RECORD - Thread:1 RBA: 0x00316e.00000025.0164 LEN: 0x0330 VLD: 0x09
SCN: 0x0b86.0d004018 SUBSCN:  1 11/02/2018 19:16:45
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x01400085 OBJ:350366 SCN:0x0b86.0d003f20 SEQ:2 OP:11.2 ENC:0 RBL:0
KTB Redo 
op: 0x11  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x0019.004.00009104    uba: 0x00c1702a.1ea1.2a
Block cleanout record, scn:  0x0b86.0d004016 ver: 0x01 opt: 0x02, entries follow...
  itli: 1  flg: 2  scn: 0x0b86.0d003f1f
  itli: 2  flg: 2  scn: 0x0b86.0d003f20
KDO Op code: IRP row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x01400085  hdba: 0x01400082
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 slot: 6(0x6) size/delt: 99
fb: --H-FL-- lb: 0x1  cc: 3
null: ---
col  0: [ 3]  c2 03 5e
col  1: [40]
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
col  2: [50]
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
CHANGE #2 TYP:0 CLS:65 AFN:3 DBA:0x00c000a8 OBJ:4294967295 SCN:0x0b86.0d003fdc SEQ:1 OP:5.2 ENC:0 RBL:0
ktudh redo: slt: 0x0004 sqn: 0x00009104 flg: 0x0012 siz: 136 fbi: 0
            uba: 0x00c1702a.1ea1.2a    pxid:  0x0000.000.00000000
CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008d OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.5 ENC:0 RBL:0
index redo (kdxlre):  restore leaf row
KTB Redo 
op: 0x01  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x0019.004.00009104    uba: 0x00c1702a.1ea1.2b
REDO: SINGLE / -- / -- 
itl: 2, sno: 0, row size 55
CHANGE #4 TYP:0 CLS:65 AFN:3 DBA:0x00c000a8 OBJ:4294967295 SCN:0x0b86.0d004018 SEQ:1 OP:5.4 ENC:0 RBL:0
ktucm redo: slt: 0x0004 sqn: 0x00009104 srt: 0 sta: 9 flg: 0x2 ktucf redo: uba: 0x00c1702a.1ea1.2b ext: 60 spc: 2072 fbi: 0 
CHANGE #5 TYP:0 CLS:66 AFN:3 DBA:0x00c1702a OBJ:4294967295 SCN:0x0b86.0d003fdb SEQ:2 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 136 spc: 2324 flg: 0x0012 seq: 0x1ea1 rec: 0x2a
            xid:  0x0019.004.00009104  
ktubl redo: slt: 4 rci: 0 opc: 11.1 [objn: 350366 objd: 350366 tsn: 7]
Undo type:  Regular undo        Begin trans    Last buffer split:  No 
Temp Object:  No 
Tablespace Undo:  No 
             0x00000000  prev ctl uba: 0x00c1702a.1ea1.28 
prev ctl max cmt scn:  0x0b86.0d003d78  prev tx cmt scn:  0x0b86.0d003d88 
txn start scn:  0x0b86.0d004017  logon user: 62  prev brb: 12677159  prev bcl: 0 BuExt idx: 0 flg2: 0
KDO undo record:
KTB Redo 
op: 0x04  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: L  itl: xid:  0x001e.013.0000a096 uba: 0x00c05594.2254.32
                      flg: C---    lkc:  0     scn: 0x0b86.0d003f1f
KDO Op code: DRP row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x01400085  hdba: 0x01400082
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 slot: 6(0x6)
CHANGE #6 TYP:0 CLS:66 AFN:3 DBA:0x00c1702a OBJ:4294967295 SCN:0x0b86.0d004018 SEQ:1 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 112 spc: 2186 flg: 0x0022 seq: 0x1ea1 rec: 0x2b
            xid:  0x0019.004.00009104  
ktubu redo: slt: 4 rci: 42 opc: 10.22 objn: 350367 objd: 350367 tsn: 7
Undo type:  Regular undo       Undo type:  Last buffer split:  No 
Tablespace Undo:  No 
             0x00000000
index undo for leaf key operations
KTB Redo 
op: 0x03  ver: 0x01  
compat bit: 4 (post-11) padding: 1
op: Z
Dump kdilk : itl=2, kdxlkflg=0x1 sdc=0 indexid=0x140008a block=0x0140008d
(kdxlpu): purge leaf row
key :(45): 
 03 c2 03 5e 28 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
END OF REDO DUMP
----- Redo read statistics for thread 1 -----
Read rate (ASYNC): 19Kb in 0.02s => 0.93 Mb/sec
Total redo bytes: 1023Kb Longest record: 8Kb, moves: 0/9 moved: 0Mb (0%)
Longest LWN: 19Kb, reads: 1 
Last redo scn: 0x0b86.0d004018 (12670371643416)
Change vector header moves = 0/22 (0%)
----------------------------------------------
[oracle@linux11 trace]$ 



Summary

REDO RECORD - Thread:1 RBA: 0x00316e.00000002.0010 LEN: 0x0074 VLD: 0x05
(LWN RBA: 0x00316e.00000002.0010 LEN: 0038 NST: 0001 SCN: 0x0b86.0d004016)
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d003f20 SEQ:2 OP:4.1 ENC:0 RBL:0
        Cleanout (the DBA is the high values leaf)

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000002.0084 LEN: 0x0144 VLD: 0x01
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d003fe6 SEQ:1 OP:5.2 ENC:0 RBL:0
        Update undo seg header 
                Start transaction (saves old transaction control data)

CHANGE #2 TYP:0 CLS:76 AFN:3 DBA:0x00c05593 OBJ:4294967295 SCN:0x0b86.0d003fe5 SEQ:2 OP:5.1 ENC:0 RBL:0
ktubl redo: slt: 13 rci: 0 opc: 10.21 [objn: 350367 objd: 350367 tsn: 7]
        index general undo (branch) operations
                Actually undo for high-values leaf.
                Detail "unlock block"

CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.6 ENC:0 RBL:0
        Lock block (the DBA is the high values leaf)

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000002.01c8 LEN: 0x20a4 VLD: 0x01
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.2 ENC:0 RBL:0
        Update undo segment header 
                Updates current undo block address
                set to record 1 of new block

CHANGE #2 TYP:1 CLS:76 AFN:3 DBA:0x00c05595 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.1 ENC:0 RBL:0
ktubu redo: slt: 13 rci: 0 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
        index general undo (branch) operations
                Actually undo for high-values leaf.
                Saves leaf block before image
                vector size 8,148 bytes

CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:2 OP:10.9 ENC:0 RBL:0
        Modify service ITL
                the DBA is the high values leaf block
                Set Flag to -B--, Lck to 1

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000013.017c LEN: 0x0044 VLD: 0x01
CHANGE #1 TYP:0 CLS:8 AFN:5 DBA:0x01400088 OBJ:350367 SCN:0x0b86.0d003e8e SEQ:3 OP:13.22 ENC:0 RBL:0
        Update L1 bitmap block 
                Space management for the index segment

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000013.01c0 LEN: 0x01ac VLD: 0x01
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:2 OP:5.2 ENC:0 RBL:0
        Update segment header
                Updates current undo block address
                set to record 1 of new block

CHANGE #2 TYP:1 CLS:76 AFN:3 DBA:0x00c05596 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:1 OP:5.1 ENC:0 RBL:0
ktubu redo: slt: 13 rci: 0 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
        index general undo (branch) operations
                Actually undo for new index leaf block
                Make leaf block empty.

CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008d OBJ:350367 SCN:0x0b86.0d003e8e SEQ:2 OP:10.8 ENC:0 RBL:0
        init header of newly allocated leaf block
                the DBA is next block to current high
                Include ITL (with service set to B), and row(s)

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000014.017c LEN: 0x0048 VLD: 0x01
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008b OBJ:350367 SCN:0x0b86.0d003e8f SEQ:2 OP:4.1 ENC:0 RBL:0
        Block cleanout 
                the DBA is the root block
                set the SCN in the service ITL

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000014.01c4 LEN: 0x00e0 VLD: 0x01
CHANGE #1 TYP:0 CLS:76 AFN:3 DBA:0x00c05596 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:2 OP:5.1 ENC:0 RBL:0
ktubu redo: slt: 13 rci: 1 opc: 10.21 objn: 350367 objd: 350367 tsn: 7
        index general undo (branch) operations
                Modify root block
                Replace service ITL entry
                Purge a row

CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008b OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.15 ENC:0 RBL:0
        Insert branch block row
                The DBA is the root block
                inserts pointer to new high value block

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000015.00b4 LEN: 0x1fb0 VLD: 0x01
CHANGE #1 TYP:0 CLS:75 AFN:3 DBA:0x00c00170 OBJ:4294967295 SCN:0x0b86.0d004016 SEQ:3 OP:5.4 ENC:0 RBL:0
        Commit transaction
                The recursive transaction for the split

CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:350367 SCN:0x0b86.0d004016 SEQ:3 OP:10.8 ENC:0 RBL:0
        init leaf block being split
                the DBA is current high values
                writes most of a block image to disc 

------

REDO RECORD - Thread:1 RBA: 0x00316e.00000025.0164 LEN: 0x0330 VLD: 0x09
CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x01400085 OBJ:350366 SCN:0x0b86.0d003f20 SEQ:2 OP:11.2 ENC:0 RBL:0
        Insert row piece
                (to table)
                cleans out current ITL entries

CHANGE #2 TYP:0 CLS:65 AFN:3 DBA:0x00c000a8 OBJ:4294967295 SCN:0x0b86.0d003fdc SEQ:1 OP:5.2 ENC:0 RBL:0
        start transaction 
                new seg header, and transaction table slot

CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008d OBJ:350367 SCN:0x0b86.0d004016 SEQ:1 OP:10.5 ENC:0 RBL:0
        restore leaf row
                This is the new high-values block
                Was the image of data from the previous block marked deleted ?

CHANGE #4 TYP:0 CLS:65 AFN:3 DBA:0x00c000a8 OBJ:4294967295 SCN:0x0b86.0d004018 SEQ:1 OP:5.4 ENC:0 RBL:0
        Commit

CHANGE #5 TYP:0 CLS:66 AFN:3 DBA:0x00c1702a OBJ:4294967295 SCN:0x0b86.0d003fdb SEQ:2 OP:5.1 ENC:0 RBL:0
ktubl redo: slt: 4 rci: 0 opc: 11.1 [objn: 350366 objd: 350366 tsn: 7]
        undo row operation
                Undo for a table row insert
                Also holds transaction table control data

CHANGE #6 TYP:0 CLS:66 AFN:3 DBA:0x00c1702a OBJ:4294967295 SCN:0x0b86.0d004018 SEQ:1 OP:5.1 ENC:0 RBL:0
ktubu redo: slt: 4 rci: 42 opc: 10.22 objn: 350367 objd: 350367 tsn: 7
        undo leaf key operation
                Purge the new row from the new leaf block

One detail to note – the sys-recursive transaction operates in “classic” redo style – i.e. one change at a time in separate records in the order of activity; then the user transaction runs in the “modern” redo style using private redo to batch the forward and backward change vectors.
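
As an aside, if you want to produce this kind of dump on a test system of your own, the following SQL*Plus sketch shows the general mechanics. It is not the script behind the dump above: the tracefile identifier and the substitution variable names are invented for the example, and you have to supply the statement(s) whose redo you want to capture between the two SCN checks. The "alter system dump logfile" call with an SCN range is a strictly unofficial diagnostic command, so use it only on a system you're allowed to play with.


rem
rem     Sketch only - adjust names before use
rem     (assumes no log file switch occurs between the two SCNs)
rem

column start_scn new_value m_start_scn
select current_scn start_scn from v$database;

rem     Run the statement(s) whose redo you want to see, then commit.

column end_scn new_value m_end_scn
select current_scn end_scn from v$database;

column logfile new_value m_logfile
select  member logfile
from    v$logfile
where   group# = (
                select group# from v$log where status = 'CURRENT'
        );

alter session set tracefile_identifier = 'redo_dump';
alter system dump logfile '&m_logfile' scn min &m_start_scn scn max &m_end_scn;


The dump goes to the session's trace file, which is why setting a tracefile_identifier first makes it easier to find.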

November 8, 2018

Where / Having

Filed under: CBO,Conditional SQL,Execution plans,Oracle — Jonathan Lewis @ 12:11 pm GMT Nov 8,2018

There’s a very old mantra about the use of the “having” clause that tells us that if it’s valid (i.e. will always give the same results) then any predicate that could be moved from the having clause to the where clause should be moved. In recent versions of Oracle the optimizer will do this for itself in some cases but (for reasons that I’m not going to mention) I came across a silly example recently where a little manual editing produced a massive performance improvement.

Here’s a quick demo:


rem
rem     Script:         where_having.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1
as
select * 
from all_objects 
where rownum <= 50000   -- > comment to avoid WordPress format issue
;

spool where_having.lst

set serveroutput off

select /*+ gather_plan_statistics */ 
        object_type, count(*) 
from    t1 
group by 
        object_type 
having  count(*) > 0 
and     1 = 2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
;

The big question is: will Oracle do a full tablescan of t1, or will it apply a "null is not null" filter early to bypass that part of the plan? Here's the plan pulled from memory, with run-time statistics (all versions from 11g to 18c):


--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      0 |00:00:00.02 |     957 |    955 |       |       |          |
|*  1 |  FILTER             |      |      1 |        |      0 |00:00:00.02 |     957 |    955 |       |       |          |
|   2 |   HASH GROUP BY     |      |      1 |      1 |     27 |00:00:00.02 |     957 |    955 |  1186K|  1186K| 1397K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |  50000 |  50000 |00:00:00.01 |     957 |    955 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((COUNT(*)>0 AND 1=2))


As you can see, the filter at operation 1 includes the contradiction “1=2”, but Oracle tests this only after doing the full tablescan and aggregation. If you move the “1=2” into the where clause the tablescan doesn’t happen.
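
Just to spell that out, here’s a sketch of the where-clause version (not part of the original test script): the tablescan should be bypassed in the same way as the in-line view example that follows, i.e. a FILTER operation with a “NULL IS NOT NULL” predicate and zero Starts for the TABLE ACCESS FULL.


select /*+ gather_plan_statistics */
        object_type, count(*)
from    t1
where   1 = 2                   -- contradiction moved from the having clause
group by
        object_type
having  count(*) > 0
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));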

Interestingly, if you write the query with an in-line view and trailing where clause:


select /*+ gather_plan_statistics */
        *
from    (
        select
                object_type, count(*)
        from    t1
        group by
                object_type
        having  count(*) > 0
        )
where
        1 = 2
;

The optimizer is clever enough to push the final predicate inside the view (where you might expect it to become part of the having clause) and push it all the way down into a where clause on the base table.


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      0 |00:00:00.01 |
|*  1 |  FILTER              |      |      1 |        |      0 |00:00:00.01 |
|   2 |   HASH GROUP BY      |      |      1 |      1 |      0 |00:00:00.01 |
|*  3 |    FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   4 |     TABLE ACCESS FULL| T1   |      0 |  50000 |      0 |00:00:00.01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(COUNT(*)>0)
   3 - filter(NULL IS NOT NULL)



A quirky case of the optimizer handling the (apparently) more complex query better than it handles the simpler one.

November 5, 2018

UKOUG Tech 18

Filed under: Non-technical — Jonathan Lewis @ 10:46 am GMT Nov 5,2018

One month [ed: now one day, updated 1st Dec] to go before the big event in Liverpool, so I’ve been browsing the agenda to get some idea of the talks I’ll probably go to. At present this is what my list looks like after filling a couple of gaps left by my first attempt:

Sunday

14:00	Database block checking - the unknown truth
15:00	You probably don't need to read a 10053 trace file (ME)
16:10	Oracle Database 12c consolidation: why and how to manage CPU resources
17:10	Securefiles - the hidden storage organisation inside LOB segments

Monday

 9:00	The Optimizer & the road to the latest generation of the Oracle database
11:20	Making Materialized View great again	
11:50	Winning performance challenges in Oracle Multitenant
13:35	Struggling with Statistics (ME)
16:15	Constraint Optimization (or the difference one comma makes)
17:10	10 Behavior Changes in Oracle 12c/18c

Tuesday

 9:00	The basics of understanding Execution Plans (ME - double session)
11:40	Dissecting SQL Plan Management Options
12:35	Cost Based Optimisation - The round table (ME)
14:25	Single Row vs. the Array Interface vs. Parallelism
15:20	Cost Based Optimisation - The Panel (ME with several others)
16:35	Declarative Constraints - Features and Performance impact
17:05	Oracle SQL Developer - Everything you need to know about tuning

Wednesday

 9:00	Successful Star Schemas
 9:55	Hardening the Oracle database
11:40	Tracing parallel execution
12:35	Advanced RAC programming features
14:25	Maximum Availability Architecture (MAA) with Oracle 12cR2 & GoldenGate 12cR2/R3 : A Customer Case Study 
15:20	Pitfalls and Surprises with dbms_stats; how to solve them

I reserve the right to change my mind on the day, of course, since the competition is strong – and I may get wrapped up in conversations with other attendees and not notice the time passing.

Update 1st Dec 2018

An important part of attending conferences is the opportunity you get to talk to other people who do similar work. So if you see me standing around doing nothing (or even if I’m in conversation with someone else) feel free to come and talk to me. If you have a problem you want to discuss don’t be embarrassed about asking me about it – at best I might tell you that there’s a straight-forward and safe solution, at worst I might start asking you for important background details and end up telling you that the problem is too complex to resolve in a few minutes chat.

 

November 1, 2018

Join Cardinality – 5

Filed under: CBO,Histograms,Oracle,Statistics — Jonathan Lewis @ 1:34 pm GMT Nov 1,2018

So far in this series I’ve written about the way that the optimizer estimates cardinality for an equijoin where one end of the join has a frequency histogram and the other end has a histogram of type:

It’s now time to look at a join where the other end has a height-balanced histogram. Arguably it’s not sensible to spend time writing about this since you shouldn’t be creating them in 12c (depending, instead, on the hybrid histogram that goes with the auto_sample_size), and the arithmetic is different in 11g. However, there still seem to be plenty of people running 12c but not using the auto_sample_size and that means they could be generating some height-balanced histograms – so let’s generate some data and see what happens.


rem
rem     Script:         freq_hist_join_04a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4        Different results
rem

drop table t2 purge;
drop table t1 purge;

set linesize 156
set trimspool on
set pagesize 60

set feedback off

execute dbms_random.seed(0)

create table t1(
        id              number(6),
        n04             number(6),
        n05             number(6),
        n20             number(6),
        j1              number(6)
)
;

create table t2(
        id              number(8,0),
        n20             number(6,0),
        n30             number(6,0),
        n50             number(6,0),
        j2              number(6,0)      
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   4) + 1                    n04,
        mod(rownum,   5) + 1                    n05,
        mod(rownum,  20) + 1                    n20,
        trunc(2.5 * trunc(sqrt(v1.id*v2.id)))   j1
from
        generator       v1,
        generator       v2
where
        v1.id <= 10 -- > comment to avoid WordPress format issue
and     v2.id <= 10 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   20) + 1                   n20,
        mod(rownum,   30) + 1                   n30,
        mod(rownum,   50) + 1                   n50,
        28 - round(abs(7*dbms_random.normal))   j2      
from
        generator       v1
where
        rownum <= 800 -- > comment to avoid WordPress format issue
;

commit;

prompt  ==========================================================
prompt  Using estimate_percent => 100 to get height-balanced in t2
prompt  ==========================================================

begin
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1 for columns j1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T2',
                estimate_percent => 100,
                method_opt       => 'for all columns size 1 for columns j2 size 20'
        );
end;
/

As in earlier examples I’ve created some empty tables, then inserted randomly generated data (after calling the dbms_random.seed(0) function to make the data reproducible). Then I’ve gathered stats, knowing that there will be 22 distinct values in t2 so forcing a height-balanced histogram of 20 buckets to appear.
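
As a quick sanity check (not part of the original script) you could count the distinct values in the two join columns before looking at the histograms; given the seeded data generation above the counts should come back as 10 for t1.j1 and 22 for t2.j2, which is why the request for 20 buckets on j2 can’t be satisfied by a frequency histogram:


select count(distinct j1) j1_ndv from t1;
select count(distinct j2) j2_ndv from t2;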

When we try to calculate the join cardinality we’re going to need various details from the histogram information, such as bucket sizes, number of distinct values, and so on, so in the next few queries to display the histogram information I’ve captured a few values into SQL*Plus variables. Here’s the basic information about the histograms on the join columns t1.j1 and t2.j2:


column num_distinct new_value m_t2_distinct
column num_rows     new_value m_t2_rows
column num_buckets  new_value m_t2_buckets
column bucket_size  new_value m_t2_bucket_size

select  table_name, column_name, histogram, num_distinct, num_buckets, density
from    user_tab_cols
where   table_name in ('T1','T2')
and     column_name in ('J1','J2')
order by
        table_name
;

select  table_name, num_rows, decode(table_name, 'T2', num_rows/&m_t2_buckets, null) bucket_size
from    user_tables
where   table_name in ('T1','T2')
order by
        table_name
;

column table_name format a3 heading "Tab"
break on table_name skip 1 on report skip 1

with f1 as (
select
        table_name,
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'J1'
),
f2 as (
select
        table_name,
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'J2'
)
select f1.* from f1
union all
select f2.* from f2
order by 1,2
;


Tab                  COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS    DENSITY
-------------------- -------------------- --------------- ------------ ----------- ----------
T1                   J1                   FREQUENCY                 10          10       .005
T2                   J2                   HEIGHT BALANCED           22          20 .052652266

Tab                    NUM_ROWS BUCKET_SIZE
-------------------- ---------- -----------
T1                          100
T2                          800          40

Tab      VALUE ROW_OR_BUCKET_COUNT ENDPOINT_NUMBER
--- ---------- ------------------- ---------------
T1           2                   5               5
             5                  15              20
             7                  15              35
            10                  17              52
            12                  13              65
            15                  13              78
            17                  11              89
            20                   7              96
            22                   3              99
            25                   1             100

T2           1                   0               0
            14                   1               1
            17                   1               2
            18                   1               3
            19                   1               4
            20                   1               5
            21                   2               7
            22                   1               8
            23                   1               9
            24                   2              11
            25                   2              13
            26                   3              16
            27                   2              18
            28                   2              20

As you can see, there is a frequency histogram on t1 reporting a cumulative total of 100 rows; and the histogram on t2 is a height-balanced histogram of 20 buckets, showing 21, 24, 25, 26, 27 and 28 as popular values with 2, 2, 2, 3, 2 and 2 endpoints (i.e. buckets) respectively. You’ll also note that the t2 histogram starts with a bucket 0 row, showing us the minimum value in the column and letting us know that bucket 1 is not exclusively full of the value 14. (If 14 had been the minimum value for the column as well as an end point Oracle would not have created a bucket 0 – that may be a little detail that isn’t well-known – and will be the subject of a little follow-up blog note.)
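
If you want to pick the popular values out programmatically rather than by eye, a simple query against the t2 histogram rows (a convenience check, not part of the original script) should do it: with this data it ought to return 21, 24, 25, 26, 27 and 28, with “frequency * bucket size” estimates of 80, 80, 80, 120, 80 and 80 rows respectively.


with f2 as (
        select
                endpoint_value                                                            value,
                endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
        from
                user_tab_histograms
        where
                table_name  = 'T2'
        and     column_name = 'J2'
)
select
        f2.value,
        f2.frequency,
        f2.frequency * &m_t2_bucket_size        est_rows        -- bucket count * 40
from
        f2
where
        f2.frequency > 1        -- "popular" = more than one bucket ending on the value
order by
        f2.value
;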

Let’s modify the code to join the two sets of histogram data on data value – using a full outer join so we don’t lose any data but restricting ourselves to values where the histograms overlap. We’re going to follow the idea we’ve developed in earlier postings and multiply frequencies together to derive a join frequency, so we’ll start with a simple full outer join and assume that when we find a real match value we should behave as if the height-balanced buckets (t2) where the bucket count is 2 or greater represent completely full buckets and are popular values.

I’ve also included in this query (because it had a convenient full outer join) a column selection that counts how many rows there are in t1 with values that fall inside the range of the t2 histogram but don’t match a popular value in t2.


column unmatch_ct   new_value m_unmatch_ct
column product format 999,999.99

break on report skip 1
compute sum of product on report

with f1 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T1' 
and     column_name = 'J1'
),
f2 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T2' 
and     column_name = 'J2'
),
join1 as (
select
        f1.value t1_value, 
        f2.value t2_value, 
        f1.frequency t1_frequency, 
        f2.frequency t2_frequency, 
        sum(
                case
                        when f2.frequency > 1 and f1.frequency is not null
                                then 0
                                else f1.frequency
                end
        ) over()        unmatch_ct,
        f2.frequency * &m_t2_bucket_size *
        case
                when f2.frequency > 1 and f1.frequency is not null
                        then f1.frequency
        end     product
from
        f1
full outer join
        f2
on
        f2.value = f1.value
where
        coalesce(f1.value, f2.value) between 2 and 25
--      coalesce(f1.value, f2.value) between &m_low and &m_high
order by
        coalesce(f1.value, f2.value)
)
select  *
from    join1
;

  T1_VALUE   T2_VALUE T1_FREQUENCY T2_FREQUENCY UNMATCH_CT     PRODUCT
---------- ---------- ------------ ------------ ---------- -----------
         2                       5                      99
         5                      15                      99
         7                      15                      99
        10                      17                      99
        12                      13                      99
                   14                         1         99
        15                      13                      99
        17         17           11            1         99
                   18                         1         99
                   19                         1         99
        20         20            7            1         99
                   21                         2         99
        22         22            3            1         99
                   23                         1         99
                   24                         2         99
        25         25            1            2         99       80.00
                                                           -----------
sum                                                              80.00


We captured the bucket size (&m_t2_bucket_size) for the t2 histogram as 40 in the earlier SQL, and we can see now that in the overlap range (which I’ve hard coded as 2 – 25) we have three popular values in the t2 histogram (21, 24 and 25), but only one of them (25) corresponds to a value in the frequency histogram on t1, so the Product column shows a value of 1 * 2 * 40 = 80. Unfortunately this is a long way off the prediction that the optimizer is going to make for the simple join. (Eventually we’ll see that the prediction is 1,893, so we have a lot more rows still to account for.)

Our code so far only accounts for values that are popular in both tables. Previous experience tells us that when a popular value exists only at one end of the join predicate we need to derive a contribution to the total prediction through an “average selectivity” calculated for the other end of the join predicate. For frequency histograms we’ve seen that “half the frequency of the least frequently occurring value” seems to be the appropriate estimate, and if we add that in we’ll get two more contributions to the total from the values 21 and 24, which appear in the height-balanced (t2) histogram as popular but don’t appear in the frequency (t1) histogram. Since the lowest frequency in t1 is 1 this gives us two contributions of 0.5 * 2 (buckets) * 40 (bucket size), i.e. two contributions of 40, bringing our total to 160 – still a serious shortfall from Oracle’s prediction. So we need to work out how Oracle generates an “average frequency” for the non-popular values of t2 and then apply it to the 99 rows in t1 that haven’t yet been accounted for in the output above.

To calculate the “average selectivity” of a non-popular row in t2 I need a few numbers (some of which I’ve already acquired above): the total number of rows in the table (NR), the number of distinct values (NDV), and the number of popular values (NPV), from which we can derive the number of distinct non-popular values and the number of rows for the non-popular values. The model that Oracle uses to derive these numbers is simply to assume that a value is popular if its frequency in the histogram is greater than one, and that the number of rows for a popular value is “frequency * bucket size”.

The first query we ran against the t2 histogram showed 6 popular values, accounting for 13 buckets of 40 rows each. We reported 22 distinct values for the column and 800 rows for the table so the optimizer assumes the non-popular values account for (22 – 6) = 16 distinct values and (800 – 13 * 40) = 280 rows. So the selectivity of non-popular values is (280/800) * (1/16) = 0.021875. This needs to be multiplied by the 99 rows in t1 which don’t match a popular value in t2 – so we now need to write some SQL to derive that number.
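
Before writing that SQL, here’s a scratch check of the arithmetic with the constants hard-coded (a sanity check only, not part of the original script). It should reproduce the 0.021875 selectivity, the 1,732.5 contribution from the non-popular values, and a total of 1,892.5 once the 80 and the two 40s from the popular-value calculations are added back in.


select
        (800 - 13 * 40)                                         non_pop_rows,   -- 280
        (22 - 6)                                                non_pop_ndv,    -- 16
        ((800 - 13 * 40) / 800) * (1 / (22 - 6))                non_pop_sel,    -- 0.021875
        99 * 800 * ((800 - 13 * 40) / 800) * (1 / (22 - 6))     non_pop_card,   -- 1,732.5
        80 + 40 + 40 +
        99 * 800 * ((800 - 13 * 40) / 800) * (1 / (22 - 6))     total_card      -- 1,892.5
from
        dual
;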

We could enhance our earlier full outer join and slot 0.5, 99, and 0.021875 into it as “magic” constants. Rather than do that though I’m going to write a couple of messy queries to derive the values (and the low/high range we’re interested in) so that I will be able to tweak the data later on and see if the formula still produces the right answer.


column range_low    new_value m_low
column range_high   new_value m_high
column avg_t1_freq  new_value m_avg_t1_freq
column new_density  new_value m_avg_t2_dens

with f1 as (
        select  endpoint_value ep_val,
                endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'J1'
),
f2 as (
        select  endpoint_value ep_val,
                endpoint_number ep_num,
                endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'J2'
)
select
        max(min_v) range_low, min(max_v) range_high, min(min_f)/2 avg_t1_freq, max(new_density) new_density
from    (
        select  min(ep_val) min_v, max(ep_val) max_v, min(frequency) min_f, to_number(null) new_density
        from f1
        union all
        select  min(ep_val) min_v, max(ep_val) max_v, null           min_f,
                (max(ep_num) - sum(case when frequency > 1 then frequency end)) /
                (
                        max(ep_num) *
                        (&m_t2_distinct - count(case when frequency > 1 then 1 end))
                )       new_density
        from    f2
        )
;

 RANGE_LOW RANGE_HIGH AVG_T1_FREQ NEW_DENSITY
---------- ---------- ----------- -----------
         2         25          .5     .021875


This query finds the overlap by querying the two histograms and reporting the lower high value and the higher low value. It also reports the minimum frequency from the frequency histogram divided by 2, and calculates the non-popular density as the number of non-popular rows divided by (the total number of rows times the number of distinct non-popular values). (Note that I’ve picked up the number of distinct values in t2.j2 as a substitution variable generated by one of my earlier queries.) In my full script this messy piece of code runs before the query I showed earlier that told us how well (or badly) the two histograms matched.

Finally we can use the various values we’ve picked up in a slightly more complex version of the full outer join – with a special row added through a union all to give us the final estimate:


break on report skip 1
compute sum of product on report

with f1 as (
select
        table_name,
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency,
        endpoint_number
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'J1'
),
f2 as (
select
        table_name,
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency,
        endpoint_number
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'J2'
),
join1 as (
select
        f1.value t1_value, f2.value t2_value,
        f1.frequency t1_frequency, f2.frequency t2_frequency,
        f2.frequency *
        case
                when f2.frequency > 1 and f1.frequency is not null
                        then f1.frequency
                when f2.frequency > 1 and f1.frequency is null
                        then &m_avg_t1_freq
        end *
        &m_t2_bucket_size        product
from
        f1
full outer join
        f2
on
        f2.value = f1.value
where
        coalesce(f1.value, f2.value) between &m_low and &m_high
order by
        coalesce(f1.value, f2.value)
)
select  *
from    join1
union all
select
        null,
        &m_avg_t2_dens,
        &m_unmatch_ct,
        &m_t2_rows * &m_avg_t2_dens,
        &m_t2_rows * &m_avg_t2_dens * &m_unmatch_ct
from
        dual
;


  T1_VALUE   T2_VALUE T1_FREQUENCY T2_FREQUENCY     PRODUCT
---------- ---------- ------------ ------------ -----------
         2                       5
         5                      15
         7                      15
        10                      17
        12                      13
                   14                         1
        15                      13
        17         17           11            1
                   18                         1
                   19                         1
        20         20            7            1
                   21                         2       40.00
        22         22            3            1
                   23                         1
                   24                         2       40.00
        25         25            1            2       80.00
              .021875           99         17.5    1,732.50
                                                -----------
sum                                                1,892.50


It remains only to check what the optimizer thinks the cardinality will be on a simple join, and then modify the data slightly to see if the string of queries continues to produce the right result. Here’s a starting test:


set serveroutput off

alter session set statistics_level = all;
alter session set events '10053 trace name context forever';
alter session set tracefile_identifier='BASELINE';

select
        count(*)
from
        t1, t2
where
        t1.j1 = t2.j2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;
alter session set events '10053 trace name context off';


 COUNT(*)
----------
      1327


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  f8wj7karu0hhs, child number 0
-------------------------------------
select         count(*) from         t1, t2 where         t1.j1 = t2.j2

Plan hash value: 906334482

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      41 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      41 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1893 |   1327 |00:00:00.01 |      41 |  2545K|  2545K| 1367K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       7 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       7 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."J1"="T2"."J2")

The E-rows for the hash join operation reports 1893 – and a quick check of the 10053 trace file shows that this is 1892.500000 rounded – a perfect match for the result from my query. I’ve modified the data in various ways (notably updating the t1 table to change the value 25 (i.e. the current maximum value of j1) to other, lower, values) and the algorithm in the script seems to be sound – for 12c and 18c. I won’t be surprised, however, if someone comes up with a data pattern where the wrong estimate appears.

Don’t look back

Upgrades are a pain. With the same data set and same statistics on 11.2.0.4, running the same join query between t1 and t2, here’s the execution plan I got:


SQL_ID  f8wj7karu0hhs, child number 0
-------------------------------------
select         count(*) from         t1, t2 where         t1.j1 = t2.j2

Plan hash value: 906334482

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      12 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      12 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1855 |   1327 |00:00:00.01 |      12 |  2440K|  2440K| 1357K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       6 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       6 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."J1"="T2"."J2")

Notice that the E-rows value is different. The join cardinality algorithm seems to have changed in the upgrade from 11.2.0.4 to 12c. I haven’t quite figured out how to get to the 11g result, but I seem to get quite close most of the time by making a simple change to the final query that I used to predict the optimizer’s estimate. In the case expression that chooses between the actual t1.j1 frequency and the “average frequency”, don’t choose: just always use the latter:


        case
                when f2.frequency > 1 and f1.frequency is not null
                        -- then f1.frequency    -- 12c
                        then &m_avg_t1_freq     -- 11g
                when f2.frequency > 1 and f1.frequency is null
                        then &m_avg_t1_freq
        end *
 

As I modified the t1 row with the value 25 to hold other values this change kept producing results that were exactly 2, 2.5, or 3.0 different from the execution plan E-Rows – except in one case where the error was exactly 15.5 (which looks suspiciously like 17.5: the “average frequency in t2” minus 2). I’m not keen to spend time trying to work out exactly what’s going on but the takeaway from this change is that anyone upgrading from 11g to 12c may find that some of their queries change plans because they happen to match the type of example I’ve been working with in this post.

In some email I exchanged with Chinar Aliyev, he suggested three fix-controls that might be relevant. I’ve added these to an earlier posting I did when I first hit the anomaly a few days ago but I’ll repeat them here. I will be testing their effects at some point in the not too distant future:

14033181 1 QKSFM_CARDINALITY_14033181   correct ndv for non-popular values in join cardinality comp.         (12.1.0.1)
19230097 1 QKSFM_CARDINALITY_19230097   correct join card when popular value compared to non popular         (12.2.0.1)
22159570 1 QKSFM_CARDINALITY_22159570   correct non-popular region cardinality for hybrid histogram          (12.2.0.1)