Oracle Scratchpad

April 24, 2020

Conversion Errors

Filed under: Oracle — Jonathan Lewis @ 11:03 am BST Apr 24,2020

I’ve been meaning to write this note for at least three years and was prompted to write up my draft notes this morning as a follow-up to yesterday’s note on the perils of applying a to_date() function to a date column. But then I took a look at the most recent questions on the Oracle Developer Forum and discovered that Tim Hall (@oraclebase) had (inevitably) already done the necessary write-up, so I’ve just left a brief note here (more for my own benefit than anything else) of the highlights with a link to his page.

Key features available in 12.2 to avoid conversion errors are:

  • The validate_conversion() function, which returns 1 or 0 depending on whether or not an expression can be converted successfully to a specific type using a particular format – and returns null if the expression evaluates to null.
  • Extensions to the generic conversion functions (e.g. to_date()) that allow a “default” value to be used to replace the supplied value if the attempted conversion raises a conversion error.

Code Sample – report rows with a valid date (in French) that is earlier than the start of this year’s (English) tax year:


rem
rem     Script:         validate_conversion.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1 (v1 varchar2(42));
insert into t1 values('15-June-2016');
insert into t1 values('15-Juin-2016');
commit;

prompt  ==================================================================
prompt  Single predicate test - only rows that validate as dates in French
prompt  ==================================================================

select  *
from    t1
where   validate_conversion(v1 as date, 'dd-month-yyyy' , 'nls_date_language=french') = 1
/

prompt  ===========================================================
prompt  Valid French dates that are before 6th April 2017 (English)
prompt  ===========================================================

select
        *
from    t1
where
        validate_conversion(v1 as date, 'dd-month-yyyy' , 'nls_date_language=french') = 1
and     to_date(v1, 'dd-month-yyyy' , 'nls_date_language=french') < to_date('06-Apr-2017','dd-mon-yyyy')
/

prompt  =========================================
prompt  Repeat the above with reversed predicates
prompt  =========================================

select
        *
from    t1
where
        to_date(v1, 'dd-month-yyyy' , 'nls_date_language=french') < to_date('06-Apr-2017','dd-mon-yyyy')
and     validate_conversion(v1 as date, 'dd-month-yyyy' , 'nls_date_language=french') = 1
/

prompt  ====================================================
prompt  Repeat the above but force the order of evaluation
prompt  This will raise error ORA-01843: not a valid month
prompt  ====================================================

select
        /*+ ordered_predicates */
        *
from    t1
where
        to_date(v1, 'dd-month-yyyy' , 'nls_date_language=french') < to_date('06-Apr-2017','dd-mon-yyyy')
and     validate_conversion(v1 as date, 'dd-month-yyyy' , 'nls_date_language=french') = 1
/

prompt  ========================================================================
prompt  Handle the requirement with the 12.2 extended "to_date()" functionality
prompt  ========================================================================

select
        *
from
        t1
where
        to_date(
                v1 default '06-Avril-2017' on conversion error,
                'dd-month-yyyy',
                'nls_date_language=french'
        ) < to_date(
                '06-Apr-2017',
                'dd-mon-yyyy',
                'nls_date_language=English'
        )
/

Is there any guarantee that the validate_conversion() function will always be called before another predicate that attempts to use the conversion? I doubt it: there’s no indication in the manuals and no general guarantee from Oracle about the order of evaluation of predicates, so I wouldn’t risk it. Maybe the only safe use would be a CASE expression (which short-circuits), with the actual conversion appearing only in the branch that is reached when the validation succeeds. You’d need to take a little extra care to remember to handle nulls correctly.
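
As a minimal sketch of that idea (my illustration, not from Tim’s article): the conversion below can be attempted only after the validation has succeeded, and a null v1 – for which validate_conversion() returns null rather than 1 – simply drops through to the null that the CASE returns by default:

select  *
from    t1
where   case
                when validate_conversion(v1 as date, 'dd-month-yyyy', 'nls_date_language=french') = 1
                then to_date(v1, 'dd-month-yyyy', 'nls_date_language=french')
        end < to_date('06-Apr-2017', 'dd-mon-yyyy', 'nls_date_language=English')
/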

Maybe you could replace the table with a non-mergeable inline view that eliminates the failures? Possibly not – the optimizer might still do a simple filter pushdown.
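
For the record, the sort of thing I have in mind is the following (a sketch only – the no_merge hint stops the view merging but, as just noted, doesn’t stop the optimizer pushing the date predicate down into the view):

select  v.v1
from    (
        select  /*+ no_merge */
                v1
        from    t1
        where   validate_conversion(v1 as date, 'dd-month-yyyy', 'nls_date_language=french') = 1
        )       v
where   to_date(v.v1, 'dd-month-yyyy', 'nls_date_language=french') < to_date('06-Apr-2017','dd-mon-yyyy')
/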


April 23, 2020

date_to_date

Filed under: Execution plans,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 12:00 pm BST Apr 23,2020

Every now and again someone posts a piece of SQL on the Oracle Developer Forum that includes a predicate with an expression like to_date(date_column). This is a problem for several reasons – not the least being the type of performance problem that showed up in a post from a couple of years back that has just been resurrected.

Before I examine the performance detail, here’s a simple demo of the “wrong data” problem that can go unnoticed, cut-n-paste from a 12.2.0.1 session of SQL*Plus:


SQL> create table t1 (d1 date);

Table created.

SQL> insert into t1 values(sysdate);

1 row created.

SQL> select * from t1 where d1 = to_date(d1);

no rows selected

SQL> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

Session altered.

SQL> select * from t1 where d1 = to_date(d1);

D1
--------------------
22-apr-2020 15:12:36

1 row selected.


Note particularly how changing the nls_date_format can change the result of a query! (There’s another simple example on the referenced blog note.)

What’s going on? to_date(date_col) is equivalent to to_date(to_char(date_col)), using the nls_date_format for both conversions, and the most common default format is one that truncates the date column to date-only. So in many systems to_date(date_col) is nearly the same as trunc(date_col) – though it gets there in an expensive way.
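
Here’s a sketch of the equivalence (my example, assuming a session whose nls_date_format is the common default ‘DD-MON-RR’, which holds no time component – the three derived columns should all show the same value):

select
        d1,
        to_date(d1)                                     implicit_double_conversion,
        to_date(to_char(d1,'DD-MON-RR'),'DD-MON-RR')    explicit_double_conversion,
        trunc(d1)                                       truncated
from    t1
/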

But let’s go further and see how we wreck the benefit of an index, even if we’ve made sure that we still get the correct results. The following is a minimalist model of a common billing requirement: conversion between currencies:


rem
rem     Script:         date_to_date.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        'GBP'                           from_currency,
        'USD'                           to_currency,
        trunc(sysdate - 1000) + rownum  conversion_date,
        'Corporate'                     conversion_type,
        round(
                1.25 + dbms_random.value/4,
                6
        )                               conversion_rate,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
where
        rownum <= 1000 -- > comment to avoid WordPress format issue
;

create unique index t1_i1 on t1(from_currency, to_currency, conversion_date, conversion_type)
/
alter table t1 add constraint t1_pk primary key(
        from_currency, to_currency, conversion_date, conversion_type
)
/

create table driver(
        invoice_currency        varchar2(3),
        billing_currency        varchar2(3),
        client_type             varchar2(10),
        invoice_date            date
);

insert into driver values(
        'GBP', 'USD','Corporate',trunc(sysdate)
);

commit;

execute dbms_stats.gather_table_stats(null,'driver');

I’ve created table t1 to model the exchange rates between US dollars and UK pounds over a range of about three years ending “today”. There’s also a conversion_type column in the unique key to this table that allows us to have multiple reasons for exchanges, allowing multiple exchange rates on the same day. I really ought to have a check constraint on this table that says something like: check (conversion_date = trunc(conversion_date)).
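
That constraint would look something like this (my addition – it isn’t in the script above):

alter table t1 add constraint t1_ck_date_only
        check (conversion_date = trunc(conversion_date))
/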

I’ve also created a “driver” table that holds exactly the data we might need when extracting an exchange rate for a single invoice. So let’s run the SQL that gets the appropriate exchange rate for this one invoice:


set serveroutput off
alter session set statistics_level = all;

prompt  =====================================
prompt  First run with simple date comparison
prompt  =====================================

select
        /*+
                leading(driver t1)
                use_nl_with_index(t1)
        */
        driver.*,
        t1.conversion_rate
from
        driver, t1
where
        t1.from_currency = driver.invoice_currency
and     t1.to_currency = driver.billing_currency
and     t1.conversion_type = driver.client_type
and     t1.conversion_date = driver.invoice_date
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  =================================
prompt  Now run with to_date(date_column)
prompt  =================================

select
        /*+
                leading(driver t1)
                use_nl_with_index(t1)
        */
        driver.*,
        t1.conversion_rate
from
        driver, t1
where
        t1.from_currency = driver.invoice_currency
and     t1.to_currency = driver.billing_currency
and     t1.conversion_type = driver.client_type
and     to_date(t1.conversion_date) = to_date(driver.invoice_date)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

In the first case I’ve used the correct join predicate between these tables; in the second I’ve put in a redundant to_date() function call at both ends of the predicate. (If you think this is unrealistic – it’s an exact match for the production code I reported in the blog note I cited above).

Here are the two execution plans – with their rowsource execution stats:

----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |      1 |00:00:00.01 |      10 |      8 |
|   1 |  NESTED LOOPS                |        |      1 |      1 |      1 |00:00:00.01 |      10 |      8 |
|   2 |   NESTED LOOPS               |        |      1 |      1 |      1 |00:00:00.01 |       9 |      8 |
|   3 |    TABLE ACCESS FULL         | DRIVER |      1 |      1 |      1 |00:00:00.01 |       7 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | T1_I1  |      1 |      1 |      1 |00:00:00.01 |       2 |      8 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."FROM_CURRENCY"="DRIVER"."INVOICE_CURRENCY" AND
              "T1"."TO_CURRENCY"="DRIVER"."BILLING_CURRENCY" AND "T1"."CONVERSION_DATE"="DRIVER"."INVOICE_DATE"
              AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE")


----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |      1 |00:00:00.01 |      15 |      4 |
|   1 |  NESTED LOOPS                |        |      1 |      1 |      1 |00:00:00.01 |      15 |      4 |
|   2 |   NESTED LOOPS               |        |      1 |      1 |      1 |00:00:00.01 |      14 |      4 |
|   3 |    TABLE ACCESS FULL         | DRIVER |      1 |      1 |      1 |00:00:00.01 |       7 |      0 |
|*  4 |    INDEX RANGE SCAN          | T1_I1  |      1 |      1 |      1 |00:00:00.01 |       7 |      4 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."FROM_CURRENCY"="DRIVER"."INVOICE_CURRENCY" AND
              "T1"."TO_CURRENCY"="DRIVER"."BILLING_CURRENCY" AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE")
       filter((TO_DATE(INTERNAL_FUNCTION("T1"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("DRIVE
              R"."INVOICE_DATE")) AND "T1"."CONVERSION_TYPE"="DRIVER"."CLIENT_TYPE"))

Three things to take note of:

  1. The index unique scan at operation 4 has changed to an index range scan.
  2. The predicate information for operation 4 has changed from a pure access predicate to an access predicate plus a very messy filter predicate
  3. The range scan that now appears at operation 4 gets 7 buffers (that’s one root block and 6 leaf blocks) to find the one rowid we need, and will have applied the messy filter predicate to all 1,000 index entries where the two currency codes were GBP/USD.

In the case of the older article the problem query was spending almost all of its time in a join like this, doing 59 buffer gets to find a single rowid for every invoice presented – possibly doing thousands of comparisons as it went.


April 22, 2020

Eureka!

Filed under: Execution plans,Oracle,Troubleshooting — Jonathan Lewis @ 1:08 pm BST Apr 22,2020

I woke up last night with a brilliant solution to a problem that’s been bugging me for more than a year. How does a call to report_sql_monitor() manage to produce output like this:

SQL Plan Monitoring Details (Plan Hash Value=4262489872)
======================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read  | Read  | Activity | Activity Detail |
|    |                                 |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes |   (%)    |   (# samples)   |
======================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |      |        13 |     +2 |     1 |        1 |       |       |          |                 |
|  1 |   SORT AGGREGATE                |       |       1 |      |        13 |     +2 |     1 |        1 |       |       |          |                 |
|  2 |    NESTED LOOPS                 |       |      5M |  14M |        13 |     +2 |     1 |       4M |       |       |          |                 |
|  3 |     NESTED LOOPS                |       |      5M |  14M |        13 |     +2 |     1 |       4M |       |       |          |                 |
|  4 |      TABLE ACCESS FULL          | T1    |      5M | 4686 |        13 |     +2 |     1 |       4M |  1050 | 260MB |          |                 |
|  5 |      INDEX RANGE SCAN           | T2_I1 |       1 |    2 |        14 |     +1 |    5M |       4M |  5494 |  83MB |          |                 |
|  6 |     TABLE ACCESS BY INDEX ROWID | T2    |       1 |    3 |        13 |     +2 |    5M |       4M | 17743 | 260MB |          |                 |
======================================================================================================================================================


If you’re wondering why this is a problematic output take a look at the A-Rows column for operation 4 (and 5), and the Execs column for operations 5 (and 6). Given the nature of a nested loop join, the number of Execs of operation 5 should match the number of rows (A-Rows) generated by operation 4, and the number of Execs of operation 6 should match the number of rows generated by operation 5.

But Oracle claims to have generated 4 million rows then executed 5 million times in both cases. How does that happen?

The very simple thought I had last night was “formatting” – if the actual value is 4,500,000 maybe the A-rows rounds down and the Execs rounds up, so I set up a little test to check the hypothesis. Since the version where I’d first seen the anomaly was 12.1.0.2 that’s the version I tested first.

rem
rem     Script:         report_sql_monitor_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem 

drop table t2 purge;
drop table t1 purge;

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',25,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 4500100 -- > comment to avoid WordPress format issue
/

create index t1_i1 on t1(id);

create table t2 as select * from t1;
create index t2_i1 on t2(id);

alter system flush shared_pool;
alter system flush buffer_cache;

select  /*+
                monitor
                qb_name(main)
                leading(@inline t1@inline t2@inline)
                use_nl_with_index(@inline t2@inline)
        */
        count(n2)
from    (
        select
                /*+
                        qb_name(inline)
                */
                t1.n1, t2.n1 n2
        from    t1, t2
        where   t1.id <= 4499999
--      where   t1.id <= 4500000
--      where   t1.id <= 4500001
        and     t2.id =  t1.n1
        )
;

prompt  =======================================
prompt  Now run report_sql_monitor.sql from SYS
prompt  =======================================

You’ll note that I’ve allowed three possible values for the number of rows I get from t1 to drive the index probe into t2 – I thought I’d have to do several experiments to make the oddity appear. In fact it appeared on the very first attempt: A-Rows reported 4M, and Execs reported a clearly incorrect 5M. (Repeating the test on 12.2.0.1 the anomaly disappeared – all the output reported 4M for 4,499,999 and 5M for 4,500,000.)

You’ll notice the “flush buffer_cache” command in this script; this came about because of some secondary testing. Because I wanted to check a few variations I commented out the “drop table” commands. The consequence of not dropping the tables was that the anomaly disappeared – in some way it was connected to reading blocks from disc and didn’t appear once the table and index were cached. In fact when I set the driving count to 4,498,000 in one of my tests I got a result that looked slightly more counter-intuitive than the original:

=====================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity | Activity Detail |
|    |                                 |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |   (# samples)   |
=====================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |      |        11 |     +2 |     1 |        1 |      |       |          |                 |
|  1 |   SORT AGGREGATE                |       |       1 |      |        11 |     +2 |     1 |        1 |      |       |          |                 |
|  2 |    NESTED LOOPS                 |       |      4M |  14M |        11 |     +2 |     1 |       4M |      |       |          |                 |
|  3 |     NESTED LOOPS                |       |      4M |  14M |        11 |     +2 |     1 |       4M |      |       |          |                 |
|  4 |      TABLE ACCESS FULL          | T1    |      4M | 4686 |        11 |     +2 |     1 |       4M | 1050 | 260MB |          |                 |
|  5 |      INDEX RANGE SCAN           | T2_I1 |       1 |    2 |        12 |     +1 |    4M |       4M |  345 |  83MB |          |                 |
|  6 |     TABLE ACCESS BY INDEX ROWID | T2    |       1 |    3 |        11 |     +2 |    5M |       4M | 1048 | 262MB |          |                 |
=====================================================================================================================================================


Notice how I’ve (“correctly”) acquired 4M index rowids at operation 5, but (apparently) managed to access the table 5M times!

As I noted above, the oddity seems to have disappeared in 12.2.0.1 so it’s not worth pursuing further. I think it might relate either to Oracle double counting the Exec when its first access is a physical read, or maybe it’s managing to increment the wrong counter when it visits a space management block to find the next batch of blocks to read. The detail is really not very important.

What might be important, though, is that sometimes the number of Execs you see is genuinely greater than ought to be possible because of the strange effects that can appear with read-consistency. For a very long time I believed that the oddity I’ve shown here was something to do with read-consistency and query restart, but I couldn’t come up with a hypothesis to explain how that could be happening. So I was very relieved last night to come up with a simple explanation (which actually turned out to be wrong, but did at least help me recognise how the anomaly could happen in perfectly harmless circumstances).

Footnote:

What’s the difference between 4 million and 5 million – sometimes it’s one (just one, not one million). When testing under 12.2.0.1 I only had to add one row to the driving rowsource (changing 4,499,999 to 4,500,000) to see the A-rows and Execs change from 4M to 5M. If you forget the implicit rounding errors in reports like this you can end up chasing a tiny anomaly because it looks like a huge anomaly.


April 20, 2020

Execution Plans

Filed under: Execution plans,Oracle — Jonathan Lewis @ 3:37 pm BST Apr 20,2020

One of the most important skills needed when investigating badly performing SQL is the ability to read Execution Plans. It’s a topic I’ve written and spoken about frequently – even to the extent of doing full-day seminars – but there’s always scope for finding another way of presenting the method.

This is a note based on a few introductory PowerPoint slides I created for the (sadly cancelled) Polish OUG Workshop and Tanel Poder’s Virtual Conference, taking a slightly different approach from the one I normally use to get people started on interpreting (serial) execution plans.

I want to begin with a query, rather than a plan:

rem
rem     Script:         poug_plan_01a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

explain plan for
select
        /*+
                qb_name(main)
        */
        ord.id,
        ord.valuation,
        ord.status,
        (select /*+ qb_name(company) */ max(com.name)  from companies  com  where com.id  = ord.id_company) company,
        (select /*+ qb_name(product) */ max(prd1.name) from products   prd1 where prd1.id = orl.id_product) product,
        orl.quantity
from
        orders          ord,
        order_lines     orl
where
        ord.date_placed > trunc(sysdate) - 7
and     orl.id_ord = ord.id
and     orl.id_product in (
                select  /*+ qb_name(class) */
                        prd2.id
                from    products prd2
                where   prd2.class = 'Group25'
        )
/

I’ve got a query with 4 “query blocks” – every time you see the keywords “select”, “insert”, “merge”, etc. that’s a query block, and it’s very helpful to adopt the habit of naming the query blocks in your SQL statements, as I have done here, using the qb_name() hint.

What’s the optimizer going to do with this query, and how is Oracle going to run the query?

There are a number of possibilities – some of them version dependent, some dictated by the statistics Oracle has about the data, some dictated by the available indexes and constraints, but there’s one key feature that will come into play: the optimizer “likes” statements that consist of a single query block because it’s easy to optimize statements of that shape, so the optimizer will attempt to transform this query in ways that will reduce the number of query blocks it has to handle.

In newer versions of Oracle the optimizer uses “cost based query transformation” almost everywhere, but in older versions a number of the transformations were “heuristic” (i.e.: “if I see an X I’m going to transform it into a Y”). Cost based query transformation means the optimizer will try to work out the efficiency of applying a transformation and may then decide not to do it.

Heuristic Transformations: Sometimes the optimizer trace file (the 10053 trace) will tell you that it has performed a heuristic transformation by printing a message such as: SU: Performing unnesting that does not require costing.

I’m going to do two things with this query – first I’ll tell the optimizer that it should not do any cost-based transformations, then I’ll give it free rein to do whatever it fancies. All I have to do for the first case is add the hint /*+ no_query_transformation */ to the query, which gets me the following execution plan under 12.2.0.1:


select * from table(dbms_xplan.display(null,null,'alias'));

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |     1 |    34 |  3567   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE              |             |     1 |    35 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| COMPANIES   |     1 |    35 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | COM_PK      |     1 |       |     1   (0)| 00:00:01 |
|   4 |  SORT AGGREGATE              |             |     1 |    35 |            |          |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    35 |     2   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 |
|*  7 |  FILTER                      |             |       |       |            |          |
|*  8 |   HASH JOIN                  |             |  3500 |   116K|    52  (24)| 00:00:01 |
|*  9 |    TABLE ACCESS FULL         | ORDERS      |   700 | 16100 |    12  (34)| 00:00:01 |
|  10 |    TABLE ACCESS FULL         | ORDER_LINES | 50000 |   537K|    37  (14)| 00:00:01 |
|* 11 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    12 |     2   (0)| 00:00:01 |
|* 12 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - COMPANY
   2 - COMPANY / COM@COMPANY
   3 - COMPANY / COM@COMPANY
   4 - PRODUCT
   5 - PRODUCT / PRD1@PRODUCT
   6 - PRODUCT / PRD1@PRODUCT
   7 - MAIN
   9 - MAIN    / ORD@MAIN
  10 - MAIN    / ORL@MAIN
  11 - CLASS   / PRD2@CLASS
  12 - CLASS   / PRD2@CLASS

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("COM"."ID"=:B1)
   6 - access("PRD1"."ID"=:B1)
   7 - filter( EXISTS (SELECT /*+ QB_NAME ("CLASS") */ 0 FROM "PRODUCTS" "PRD2"
              WHERE "PRD2"."ID"=:B1 AND "PRD2"."CLASS"='Group25'))
   8 - access("ORL"."ID_ORD"="ORD"."ID")
   9 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-7)
  11 - filter("PRD2"."CLASS"='Group25')
  12 - access("PRD2"."ID"=:B1)

I’ll say more about this plan later, but for the moment I want to pick up two key points.

  • First, you can almost see the 4 query blocks in the body of execution plan. The join between orders and order_lines appears in operations 8 – 10; the query for max(companies.name) is highly visible in operations 1 – 3, as is the query for max(products.name) in operations 4 – 6; but the “IN” subquery is slightly less obvious in the combination of the FILTER operation at operation 7 and the indexed access in operations 11 and 12. Looking at operation 7 in the Predicate Information section of the plan you will note that the “IN” subquery has changed to an “EXISTS” subquery despite the hint that the optimizer shouldn’t do any cost based query transformation – and that’s because this was a heuristic transformation, not a costed transformation, the optimizer did it because it could!
  • The second thing I particularly want to draw your attention to is the section labelled Query Block Name / Object Alias – which appears as the result of including the ‘alias’ format option in my call to dbms_xplan.display(). This gives us two bits of information: for each operation in the plan (except for a few gaps) it has told us which “final” (also referred to as an “outline_leaf”) query block owns that operation; similarly, for any operation involving an object name in the plan it has given us the “fully qualified” alias, i.e. the underlying object alias combined with the query block where the alias was introduced.

For complex plans this alias information is so useful that it’s a bit of a shame that it doesn’t appear in the body of the plan when you ask for it – so before I go on I’m going to edit the output to show it the way I’d like to see it:


-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Query Block | Alias        |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |     1 |    34 |  3567   (1)| 00:00:01 |             |              |
|   1 |  SORT AGGREGATE              |             |     1 |    35 |            |          | COMPANY     |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| COMPANIES   |     1 |    35 |     2   (0)| 00:00:01 | COMPANY     | COM@COMPANY  |
|*  3 |    INDEX UNIQUE SCAN         | COM_PK      |     1 |       |     1   (0)| 00:00:01 | COMPANY     | COM@COMPANY  |
|   4 |  SORT AGGREGATE              |             |     1 |    35 |            |          | PRODUCT     |              |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    35 |     2   (0)| 00:00:01 | PRODUCT     | PRD1@PRODUCT |
|*  6 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 | PRODUCT     | PRD1@PRODUCT |
|*  7 |  FILTER                      |             |       |       |            |          |             |              |
|*  8 |   HASH JOIN                  |             |  3500 |   116K|    52  (24)| 00:00:01 | MAIN        |              |
|*  9 |    TABLE ACCESS FULL         | ORDERS      |   700 | 16100 |    12  (34)| 00:00:01 | MAIN        | ORD@MAIN     |
|  10 |    TABLE ACCESS FULL         | ORDER_LINES | 50000 |   537K|    37  (14)| 00:00:01 | MAIN        | ORL@MAIN     |
|* 11 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    12 |     2   (0)| 00:00:01 | CLASS       | PRD2@CLASS   |
|* 12 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 | CLASS       | PRD2@CLASS   |
-------------------------------------------------------------------------------------------------------------------------

In this initial example the extra columns don’t really add much value – although my alias column does show us very clearly that PRODUCTS (the Name) at operation 11 comes from the CLASS subquery while PRODUCTS at operation 5 comes from the scalar subquery named PRODUCT. Such things are not always so obvious (some readers may immediately think of problems seeing multiple copies of table gl_code_combinations in very long execution plans).

Let’s move on to the second call to the query where we let the 12.2 optimizer do whatever it thinks best with my query – this time I’ll go straight to the edited version of the plan (and also leave the Query Block/Alias section in place):


-------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Query Block  | Alias        |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |             |    71 |  8236 |    89  (18)| 00:00:01 |              |              |
|*  1 |  HASH JOIN OUTER      |             |    71 |  8236 |    89  (18)| 00:00:01 | SEL$040EE093 |              |
|*  2 |   HASH JOIN OUTER     |             |    71 |  5751 |    75  (19)| 00:00:01 |              |              |
|*  3 |    HASH JOIN          |             |    71 |  3266 |    65  (20)| 00:00:01 |              |              |
|*  4 |     TABLE ACCESS FULL | ORDERS      |   700 | 16100 |    12  (34)| 00:00:01 | SEL$040EE093 | ORD@MAIN     |
|*  5 |     HASH JOIN         |             |  1008 | 23184 |    53  (17)| 00:00:01 |              |              |
|*  6 |      TABLE ACCESS FULL| PRODUCTS    |   200 |  2400 |    13   (8)| 00:00:01 | SEL$040EE093 | PRD2@CLASS   |
|   7 |      TABLE ACCESS FULL| ORDER_LINES | 50000 |   537K|    37  (14)| 00:00:01 | SEL$040EE093 | ORL@MAIN     |
|   8 |    TABLE ACCESS FULL  | COMPANIES   | 10000 |   341K|    10  (10)| 00:00:01 | SEL$040EE093 | COM@COMPANY  |
|   9 |   TABLE ACCESS FULL   | PRODUCTS    | 10000 |   341K|    13   (8)| 00:00:01 | SEL$040EE093 | PRD1@PRODUCT |
-------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$040EE093
   4 - SEL$040EE093 / ORD@MAIN
   6 - SEL$040EE093 / PRD2@CLASS
   7 - SEL$040EE093 / ORL@MAIN
   8 - SEL$040EE093 / COM@COMPANY
   9 - SEL$040EE093 / PRD1@PRODUCT

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("PRD1"."ID"(+)="ORL"."ID_PRODUCT")
   2 - access("COM"."ID"(+)="ORD"."ID_COMPANY")
   3 - access("ORL"."ID_ORD"="ORD"."ID")
   4 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-7)
   5 - access("ORL"."ID_PRODUCT"="PRD2"."ID")
   6 - filter("PRD2"."CLASS"='Group25')

Note
-----
   - this is an adaptive plan

It’s a little unsatisfactory that some of the lines in the plan don’t record a query block name at all – however it is still fairly obvious that the entire query has been collapsed into a single query block that is nothing more than a five-table hash join. To get this single query block the optimizer has “unnested” three subqueries and eliminated two “group by”s. It’s when the optimizer transforms the query this much that the alias information can be so useful – how easy would it be to decide which occurrence of PRODUCTS at operations 6 and 9 corresponded to which part of the original query?
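
To make the transformation concrete, here’s roughly what the optimizer’s final statement looks like – a hand-written sketch reconstructed from the plan and its predicate information, not the text Oracle actually generates (the max() aggregates can be discarded only because com.id and prd1.id are unique, and the scalar subqueries become outer joins):

select
        ord.id, ord.valuation, ord.status,
        com.name        company,
        prd1.name       product,
        orl.quantity
from
        orders          ord,
        order_lines     orl,
        products        prd2,
        companies       com,
        products        prd1
where
        ord.date_placed > trunc(sysdate) - 7
and     orl.id_ord      = ord.id
and     prd2.id         = orl.id_product
and     prd2.class      = 'Group25'
and     com.id (+)      = ord.id_company
and     prd1.id (+)     = orl.id_product
/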

First Child First – Recursive Descent (FCF)

This is where we finally get to the key rule for reading a (serial) execution plan. A (section of a) plan involving a single query block obeys the sound-bite “first child first – recursive descent” (or FCF for short).

For a plan involving multiple query blocks – each query block individually follows the rule, and then you have to learn specific rules for stitching together multiple query blocks. I’ll supply examples of possible stitch-ups over the next few weeks. For now I’ll just show “FCF” applied to the second plan, then take a slightly closer look at the first plan.

Each operation in a query block produces (or “is”, or “represents”) a rowsource. When we talk about “the order of execution” of a plan there’s an implicit interpretation of the phrase to mean “the order in which the operations produce their rowsource”.

FCF encapsulates the idea that to produce a rowsource an operation calls each of its child operations in turn to produce their rowsources then takes some action to combine the child rowsources. The order in which the child operations are called is, in the first case, exactly the order in which the child operations are printed in the execution plan.

There are two important details to add to this.

  • First, although a parent operation will call its child operations in turn, the parent may cycle through the child operations multiple times. The obvious example of this is the nested loop join where the parent will fetch a row from its first child then fetch (zero or more) rows from its second child, then fetch the next row from its first child then call its second child again, and repeat until it has consumed the entire rowsource from the first child.
  • Secondly, an operation may be blocking or non-blocking, and it’s worth remembering this when thinking about whether you want to tune a query for latency (time to first row(s) returned) or throughput (time to last row returned). Some operations will have to create their entire rowsource before they pass anything up to their parent (blocking); some operations will create and pass their rowsource piece by piece on demand.

Let’s apply FCF to the single block plan – which I’ll repeat here without the predicate and alias information:

-------------------------------------------------------------------------------------
| Id  | Operation             | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |             |    71 |  8236 |    89  (18)| 00:00:01 |
|*  1 |  HASH JOIN OUTER      |             |    71 |  8236 |    89  (18)| 00:00:01 |
|*  2 |   HASH JOIN OUTER     |             |    71 |  5751 |    75  (19)| 00:00:01 |
|*  3 |    HASH JOIN          |             |    71 |  3266 |    65  (20)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL | ORDERS      |   700 | 16100 |    12  (34)| 00:00:01 |
|*  5 |     HASH JOIN         |             |  1008 | 23184 |    53  (17)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL| PRODUCTS    |   200 |  2400 |    13   (8)| 00:00:01 |
|   7 |      TABLE ACCESS FULL| ORDER_LINES | 50000 |   537K|    37  (14)| 00:00:01 |
|   8 |    TABLE ACCESS FULL  | COMPANIES   | 10000 |   341K|    10  (10)| 00:00:01 |
|   9 |   TABLE ACCESS FULL   | PRODUCTS    | 10000 |   341K|    13   (8)| 00:00:01 |
-------------------------------------------------------------------------------------

We start from the top – ignoring the “operation 0 SELECT” which isn’t really a plan operation.

  • Operation 1 is a “hash join outer” – we expect exactly two child operations, the first will supply the “build” table to the hash join, the second will supply the “probe” table. We will have to get the entire rowsource from the first child before calling the second child to start supplying its rowsource. Looking at the text indents, the first child is operation 2, the second is operation 9 so we have to start by calling operation 2.
  • Operation 2 is a “hash join outer” – so, again, we expect it to have two child operations that it will use to construct a rowsource to pass up to its parent. Its first child is operation 3, its second child is operation 8 so we have to start by calling operation 3.
  • Operation 3 is a “hash join” – so, again, we expect it to have two child operations that it will use to construct a rowsource to pass up to its parent. Its first child is operation 4, its second child is operation 5 so we have to start by calling operation 4.
  • Operation 4 is a “table access full” of orders – which is an operation that has no child operations and can immediately start to return a rowsource to its parent. Operation 4 is the first thing that “happens” in this plan.
  • After acquiring the entire rowsource from operation 4 and building an in-memory hash table from it, operation 3 can now call its second child (operation 5) to supply a rowsource to probe the hash table.
  • But operation 5 is a “hash join” – so, yet again, we expect to see two child operations that it will use to construct the rowsource it has to pass up to its parent. Its first child is operation 6, its second child is operation 7 so we have to start by calling operation 6.
  • Operation 6 is a “table access full” of products – which is an operation that has no child operations and can immediately start to return a rowsource to its parent. Operation 6 is the second thing that “happens” in this plan.
  • After acquiring the entire rowsource from operation 6 and creating a build table from it, operation 5 can now call its second child (operation 7) to supply a rowsource to use as the probe table.
  • Operation 7 is a “table access full” of order_lines – which is an operation that has no child operations and can immediately start to return a rowsource to its parent. Operation 7 is the third thing that “happens” in this plan.
  • Operation 5 can now use the rowsource from operation 7 to probe its build table and start passing the results up to its parent (operation 3). So operation 5 is the fourth thing that “happens” in this plan.
  • Operation 3 can now use the rowsource from operation 5 (its second child) to probe the build table it created from operation 4 and start passing the results up to its parent (operation 2). So operation 3 is the fifth thing that “happens” in this plan.
  • Operation 2 can now absorb the entire row source from operation 3 (its first child) to construct its build table, after which it has to call its second child (operation 8) to acquire a rowsource to use as its probe table.
  • Operation 8 is a “table access full” of companies – which is an operation that has no child operations and can immediately start to return a rowsource to its parent (operation 2). Operation 8 is the sixth thing that “happens” in this plan.
  • Operation 2 can now use the rowsource from operation 8 to probe its build table and start passing the results up to its parent (operation 1). So operation 2 is the seventh thing that “happens” in this plan.
  • Operation 1 can now absorb the entire row source from operation 2 (its first child) to construct its build table, after which it has to call its second child (operation 9) to acquire a rowsource to use as its probe table.
  • Operation 9 is a “table access full” of products – which is an operation that has no child operations and can immediately start to return a rowsource to its parent (operation 1). Operation 9 is the eighth thing that “happens” in this plan.
  • Operation 1 can now use the rowsource from operation 9 to probe its build table and start passing the results up to its parent (“operation 0” / end-user). So operation 1 is the ninth thing that “happens” in this plan.

Extracting the ordering information the order of operation is:

  • Operation 4 is the first thing that happens
  • Operation 6 is the second thing that happens
  • Operation 7 is the third thing that happens
  • Operation 5 is the fourth thing that happens
  • Operation 3 is the fifth thing that happens
  • Operation 8 is the sixth thing that happens
  • Operation 2 is the seventh thing that happens
  • Operation 9 is the eighth thing that happens
  • Operation 1 is the ninth thing that happens

The rule is very simple but the process seems very long-winded when you have to write down every step completely (fortunately it doesn’t take much practice for this to become a rapid mental exercise in many cases). Start at the top and keep repeating (recursive descent) “what’s the first child” until you get to an operation that doesn’t have any child operations – that’s the first operation in the order of execution. Then back up one step to see if there’s a second (then third, then …) child to consider, and repeat the process of descending and ascending.
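
Since plan ids are assigned in a depth-first (preorder) walk of the plan tree, the FCF ordering is just a postorder walk, and you can even get the database to do the recursive descent for you. Here’s a sketch of the idea (my own code, not from any Oracle-supplied script, and assuming plan_table currently holds just the one explained statement): sort each operation by the highest id in its subtree, breaking ties with the deeper operation first.

with descendants (root_id, id) as (
        -- anchor member: every operation is a descendant of itself
        select  id, id
        from    plan_table
        union all
        -- recursive member: add the children of everything found so far
        select  d.root_id, p.id
        from    descendants d, plan_table p
        where   p.parent_id = d.id
)
select
        rownum  exec_order,
        id, operation, object_name
from    (
        select    p.id, p.operation, p.object_name
        from      plan_table p, descendants d
        where     d.root_id = p.id
        and       p.id > 0
        group by  p.id, p.operation, p.object_name
        order by  max(d.id), p.id desc
        )
/

Run against the plan above this returns the ids in the order 4, 6, 7, 5, 3, 8, 2, 9, 1 – but remember it is mechanically applying the single query block rule, so the result only means something where FCF itself applies.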

There’s one further important observation to make about this particular case.  The order of operation started with the orders table. The first two tables to be joined were the products and order_lines tables. The optimizer has produced a plan where the order of operation does not match the join order. This is an artefact of the optimizer’s ability to “swap join inputs” for hash joins when evaluating a join order.

Multiple Query Blocks

I’ll finish with a brief look at the plan I got when I blocked cost-based query transformation.  This time I’ll show the plan with the edited query block names still in place:


-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Query Block | Alias        |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |     1 |    34 |  4017   (1)| 00:00:01 |             |              |
|   1 |  SORT AGGREGATE              |             |     1 |    35 |            |          | COMPANY     |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| COMPANIES   |     1 |    35 |     2   (0)| 00:00:01 | COMPANY     | COM@COMPANY  |
|*  3 |    INDEX UNIQUE SCAN         | COM_PK      |     1 |       |     1   (0)| 00:00:01 | COMPANY     | COM@COMPANY  |
|   4 |  SORT AGGREGATE              |             |     1 |    35 |            |          | PRODUCT     |              |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    35 |     2   (0)| 00:00:01 | PRODUCT     | PRD1@PRODUCT |
|*  6 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 | PRODUCT     | PRD1@PRODUCT |
|*  7 |  FILTER                      |             |       |       |            |          |             |              |
|*  8 |   HASH JOIN                  |             |  3950 |   131K|    52  (24)| 00:00:01 | MAIN        |              |
|*  9 |    TABLE ACCESS FULL         | ORDERS      |   790 | 18170 |    12  (34)| 00:00:01 | MAIN        | ORD@MAIN     |
|  10 |    TABLE ACCESS FULL         | ORDER_LINES | 50000 |   537K|    37  (14)| 00:00:01 | MAIN        | ORL@MAIN     |
|* 11 |   TABLE ACCESS BY INDEX ROWID| PRODUCTS    |     1 |    12 |     2   (0)| 00:00:01 | CLASS       | PRD2@CLASS   |
|* 12 |    INDEX UNIQUE SCAN         | PRD_PK      |     1 |       |     1   (0)| 00:00:01 | CLASS       | PRD2@CLASS   |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("COM"."ID"=:B1)
   6 - access("PRD1"."ID"=:B1)
   7 - filter( EXISTS (SELECT /*+ QB_NAME ("CLASS") */ 0 FROM "PRODUCTS" "PRD2"
              WHERE "PRD2"."ID"=:B1 AND "PRD2"."CLASS"='Group25'))
   8 - access("ORL"."ID_ORD"="ORD"."ID")
   9 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-7)
  11 - filter("PRD2"."CLASS"='Group25')
  12 - access("PRD2"."ID"=:B1)

Under the Query Block column you can see the four separate blocks clearly labelled, and if you use the column to split the plan into 4 separate pieces (which will lose you operation 0 and operation 7) then you should be able to work out what’s happening in the 4 separate sub-plans quite easily.

After that it’s necessary to work out how these 4 subplans should be connected, and in almost all cases you’ll find that you need to look at the query itself to get some clues about the connections. There are a couple of rules that I’ll mention for this example, though.

a) Scalar subqueries in the select list appear as separate sub-plans ABOVE the driving plan. So we know that the COMPANY and PRODUCT sub-plans represent the two scalar subqueries in our select list which will (notionally) run once for every row returned by the main query. What we don’t know is how many times those subqueries will run (partly because we don’t know how many distinct products and companies will be reported by the main query, and partly because we don’t know how effective the “scalar subquery caching” is going to be).

b) When we see a FILTER operation there’s a good chance that there’s going to be a subquery in the “where” clause of the main query – though there are other interpretations. In our case we know we have an IN subquery in the original text, and can see that this has turned into an EXISTS in the predicate information section, so we can connect the MAIN and CLASS sub-plans using the “filter-subquery” mechanism, which behaves a little like a nested loop join, viz: for each row supplied by the first child call the second child (and subsequent children in turn) to see if the row should be passed up to the parent operation. We have only two child operations in this case, but if there were several child operations Oracle would call them in order, stopping at the earliest possible moment.
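
To see the connection in terms of the original SQL, the MAIN plus CLASS combination behaves (logically) like the following rewrite – a sketch only, with the two scalar subqueries in the select list omitted for brevity:

select  ord.id, ord.valuation, ord.status, orl.quantity
from    orders ord, order_lines orl
where   ord.date_placed > trunc(sysdate) - 7
and     orl.id_ord = ord.id
and     exists (
                select  null
                from    products prd2
                where   prd2.id    = orl.id_product
                and     prd2.class = 'Group25'
        )
/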

The FILTER operation covers a multitude of sins, so I’ll be working through more examples in greater detail in future notes.

Costing

A final comment to make about this example, and other plans involving multiple query blocks. The optimizer really has very little idea of how many times any sub-plan / subquery will operate, and for a very long time the final Cost (operation 0) of the query catered only for the cost of the main query block – which meant that sometimes you could get a plan where the total cost was lower than the cost of one of the sub-plans. As time passed various algorithms were introduced that attempted to estimate the number of times some of the sub-plan / subquery components were likely to run, and to include that in the total cost.

In this example the cost of the main query (operation 8) is 52 with a predicted rowsource of 3,500 rows. The total cost of the plan is shown as 3,567 – which means the incremental cost of running the existence subquery 3,500 times (at a cost of something between 1 and 2 each time) has been added, and (possibly) some multiple (the number of rows surviving the FILTER which, unfortunately, is not reported) of the cost of running the other two subqueries has been added to that. A quick check of the plan with the inline scalar subqueries removed indicates that their cost may have been added just once each, and the cost of the filter subquery (slightly over 1) was added 3,500 times.
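
Putting that into one line of arithmetic (the per-execution figure of roughly 1.003 is my inference from the totals, not something the plan reports):

        3,567 (total)  ~=  52 (main join) + 2 (company, once) + 2 (product, once) + 3,500 * 1.003 (filter subquery)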

Clearly there’s plenty of scope for the optimizer to produce poor estimates of run-time costs as it compares different cost-based transformations of a query, so it’s important to be able to recognise the patterns caused by different transformations and to take advantage of hinting to block particular transformations if the optimizer has been fooled by its arithmetic into making a bad choice.


April 10, 2020

raw timestamp

Filed under: Histograms,Oracle,Troubleshooting — Jonathan Lewis @ 6:59 pm BST Apr 10,2020

Many years ago I wrote a note with some comments about converting the low_value and high_value columns of user_tab_columns to see what values they actually held. At the time the dbms_stats package held a few procedures to convert raw values and it was necessary to wrap these procedures in user-defined functions if you wanted a function to do the job.

In recent versions of Oracle (18c onwards) the package has finally acquired several functions to do a proper job; these are:

  • convert_raw_to_bin_double()
  • convert_raw_to_bin_float()
  • convert_raw_to_date()
  • convert_raw_to_number()
  • convert_raw_to_nvarchar()
  • convert_raw_to_rowid()
  • convert_raw_to_varchar()

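Typical usage is the same for all of them; for example, a quick sketch for the numeric columns of a table whose stats have been gathered:

select
        column_name,
        dbms_stats.convert_raw_to_number(low_value)     low_num,
        dbms_stats.convert_raw_to_number(high_value)    high_num
from
        user_tab_cols
where
        table_name = 'T1'
and     data_type  = 'NUMBER'
order by
        column_name
/
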
You might note that these functions do not appear in the PL/SQL Packages and Types Reference, but they are documented in the script $ORACLE_HOME/rdbms/admin/dbmsstat.sql that creates the package:


-- Convert the internal representation of a minimum or maximum value
-- into a datatype-specific value. The minval and maxval fields
-- of the StatRec structure as filled in by get_column_stats or
-- prepare_column_values are appropriate values for input.

One thing you’ll notice about the list is that there’s no convert_raw_to_timestamp(), and a question came up recently on the oracle-l list server asking how to do this. This note answers that question and, in passing, demonstrates the typical use of the other functions.

As ever we start by creating some data:


rem
rem     Script:         raw_to_timestamp.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1(
        t0 timestamp(0),
        t3 timestamp(3),
        t6 timestamp(6),
        t9 timestamp(9),
        ts timestamp
);

insert into t1
with t_tab as (
        select to_timestamp('01-Apr-2020 21:15:10.123456789','dd-mon-yyyy hh24:mi:ss.ff9') ts
        from dual
)
select  t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts
from    t_tab
;

insert into t1
with t_tab as (
        select to_timestamp('01-Apr-2020 21:15:10.987654321','dd-mon-yyyy hh24:mi:ss.ff9') ts
        from dual
)
select  t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts, t_tab.ts
from    t_tab
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 2'
        );
end;
/

I’ve created a table to hold timestamps constrained to different levels of precision. The maximum precision allowed is 9 decimal places for the seconds, the default (column ts) is 6. I’ve then created two rows in the table using slightly different timestamps for the rows but giving all the columns in a single row the same value. Then I’ve gathered stats – including a histogram – on the table and all its columns.

I can now query user_tab_cols to pick up the low and high values:


select
        column_name, low_value, high_value
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
/

COLUMN_NAME          LOW_VALUE                  HIGH_VALUE
-------------------- -------------------------- --------------------------
T0                   7878040116100B             7878040116100C
T3                   7878040116100B0754D4C0     7878040116100B3AE3AF00
T6                   7878040116100B075BCDE8     7878040116100B3ADE6770
T9                   7878040116100B075BCD15     7878040116100B3ADE68B1
TS                   7878040116100B075BCDE8     7878040116100B3ADE6770


And here’s a dump of the columns so that you can see the actual values held in the table in their internal representation.


select
        dump(t0,16), 
        dump(t3,16), 
        dump(t6,16), 
        dump(t9,16), 
        dump(ts,16) 
from 
        t1;


DUMP(T0,16)
-------------------------------------------------------------------------------------
DUMP(T3,16)
-------------------------------------------------------------------------------------
DUMP(T6,16)
-------------------------------------------------------------------------------------
DUMP(T9,16)
-------------------------------------------------------------------------------------
DUMP(TS,16)
-------------------------------------------------------------------------------------
Typ=180 Len=7: 78,78,4,1,16,10,b
Typ=180 Len=11: 78,78,4,1,16,10,b,7,54,d4,c0
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,e8
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,15
Typ=180 Len=11: 78,78,4,1,16,10,b,7,5b,cd,e8

Typ=180 Len=7: 78,78,4,1,16,10,c
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,e3,af,0
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,67,70
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,68,b1
Typ=180 Len=11: 78,78,4,1,16,10,b,3a,de,67,70

You’ll notice that, despite the attempt to limit the precision, all the columns other than the first use 11 bytes. The first column is different: with zero precision we use only 7 bytes and if you look closely (and know your internal storage formats) you’ll realise that those 7 bytes are using the standard format for stored dates. The last 4 bytes hold the fraction of the second, stored as a number of nanoseconds in a simple 32-bit binary format. (Warning: it’s possible that you will see a difference in this set of bytes if your machine uses a different endianness from mine – I can’t test that for myself at present.)
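
You can check the nanosecond claim directly: the last 4 bytes of the t9 low value are 7,5b,cd,15, and converting that hex value to decimal returns exactly the 123,456,789 nanoseconds we inserted:

select to_number('075BCD15','XXXXXXXX') nanoseconds from dual;

NANOSECONDS
-----------
  123456789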

So let’s see what we get if we try to convert the raw values using the call to dbms_stats.convert_raw_to_date(). I’ll start by setting the nls_XXX formats to get the full content of a date column or a timestamp column reported from SQL*Plus.


alter session set nls_date_format = 'dd-Mon-yyyy hh24:mi:ss';
alter session set nls_timestamp_format = 'dd-Mon-yyyy hh24:mi:ss.ff9';

set linesize 165

column t0 format a32
column t3 format a32
column t6 format a32
column t9 format a32
column ts format a32

select  t0, t3, t6, t9, ts from t1;


select
        column_name, 
        dbms_stats.convert_raw_to_date(low_value)       date_low,
        dbms_stats.convert_raw_to_date(high_value)      date_high
from 
        user_tab_cols
where
        table_name = 'T1'
order by 
        column_name
/



T0                               T3                               T6                               T9                               TS
-------------------------------- -------------------------------- -------------------------------- -------------------------------- --------------------------------
01-Apr-2020 21:15:10.000000000   01-Apr-2020 21:15:10.123000000   01-Apr-2020 21:15:10.123457000   01-Apr-2020 21:15:10.123456789   01-Apr-2020 21:15:10.123457000
01-Apr-2020 21:15:11.000000000   01-Apr-2020 21:15:10.988000000   01-Apr-2020 21:15:10.987654000   01-Apr-2020 21:15:10.987654321   01-Apr-2020 21:15:10.987654000

2 rows selected.


COLUMN_NAME          DATE_LOW             DATE_HIGH
-------------------- -------------------- --------------------
T0                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:11
T3                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
T6                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
T9                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10
TS                   01-Apr-2020 21:15:10 01-Apr-2020 21:15:10

5 rows selected.

Since I’ve inserted only two rows, with each row holding a single value stored at different precisions across the columns, we expect those values to be reflected in the low_value and high_value for the columns – and we’re not disappointed (probably). But, as you might have expected, Oracle has processed the timestamp data type into a date data type by simply ignoring the last 4 bytes and treating the first 7 bytes as a date. We need to do a little more work to check that we can convert the low and high values into exactly the values that appear in the table itself.

So we can write a messy bit of SQL that extracts the first 7 bytes and converts them to a date with the convert_raw_to_date() call, then extracts the last 4 bytes and converts them to a number using a simple to_number() call (with an nvl() thrown in to deal with the special case of there being no nanosecond component), divides by 1e9, converts the result to an interval in seconds using the numtodsinterval() function, and adds that interval to the date. To keep things clean I’ll apply the mess only to the low_value column, and I’ve included comments in the SQL to explain each step.


select 
        column_name, 
        low_value,
--
--      convert bytes 8 onwards to numeric with an nvl()
--      to handle the case of zero precision when there 
--      are no bytes to show the number of nanoseconds
--
--      Note - taking substr() of the hex string means that
--      bytes 8 onwards appear as characters 15 - 22.
--
        nvl(
                to_number(
                        hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                ),
                0
        ) nanoseconds,
--
--      repeat the conversion, and wrap with numtodsinterval()
--      to show the resulting interval in seconds.
--
        numtodsinterval(
                nvl(
                        to_number(
                                hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                        )/1e9,
                        0
                )+0,
                'SECOND'
        ) interval_val,
--
--      Convert the first 7 bytes (14 characters) to a date,
--      coerce the result to a timestamp, then convert the 
--      nanosecond bytes to an interval and add it.
--
        to_timestamp(
                dbms_stats.convert_raw_to_date(hextoraw(substr(low_value,1,14)))
        ) + 
        numtodsinterval(
                nvl(
                        to_number(
                                hextoraw(substr(low_value,15,8)),'XXXXXXXX'
                        )/1e9,
                        0
                ),
                'SECOND'
        ) timestamp_val
from 
        user_tab_cols
where
        table_name = 'T1'
order by
        column_name
/

COLUMN_NAME          LOW_VALUE                  NANOSECONDS INTERVAL_VAL                     TIMESTAMP_VAL
-------------------- -------------------------- ----------- -------------------------------- --------------------------------
T0                   7878040116100B                       0 +000000000 00:00:00.000000000    01-Apr-2020 21:15:10.000000000
T3                   7878040116100B0754D4C0       123000000 +000000000 00:00:00.123000000    01-Apr-2020 21:15:10.123000000
T6                   7878040116100B075BCDE8       123457000 +000000000 00:00:00.123457000    01-Apr-2020 21:15:10.123457000
T9                   7878040116100B075BCD15       123456789 +000000000 00:00:00.123456789    01-Apr-2020 21:15:10.123456789
TS                   7878040116100B075BCDE8       123457000 +000000000 00:00:00.123457000    01-Apr-2020 21:15:10.123457000

5 rows selected.

If you go back to the original output we got from selecting the actual values from the table you can see that the timestamp_val column we’ve generated correctly matches the column values for the lower of the two original values we inserted into the table.

Q.E.D.

March 22, 2020

ANSI hinting

Filed under: ANSI Standard,Execution plans,Hints,Oracle — Jonathan Lewis @ 8:32 pm GMT Mar 22,2020

I’ve made casual remarks in the past about how “ANSI”-style SQL introduces extra complications in labelling or identifying query blocks – which means it’s harder to hint correctly. This is a note to show how the optimizer first transforms “ANSI” SQL into “Oracle” syntax. I’m going to write a simple 4-table join in classic Oracle form and check the execution plan with its query block names and fully qualified table aliases; then I’ll translate to the ANSI equivalent and repeat the check for query block names and aliases; finally I’ll rewrite the query in classic Oracle syntax that reproduces the query block names and fully qualified table aliases that we got from the ANSI form.

We start by creating and indexing 4 tables (with a script that I’ve been using for various tests for several years, but the results I’ll show come from 19c):

rem
rem     Script:         ansi_hint_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2014
rem

create table t1
as
select 
        trunc((rownum-1)/4)     t1_n1,
        trunc((rownum-1)/4)     t1_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged1,
        rpad(rownum,180)        t1_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t2
as
select 
        mod(rownum,200)         t2_n1,
        mod(rownum,200)         t2_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged2,
        rpad(rownum,180)        t2_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t3
as
select 
        trunc((rownum-1)/4)     t3_n1,
        trunc((rownum-1)/4)     t3_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged3,
        rpad(rownum,180)        t3_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t4
as
select 
        trunc((rownum-1)/4)     t4_n1,
        trunc((rownum-1)/4)     t4_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged4,
        rpad(rownum,180)        t4_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create index t1_i1 on t1(t1_n1);
create index t2_i1 on t2(t2_n1);
create index t3_i1 on t3(t3_n1);
create index t4_i1 on t4(t4_n1);

Then we check the execution plan for a simple statement with what looks like a single named query block:


explain plan for
select
        /*+ qb_name(main) */
        *
from
        t1, t2, t3, t4
where
        t2.t2_n1 = t1.t1_n2
and     t3.t3_n1 = t2.t2_n2
and     t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - MAIN
   2 - MAIN / T4@MAIN
   4 - MAIN / T3@MAIN
   6 - MAIN / T2@MAIN
   7 - MAIN / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"MAIN" "T4"@"MAIN")
      SWAP_JOIN_INPUTS(@"MAIN" "T3"@"MAIN")
      SWAP_JOIN_INPUTS(@"MAIN" "T2"@"MAIN")
      USE_HASH(@"MAIN" "T4"@"MAIN")
      USE_HASH(@"MAIN" "T3"@"MAIN")
      USE_HASH(@"MAIN" "T2"@"MAIN")
      LEADING(@"MAIN" "T1"@"MAIN" "T2"@"MAIN" "T3"@"MAIN" "T4"@"MAIN")
      FULL(@"MAIN" "T4"@"MAIN")
      FULL(@"MAIN" "T3"@"MAIN")
      FULL(@"MAIN" "T2"@"MAIN")
      FULL(@"MAIN" "T1"@"MAIN")
      OUTLINE_LEAF(@"MAIN")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Note in the Query Block Name / Object Alias information that all 4 tables were “sourced from”, or have aliases qualified by, “@MAIN”, and in the final plan all the tables are used in a query block called MAIN.

Now look at the basic ANSI equivalent:


explain plan for
select 
        /*+ qb_name(main) */
        *
from
        t1
join 
        t2
on      t2.t2_n1 = t1.t1_n2
join 
        t3
on      t3.t3_n1 = t2.t2_n2
join 
        t4
on      t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$43767242
   2 - SEL$43767242 / T4@SEL$3
   4 - SEL$43767242 / T3@SEL$2
   6 - SEL$43767242 / T2@SEL$1
   7 - SEL$43767242 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T4"@"SEL$3")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T3"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T2"@"SEL$1")
      USE_HASH(@"SEL$43767242" "T4"@"SEL$3")
      USE_HASH(@"SEL$43767242" "T3"@"SEL$2")
      USE_HASH(@"SEL$43767242" "T2"@"SEL$1")
      LEADING(@"SEL$43767242" "T1"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$2" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T3"@"SEL$2")
      FULL(@"SEL$43767242" "T2"@"SEL$1")
      FULL(@"SEL$43767242" "T1"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$58A6D7F6")
      OUTLINE(@"SEL$3")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$9E43CB6E")
      OUTLINE(@"MAIN")
      MERGE(@"SEL$9E43CB6E" >"MAIN")
      OUTLINE_LEAF(@"SEL$43767242")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Check the Plan Hash Value – it gives you a strong clue that the execution plans are the same, and a close examination of the body of the plan and the Predicate information confirm that the two queries operate in exactly the same way at exactly the same cost. But there’s a significant difference in the query blocks and table aliases.

The Query Block Name / Object Alias information tells us that query block “main” has disappeared and the query operates completely from a query block with the internally generated name SEL$43767242; moreover we can see that tables t1 and t2 appear to be sourced from a query block called sel$1, while t3 comes from sel$2 and t4 comes from sel$3.
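
A practical consequence: if you need to hint the ANSI version you have to use these generated names. Here’s a minimal sketch (my illustration, lifting the names straight from the Outline Data above, and assuming the query text – and hence the generated names – stays unchanged):

explain plan for
select
        /*+
                full(@sel$43767242 t1@sel$1)
                full(@sel$43767242 t4@sel$3)
        */
        *
from
        t1
join
        t2
on      t2.t2_n1 = t1.t1_n2
join
        t3
on      t3.t3_n1 = t2.t2_n2
join
        t4
on      t4.t4_n1 = t3.t3_n2
;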

Finally here’s a messy Oracle form to reproduce the ANSI query block names and table aliases:


explain plan for
select  /*+ qb_name(main) */
        *
from    (
        select  /*+ qb_name(sel$3) */
                *
        from
                (
                select  /*+ qb_name(sel$2) */
                        *
                from    (
                        select 
                                /*+ qb_name(sel$1) */
                                *
                        from    
                                t1,
                                t2
                        where   t2.t2_n1 = t1.t1_n2
                        ) v1,
                        t3
                where   t3.t3_n1 = v1.t2_n2
                )       v2,
                t4
        where   t4.t4_n1 = v2.t3_n2
        )
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$43767242
   2 - SEL$43767242 / T4@SEL$3
   4 - SEL$43767242 / T3@SEL$2
   6 - SEL$43767242 / T2@SEL$1
   7 - SEL$43767242 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T4"@"SEL$3")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T3"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T2"@"SEL$1")
      USE_HASH(@"SEL$43767242" "T4"@"SEL$3")
      USE_HASH(@"SEL$43767242" "T3"@"SEL$2")
      USE_HASH(@"SEL$43767242" "T2"@"SEL$1")
      LEADING(@"SEL$43767242" "T1"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$2"
              "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T3"@"SEL$2")
      FULL(@"SEL$43767242" "T2"@"SEL$1")
      FULL(@"SEL$43767242" "T1"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$58A6D7F6")
      OUTLINE(@"SEL$3")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$9E43CB6E")
      OUTLINE(@"MAIN")
      MERGE(@"SEL$9E43CB6E" >"MAIN")
      OUTLINE_LEAF(@"SEL$43767242")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Again a quick check of the Plan Hash Value confirms that the messier query is a match for the previous query with its ANSI transformation, and the plan body and Query Block Name / Object Alias information confirm the match throughout in the naming.

Any time you write ANSI syntax this layering of nested inline views is what happens to your query before any other transformation is applied – and sometimes (though very rarely in recent versions of Oracle) this can result in unexpected limitations in the way the optimizer subsequently transforms the query.

Apart from “accidents”, though, the big issue with the “ANSI rewrite” comes from the side effects of all the extra query blocks. In anything but the simplest cases you have to work a little harder to figure out the query block names you need to use if you want to apply hints to fix an optimizer problem – you can’t create your own meaningful names for every query block in the query you wrote. Fortunately this task is made a little easier if you check the execution plan of the query after adding the hint /*+ no_query_transformation */, as this tends to produce a plan that looks like a step by step “translation” of the way the query was written (apart from the ANSI transformation, of course). This might be enough to identify the base-level query blocks that the optimizer starts with when you use ANSI syntax.
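
For the ANSI query above that check would look something like the following sketch – the plan it produces won’t be the efficient one, but the Query Block Name / Object Alias section should expose the sel$1, sel$2 and sel$3 base-level query blocks:

explain plan for
select
        /*+ no_query_transformation */
        *
from
        t1
join
        t2
on      t2.t2_n1 = t1.t1_n2
join
        t3
on      t3.t3_n1 = t2.t2_n2
join
        t4
on      t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'alias'));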

 

March 18, 2020

USING bug

Filed under: ANSI Standard,Bugs,Oracle — Jonathan Lewis @ 11:10 am GMT Mar 18,2020

The Oracle Developer Community forum often sees SQL that is hard to read – sometimes because it’s a brutal tangle of subqueries, sometimes because the formatting is bad, sometimes because the use of table and column aliases is poorly done. One particular case of the last weakness is the code where the same table alias (typically the letter A) is used a dozen times in the course of the query.

I’ve said that every table in a query should have a different alias and the alias should be used at every column usage in the query (the note at this URL includes a couple of refinements). I’ve just discovered another reason why this is a good idea and why you shouldn’t use the same alias twice in a query. Here’s a simplified demonstration of the threat – tested on 19.3.0.0:


rem     Script:         using_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t2
as
select  rownum id, object_id, object_name, object_type, rpad('x',100) padding 
from    all_objects 
where   rownum <= 5000 
and     mod(object_id,2) = 1
/

create table t1
as
select  rownum id, object_id, object_name, object_type, rpad('x',100) padding 
from    all_objects 
where   rownum <= 5000 
and     mod(object_id,2) = 0
/

I’ve created two tables from the view all_objects, one of the tables holds rows where the object_id is even, the other where it is odd, so if I join these two tables on object_id the result set will be empty. So here are three queries that join the two tables – with the little twist that I’ve (accidentally) given both tables the same alias X in all three cases:


prompt  =======================================
prompt  Here's a query that might "confuse" the
prompt  optimizer when we try to explain it
prompt  =======================================

explain plan for
select max(object_name) from t1 X join t2 X using (object_id);

prompt  ==================================
prompt  So might this one, but it doesn't.
prompt  ==================================

explain plan for
select max(object_id) from t1 X join t2 X using (object_id);
select * from table(dbms_xplan.display);

prompt  ===================================================
prompt  With this one A-rows matches E-rows: and it's NOT 0
prompt  ===================================================

alter session set statistics_level = all;

set serveroutput off
set linesize 156

select count(*) from t1 X join t2 X using (object_id);
select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));


In the absence of the explicit aliases the first query would produce an execution plan; but when both tables are given the same alias the attempt to explain (or run) the query produces the error “ORA-00918: column ambiguously defined”.

The second query does better – or worse, depending on your point of view. Nominally the join is perfectly valid and the optimizer produces an execution plan for the query. But the plan predicts a Cartesian merge join with a result set of 25M rows – which doesn’t look like a good estimate – and the plan doesn’t have a Predicate Information section.

So we use a count(*) for the third query – just in case the result set is, somehow, 25M rows – and enable rowsource execution statistics, and acquire the plan from memory after running the query (which takes nearly 14 seconds of hammering the CPU to death). And here’s the output:


  COUNT(*)
----------
  25000000

1 row selected.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  85ygrcg4n3ymz, child number 0
-------------------------------------
select count(*) from t1 X join t2 X using (object_id)

Plan hash value: 4259280259

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |      1 |        | 67358 (100)|      1 |00:00:13.38 |     200 |    198 |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |      1 |            |      1 |00:00:13.38 |     200 |    198 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN|      |      1 |     25M| 67358   (5)|     25M|00:00:10.55 |     200 |    198 |       |       |          |
|   3 |    TABLE ACCESS FULL  | T2   |      1 |   5000 |    15   (7)|   5000 |00:00:00.01 |     100 |     99 |       |       |          |
|   4 |    BUFFER SORT        |      |   5000 |   5000 | 67343   (5)|     25M|00:00:04.54 |     100 |     99 |   133K|   133K|  118K (0)|
|   5 |     TABLE ACCESS FULL | T1   |      1 |   5000 |    13   (0)|   5000 |00:00:00.01 |     100 |     99 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------

Yes, instead of zero rows Oracle managed to produce 25M rows. The execution plan is a Cartesian merge join, and that’s because the optimizer has lost the join predicate (I didn’t supply a Predicate Information section because there wasn’t one – note the absence of any asterisks against any operations in the plan).

Interestingly the Query Block / Alias section of the plan (when I called for it) reported the two aliases as X_0001 and X_0002, so internally Oracle did manage to find two different aliases – but too late, presumably.

Conclusion

Give a little thought to using table aliases sensibly and safely. It’s trivial to fix this example, but some of the messy SQL that goes into production might end up running into the same issue without it being so easy to spot the source of the anomaly.
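
For the record, the trivial fix for the demonstration query is nothing more than giving the two tables different aliases (X1 and X2 here are arbitrary), after which the join predicate survives and the count comes back as zero:

select count(*) from t1 X1 join t2 X2 using (object_id);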

Footnote

This is Bug 25342699 : WRONG RESULTS WITH ANSI JOIN USING AND IDENTICAL TABLE ALIASES reported Jan 2017 against 12.1.0.2, not yet fixed.

 

March 12, 2020

Sequence Costs

Filed under: Bugs,Oracle,Performance,trace files,Troubleshooting — Jonathan Lewis @ 8:18 pm GMT Mar 12,2020

You’re probably aware of the “identity” construct that appeared in 12.1 and uses Oracle’s sequence mechanism to model the Autonumber or Identity columns that other databases have. A posting from Clay Jackson on the Oracle-L list server suggests that something about their code path has introduced a surprising overhead in 19c … when you’re not using them.

The following code is a slightly modified version of a test case that Clay Jackson posted to demonstrate a strange difference in performance between 12.2 and 19.3

rem
rem     Script:         19c_sequence_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

drop table test_tab2;
drop table test_tab1;

drop sequence test_seq1;
drop sequence test_seq2;

create sequence test_seq1;
create sequence test_seq2;

create table test_tab1 (
   id number primary key,
   c1 varchar2(1000)
);

create table test_tab2 (
   id number,
   t1_id number,
   c2 varchar2(1000),
   constraint test_tab2_fk foreign key (t1_id) 
        references test_tab1 (id) on delete cascade
);

spool 19c_sequence_cost.lst

set feedback off
set pagesize 0
set serveroutput off

execute snap_my_stats.start_snap
alter session set events '10046 trace name context forever';

prompt  ==============================================================================
prompt  start_1000.sql cascades to start_1.sql
prompt  start_1.sql holds:
prompt
prompt  insert into test_tab1 values (test_seq1.nextval,'hello');
prompt  insert into test_tab2 values (test_seq2.nextval, test_seq1.currval, 'byebye');
prompt  commit;
prompt
prompt  ==============================================================================

start start_1000

set serveroutput on
set pagesize 60
alter session set events '10046 trace name context off';
execute snap_my_stats.end_snap

spool off

I’ve got a couple of tables with columns that I plan to generate from sequences, and I’ve got a referential integrity constraint between those tables. I’ll be using nextval from one sequence to populate the first table, then use currval from the same sequence for the foreign key and nextval from the other sequence as a unique identifier for the second table.

I’ve used my “repeater” code to run a short script from the SQL*Plus prompt 1,000 times (as suggested by Clay Jackson in his posting). I’ve also added lines to enable SQL_trace at the basic level, and taken a snapshot of the session activity stats.

Just as Clay Jackson had reported – 19.3.0.0 took significantly longer than 12.2.0.1 to complete the 1,000 calls to the script. (Though in absolute terms we’re only talking fractions of a second for a small single user test.) Examination of the stats – which prompted me to add the 10046 trace and repeat – made it very clear where the difference lay. Here are a few figures picked from the session activity stats:

Version    Statistic                        Value
========   ========================         ===== 
12.2.0.1   CPU used by this session            53  
19.3.0.0   CPU used by this session           114

12.2.0.1   recursive cpu usage                 18
19.3.0.0   recursive cpu usage                 44

12.2.0.1   recursive calls                  1,182
19.3.0.0   recursive calls                  7,215

12.2.0.1   execute count                    2,137                                     
19.3.0.0   execute count                    8,151

Clearly it’s worth finding out what was happening in those 6,000 extra recursive calls. And this is what the 19.3 trace file (after passing through tkprof … sort=execnt) showed:


select count(1)
from
 sys.idnseq$ where seqobj# = :1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   6000      0.07       0.09          0          0          0           0
Fetch     6000      0.03       0.04          0       6000          0        6000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    12001      0.10       0.14          0       6000          0        6000

If you can’t guess what the table idnseq$ is about, the following definition appears in $ORACLE_HOME/rdbms/admin/dcore.bsq:

create table idnseq$     /* stores table identity column to sequence mapping */
( obj#         number not null,                       /* table object number */
  intcol#      number not null,                    /* identity column number */
  seqobj#      number not null,                    /* sequence object number */
  startwith    number not null,                   /* sequence starting value */
  supplemental log group idnseq$_log_grp (obj#, intcol#, seqobj#) always
)
/

We’re not using identity columns in this test, but every time we run the script we do six checks against this data dictionary table to see (presumably) if we’re using a sequence that is internally associated with an identity column. The requirement doesn’t seem completely unreasonable – but it’s a bit surprising that we do every check twice. (Note – we have to check test_seq1 for both statements but test_seq2 only for the second statement, which is how we get 6,000 executions: 2 * (2 checks for test_seq1 + 1 check for test_seq2) * 1,000 executions of the script.)
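
If you want to check whether any of your own sequences are linked to identity columns you could mimic the recursive SQL. Here’s a sketch (assuming you have the privilege to query sys.idnseq$, which an ordinary user won’t have by default):

select
        s.object_name           sequence_name,
        (
        select  count(1)
        from    sys.idnseq$ i
        where   i.seqobj# = s.object_id
        )                       identity_links
from
        user_objects s
where
        s.object_type = 'SEQUENCE'
;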

The doubling looks like a bug, and you have to wonder why a suitable piece of information isn’t kept in the dictionary cache anyway to allow Oracle to avoid executing the recursive statement. I should point out that the surprisingly large impact is visible because I’ve executed 2,000 separate statements from the client side; mechanisms like array inserts and pl/sql loops might only have to do this check once for each array or pl/sql call. (Tests of this hypothesis are left as an exercise for the interested reader.)

 

dense_rank

Filed under: Execution plans,Oracle,Performance,sorting,subqueries — Jonathan Lewis @ 6:42 pm GMT Mar 12,2020

I’ve just been prompted to complete and publish a draft I started a few years ago. It’s (ultimately) about a feature that appeared in 9i but doesn’t seem to show up very often at client sites or as a common solution to performance problems on the various Oracle forums – but maybe that’s not surprising given how slowly analytic functions have been taken up.

I want to work towards the feature by starting with a requirement and then examining several solutions. To supply a touch of realism I’ll create an orders table, which holds a customer id and an order date (including time), and then ask for a report of the most recent order for each customer. Here’s some starting data:

rem
rem     Script:         order_sample.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2006
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.0        Costs are consistent
rem             11.2.0.4        Costs become consistent by 11.2.0.3
rem             11.1.0.7
rem             10.2.0.3
rem              9.2.0.8
rem

create table orders
as
with generator as (
        select
                rownum id 
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        rownum                                                                  id,
        mod(rownum-1,200)                                                       customer,
        sysdate - 20 + dbms_random.value(0,20)                                  date_ordered,
        rpad('x' || to_char(trunc(dbms_random.value(0,1000)),'FM009'),100)      padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid wordpress format issue
;

alter table orders modify customer not null;
alter table orders modify date_ordered not null;
alter table orders add constraint ord_pk primary key(id);

create index ord_cus on orders(customer);
-- create unique index ord_cus_date on orders(customer, date_ordered);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'orders',
                method_opt       => 'for all columns size 1',
                cascade          => true
        );
end;
/

I’ve got 200 customers, at 50 orders per customer dating back over the last 20 days. There’s a primary key on the table and (as it stands) an obvious “foreign key index” on the customer column, though I’ve allowed for changing this to a more useful (customer, date_ordered) combination which I’ve decided could be declared as unique.

With this data, how do I report “the most recent order for each customer”? The first question to ask in response to this request is: “do you literally mean ‘THE’ most recent; what if the customer has placed two or more orders on the same day or, in my initial case, at the same time?” There’s a special case to think about the moment you start to turn the natural language request into a formal language specification.

In this case I’m going to run with the “customer-only” index and allow for the fact that two or more orders could be placed at the same time by the same customer, and report both (all) of them if any such simultaneously placed orders appear.

Strategy number 1:

Start with a list showing the most recent order date for each customer and then report all orders that we can identify using that list of (customer, date_ordered). To do that I’ll start with a simple aggregate query and use the result it produced in an “IN” subquery:


prompt  =========================
prompt  Use IN subquery for max()
prompt  =========================

select  
        /*+ qb_name(main) */
        ord1.* 
from
        orders  ord1
where
        (ord1.customer, ord1.date_ordered) in (
                select  /*+ qb_name(subq) */
                        ord2.customer, max(ord2.date_ordered)
                from
                        orders  ord2
                group by 
                        ord2.customer
        )
order by
        ord1.customer
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

Plan hash value: 1500776991

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |      1 |        |    54 (100)|    200 |00:00:00.01 |     344 |       |       |          |
|   1 |  SORT ORDER BY        |          |      1 |      1 |    54  (15)|    200 |00:00:00.01 |     344 |   115K|   115K|  102K (0)|
|*  2 |   HASH JOIN RIGHT SEMI|          |      1 |      1 |    53  (14)|    200 |00:00:00.01 |     344 |  1695K|  1695K| 1568K (0)|
|   3 |    VIEW               | VW_NSO_1 |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |       |       |          |
|   4 |     HASH GROUP BY     |          |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |  1484K|  1484K| 1421K (0)|
|   5 |      TABLE ACCESS FULL| ORDERS   |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
|   6 |    TABLE ACCESS FULL  | ORDERS   |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ORD1"."CUSTOMER"="CUSTOMER" AND "ORD1"."DATE_ORDERED"="MAX(ORD2.DATE_ORDERED)")

I’ve included the qb_name() hint in both query blocks here – it’s always a good idea as it gives you a little extra edge in interpreting the execution plan when the queries get more complicated.

The first thing you’ll notice about the resulting execution plan is that the optimizer has “unnested” the subquery to create an inline view (which it has named VW_NSO_1) and then used a simple join to get the final result. That’s an interesting observation, and it’s something that will often happen with an “IN” subquery – and that brings us to strategy 2.

Strategy number 2:

Some people will take as gospel the claim that the optimizer “cannot handle subqueries efficiently” and will prefer to write their own inline views (possibly using the “WITH subquery” a.k.a. “Common Table Expression (CTE)” mechanism). There will be occasions, even in the latest versions of Oracle, where you may need to do this but there will also be occasions where the optimizer hasn’t done it because it would produce the wrong results – and I have seen a couple of accidents go into production code where this variant has been written incorrectly.


prompt  ==============================
prompt  Introduce max() as inline view
prompt  ==============================

select  
        /*+ qb_name(main) */
        ord1.* 
from
        (
                select  /*+ qb_name(in_line) */
                        ord2.customer, max(ord2.date_ordered) date_ordered
                from
                        orders  ord2
                group by 
                        ord2.customer
        )       ordv,
        orders  ord1
where
        ord1.customer     = ordv.customer
and     ord1.date_ordered = ordv.date_ordered
order by
        ord1.customer
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

Plan hash value: 2750501889

----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |        |      1 |        |    54 (100)|    200 |00:00:00.01 |     344 |       |       |          |
|   1 |  SORT ORDER BY        |        |      1 |    200 |    54  (15)|    200 |00:00:00.01 |     344 |   115K|   115K|  102K (0)|
|*  2 |   HASH JOIN           |        |      1 |    200 |    53  (14)|    200 |00:00:00.01 |     344 |  1695K|  1695K| 1531K (0)|
|   3 |    VIEW               |        |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |       |       |          |
|   4 |     HASH GROUP BY     |        |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |  1484K|  1484K| 1413K (0)|
|   5 |      TABLE ACCESS FULL| ORDERS |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
|   6 |    TABLE ACCESS FULL  | ORDERS |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ORD1"."CUSTOMER"="ORDV"."CUSTOMER" AND "ORD1"."DATE_ORDERED"="ORDV"."DATE_ORDERED")

You’ll notice, of course, the remarkable similarity between the previous plan and this one – the only significant difference being that the optimizer used a “plain” hash join here rather than the “hash join right semi” that appeared in the previous plan. The “right semi” is an indication that the optimizer first transformed your “IN” subquery to an equivalent “EXISTS” (“=ANY”) subquery. Don’t be misled by the “right”, by the way: it isn’t indicating any sort of outer join, it’s just letting you know which table is the one where Oracle should stop its probe after finding the first row. It is, however, unfortunate that it gets a little awkward trying to sort out left from right when Oracle can do a “swap join inputs” on you.

It would have been nice if the VIEW operation had reported the name of my inline view (to correspond to the generated VW_NSO_1 view name from the previous plan) – but if you include the ‘alias’ formatting option in the call to display_cursor() it will report the alias ordv@main at operation 3.

Strategy Number 3:

We might have decided to check every row in the table to see if the date in that row was the most recent date for the customer in that row, which we could do by running a correlated subquery to do the check for every row in the table.

prompt  ========================================
prompt  Orders with correlated EQUALITY subquery
prompt  ========================================

select  
        /*+ qb_name(main) */
        ord1.* 
from
        orders  ord1
where
        ord1.date_ordered = (
                select  /*+ qb_name(subq) */
                        max(ord2.date_ordered)
                from
                        orders  ord2
                where
                        ord2.customer = ord1.customer
        )
order by
        ord1.customer
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));


Plan hash value: 1152467146

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |         |      1 |        |    54 (100)|    200 |00:00:00.01 |     344 |       |       |          |
|   1 |  SORT ORDER BY        |         |      1 |    200 |    54  (15)|    200 |00:00:00.01 |     344 |   115K|   115K|  102K (0)|
|*  2 |   HASH JOIN           |         |      1 |    200 |    53  (14)|    200 |00:00:00.01 |     344 |  1695K|  1695K| 1622K (0)|
|   3 |    VIEW               | VW_SQ_1 |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |       |       |          |
|   4 |     HASH GROUP BY     |         |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |  1484K|  1484K| 1435K (0)|
|   5 |      TABLE ACCESS FULL| ORDERS  |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
|   6 |    TABLE ACCESS FULL  | ORDERS  |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ORD1"."DATE_ORDERED"="MAX(ORD2.DATE_ORDERED)" AND "ITEM_1"="ORD1"."CUSTOMER")

Yet again we end up with the same execution plan (barring the “right semi” issue) but with a different generated name for the unnested subquery. This is an interesting facet of Oracle (and SQL in general) – completely different ways of stating a requirement can end up doing the same work in the same way.

An important corollary to this observation is that the first thing you should do when you start writing an SQL statement is to write it in a way that clearly expresses the requirement and is easy for others to comprehend. Don’t (at the first stage) try to do anything clever because (a) you may do it wrong and (b) the optimizer might have taken your clear, simple, code and done the clever bit behind the scenes for you.

However, we may have to move on to doing something new (-ish) and exciting.

Strategy number 4:

An “obvious” defect in the three plans so far is that we have to visit the orders table twice. Is there a way we can avoid doing this? The answer is yes. Oracle 8.1.6 gave us the analytic functions:


prompt  =======================
prompt  Analytic max() function
prompt  =======================

column padding noprint
column date_ordered noprint

select
        /*+ qb_name(main) */
        ordv.* 
from    (
        select  /*+ qb_name(inline) */
                customer, id, date_ordered, padding,
                max(date_ordered) over (
                        partition by customer
                ) max_date
        from    orders  ord2
        )       ordv
where
        ordv.date_ordered = ordv.max_date
order by
        ordv.customer
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline allstats last'));

Plan hash value: 813391662

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |        |   262 (100)|    200 |00:00:00.01 |     172 |       |       |          |
|*  1 |  VIEW               |        |      1 |  10000 |   262   (3)|    200 |00:00:00.01 |     172 |       |       |          |
|   2 |   WINDOW SORT       |        |      1 |  10000 |   262   (3)|  10000 |00:00:00.01 |     172 |  1612K|   624K| 1432K (0)|
|   3 |    TABLE ACCESS FULL| ORDERS |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ORDV"."DATE_ORDERED"="ORDV"."MAX_DATE")

By adding the analytic max() function I can acquire the necessary “raw” data once and post-process it to find the max(date_ordered) for each customer before discarding all the rows where the row’s date doesn’t match the maximum date. The expression “max(date_ordered) over (partition by customer)” is like a virtual column that tells Oracle to partition the data by customer and find the maximum date within customer. Imagine copying the original data into a spreadsheet, sorting it by customer, then using one of the spreadsheet functions to add an extra column that derives its value by looking at the rows that have the same customer as the current row, and you’ve got an exact replica of what Oracle is doing here.

So we’ve managed to produce the same result with a single tablescan of orders instead of the two tablescans we saw in every other plan. But there’s a drawback – to be able to partition by customer Oracle has had to fetch every row and column we’re interested in and sort the data before deriving values for the new column: the cost of this plan (262) is much higher than the cost of the plan (54) we got from the previous three queries.

In this case the variation in actual run-time for the two different plans was undetectable – and insignificant compared to the time spent getting the result set to the terminal and displaying. In general, though, you need to consider the trade off between the sorting that goes into the use of analytic functions and the “double visit” work of using subqueries.

Strategy number 5:

There is (at least) one more possibility that I’ve used in the past when the data structure has allowed it to produce the right answers; and it’s the one that is the ultimate target of this blog. Consider the following SQL:


select
        customer, 
        max(id)                 keep (dense_rank last order by date_ordered) as max_id,
        max(date_ordered)       keep (dense_rank last order by date_ordered) as max_date,
--      max(padding)            keep (dense_rank last order by date_ordered) as max_padding
        trim(
                max(padding)    keep (dense_rank last order by date_ordered)
        )       as max_padding
from
        orders
group by
        customer
;

(The trim() function on the padding column doesn’t change the fundamental behaviour of this query, it’s just there to avoid line-wrapping on my output.)

I’ve written a query that does an aggregate on customer, so “customer, max() group by customer”, but it’s a variant of the analytic max() function based on “keep(dense_rank last order by …)” rather than the more familiar “over(partition by … order by …)” form.

Because of the group by customer, the max() function is applied per customer (i.e. behaving like over(partition by customer)), but we’re not simply looking for the maximum value of the referenced column: we first order by date_ordered (within customer) applying the dense_rank mechanism, keep only the rows that have the highest (last) dense_rank, and then take the maximum of the referenced column over that subset of the data.

Here’s an example applying the combination of mechanisms to a tiny data set:

Raw data
=========
   N1           V1
-----           ---
   93           'C'
   54           'Q'
   43           'A'
   13           'X'
   93           'G'
   54           'W'

Ordered by N1 and dense_rank() appended
========================================
   N1           V1              dr()
-----           ---             ----
   13           'X'             1
   43           'A'             2
   54           'W'             3
   54           'Q'             3
   93           'G'             4
   93           'C'             4

Keep(dense rank last)
=====================
   N1           V1              dr()
-----           ---             ----
   93           'G'             4
   93           'C'             4


max(v1) keep(dense rank last order by n1)
V1
---
'G'

In this tiny example we had cases where there were multiple rows for some of the rankings, but if we go back to our orders table and guarantee (by adding a unique constraint) that a customer will never have more than one row for any one value of date_ordered, then the expression max(id) keep (dense_rank last order by date_ordered) for any one customer will be the id of the row that has the maximum order date for that customer and, similarly, max(date_ordered) keep (…) and max(padding) keep (…) will also be the values from that same row.

Given the (absolutely critical) uniqueness constraint, we can get the data for the most recent order for each customer using this dense_rank() strategy.
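
In my case that means creating the unique index that was left commented out in the setup script before depending on the query:

create unique index ord_cus_date on orders(customer, date_ordered);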

The question, of course, is why would we do something that may not be entirely intuitive and looks as if it could make Oracle do a lot of extra work to get the result. Here’s the answer – which is just the execution plan for the query on my orders table – with the unique constraint added:


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |      1 |        |    28 (100)|    200 |00:00:00.01 |     172 |       |       |          |
|   1 |  SORT GROUP BY     |        |      1 |    200 |    28  (18)|    200 |00:00:00.01 |     172 |   142K|   142K|  126K (0)|
|   2 |   TABLE ACCESS FULL| ORDERS |      1 |  10000 |    24   (5)|  10000 |00:00:00.01 |     172 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

The path uses a basic SORT GROUP BY that “sorts” only 200 rows (A-Rows) using only 126KB of memory. Compare that with the plan for the analytic max() over() in strategy 4, which takes 1.6MB of memory and sorts 10,000 rows, and you’ll appreciate that the keep(dense_rank last) mechanism is doing something much more efficient. For cases where the drop from “num_rows” to “num_distinct” for the aggregating column(s) is large, the benefit of using the somewhat non-intuitive dense_rank() approach may make a significant difference to memory, CPU, and even (if it avoids a spill to disk) I/O.

Footnotes

There are two major variations on how you can use the dense_rank() function, as well as this form in which dense_rank appears in the KEEP LAST (and FIRST) mechanism.

Remember the absolutely critical point that the “keep dense_rank last” strategy is only correct if there is a suitable unique constraint on the data set viz: unique on ({group by column(s)},{order by column(s)}).

There is another major option for getting the same “most recent” rows, which is to use the match_recognize() functionality, but I think I probably wrote this draft before the mechanism even existed – so it’s left as an exercise to the reader to work it out.  A key reason why I’m not going to do it myself is that (like the analytic over() in strategy 4) it will require all 10,000 rows to be sorted, and is therefore likely to be less efficient than strategy 5.
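
For the curious, though, here’s an (untested) sketch of the shape such a solution might take – treat it as a starting point rather than a guaranteed answer; “frst” and “tie” are just my choice of pattern variable names:

select  customer, id, date_ordered
from    orders
match_recognize (
        partition by customer
        order by date_ordered desc
        all rows per match
        pattern ( ^ frst tie* )
        define tie as date_ordered = frst.date_ordered
)
;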

Finally – I thought I’d written a note explaining why a “sort group by” can use far less memory and CPU than a basic “sort order by”, but if I have it wasn’t on this blog.  I do have a note on how the mechanism to handle “rownum <= N” with a preceding “order by” minimises its use of memory, and that note should give you some idea of what the “sort group by” is doing to minimise memory usage. I’ll try to write a little note on the aggregate mechanism some time over the next few days.

 

 

February 25, 2020

count(*) – again

Filed under: Execution plans,Oracle,Performance — Jonathan Lewis @ 1:24 pm GMT Feb 25,2020

I’ve just received an email asking (yet again) a question about counting the number of rows in a table.

We have a large table with a CLOB column, where the CLOB occupies 85% storage space.
when we use select count(*) from , the DBA says that you should not use count(*) as it uses all columns and as this table contains CLOB it results in high CPU usage, where as if we use count(rowid), this brings us faster and same result.

Well I’ve pointed out in the past, in fact more than once, that count(*), count(1) and count(declared-non-null-column) will all do the same thing … execute as count(*); I’ve also listed a few quirky anomalies, also more than once. However, count(rowid) is a little different: it doesn’t get transformed to count(*), as we can see from two pieces of evidence:

Exhibit A: fragments from a 10053 (CBO) trace file

----- Current SQL Statement for this session (sql_id=d381q70418ugs) -----
select count(rowid) from emp1

... 

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT("EMP1".ROWID) "COUNT(ROWID)" FROM "TEST_USER"."EMP1" "EMP1"

Unlike the various count() calls that are converted to count(*), counting rowids doesn’t seem to go through the CNT transformation, and the final state of the query still shows count(rowid) as the critical mechanism.

Exhibit B: variations in Column Projection Information

SQL> explain plan for select count(1) from emp1;

SQL> select * from table(dbms_xplan.display(null,null,'projection'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 2110459082

-----------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |     7  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| E1_PK | 20000 |     7  (15)| 00:00:01 |
-----------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]

SQL> explain plan for select count(rowid) from emp1;

SQL> select * from table(dbms_xplan.display(null,null,'projection'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 2110459082

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    12 |     7  (15)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    12 |            |          |
|   2 |   INDEX FAST FULL SCAN| E1_PK | 20000 |   234K|     7  (15)| 00:00:01 |
-------------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(ROWID)[22]
   2 - ROWID[ROWID,10]

The count(1) query transforms to count(*) and we don’t see any column information being generated at operation 2 and passed up to the parent operation. However with count(rowid) we see Oracle at operation 2 constructing actual rowids from block addresses and row directory entries and passing them up to the parent operation.

The CPU it takes to construct and pass the rowids is probably insignificant compared to the CPU usage of accessing data blocks in the first place so I doubt if there would be much visible difference in clock-time between count(1) (or count(*)) and count(rowid), but technically it looks as if count(rowid) would actually be slower and more resource-intensive than anything that went through the count(*) transformation.

In passing – the execution plan that appears in our count(1) example also tells us that count(*) doesn’t “use all columns”: the plan is doing an index fast full scan of the primary key index, and there aren’t many tables where every column is in the primary key – emp1 is certainly no exception to the general rule.


February 24, 2020

Fake Baselines – 2

Filed under: Bugs,dbms_xplan,Execution plans,Hints,Ignoring Hints,Oracle — Jonathan Lewis @ 3:38 pm GMT Feb 24,2020

Many years ago (2011) I wrote a note describing how you could attach the Outline Information from one query to the SQL_ID of another query using the official Oracle mechanism of calling dbms_spm.load_plans_from_cursor_cache(). Shortly after publishing that note I drafted a follow-up note with an example demonstrating that even when the alternative outline was technically relevant the optimizer might still fail to use the SQL Plan Baseline. Unfortunately I didn’t quite finish the draft – until today.

The example I started with nearly 10 years ago behaved correctly against 11.1.0.7, but failed to reproduce the plan when I tested it against 11.2.0.3, and it still fails against 19.3.0.0. Here’s the test data and the query we’re going to attempt to manipulate:


rem
rem     Script:         fake_sql_baseline_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2010
rem

create table emp1 (
        dept_no         number /* not null */,
        sal             number,
        emp_no          number,
        padding         varchar2(200),
        constraint e1_pk primary key(emp_no)
)
;

create table emp2 (
        dept_no         number /* not null */,
        sal             number,
        emp_no          number,
        padding         varchar2(200),
        constraint e2_pk primary key(emp_no)
)
;

insert into emp1
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',200)
from
        all_objects
where
        rownum <= 20000 -- > comment to avoid wordpress format issue
;

insert into emp2
select
        mod(rownum,6),
        rownum,
        rownum,
        rpad('x',200)
from
        all_objects
where
        rownum <= 20000 -- > comment to avoid wordpress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname                 => user,
                tabname                 => 'EMP1',
                cascade                 => true,
                method_opt              =>'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname                 => user,
                tabname                 => 'EMP2',
                cascade                 => true,
                method_opt              =>'for all columns size 1'
        );
end;
/

select
        /*+ target_query */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
;

select * from table(dbms_xplan.display_cursor(null, null, 'outline'));

I haven’t included the code I run on my testbed to delete all existing SQL Plan Baselines before running this test; I’ll post that at the end of the article.

The query is very simple and will, of course, return no rows since emp1 and emp2 are identical and we’re looking for departments in emp1 that don’t appear in emp2. The “obvious” plan for the optimizer is to unnest the subquery into a distinct (i.e. aggregate) inline view then apply an anti-join. It’s possible that the optimizer will also decide to do complex view merging and postpone the aggregation. Here’s the execution plan from 19.3:


----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   168 (100)|          |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |            |          |
|*  2 |   HASH JOIN ANTI NA |      |  3333 | 19998 |   168   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):

Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  target_query

As expected the subquery has been unnested, and we have the anti-join (in this case, since dept_no can be null, it’s a “null-aware” anti-join); and the optimizer has, indeed, decided to do the join before the aggregation.

Assume, now, that for reasons known only to me a merge (anti-)join would be more effective than a hash join. To get the optimizer to do this I’m going to capture the query and connect it to a plan that uses a merge join. There are several minor variations on how we could do this, but I’m going to follow the steps I took in 2011 – though I’ve cut out a couple of the steps where I loaded redundant baselines into the SMB (the SQL Plan Management Base). As a starting point I’ll just record the sql_id and plan_hash_value for the query (and the child_number, just in case I want to use dbms_xplan.display_cursor() to report the in-memory execution plan):

column  sql_id                  new_value       m_sql_id_1
column  plan_hash_value         new_value       m_plan_hash_value_1
column  child_number            new_value       m_child_number_1

select
        sql_id, plan_hash_value, child_number
from
        v$sql
where
        sql_text like '%target_query%'
and     sql_text not like '%v$sql%'
and     rownum = 1
;

Now I’ll hack the query to produce a plan that does the merge join. An easy first step is to look at the current outline and take advantage of the hints there. You’ll notice I included the ‘outline’ format in my call to dbms_xplan.display_cursor() above, even though I didn’t show you that part of the output – here it is now:

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
      FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
      USE_HASH(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      END_OUTLINE_DATA
  */

So I’m going to take the useful-looking hints, remove the use_hash() hint and, for good measure, replace it with a no_use_hash() hint. Here’s the resulting query, with its execution plan:

select
        /*+
                unnest(@sel$2)
                leading(@sel$5da710d3 emp1@sel$1 emp2@sel$2)
                no_use_hash(@sel$5da710d3 emp2@sel$2)
                full(@sel$5da710d3 emp2@sel$2)
                full(@sel$5da710d3 emp1@sel$1)
                alternate_query
        */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
;

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   178 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |     6 |            |          |
|   2 |   MERGE JOIN ANTI NA |      |  3333 | 19998 |   178  (11)| 00:00:01 |
|   3 |    SORT JOIN         |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|*  5 |    SORT UNIQUE       |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("EMP1"."DEPT_NO"="DEPT_NO")
       filter("EMP1"."DEPT_NO"="DEPT_NO")

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
      FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
      USE_MERGE(@"SEL$5DA710D3" "EMP2"@"SEL$2")
      END_OUTLINE_DATA
  */

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  alternate_query

Note that I’ve included the text “alternate_query” at the end of the hint list as something to use when I’m searching v$sql. Note also that the “no_use_hash()” hint has disappeared from the Outline Data and been replaced by a “use_merge()” hint.

The plan tells us that the optimizer is happy to use a “merge join anti NA”, so we can load this plan’s outline into the SMB by combining the sql_id and plan_hash_value of this query with the text of the previous query, so that we store the old text with the new plan. (In older versions of Oracle you had to supply the SQL text; in recent versions you can simply use the sql_id of the original statement.)


column  sql_id                  new_value       m_sql_id_2
column  plan_hash_value         new_value       m_plan_hash_value_2
column  child_number            new_value       m_child_number_2

select
        sql_id, plan_hash_value, child_number
from
        v$sql
where
        sql_text like '%alternate_query%'
and     sql_text not like '%v$sql%'
and     rownum = 1
;

declare
        m_clob  clob;
begin
        select
                sql_fulltext
        into
                m_clob
        from
                v$sql
        where
                sql_id = '&m_sql_id_1'
        and     child_number = &m_child_number_1
        ;

        dbms_output.put_line(m_clob);

        dbms_output.put_line(
                'Number of plans loaded: ' ||
                dbms_spm.load_plans_from_cursor_cache(
                        sql_id                  => '&m_sql_id_2',
                        plan_hash_value         => &m_plan_hash_value_2,
                        sql_text                => m_clob,
                        fixed                   => 'YES',
                        enabled                 => 'YES'
                )
        );

end;
/

At this point we have one SQL Plan Baseline in the SMB, and it says the old query should execute using the new plan. So let’s give it a go:

set serveroutput off
alter system flush shared_pool;

alter session set events '10053 trace name context forever';

select
        /*+ target_query */
        count(*)
from
        emp1
where
        emp1.dept_no not in (
                select  dept_no
                from    emp2
        )
/

alter session set events '10053 trace name context off';

select * from table(dbms_xplan.display_cursor(null, null, 'alias outline'));

I’ve enabled the 10053 (optimizer) trace so that I can report a critical few lines from it later on. Here’s the execution plan, omitting the outline but including the alias information.

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   168 (100)|          |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |            |          |
|*  2 |   HASH JOIN ANTI NA |      |  3333 | 19998 |   168   (5)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$5DA710D3
   3 - SEL$5DA710D3 / EMP1@SEL$1
   4 - SEL$5DA710D3 / EMP2@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  target_query

Note
-----
   - Failed to use SQL plan baseline for this statement

We haven’t used the SQL Plan Baseline – and in 19.3 we even have a note that the optimizer knew there was at least one baseline available that it failed to use! So what went wrong?

I have two diagnostics – the first is the content of the baseline itself (warning: the SQL below will report ALL currently saved SQL Plan Baselines, so I’ve made sure that I have only one to report):

set linesize 90

select
        pln.*
from
        (select sql_handle, plan_name
         from   dba_sql_plan_baselines spb
         order by
                sql_handle, plan_name
        ) spb,
        table(dbms_xplan.display_sql_plan_baseline(spb.sql_handle, spb.plan_name)) pln
;


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL handle: SQL_ce3099e9e3bdaf2f
SQL text: select         /*+ target_query */         count(*) from         emp1
          where         emp1.dept_no not in (                 select  dept_no
                        from    emp2         )
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_cwc4tx7jvvbtg02bb0c12         Plan id: 45812754
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD-FROM-CURSOR-CACHE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 1517539632

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   178 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |     6 |            |          |
|   2 |   MERGE JOIN ANTI NA |      |  3333 | 19998 |   178  (11)| 00:00:01 |
|   3 |    SORT JOIN         |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| EMP1 | 20000 | 60000 |    83   (4)| 00:00:01 |
|*  5 |    SORT UNIQUE       |      | 20000 | 60000 |    89  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| EMP2 | 20000 | 60000 |    83   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("EMP1"."DEPT_NO"="DEPT_NO")
       filter("EMP1"."DEPT_NO"="DEPT_NO")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  alternate_query

We have an SQL Plan Baseline that is accepted, enabled, and fixed; it’s supposed to produce a “merge join anti NA”, and it clearly “belongs” to our query. So it should have been used.

Then we have the 10053 trace file, in which we find the following:


SPM: planId in plan baseline = 45812754, planId of reproduced plan = 1410137244
------- START SPM Plan Dump -------
SPM: failed to reproduce the plan using the following info:
  parse_schema name        : TEST_USER
  plan_baseline signature  : 14857544400522555183
  plan_baseline plan_id    : 45812754
  plan_baseline hintset    :
    hint num  1 len 27 text: IGNORE_OPTIM_EMBEDDED_HINTS
    hint num  2 len 35 text: OPTIMIZER_FEATURES_ENABLE('19.1.0')
    hint num  3 len 20 text: DB_VERSION('19.1.0')
    hint num  4 len  8 text: ALL_ROWS
    hint num  5 len 29 text: OUTLINE_LEAF(@"SEL$5DA710D3")
    hint num  6 len 16 text: UNNEST(@"SEL$2")
    hint num  7 len 17 text: OUTLINE(@"SEL$1")
    hint num  8 len 17 text: OUTLINE(@"SEL$2")
    hint num  9 len 36 text: FULL(@"SEL$5DA710D3" "EMP1"@"SEL$1")
    hint num 10 len 36 text: FULL(@"SEL$5DA710D3" "EMP2"@"SEL$2")
    hint num 11 len 54 text: LEADING(@"SEL$5DA710D3" "EMP1"@"SEL$1" "EMP2"@"SEL$2")
    hint num 12 len 41 text: USE_MERGE(@"SEL$5DA710D3" "EMP2"@"SEL$2")

During optimization the optimizer has found that SQL Plan Baseline. We can see that the hints in the baseline are exactly the hints from the plan that we wanted – but the optimizer says it can’t reproduce the plan we wanted. In fact if you try adding exactly these hints to the query itself you’ll still find that the merge join won’t appear and Oracle will use a hash join.

Conclusion

This is just a simple example of how the optimizer may be able to produce a plan when hinted in one way, while the outline it generates consists of a different set of hints that won’t reproduce the plan they describe. My no_use_hash() has turned into a use_merge(), but that hint fails to reproduce the merge join – in circumstances that make me think there’s a bug in the optimizer.

If you happen to be unlucky you may find that the plan you really need to see can’t be forced through a SQL Plan Baseline. In this example it may be necessary to use the SQL Patch mechanism to include the no_use_hash() hint in a set of hints that I associate with the query.
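For example, something like the following – a sketch using the dbms_sqldiag.create_sql_patch() function that became a documented public call in 12.2 (in earlier versions you had to use dbms_sqldiag_internal.i_create_patch()); the patch name is simply one I’ve invented for the illustration:

declare
        m_sql_text      clob;
        m_patch_name    varchar2(128);
begin
        select  sql_fulltext
        into    m_sql_text
        from    v$sql
        where   sql_id = '&m_sql_id_1'
        and     child_number = &m_child_number_1
        ;

        -- attach the "good" hint to the text of the original query
        m_patch_name := dbms_sqldiag.create_sql_patch(
                sql_text        => m_sql_text,
                hint_text       => 'no_use_hash(@sel$5da710d3 emp2@sel$2)',
                name            => 'target_query_no_hash'
        );

        dbms_output.put_line('Patch created: ' || m_patch_name);
end;
/

And since I promised to post it: the code I use to clear out SQL Plan Baselines before a test is essentially a loop like the following sketch – it drops every baseline it can see, so run it only on a private sandbox:

declare
        m_dropped       pls_integer := 0;
begin
        for r in (
                select  distinct sql_handle
                from    dba_sql_plan_baselines
        ) loop
                -- drops all plans associated with the handle
                m_dropped := m_dropped + dbms_spm.drop_sql_plan_baseline(
                        sql_handle => r.sql_handle
                );
        end loop;
        dbms_output.put_line('SQL Plan Baselines dropped: ' || m_dropped);
end;
/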


February 18, 2020

Interval Partition(s)

Filed under: Oracle,Parallel Execution,Partitioning — Jonathan Lewis @ 1:45 pm GMT Feb 18,2020

A quirky little feature of interval partitioning showed up on Twitter today – a parallel insert that would only use a single PX slave to do the inserting. With 1.1 billion rows and the option for running parallel 32 this made the loading process rather slower than it ought to have been.

Fortunately it’s quite easy to model (and work around) the oddity. So here’s a small data set and an empty partitioned table to work with:


rem
rem     Script:         pt_int_load_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table t1 
nologging 
as
select 
        ao.* 
from 
        all_Objects ao, 
        (select rownum id from dual connect by level <= 20)
;

create table pt1
partition  by range (object_id) interval (1000000) (
        partition p1 values less than (1)
)
as
select  * 
from    all_Objects
where   rownum = 0
/

I’ve created a table by copying all_objects 20 times which, for my little sandbox, has given me a total of about 1.2M rows. Then I’ve created an empty interval-partitioned clone of all_objects, with the first partition defined to hold all rows where the object_id is less than 1 (and there’s no object in the database that could match that criterion). I’ve defined the interval to be 1,000,000 and since the highest object_id in my database is about 90,000 the first partition that gets added to this table will be able to hold all the data from t1.

So now we try to do a parallel insert from t1 into pt1, and check the execution plan and parallel execution statistics:


set serveroutput off

insert /*+ append enable_parallel_dml parallel(6) */ into pt1 select * from t1;

select * from table(dbms_xplan.display_cursor);

start pq_tqstat

Note how I’ve used the hint /*+ enable_parallel_dml */ (possibly a 12c hint back-ported to 11.2.0.4) rather than an “alter session” command – it’s just a little convenience to be able to embed the directive in the SQL. The pq_tqstat script is one I published some time ago to report the contents of the session-specific dynamic performance view v$pq_tqstat immediately after running a parallel statement.
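(For reference, the core of that script is little more than the following query – a simplified sketch that omits the derived percentage “share” columns you’ll see in the output below:)

select
        dfo_number, tq_id, server_type, instance, process,
        num_rows, bytes, waits, timeouts, avg_latency
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;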

Here’s the plan:


SQL_ID  25hub68pf1z1s, child number 0
-------------------------------------
insert /*+ append enable_parallel_dml parallel(6) */ into pt1 select *
from t1

Plan hash value: 2888707464

-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                            |          |       |       |   631 (100)|          |        |      |            |
|   1 |  PX COORDINATOR                             |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                       | :TQ10001 |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT (HIGH WATER MARK BROKERED)| PT1      |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     OPTIMIZER STATISTICS GATHERING          |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                             |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX SEND RANDOM LOCAL                  | :TQ10000 |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | P->P | RANDOM LOCA|
|   7 |        PX BLOCK ITERATOR                    |          |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | PCWC |            |
|*  8 |         TABLE ACCESS FULL                   | T1       |  1235K|   159M|   631  (10)| 00:00:01 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
   - Degree of Parallelism is 6 because of hint

The most important detail of this plan is that the PX slaves do the load as select (operation 3), then send a message to the query coordinator (PX send QC, operation 2) to tell it about the data load. They do not send their data to the QC for the QC to do the load.

So the plan says we will be doing parallel DML, but here’s what v$pq_tqstat tells us:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES ROW_SHARE DATA_SHARE      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- --------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P006                215880   34785363     17.47      16.86         16          0           0
                                             1 P007                202561   34436325     16.39      16.69         17          0           0
                                             1 P008                207519   34564496     16.79      16.75         17          0           0
                                             1 P009                208408   34594770     16.86      16.77         17          0           0
                                             1 P00A                198915   33529627     16.10      16.25         16          0           0
                                             1 P00B                202537   34430603     16.39      16.69         16          0           0
                      Consumer               1 P000                     0        144      0.00       0.00         51         47           0
                                             1 P001                     0        144      0.00       0.00         51         47           0
                                             1 P002               1235820  206340464    100.00     100.00         75         47           0
                                             1 P003                     0        144      0.00       0.00         51         47           0
                                             1 P004                     0        144      0.00       0.00       1138       1134           0
                                             1 P005                     0        144      0.00       0.00       1137       1133           0

                    1 Producer               1 P000                     0         24      0.00       5.91         51         42           0
                                             1 P001                     0         24      0.00       5.91         50         41           0
                                             1 P002                     2        286    100.00      70.44         58         14           0
                                             1 P003                     0         24      0.00       5.91         51         43           0
                                             1 P004                     0         24      0.00       5.91         51         42           0
                                             1 P005                     0         24      0.00       5.91         51         43           0
                      Consumer               1 QC                       2        406    100.00     100.00        311        179           0

19 rows selected.

The query did run parallel 6 as hinted – and 6 PX slaves scanned the t1 table; but they all sent all their data to one PX slave in the second slave set and that one PX slave did all the inserts. The plan was parallel, but the execution was effectively serial. (You’ll note there is something a little odd about the waits and timeouts for P004 and P005 when they are acting as consumers. I may worry about that later, but it could be a host-based side effect of running parallel 6 on a VM with 4 CPUs.)

The serialization leads to two questions:

  1. What went wrong?
  2. How do we work around this and make the insert “truly” parallel?

My answer to (1) is “I don’t know – but I’ll look at it if necessary”, combined with the guess that it’s something to do with the table having only one partition at the outset, which has an unexpected side effect on the randomising function for the PX distribution.

My answer to (2) is “if I’m right about (1), why not try pre-defining two partitions, and I’ll even let both of them stay empty”.

So here’s my new definition for pt1:


create table pt1
partition  by range (object_id) interval (1000000) (
        partition p0 values less than (0),
        partition p1 values less than (1)
)
as
select  * 
from    all_Objects
where   rownum = 0
/

Re-running the test with the completely redundant (and permanently empty) p0 partition in place, the plan doesn’t change but the results from v$pq_tqstat change dramatically:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES ROW_SHARE DATA_SHARE      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- --------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P006                207897   34581153     16.82      16.76         23          4           0
                                             1 P007                215669   34786429     17.45      16.86         30          5           0
                                             1 P008                221474   36749626     17.92      17.81         28          5           0
                                             1 P009                204959   34497164     16.58      16.72         22          2           0
                                             1 P00A                177755   30141002     14.38      14.61         21          0           0
                                             1 P00B                208066   35585810     16.84      17.25         25          2           0
                      Consumer               1 P000                213129   35612973     17.25      17.26         82         57           0
                                             1 P001                200516   33570586     16.23      16.27         84         55           0
                                             1 P002                203395   33950449     16.46      16.45         83         56           0
                                             1 P003                205458   34235575     16.63      16.59         82         54           0
                                             1 P004                204111   33999932     16.52      16.48        581        555           0
                                             1 P005                209211   34971669     16.93      16.95        580        553           0

                    1 Producer               1 P000                     2        286     16.67      16.67        422        149           0
                                             1 P001                     2        286     16.67      16.67        398        130           0
                                             1 P002                     2        286     16.67      16.67        405        128           0
                                             1 P003                     2        286     16.67      16.67        437        161           0
                                             1 P004                     2        286     16.67      16.67        406        116           0
                                             1 P005                     2        286     16.67      16.67        440        148           0
                      Consumer               1 QC                      12       1716    100.00     100.00        242        111           0



19 rows selected.

Every consumer receives and inserts roughly 200,000 rows – it’s a totally fair parallel DML. Timings are pretty irrelevant for such a small data set, but the execution time did drop from 7 seconds to 4 seconds when parallelism was working “properly”.

I’ve tested this script on 12.2.0.1 and 19.3.0.0 – the same anomaly appears in both versions though it might be worth noting that the strange skew in the waits and timeouts doesn’t appear in 19.3.0.0.

February 17, 2020

Join Elimination bug

Filed under: Bugs,CBO,Execution plans,Oracle — Jonathan Lewis @ 3:37 pm GMT Feb 17,2020

It is possible to take subquery factoring (common table expressions / CTEs) too far. The most important purpose of factoring is to make a complex query easier to understand – especially if you can identify a messy piece of text that is used in more than one part of the query – but I have seen a couple of patterns appearing that make the SQL harder to read.

  • In one pattern each table is given its own factored subquery holding the non-join predicates (and, possibly, filter subqueries) and then the main query is just a simple join of the factored subqueries with nothing but join (ON) predicates.
  • In another pattern each factored subquery consists of the previous subquery with one more table added to it, so every subquery is no more than a “two-table” query block and the final subquery is a simple “select from last_factored_subquery”.

Neither of these patterns is helpful – but today’s blog note is not going to be about going to extremes with subquery factoring; instead it’s an example of a fairly reasonable use of subquery factoring that ran into a “wrong results” bug.

Consider a system that collects data from some type of meters. Here’s a two-table definition for meters and meter readings:


create table meters (
        meter_id        number,
        meter_type      varchar2(10),  -- references meter_types
        date_installed  date,
        padding         varchar2(100),
        constraint met_pk primary key(meter_id)
)
;

create table meter_readings (
        meter_id        number,
        date_read       date,
        reading         number(10,3),
        padding         varchar2(100),
        constraint      mrd_pk primary key(meter_id, date_read),
        constraint      mrd_fk_met foreign key (meter_id) references meters
)
;

insert into meters
select
        1e6 + rownum,
        case mod(rownum,3)
                when 0 then 'A'
                when 1 then 'B'
                       else 'C'
        end,
        trunc(sysdate) - mod(rownum,5),
        rpad('x',100,'x')
from
        dual 
connect by 
        level <= 10
;

execute dbms_stats.gather_table_stats(null,'meters')

insert into meter_readings 
select
        met.meter_id,
        met.date_installed - v.id + 2,
        dbms_random.value,
        rpad('x',100,'x')
from
        meters met,
        (select rownum id from dual connect by level <= 4) v
;

commit;

execute dbms_stats.gather_table_stats(null,'meter_readings')

I’ve picked the obvious primary keys for the two tables and defined the appropriate referential integrity constraint – which means the optimizer should be able to choose the best possible strategies for any query that joins the two tables.
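(A side note: that referential integrity constraint is exactly what makes legitimate join elimination possible. In a sketch like the following the optimizer can safely drop the join to meters, because every meter_id in meter_readings is non-null and matches exactly one row in meters:)

select  count(*)
from    meter_readings mrd
join    meters met
on      met.meter_id = mrd.meter_id
;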

I’ve created a very small data set – a few meters installed in the last few days, and a few readings per meter over the last few days. So let’s report the readings for the last 48 hours, and include in the output any meters that haven’t logged a reading in that interval.

Here’s the query I wrote, with its output, running on a 19.3 instance on 17th Feb 2020:


with mrd_cte as (
        select 
                meter_id, date_read, reading
        from 
                meter_readings
        where 
                date_read in (trunc(sysdate), trunc(sysdate)+1)
)
select
        met.meter_id, met.date_installed, mrd_cte.date_read, reading
from
        meters met
left join 
        mrd_cte
on      mrd_cte.meter_id = met.meter_id
;

  METER_ID DATE_INST DATE_READ    READING
---------- --------- --------- ----------
   1000001 16-FEB-20 17-FEB-20       .063
   1000002 15-FEB-20
   1000003 14-FEB-20
   1000004 13-FEB-20
   1000005 17-FEB-20 18-FEB-20        .37
   1000005 17-FEB-20 17-FEB-20       .824
   1000006 16-FEB-20 17-FEB-20       .069
   1000007 15-FEB-20
   1000008 14-FEB-20
   1000009 13-FEB-20
   1000010 17-FEB-20 17-FEB-20       .161
   1000010 17-FEB-20 18-FEB-20       .818

12 rows selected.

The query returns 12 rows – which SQL*Plus can report because it counts them as it fetches them so it can give you the total at the end of the query.

Of course, sometimes people write preliminary queries to find out how big the result set would be before they run the query to acquire the result set itself. In cases like that (where they’re just going to select a “count(*)”) the optimizer may choose a different execution path from the base query – perhaps finding a way to do an index-only execution, and maybe eliminating a few table joins from the query. So let’s execute a count of the above query:


rem
rem     Script:         join_elimination_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

with mrd_cte as (
        select 
                meter_id, date_read, reading
        from 
                meter_readings
        where 
                date_read in (trunc(sysdate), trunc(sysdate)+1)
)
select count(*) from (
        select
                met.meter_id, met.date_installed, mrd_cte.date_read, mrd_cte.reading
        from
                meters met
        left join 
                mrd_cte
        on      mrd_cte.meter_id = met.meter_id
)
;

  COUNT(*)
----------
        10

1 row selected.

You’ll have to take my word for it, of course, but no-one else was using this database while I was running this test, and no-one else has access to the schema I was using anyway. Unfortunately when I count the 12 rows instead of reporting them Oracle thinks there are only 10 rows. Oops!

Step 1 in investigating the problem – check the execution plans to see if there are any differences in the structure of the plan, the use of predicates, or the outline information. I won’t bother with the plan for the base query because it was very obvious from the count query where the problem lay.


-------------------------------------------------------------------
| Id  | Operation        | Name   | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT |        |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |        |     1 |            |          |
|   2 |   INDEX FULL SCAN| MET_PK |    10 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$69B21C86")
      ELIMINATE_JOIN(@"SEL$00F67CF8" "METER_READINGS"@"SEL$1")
      OUTLINE(@"SEL$00F67CF8")
      MERGE(@"SEL$C43CA2CA" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$C43CA2CA")
      MERGE(@"SEL$D28F6BD4" >"SEL$E6E74641")
      OUTLINE(@"SEL$E6E74641")
      ANSI_REARCH(@"SEL$4")
      OUTLINE(@"SEL$D28F6BD4")
      MERGE(@"SEL$1" >"SEL$006708EA")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$006708EA")
      ANSI_REARCH(@"SEL$3")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$3")
      INDEX(@"SEL$69B21C86" "MET"@"SEL$3" ("METERS"."METER_ID"))
      END_OUTLINE_DATA
  */

This is the plan as pulled from memory by a call to dbms_xplan.display_cursor(). We note particularly the following: meter_readings doesn’t appear in the plan, there is no predicate section (and no asterisks against any of the operations that would tell us there ought to be some predicate information), and there’s a very revealing ELIMINATE_JOIN(@”SEL$00F67CF8″ “METER_READINGS”@”SEL$1”) in the outline information.

For some reason the optimizer has decided that it’s okay to remove meter_readings from the query (even though there may be many meter readings for each meter), so it was inevitable that it produced the wrong result.

Despite my opening note, this is not an issue with subquery factoring – it just looked that way when I started poking at the problem. In fact, if you rewrite the query using an inline view you get the same error, if you turn the inline view into a stored view you get the error, and if you turn the whole query into a simple (left) join with the date predicate as part of the ON clause you still get the error.

The problem lies somewhere in the join elimination transformation. If you go back to the outline information from the bad plan you’ll see the line: ELIMINATE_JOIN(@”SEL$00F67CF8″ “METER_READINGS”@”SEL$1”) – by changing this to NO_ELIMINATE_JOIN(…) and adding it to the main query block I got a suitable plan joining the two tables and producing the right result.
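In case it helps, this is roughly what the corrected count query looked like (a sketch – the query block and alias names come from the outline above and could be different on another system):

with mrd_cte as (
        select
                meter_id, date_read, reading
        from
                meter_readings
        where
                date_read in (trunc(sysdate), trunc(sysdate)+1)
)
select
        /*+ no_eliminate_join(@sel$00f67cf8 meter_readings@sel$1) */
        count(*)
from    (
        select
                met.meter_id, met.date_installed, mrd_cte.date_read, mrd_cte.reading
        from
                meters met
        left join
                mrd_cte
        on      mrd_cte.meter_id = met.meter_id
        )
;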

The problem appears in 12.2.0.1 and 19.5.0.0 (tested on livesql) – but does not appear in 12.1.0.2 or 11.2.0.4.

There is a known bug associated with this problem:

Bug: 29182901
Abstract: WRONG COUNT WAS RETURNED WHEN _OPTIMIZER_ENHANCED_JOIN_ELIMINATION=TRUE
Query with Outer Join Returned a Wrong Result due to Join Elimination (Doc ID 29182901.8)

The bug is fixed in 20.1, with a fix that has been backported into the Jan 2020 patches for 19, 18, and 12.1.

February 14, 2020

Char problems

Filed under: Bugs,Oracle,Partitioning — Jonathan Lewis @ 3:25 pm GMT Feb 14,2020

The semantics of comparing character columns of different types can lead to some confusion, so before I get into the main body of this note here’s a little test based on a table with one row:


create table t1(c2 char(2), c3 char(3), vc2 varchar2(2), vc3 varchar2(3));

insert into t1 values ('XX','XX','XX','XX');
commit;

select count(*) c2_c3   from t1 where c2 = c3;
select count(*) c2_vc3  from t1 where c2 = vc3;
select count(*) c3_vc2  from t1 where c3 = vc2;
select count(*) c3_vc3  from t1 where c3 = vc3;

I’ve inserted one row, using the same value for every single column; then I’ve been counting the row(s) where various pairs of columns match. Which (if any) of the four queries return the value 1 and which (if any) return the value zero ?

To help you, here’s a quote from the relevant Oracle manual about character comparison semantics:

Blank-Padded and Nonpadded Comparison Semantics

With blank-padded semantics, if the two values have different lengths, then Oracle first adds blanks to the end of the shorter one so their lengths are equal. Oracle then compares the values character by character up to the first character that differs. The value with the greater character in the first differing position is considered greater. If two values have no differing characters, then they are considered equal. This rule means that two values are equal if they differ only in the number of trailing blanks. Oracle uses blank-padded comparison semantics only when both values in the comparison are either expressions of data type CHAR, NCHAR, text literals, or values returned by the USER function.

With nonpadded semantics, Oracle compares two values character by character up to the first character that differs. The value with the greater character in that position is considered greater. If two values of different length are identical up to the end of the shorter one, then the longer value is considered greater. If two values of equal length have no differing characters, then the values are considered equal. Oracle uses nonpadded comparison semantics whenever one or both values in the comparison have the data type VARCHAR2 or NVARCHAR2.

The first two queries return 1, the second two return zero.

  1. Query 1: c2 is blank padded to match c3 in length before the comparison, so the values are ‘XX {space}’
  2. Query 2: c2 is not padded, so the compared values are both ‘XX’
  3. Query 3: c3 is three characters long, vc2 is only 2 characters long and does not get padded to match c3
  4. Query 4: c3 is three characters long, vc3 is only 2 characters long and does not get padded to match c3

One interesting by-product of this example is this:

  • c3 = c2 and c2 = vc3, but c3 != vc3 – whatever happened to transitive closure!

So we come to the point of the article, which is this:

Be very careful about using char() (or nchar) types in your tables – especially if you’re thinking of using columns of type [n]char() in join predicates (or predicates that don’t start life as join predicates but become join predicates through transitive closure).

Here’s an interesting bug that has appeared (I think) as a side effect of the need for blank-padded semantics. We start with two tables that I’ll be joining with a hash join – one of them will be a small table that will be used as the “build” table, the other will be (faked to look like) a large table that will be used as the “probe” table.


rem
rem     Script:         bloom_prune_char_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table test_probe(status char(3)) partition by list(status) (partition st_1 values('00','BI'));

create table test_build(status char(2)); 

insert into test_build values('00');
insert into test_probe values('00');
insert into test_build values('BI');
insert into test_probe values('BI');

commit;
 
prompt  =====================================
prompt  Fake large table stats for test_probe
prompt  =====================================

exec dbms_stats.set_table_stats(null,'test_probe',numrows=>2000000);

spool bloom_prune_char_bug
set linesize 156
set pagesize 60

set serveroutput off

select  /*+ 
                gather_plan_statistics 
        */
        * 
from 
        test_build b,
        test_probe a 
where 
        a.status = b.status
;

select * from table(dbms_xplan.display_cursor(null,null,'projection partition allstats last'))
/


The two tables have a pair of matching rows – so the query should return two rows. But it doesn’t – it returns no rows, and the clue about why not is in the execution plan (which I’ve pulled from memory with lots of extra bits and pieces). Here’s the output from running this script (from the query onwards) on an instance of 12.2.0.1:


no rows selected


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------
SQL_ID  2295z4p6m4557, child number 0
-------------------------------------
select /*+   gather_plan_statistics  */  * from  test_build b,
test_probe a where  a.status = b.status

Plan hash value: 177769189

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name       | Starts | E-Rows | Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |            |      1 |        |       |       |      0 |00:00:00.01 |       7 |       |       |          |
|*  1 |  HASH JOIN               |            |      1 |   2000K|       |       |      0 |00:00:00.01 |       7 |  2078K|  2078K|  766K (0)|
|   2 |   PART JOIN FILTER CREATE| :BF0000    |      1 |      2 |       |       |      2 |00:00:00.01 |       7 |       |       |          |
|   3 |    TABLE ACCESS FULL     | TEST_BUILD |      1 |      2 |       |       |      2 |00:00:00.01 |       7 |       |       |          |
|   4 |   PARTITION LIST SINGLE  |            |      1 |   2000K|KEY(AP)|KEY(AP)|      0 |00:00:00.01 |       0 |       |       |          |
|   5 |    TABLE ACCESS FULL     | TEST_PROBE |      0 |   2000K|     1 |     1 |      0 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("A"."STATUS"="B"."STATUS")

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "B"."STATUS"[CHARACTER,2], "A"."STATUS"[CHARACTER,3]
   2 - INTERNAL_FUNCTION("B"."STATUS")[2], INTERNAL_FUNCTION("B"."STATUS")[2], "B"."STATUS"[CHARACTER,2]
   3 - "B"."STATUS"[CHARACTER,2]
   4 - (rowset=256) "A"."STATUS"[CHARACTER,3]
   5 - (rowset=256) "A"."STATUS"[CHARACTER,3]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

The optimizer has used a Bloom filter to do partition pruning, and while we can see operation 4 reporting a “partition list single” operation using “and pruning” (AP), we can see that operation 5 reports zero starts. This is because the Bloom filter has been used to determine that there are no relevant partitions!

Looking down at the (rarely examined) projection information we can see why – operation 2 (the “part join filter create”) has a strange “internal function” in its projection, and references B.STATUS as character[2]. It looks as if the Bloom filter that identifies partitions has been built using a char(2) as the input to its hashing function – which is bad news when the resulting filter is used to check hash values generated from the partition definition, which hashes a char(3).

If my thoughts about the mismatch between how the Bloom filter is built and how it is used are correct, then a test that would help to confirm the hypothesis would be to disable Bloom filter pruning – which you can only do by setting a hidden parameter (possibly through a hint or SQL Patch):

select 
        /*+ 
                gather_plan_statistics 
                opt_param('_bloom_pruning_enabled','false') 
        */  
        * 
from 
        test_build b,
        test_probe a 
where
        a.status = b.status;

select * from table(dbms_xplan.display_cursor(null,null,'projection partition allstats last'))
/


ST STA
-- ---
00 00
BI BI

2 rows selected.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  9whuurpkm3wpw, child number 0
-------------------------------------
select  /*+   gather_plan_statistics
opt_param('_bloom_pruning_enabled','false')   subquery_pruning(a)  */
* from  test_build b,  test_probe a where  a.status = b.status

Plan hash value: 787868928

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Starts | E-Rows | Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |            |      1 |        |       |       |      2 |00:00:00.01 |      45 |       |       |          |
|*  1 |  HASH JOIN             |            |      1 |   2000K|       |       |      2 |00:00:00.01 |      45 |  2078K|  2078K|  866K (0)|
|   2 |   TABLE ACCESS FULL    | TEST_BUILD |      1 |      2 |       |       |      2 |00:00:00.01 |       7 |       |       |          |
|   3 |   PARTITION LIST SINGLE|            |      1 |   2000K|     1 |     1 |      2 |00:00:00.01 |      38 |       |       |          |
|   4 |    TABLE ACCESS FULL   | TEST_PROBE |      1 |   2000K|     1 |     1 |      2 |00:00:00.01 |      38 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("A"."STATUS"="B"."STATUS")

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "B"."STATUS"[CHARACTER,2], "A"."STATUS"[CHARACTER,3]
   2 - (rowset=256) "B"."STATUS"[CHARACTER,2]
   3 - (rowset=256) "A"."STATUS"[CHARACTER,3]
   4 - (rowset=256) "A"."STATUS"[CHARACTER,3]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

When we add the hint opt_param(‘_bloom_pruning_enabled’,’false’) to the query we get the right results and, of course, we can see that there is no operation in the execution plan to generate and use the Bloom filter that is probably causing the problem.

Conclusion

If you are going to use char() types in your tables, and if you are going to compare columns of type char() make sure that the columns are defined to be exactly the same length – or that you include an explicit cast() to guarantee that the shorter column appears to be the same length as the longer column.
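Using the t1 table from the start of this note, here’s a sketch of the cast() approach:

select  count(*) c3_vc3_cast
from    t1
where   c3 = cast(vc3 as char(3))       -- vc3 ('XX') is padded to 'XX ' and now matches c3
;

The explicit cast blank-pads the varchar2 value to the length of the char() column, so this query returns 1 where the original c3 = vc3 comparison returned zero.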

Footnote

This bug appeared in my MOS “hot topics” email a couple of days ago as:

Bug 27661222: WRONG RESULTS WITH PARTITION PRUNING COMPARING CHAR COLUMNS OF DIFFERENT LENGTH

Reading the bug note, the problem is described as a bug in “AND pruning”, with a workaround of setting the hidden parameter “_and_pruning_enabled” to false (possibly through the opt_param() hint). I suspect that the underlying problem may be the Bloom filter itself, and that disabling Bloom filter pruning for the query may be a slightly less aggressive workaround.
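In other words, a sketch of the query with the bug note’s suggested workaround would look like this:

select
        /*+
                gather_plan_statistics
                opt_param('_and_pruning_enabled','false')
        */
        *
from
        test_build b,
        test_probe a
where
        a.status = b.status
;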

The bug is reported as fixed in 20.1 – but you don’t need to upgrade just yet because, apart from the workarounds, there are various patches available back to 19.x and 12.2.

The sample script above is basically the example in the bug note with a few minor changes.

Update (March 2020)

Hot on the heels of the previous bug report, another one showed up today that related to problems with char() types and comparisons. The title of the bug isn’t an accurate description of the problem: Bug 29002488 : WRONG RESULT WITH A QUERY USING UNION ALL. However the note includes a test case which started life as a join to a union all.

The bug is reported as fixed in 20.1, but there are patches and backports into various versions of 12.2, 18c and 19c.

Update (March 2020 again)

Another related bug has shown up in my “hot topics” email: Bug 28910498 : WRONG RESULT WITH JOIN TABLES WITH CHAR COLUMNS OF DIFFERENT LENGTH. Again fixed in 20.1; apparently introduced in 12.2.0.1.

In this case with a workaround of setting optimizer_features_enable back to 12.1.0.2 or 11.2.0.4, or adding the hint /*+ no_push_pred */ to the (invisible) query.  The last workaround suggests, however, that the problem may be older than 12.2.0.1, and setting OFE simply disables an enhancement to predicate pushing in 12.2 that allows the problem to become visible for the user’s query.


February 4, 2020

maxquerylen

Filed under: Bugs,Infrastructure,Oracle,undo — Jonathan Lewis @ 3:50 pm GMT Feb 4,2020

The view v$undostat holds summary information about undo activity that can be used by the automatic undo mechanism to optimise the undo retention time (and hence undo space allocation). The view holds one row for every ten-minute interval in the last 4 days (96 hours) and includes two columns called maxquerylen and maxqueryid – which tell you something about the query that was considered to be the longest running query active in the interval.

In this note I want to explain why the contents of these two columns are sometimes (possibly often) completely irrelevant despite there being a few notes on the internet about how you should investigate them to help you decide on a suitable setting for the undo_retention.

The descriptions of these columns in the 19c reference manual are as follows:

  • maxquerylen – Identifies the length of the longest query (in seconds) executed in the instance during the period. You can use this statistic to estimate the proper setting of the UNDO_RETENTION initialization parameter. The length of a query is measured from the cursor open time to the last fetch/execute time of the cursor. Only the length of those cursors that have been fetched/executed during the period are reflected in the view.
  • maxqueryid – SQL identifier of the longest running SQL statement in the period

It would be hard to misunderstand the meaning of the second column – if the first column tells us that Oracle has spotted a “longest query” then the second column gives us the sql_id so we can check v$sql to find out what it was. But what sort of queries are going to show up as the “longest query” in the interval?

Here’s an example from a few hours of a nearly idle instance, querying the begin and end times (formatted to show just the day of month and the time) along with the two critical columns:


select
        to_char(begin_time,'dd hh24:mi:ss')     begin_time,
        to_char(end_time,'dd hh24:mi:ss')       end_time,
        maxquerylen,
        maxqueryid
from
        v$undostat
order by
        begin_time
;

BEGIN_TIME      END_TIME        MAXQUERYLEN MAXQUERYID
--------------- --------------- ----------- -------------
...
04 10:50:18     04 11:00:18               0
04 11:00:18     04 11:10:18             356 f3yfg50ga0r8n
04 11:10:18     04 11:20:18             883 25u1mbkcr9rnu
04 11:20:18     04 11:30:18            1486 25u1mbkcr9rnu
04 11:30:18     04 11:40:18            2090 25u1mbkcr9rnu
04 11:40:18     04 11:50:18               0
04 11:50:18     04 12:00:18            3299 25u1mbkcr9rnu
04 12:00:18     04 12:10:18            3903 25u1mbkcr9rnu
04 12:10:18     04 12:20:18            4507 25u1mbkcr9rnu
04 12:20:18     04 12:30:18               0
04 12:30:18     04 12:40:18               0
04 12:40:18     04 12:50:18               0
04 12:50:18     04 13:00:18               0
04 13:00:18     04 13:10:18               0
04 13:10:18     04 13:20:18               0
04 13:20:18     04 13:30:18               0
04 13:30:18     04 13:37:27            9035 25u1mbkcr9rnu

173 rows selected.

Notice, particularly, that the SQL_ID 25u1mbkcr9rnu disappears from the 11:40 interval, re-appears at 11:50, then disappears again from 12:20 through 13:20 (lunchtime) before reappearing at 13:30. And each time it reappears after an absence the query length has increased in a way that’s consistent with the gap. So it looks as if the query wasn’t running during the gap, yet it turns out to have been running once the gap ended. (Quantum queries, perhaps?)

The explanation is in the detail of the definition: “from the cursor open time to the last fetch/execute time”. From an SQL*Plus session I “set pause on”, then executed the query “select * from all_objects” and hit return a few times to get a few pages of output. Then, spread over the next couple of hours, I hit return a few more times to get a few more pages of output. Each time I hit return my session made another fetch call, and the code behind v$undostat noticed that my query was still active.
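
If you want to reproduce the effect, the steps on the session side are roughly as follows (the timing of each extra “return” is, of course, up to you):

set pause on

select * from all_objects;

rem     hit return every few minutes to fetch another page of output,
rem     then check maxquerylen and maxqueryid in v$undostat from a
rem     second session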

I don’t know exactly how Oracle is keeping track of “active” statements because there seem to be some inconsistencies in the reporting (and I’ll comment on those later), but as a first approximation, until you close a cursor (either explicitly or implicitly) some piece of Oracle’s code registers the fact that the query might do further fetches, which means it might need to apply more undo to make current data read-consistent with the cursor’s start SCN, which means that it should take note of the time the cursor has been open because the undo retention time might need to be that long.

Inconsistencies.

I said there were some inconsistencies in the reporting. I’ve noticed three anomalies – perhaps due to the extreme idleness of the instance I was using for testing.

  1. At about 12:45 I hit return a few times to get the maxquerylen and maxqueryid refreshed – but the code didn’t seem to notice that I had done a few more fetches of a long running query. So it seems to be possible for Oracle to miss the query that should be reported.
  2. At about 11:52 I hit return a few times again, and you can see that the two critical columns were refreshed, but if you do the arithmetic Oracle has attributed 600 seconds to the query – the whole of the 10 minute refresh interval, not just the time up to the moment of the last fetch that I had done in that interval.
  3. I didn’t hit return again until about 12:45 (the moment in point 1 above when the two columns didn’t refresh the way I thought they should), but the columns kept updating through the 12:00 and 12:10 intervals anyway before disappearing from the 12:20 interval. So it looks as if queries can be reported as long running even when they haven’t been executing or fetching.

There is one last anomaly – and this relates to the reason I started looking at these columns. If you check the entry for 11:00 you’ll see that the SQL_ID ‘f3yfg50ga0r8n’ has been reported as “running” for 356 seconds. But here’s what I found in v$sql for that query:


select  executions, parse_calls, fetches, end_of_fetch_count, elapsed_time, cpu_time, rows_processed, sql_text
from    v$sql
where   sql_id = 'f3yfg50ga0r8n'
;

EXECUTIONS PARSE_CALLS    FETCHES END_OF_FETCH_COUNT ELAPSED_TIME   CPU_TIME ROWS_PROCESSED SQL_TEXT
---------- ----------- ---------- ------------------ ------------ ---------- -------------- ------------------------------------------
        79          79         79                  0        20487      10667             79 select obj# from obj$ where dataobj# = :1

The SQL looks like a sys-recursive statement which, in 79 executions, has accumulated 20 milliseconds of elapsed time (rather than 356 seconds – but that difference could just be related to one or other of the anomalies I reported earlier). The key thing to note is that the value of column end_of_fetch_count is zero: this looks like a statement where Oracle has simply not bothered to fetch all the data and has then not bothered to close the cursor. As a result it’s possible that each time the statement is executed (note that parse_calls = executions, so we’re not looking at a “held” cursor) the code behind v$undostat looks back at the time the cursor was initially opened to measure the query length, rather than looking at the time the statement was re-executed.
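
One way to check for that sort of dangling cursor (a cross-check I’m suggesting here, rather than a step from the original investigation) is to look in v$open_cursor for the sql_id:

select  sid, cursor_type, last_sql_active_time
from    v$open_cursor
where   sql_id = 'f3yfg50ga0r8n'
/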

This may go a long way towards answering the question that came up on Oracle-l earlier today, as follows:

The following query (sql_id is 89w8y2pgn25yd) was recorded in v$undostat.maxqueryid in a 12.2 database during a period of high undo usage: select ts# from sys.ts$ where ts$.online$ != 3 and bitand(flags,2048) != 2048;

select
        undoblks, txncount, maxquerylen, maxconcurrency, activeblks
from    v$undostat u
where   maxqueryid='89w8y2pgn25yd'
;

UNDOBLKS  TXNCOUNT  MAXQUERYLEN  MAXCONCURRENCY  ACTIVEBLKS
--------  --------  -----------  --------------  ----------
   39199      4027         1378               5     2531960

What is this query, and how does it manage to report a maximum query length of 1,378 seconds? Just like the one above it’s a sys-recursive query; this one appears when you query dba_tablespaces, and it executes once for every row returned, taking just a fraction of a second each time. But if you trace a query like “select tablespace_name from dba_tablespaces” you’ll see that every time the recursive query is called the trace file shows lines for “Parse, Exec, Fetch, Close” – until the last call, which doesn’t report a “Close”.
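
If you want to see those calls for yourself, a basic 10046 trace of the session is all it takes; something like:

alter session set events '10046 trace name context forever, level 8';

select tablespace_name from dba_tablespaces;

alter session set events '10046 trace name context off';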

Just like my original “select * from all_objects” there’s a query dangling with an open cursor, leaving Oracle with the opportunity to go back to the moment it was opened and report it as a “long running query”.

tl;dr

The maxquerylen and maxqueryid in v$undostat don’t tell you about statements that have taken a long time to change a lot of data and generate a lot of undo; they tell you about statements that might need to apply a lot of undo to see read-consistent data and might therefore become victims of an “ORA-01555: snapshot too old” event.

The statements reported may also be completely irrelevant: there are various reasons (some, possibly, bug-related) why a cursor may be treated as if it opened a long time in the past when it was actually a re-use of an existing “dangling” cursor. It’s also possible that some bad client code will treat cursors in a way that does no harm to the client program but hides a more interesting query that would otherwise have been reported in these two columns.

Update Feb 2020

I don’t know why I didn’t think of searching MOS before poking around at this issue, but I’ve just done a search on “maxquerylen maxqueryid” and one of the first Knowledgebase articles to show up – dated September 2019 – was Doc ID 2005931.1: Wrongly calculated MAXQUERYLEN Causing ORA-01555 or ORA-30036 Errors.

The document reports several bugs all relating to the same issue, all closed as duplicates of unpublished bug 12314489: OCIRELEASESTMT() TO PUT THE CURSORS IN CURBOUND STATE WHEN PLACING THEM IN CACHE. This has also been closed and replaced by bug 26833932, which is also unpublished and doesn’t even have a description.
