Oracle Scratchpad

January 13, 2021

Check Constraints

Filed under: constraints,Oracle — Jonathan Lewis @ 3:17 pm GMT Jan 13,2021

This is a note I drafted in 2018 but never got around to publishing. It’s an odd quirk of behaviour that I discovered in 12.2.0.1 but I’ve just checked and it’s still present in 19.3.0.0.

Here’s a funny little thing that I found while checking some notes I had on adding constraints with minimum service interruption – a topic I last wrote about a couple of years ago [ed. now nearly 5 years ago]. This time around I did something a little different, and here’s a cut-n-paste from the first couple of steps of my test when I had previously used another session to delete a row without committing (table t1 is a table I created as select * from all_objects).

Note that the first SQL statement uses disable while the second uses enable:


SQL> alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;

At this point my session was hanging – and I find it a little surprising that the attempt to create the constraint disabled returns an immediate ORA-00054, while the attempt to create it enabled waits. A quick check of v$lock showed that my session was requesting a TX enqueue in mode 4 (transaction, share mode) waiting for the other session to commit or rollback.
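For reference, the "quick check" was a query along the lines of the following sketch (the two sids are hard-coded from my test, so adjust to suit):

select  sid, type, id1, id2, lmode, request, ctime, block, con_id
from    v$lock
where   sid in (16, 237)
order by
        sid, type
/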

In the following output from 12.1.0.2 my session is SID 16 and I’ve simply reported all the rows for the two sessions from v$lock:


       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
        16 TX     327704      12790          0          4        169          0          0
           TX      65550       9613          6          0        169          0          0
           TM     192791          0          2          0        169          0          0
           OD     192791          0          4          0        169          0          0
           AE        133          0          4          0        579          0          0

       237 TX     327704      12790          6          0        466          1          0
           TM     192791          0          3          0        466          0          0
           AE        133          0          4          0        582          0          0

You’ll notice my session is holding an OD enqueue in mode 4 and a TM lock in mode 2 – the value 192791 is the object_id of the table in question. The OD lock is described in v$lock_type as “Lock to prevent concurrent online DDLs”.
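If you want to check that description for yourself, a quick query against v$lock_type does the job:

select  type, name, description
from    v$lock_type
where   type = 'OD'
/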

It would appear, therefore, that we are stuck until the other session commits – so I hit ctrl-C to interrupt the wait, and then tried to add the constraint again, still without committing (or rolling back) the other session. Here’s the cut-n-paste from that sequence of events:

{hits ctrl-C}
alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
                              *
ERROR at line 1:
ORA-02264: name already used by an existing constraint

I’ve interrupted the command and (according to Oracle) cancelled the current operation – but it then becomes apparent that I have successfully added the constraint anyway!

SQL> select constraint_name, constraint_type, search_condition from user_constraints where table_name = 'T1';

CONSTRAINT_NAME      C SEARCH_CONDITION
-------------------- - --------------------------------------------------------------------------------
SYS_C0018396         C "OWNER" IS NOT NULL
SYS_C0018397         C "OBJECT_NAME" IS NOT NULL
SYS_C0018398         C "OBJECT_ID" IS NOT NULL
SYS_C0018399         C "CREATED" IS NOT NULL
SYS_C0018400         C "LAST_DDL_TIME" IS NOT NULL
SYS_C0018401         C "NAMESPACE" IS NOT NULL
C1                   C owner = upper(owner)

And this is what happened when I switched to the other session – where I had still not committed or rolled back – and tried to execute an update:


SQL> update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1;
update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.C1) violated

So the constraint really is present and is visible to other sessions – even though the attempt to add it hung and had to be interrupted!

I can’t think of any reason why this might cause a problem in the real world – but it is an oddity that might have echoes in other cases where it matters.

Update (next day)

When I posted a link to this post on twitter one of the replies referred me to a very similar post by Oren Nakdimon with a very similar date to my 2018 draft, which makes me wonder if I had decided not to publish it because I’d done a Google search on the topic after I had written this note.

I’ve just done a search on ODC (formerly OTN) for possible references (date range around the end of 2018, with text “enable novalidate”) and found this question. So maybe we had seen the same question and done the same tests at the same time.


December 23, 2020

19c tweak

Filed under: CBO,Hints,Oracle,Upgrades — Jonathan Lewis @ 2:15 pm GMT Dec 23,2020

There are times when an upgrade makes a big difference to performance because an enhancement to the optimizer code path changes the costing of a plan that was always possible, but badly costed. I noticed an example of this while testing the code in the email I mentioned in last month’s posting on the “Incremental Sort” that Postgres 13 can do. Here’s a model to create some data and demonstrate the principle – the code is a modified (reduced) version of the code published by Phil Florent describing the feature.

rem
rem     Script:         fetch_first_postgres.sql
rem     Author:         Phil Florent
rem     Dated:          6th Nov 2020
rem
rem     Last tested
rem             19.3.0.0        Uses index descending unhinted at low cost
rem             18.3.0.0        Used index desc efficiently if hinted, but high cost
rem             12.2.0.1        Used index desc efficiently if hinted, but high cost
rem

create table lancers(dtl timestamp, idg integer not null, perf integer);

insert into lancers(dtl, idg, perf)
with serie(i) as (
        select 25e4 from dual
        UNION ALL
        select i - 1 from serie where i > 1
)
select
        current_timestamp - (i / 1440),
        trunc(dbms_random.value * 1e5 + 1),
        case
                when dbms_random.value <= 0.001 then 50000 + trunc(dbms_random.value * 50000 + 1) 
                else trunc(dbms_random.value * 50000 + 1) 
        end
from serie
/

execute dbms_stats.gather_table_stats(user,'lancers',method_opt=>'for all columns size 1')

create index perf_i1 on lancers(perf, dtl);
alter table lancers modify (perf not null, dtl not null);

This is the basic statement I want to execute – but in some versions of Oracle it will have to be hinted to produce the execution plan I want to see.

select  
        idg, perf 
from  
        lancers 
order by
        perf desc  
fetch first 5 rows only
/
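And this is the hinted variant I used where necessary (the same hint text appears, commented out, in the SQL reported with the plans below):

select
        /*+
                gather_plan_statistics
                index_desc(lancers(perf, dtl))
        */
        idg, perf
from
        lancers
order by
        perf desc
fetch first 5 rows only
/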

If you check the order by clause and the definition of the index perf_i1 you’ll see that Oracle could (in principle) walk the index in descending order, stopping after just 5 rows, to produce the result.

But here are the execution plans from 19.3.0.0, 18.3.0.0, and 12.2.0.1, with their plans pulled from memory and showing the rowsource execution statistics (hinted by gather_plan_statistics) to show you what happens – starting from the newest first:

19.3.0.0: (unhinted)
--------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |     8 (100)|      5 |00:00:00.01 |       9 |
|*  1 |  VIEW                         |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |      5 |     8   (0)|      5 |00:00:00.01 |       9 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|     8   (0)|      5 |00:00:00.01 |       9 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |      5 |     3   (0)|      5 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

You can see an index_desc() hint in the output, but it has been commented out. The key feature to note is that the optimizer has found the path I was hoping to see, and it’s a low-cost path, although there is one oddity in the plan – the E-rows (cardinality estimate) for the table access doesn’t allow for the stopkey and, since there are no predicates in the query, reports the 250K rows that exist in the table.

For 18.3.0.0 I had to include the hint, and you’ll see why:

18.3.0.0 (hinted with index_desc)
---------------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)



Again we see the plan is possible, but the optimizer’s cardinality estimate for the hinted index scan is 250K rows – the full size of the index, and it has allowed for that in the cost of the query. So the cost of this plan is high and in the absence of the hint the optimizer would have used a full tablescan with sort.

Finally we get down to 12.2.0.1 – and I’ve shown the hinted and unhinted plans.

12.2.0.1 (hinted index_desc)
-----------------------------
SQL_ID  fgxvcaz3sab4q, child number 0
-------------------------------------
select   /*+   gather_plan_statistics   index_desc(lancers(perf, dtl))
*/   idg, perf from  lancers order by  perf desc  fetch first 5 rows
only

Plan hash value: 843745288

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |      1 |        |   250K(100)|      5 |00:00:00.01 |      10 |
|*  1 |  VIEW                         |         |      1 |      5 |   250K  (1)|      5 |00:00:00.01 |      10 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |      1 |    250K|   250K  (1)|      5 |00:00:00.01 |      10 |
|   3 |    TABLE ACCESS BY INDEX ROWID| LANCERS |      1 |    250K|   250K  (1)|      6 |00:00:00.01 |      10 |
|   4 |     INDEX FULL SCAN DESCENDING| PERF_I1 |      1 |    250K|   854   (3)|      6 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)


12.2.0.1 Unhinted
------------------
SQL_ID  8nmavy42tzrhb, child number 0
-------------------------------------
select   /*+   gather_plan_statistics --  index_desc(lancers(perf,
dtl))  */   idg, perf from  lancers order by  perf desc  fetch first 5
rows only

Plan hash value: 1374242431

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |      1 |        |  1102 (100)|      5 |00:00:00.24 |     822 |       |       |          |
|*  1 |  VIEW                    |         |      1 |      5 |  1102  (10)|      5 |00:00:00.24 |     822 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|         |      1 |    250K|  1102  (10)|      5 |00:00:00.24 |     822 |  2048 |  2048 | 2048  (0)|
|   3 |    TABLE ACCESS FULL     | LANCERS |      1 |    250K|   132  (16)|    250K|00:00:00.13 |     822 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("PERF") DESC )<=5)

As you can see, 12.2.0.1 and 18.3.0.0 behave exactly the same way when hinted – the path is acceptable, but the cost is high. Consequently when I remove the hint the optimizer switches to using a full tablescan with sort because its cost is lower (thanks, in part, to the pattern in the data) than the indexed access path.

Summary

Two thoughts to take away from this note.

  • First, there were two possible execution plans for the same query and the optimizer in versions below 19c was picking the one that was clearly a bad idea. The presence of alternatives, though, means that the patterns in the data, the index definition and statistics (especially the clustering_factor), the number of rows to fetch, and various other optimizer settings may mean that you find yourself in the unlucky position that the optimizer’s arithmetic is on the boundary between the two plans and it switches randomly between them from day to day.
  • Secondly, when you upgrade to 19c the optimizer seems to be more likely to pick the indexed access path for a query like this – and that will probably be a good thing, but in a few cases it might turn out to be a bad thing.

December 21, 2020

Why Why Why Why?

Filed under: Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 2:24 pm GMT Dec 21,2020

Here’s a little puzzle – and if you don’t see the answer almost immediately you do need to read this note. The question comes from a thread on the Oracle Groundbreakers’ Forum –

“Why I am getting 0020 instead of 2020 for below query?”

select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual.

Instead of sysdate if I give date like ‘5-dec-2020’ it’s giving correct. Any ideas why iso value for sysdate year is coming as 0020?

There are many things that people do wrong with dates and many ways in which Oracle tries to help, so I’ll start with a general-purpose reminder: Oracle supplies a “date” datatype; if you want to store dates, compare dates, or do date arithmetic, make sure you are using the date datatype.

(Technically, of course, Oracle’s date datatype is actually a “date with time to nearest second” type, so there are a couple of details you need to remember to avoid running into surprises that are due to the presence of the time component – for example you might use a constraint like “check(date_col = trunc(date_col))” to enforce date-only values for a column.)
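For example, as a sketch against a hypothetical table t with a date column date_col:

alter table t
        add constraint t_date_only
        check (date_col = trunc(date_col))
;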

Sysdate is a date, and the function to_date() expects its first parameter to be a character string; so Oracle implicitly converts sysdate to a character type with the to_char() function before it does anything else and it uses the session’s nls_date_format parameter to supply the formatting string. On my instance this parameter has the value ‘DD-MON-RR’ (an option created in an attempt to work around the “Y2K” problem – which some of you will remember).
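In other words, given my nls_date_format, the OP's expression behaves as if it had been written like this (a sketch that simply makes the hidden conversion explicit):

select
        to_char(
                to_date(
                        to_char(sysdate,'DD-MON-RR'),   -- the implicit conversion Oracle supplies
                        'dd-mm-yyyy'
                ),
                'iyyy'
        )
from    dual
/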

So sysdate was converted (on the day the question was asked) to the character string '10-DEC-20', and when the to_date() function call tried to convert this back to a date type using the explicitly supplied format dd-mm-yyyy Oracle used leading zeros to pad the 2-digit year to a 4-digit year, which made the year 0020 – exactly what the OP saw.

Solution

To correct this code, take out the call to to_date().

SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
0020

SQL> select to_char(sysdate,'iyyy') from dual;

TO_C
----
2020

If you can’t correct the code then you might be able to work around the error by setting the nls_date_format to a more appropriate value. In fact the nls_date_format is one of those parameters that you probably ought to change from its default value the moment you set up your database. It’s just rather scary to do so if you’re running a system that has been around for a few years and may (accidentally) include some code that depends on the default setting to get the right answers or best execution plans.

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL>  alter session set nls_date_format='dd-Mon-rrrr';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_date('01-Dec-20','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-2020

SQL> select to_date('01-Dec-50','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-1950

I’ve included a couple of examples using the 'rrrr' (or 'RRRR') format for the year component. Oracle introduced the 'RR' format over 20 years ago as a best-guess workaround to the Y2K problem. The two-character 'RR' format means values between 50 and 99 imply the previous century, and values between 00 and 49 imply the current century (as shown in the last two simpler examples). It’s an option that should have been deprecated within a few years of its introduction and desupported soon after, but the terrible inertia of the IT juggernaut means it’s still around. If you’re worried about the impact of changing your nls_date_format from 'dd-mon-RR' to 'dd-Mon-yyyy' you may feel a little safer switching to 'dd-Mon-RRRR' – whatever you do, though, you’ll almost certainly find examples where the code misbehaves because of the side-effects of the change in formatting.

One tiny detail you might have noticed in the original posting is that the user tested their code with the literal value ‘5-Dec-2020’, and got the result they wanted even though the format they had used to convert from character to date was ‘dd-mm-yyyy’. Oracle tries quite hard to cope with date conversions, as Connor McDonald pointed out several years ago.

While we’re on the topic of conversion it’s worth revisiting my comment about the date type including a time component. I’ve often seen expressions like to_date(to_char(sysdate,'dd-mon-yyyy')) being used to ensure that a date that might include a time component is reduced to a “date-only” value (although that really means the time-component is “00:00:00”). If you need only the date component it’s far better to use trunc(date_expression) rather than this double type-conversion; there’s probably not a lot of saving in terms of simple CPU-usage, but (a) you might as well take it and (b) you might be able to give the optimizer a chance of getting a better cardinality estimate, hence a better execution plan.
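A trivial sketch of the two alternatives:

-- double conversion: date to string to date, discarding the time component
select  to_date(to_char(sysdate,'dd-mon-yyyy'),'dd-mon-yyyy') d1 from dual;

-- simpler, cheaper, and kinder to the optimizer
select  trunc(sysdate) d1 from dual;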

Summary

  • sysdate is a date type, don’t try to use to_date() on it.
  • to_char(date_expression) will use the nls_date_format value if you haven’t supplied an explicit format string so you should always include an explicitly chosen format expression in the call.
  • The nls_date_format defaults to a value that is bad on two counts: it expects a 2-digit year and uses RR rather than YY. You should be using four digits for the year, and the only argument for using RRRR is if you are temporarily in transition from RR to YYYY.

To misquote Napoleon (the pig, Animal Farm): “4 Y’s good, 2 Y’s bad”. And it’s not going to change to “4 Y’s good, 2 Y’s better”.

Upgrade trivia

Filed under: 18c,Oracle,Upgrades — Jonathan Lewis @ 9:57 am GMT Dec 21,2020

Sometimes it’s the little things that catch you out (perhaps only briefly) on an upgrade. Here’s one that came up on Oracle’s Groundbreakers Developers Community [sic] (who knows what it will be called this time next year and, while we’re at it, who can tell where the apostrophe(s) ought to go).

The problem was with a database trigger that had been doing home-grown auditing to catch any DDL changes to non-SYS objects. The code was quite simple:

create or replace trigger system.audit_ddl_trg 
after ddl on database
begin
        if (ora_sysevent='TRUNCATE') then

                null; -- I do not care about truncate

        elsif ora_dict_obj_owner!='SYS' then

                insert into audit_ddl(d, osuser,current_user,host,terminal,owner,type,name,sysevent)
                values(
                        sysdate,
                        sys_context('USERENV','OS_USER') ,
                        sys_context('USERENV','CURRENT_USER') ,
                        sys_context('USERENV','HOST') , 
                        sys_context('USERENV','TERMINAL') ,
                        ora_dict_obj_owner,
                        ora_dict_obj_type,
                        ora_dict_obj_name,
                        ora_sysevent
                );

        end if;
end;
/

The issue was that after an upgrade from 12c (release not specified) to Oracle 19c the trigger was failing.

Here’s the definition for the table used by the trigger as the target of the insert statement – can you see any reason why it might be failing?

create table audit_ddl (
        d               date,
        osuser          varchar2(255 byte),
        current_user    varchar2(255 byte),
        host            varchar2(255 byte),
        terminal        varchar2(255 byte),
        owner           varchar2(30 byte),
        type            varchar2(30 byte),
        name            varchar2(30 byte),
        sysevent        varchar2(30 byte)
)
/

If it’s not immediately obvious it’s probably because you’ve forgotten that object names (and various other identifiers) are allowed to be up to 128 characters in 19c (and a little earlier) – so defining the owner and name as varchar2(30) is an accident waiting to happen.
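One obvious fix (a sketch, not necessarily the change the OP made) is simply to widen the identifier columns to the current limit:

alter table audit_ddl modify (
        owner   varchar2(128 byte),
        type    varchar2(128 byte),
        name    varchar2(128 byte)
);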

It didn’t take the user long to work out why there was a problem but the more interesting part of the issue was why there were now objects in the database with names exceeding the old 30 character limit. The OP supplied an (obfuscated) example: after the upgrade Oracle was reporting object names “using the full path name” like: “/some/path/name/object_name”.

The structure is a clue – for this user it’s all about Java classes. Here’s a little query against dba_objects with the results from 11.2.0.4 and 12.2.0.1

select  object_name 
from    dba_objects 
where   object_type = 'JAVA CLASS' 
and     object_name like '%TimeZoneNamesBundle'
/

OBJECT_NAME (11.2.0.4)
------------------------------
/2ea59ec_TimeZoneNamesBundle

OBJECT_NAME (12.2.0.1)
--------------------------------------
sun/util/resources/TimeZoneNamesBundle

Java is a particularly enthusiastic user of long object names in Oracle – but it’s not the only culprit; there are a few others, as we can see with another query against dba_objects – this time from 19c:

select  object_type, count(*)
from    dba_objects 
where   length(object_name) > 30 
group by object_type 
order by count(*)
/

OBJECT_TYPE               COUNT(*)
----------------------- ----------
PROCEDURE                        1
INDEX                            2
JAVA RESOURCE                 1286
SYNONYM                       4337
JAVA CLASS                   31739

If you’ve made much use of Java in the database before now you’re probably familiar with the call to dbms_java.longname(). Since Oracle has a limit of 30 characters for identifiers it trims the leading edge (and sometimes a bit of the trailing edge) of the long names used by the public java libraries and uses a hashing function to create a short prefix. If you look in the sys.javasnm$ table (java short name?) in earlier versions of Oracle you’ll see that it has two columns – (short, longdbcs), and we can see the relationship between them:

select  short, longdbcs, dbms_java.longname(short) long_name 
from    javasnm$ 
where   rownum <= 10
/

SHORT                          LONGDBCS                                           LONG_NAME
------------------------------ -------------------------------------------------- --------------------------------------------------
/2ea59ec_TimeZoneNamesBundle   sun/util/resources/TimeZoneNamesBundle             sun/util/resources/TimeZoneNamesBundle
/8acf0d3a_OpenListResourceBund sun/util/resources/OpenListResourceBundle          sun/util/resources/OpenListResourceBundle
/e3e70b06_LocaleNamesBundle    sun/util/resources/LocaleNamesBundle               sun/util/resources/LocaleNamesBundle
/cc11c9d8_SerialVerFrame       sun/tools/serialver/SerialVerFrame                 sun/tools/serialver/SerialVerFrame
/1f9f2fa_N2AFilter             sun/tools/native2ascii/N2AFilter                   sun/tools/native2ascii/N2AFilter
/b6b3d680_UnsupportedEncodingE java/io/UnsupportedEncodingException               java/io/UnsupportedEncodingException
/7994ade2_CharsetEncoder       java/nio/charset/CharsetEncoder                    java/nio/charset/CharsetEncoder
/73841741_IllegalCharsetNameEx java/nio/charset/IllegalCharsetNameException       java/nio/charset/IllegalCharsetNameException
/f494d94e_UnsupportedCharsetEx java/nio/charset/UnsupportedCharsetException       java/nio/charset/UnsupportedCharsetException
/3092d940_MissingResourceExcep java/util/MissingResourceException                 java/util/MissingResourceException

10 rows selected.

With the appearance of long(er) identifiers in 18c, Oracle no longer needs to mess about with short names and a conversion function – it has just put the “fully qualified” name into obj$. I doubt if this will catch anyone out for long, but it might be nice to know about in advance.

December 9, 2020

Edition-based Redefinition

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 12:00 pm GMT Dec 9,2020

This note is a list of links to the presentations that Bryn Llewellyn gave to the Swedish Oracle Users Group in a full-day seminar on EBR in 2017. I’ve had it as a draft note on my blog for some time, but have just revisited it and thought that others might like to be reminded about it.

There are 8 videos ranging from about 40 to 60 minutes each, and the last link is for the full playlist.

December 8, 2020

Hash Joins

Filed under: Hash Join,Joins,Oracle — Jonathan Lewis @ 11:55 am GMT Dec 8,2020

This is a follow-up to a note that’s 10 years old [opens in new tab], prompted by a thread on the Oracle Developer Community forum asking about the order of operation in an execution plan, and following up with a question about PGA memory use that I had left open in the original note.

The original note pointed out that the serial execution plan for a 4 table join that used hash joins for every join and had a leading() hint dictating a single join order could still have 8 distinct execution plans (which I then corrected to 4, because half of them were eliminated by an odd little inconsistency of the leading() hint).

The source of the number 8 was the fact that when you hint a hash join with /*+ use_hash(next_alias) */ the optimizer will consider a /*+ swap_join_inputs(next_alias) */ – in other words, it will choose which of the two inputs should be the “build” table and which the “probe” table, even though the human eye will probably assume that next_alias was clearly intended as the probe table. (The “little inconsistency” in the leading() hint is that it blocks the option for a swap for the first – and only the first – join unless it’s explicitly hinted.)

In the article I showed all 8 possible plans, and described how the two plans at the extreme ends of my list would operate at run-time. I’ve reproduced these two plans below, preceded by the hints that produced them:

leading(t1 t2 t3 t4)
use_hash(t2) no_swap_join_inputs(t2)
use_hash(t3) no_swap_join_inputs(t3)
use_hash(t4) no_swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|*  3 |    HASH JOIN          |      |    70 |  1260 |     8 |
|*  4 |     HASH JOIN         |      |    70 |   840 |     5 |
|   5 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
|   6 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   7 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|   8 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
--------------------------------------------------------------

leading(t1 t2 t3 t4)
use_hash(t2)    swap_join_inputs(t2)
use_hash(t3)    swap_join_inputs(t3)
use_hash(t4)    swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|   3 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
|*  4 |    HASH JOIN          |      |    70 |  1260 |     8 |
|   5 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|*  6 |     HASH JOIN         |      |    70 |   840 |     5 |
|   7 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   8 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
--------------------------------------------------------------

Side note: one of the comments on the original post raised the point that (e.g.) the second of the plans above could have been generated by the join order represented by the hint leading(t2 t1 t3 t4), and that’s absolutely correct. In fact, if you see the plan for a 4 table join consisting of nothing but 3 hash joins you cannot be certain what join order the optimizer was examining when it produced that plan unless you look at the CBO trace file (or check to see if it’s been explicitly and completely hinted in the code).

Having set the scene, we come to the question that prompted this note. The question related to the second plan above, and said:

“However, this one is the plan that takes the most memory in the PGA because it builds 3 hash tables before joining any table, right?”

The quick answer to the question is: “Not necessarily.”

In the original article I had pointed out that plans following the pattern of the first plan above with N tables and N – 1 joins would have at most two build tables in memory at any one moment, while the bottom plan would create N – 1 build tables in memory before any join results could be created. This does rather invite the inference that the bottom plan has to be the one that is going to use most PGA memory, but what I had said in the original article was (emphasis added):

“Notice: the number of in-memory hash (build) tables we have in the first of the 8 plans at any instant after the first join starts is two and (no matter how many tables are involved in this pattern) the number of in-memory hash tables will always be two. The actual size of the two hash tables is a little unpredictable and, as a very crude guideline, you might expect the size to grow as more tables are joined into the result set.”

As a thought experiment, consider 3 small dimension tables and one big fact table. If Oracle were to create in-memory hash tables from the three dimension tables and then start scanning the fact table (following the pattern of the second plan above with t1 in the role of the fact table) probing each of the dimension tables in turn, it could deliver the first result row very quickly without requiring more memory to store intermediate results.

Conversely if Oracle were to create a tiny in-memory hash table from the first dimension and probe it with the fact table (following the pattern of the first plan above with t2 in the role of the fact table) Oracle would then have to build a very large in-memory hash table before probing it with the second dimension table, and as that second join takes place it would be generating a new result set that would become the next big in-memory hash table.

In this thought experiment we would probably find that the optimizer did the right thing without prompting and constructed three tiny in-memory hash tables – but it’s not always so clear-cut, and even the “right” decision can result in very large intermediate build tables (and if those intermediate build tables spill to disc in an Exadata environment the change in performance can be huge).

To finish off, here’s a script to turn the thought experiment into a concrete example (by careful, but deliberately silly, hinting).

rem
rem     Script:         c_treblehash_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem 

create table t1 
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) swap_join_inputs(t1)
                use_hash(t2) full(t2) swap_join_inputs(t2)
                use_hash(t3) full(t3) swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) no_swap_join_inputs(t1)
                use_hash(t2) full(t2) no_swap_join_inputs(t2)
                use_hash(t3) full(t3) no_swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

All I’ve done is create three small “dimension” tables of 70 rows each, then created a table which is their Cartesian join, which produces a “fact” table of 343,000 rows. Then I’ve written a simple query to join the three dimension tables to the fact table.

I’ve used swap_join_inputs() for all the joins in one version of the query, and no_swap_join_inputs() in the other; and I’ve reconnected to the database before running each query to make it easier to see the different impact on the PGA of the two plans.

I’ve used dbms_xplan.display_cursor() to pull the execution plans from memory after running the queries, and since I’ve not set statistics_level to all, or added the hint /*+ gather_plan_statistics */ to the queries, the only rowsource execution statistics I’ll get are the PGA usage figures: estimates and actuals.
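(Had I wanted the full set of rowsource execution statistics I could, of course, have set the parameter for the session:)

alter session set statistics_level = all;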

The final step of each test reports the maximum PGA memory usage the session reached in the course of the test.

Here are the two plans, each followed by the PGA maximum memory size.

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|  2171K|  2171K| 1684K (0)|
|   3 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
|*  4 |    HASH JOIN          |      |    343K|  2171K|  2171K| 1681K (0)|
|   5 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|*  6 |     HASH JOIN         |      |    343K|  2171K|  2171K| 1711K (0)|
|   7 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   8 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   4 - access("T2"."ID"="T4"."ID2")
   6 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 13,859,925

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|    23M|  3727K|   29M (0)|
|*  3 |    HASH JOIN          |      |    343K|    23M|  3727K|   29M (0)|
|*  4 |     HASH JOIN         |      |    343K|    23M|  3667K|   29M (0)|
|   5 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
|   6 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   7 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|   8 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   3 - access("T2"."ID"="T4"."ID2")
   4 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 52,984,917

As you can see, each of the hash joins in the first plan required roughly 1.7MB of memory. All three hash tables would have been in existence at the same time, giving a total of about 5.1MB of memory for the query. The session’s maximum PGA usage shows up as 13MB, of which 5MB was my basic “startup” PGA, leaving 3MB “unaccounted”.

In comparison, each of the hash joins in the second plan required roughly 29MB, although only two of the hash tables would have been in existence at any one moment. That’s still an allocation of 58MB for the same basic 4 table join. In fact things aren’t quite as bad as they seem in this case since the maximum PGA allocated was only about 52MB (again with 5MB of startup PGA). The apparent contradiction may be due to the way that Oracle allocates PGA in increasing chunks – the 29MB reported may have been the result of the session doing something like: “I’ve reached 23MB, my next allocation will be 4MB, oops, I only needed another 128KB”.

As a final check of activity, I’ve also run a couple of queries against V$sql_workarea – a convenient memory structure you can examine to get some “recent history” of queries that have been using large amounts of memory, or spilling to disc. In this case I’ve queried the structure by sql_id for the two queries, reporting just a little detail about the last execution and memory usage.

SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'dcc01q28gcbmy';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                 1724416 OPTIMAL
           4 HASH-JOIN                                 1721344 OPTIMAL
           6 HASH-JOIN                                 1752064 OPTIMAL


SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'b52uwjz07fwhk';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                30930944 OPTIMAL
           3 HASH-JOIN                                30945280 OPTIMAL
           4 HASH-JOIN                                30650368 OPTIMAL

As you can see, the view reports each plan operation (with id and type) that needed memory for an SQL workarea – and the numbers confirm the 1.7MB and 29MB reported by the execution plans. (Which is not surprising since it’s exactly these figures that are used to supply the details in the plans.)

You need to be a little cautious with this view in older versions of Oracle – it used to do a “full tablescan” of the library cache, which resulted in a lot of library cache latch activity and could cause some contention if you ran it frequently, but it’s a very useful view for finding queries that are doing unexpectedly large sorts or hash joins, and one you might query occasionally if you see any space management threats in the temporary tablespace.
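As a starting point for that occasional check, a query like the following sketch picks out the biggest recent workareas:

select  sql_id, operation_type, operation_id,
        last_memory_used, last_execution, last_tempseg_size
from    v$sql_workarea
order by
        last_memory_used desc nulls last
fetch first 10 rows only
/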

November 30, 2020

In-row LOBs

Filed under: Infrastructure,LOBs,Oracle — Jonathan Lewis @ 11:27 am GMT Nov 30,2020

If you’re like me there are probably lots of little things that you know but find terribly easy to forget. Here’s one of my bêtes noires – starting with a little quiz:

rem
rem     Script:         lob_in_row.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 (
        v1      varchar2(20),
        b1      clob,
        s1      clob
)
lob(b1) store as basicfile  b_lob,
lob(s1) store as securefile s_lob
;

insert into t1 values(
        rpad('x',20,'x'),
        rpad('x',20,'x'),
        rpad('x',20,'x')
)
/

commit;

execute dbms_stats.gather_table_stats(user,'t1');

select  column_name, avg_col_len
from    user_tab_columns
where   table_name = 'T1'
/

select  avg_row_len
from    user_tables
where   table_name = 'T1'
/

column rel_file_no new_value m_file_no
column block_no    new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;

I’ve created a table with a varchar2() column, a basicfile CLOB, and a securefile CLOB, and inserted the same value into all three. By default the CLOB columns will be defined as ‘enable storage in row’ and since the inserted value is very short it will be stored “in row” for both these CLOBs. The question is:

Which column takes the most space in the data block?

To answer this question we need only gather stats and query user_tab_columns and/or user_tables – except the results may be a little surprising, so my script also uses the dbms_rowid package to find the file and block number where the row has been inserted, flushes the buffer cache to make sure that we don’t get confused by older versions of the block (you probably shouldn’t run this script on a big busy system) then dumps the block into the session trace file.

You’ll see why it’s necessary to look at the block dumps when I show you the results from gathering stats. The three sets of results below come from 11.2.0.4, 12.2.0.1, and 19.3.0.0 in that order. A key feature the databases have in common is that they were all created with a multi-byte character set:


11.2.0.4
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           123
S1                           117 

AVG_ROW_LEN
-----------
        261

12.2.0.1
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           171
S1                           165

AVG_ROW_LEN
-----------
        357

19.3.0.0
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                            77
S1                            71

AVG_ROW_LEN
-----------
        169

There are three things that the three versions agree on.

  • First, the row length is the sum of the column lengths.
  • Secondly, the average length of the varchar2() column is 21 – remember that the avg_col_len includes the length byte(s) when we collect stats with the dbms_stats package.
  • Finally, the difference between the basicfile and securefile LOB lengths is 6.

But are the actual values reported for the LOB lengths in any way truthful, or are we seeing reporting errors (that vary with version)?

That’s why we need to look at the block dump, and this is what we find as the interesting bit of the dump – which is basically the same in all three cases:


block_row_dump:
tab 0, row 0, @0x1eec
tl: 172 fb: --H-FL-- lb: 0x1  cc: 3

col  0: [20]  78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78

col  1: [76]
 00 54 00 01 02 0c 80 00 00 02 00 00 00 01 00 02 75 f6 ff fb 00 38 09 00 00
 00 00 00 00 28 00 00 00 00 00 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00
 78

col  2: [70]
 00 54 00 01 02 0c 80 80 00 02 00 00 00 01 00 02 75 f6 ff fc 00 32 48 90 00
 2c 00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78

end_of_block_dump

When Oracle 19c reported the avg_col_len (and maybe the same would be true of 18c) it was correctly reporting the space taken up in the row by the entries for the two LOBs (and their overheads) that  I had inserted.  The earlier versions of Oracle are doing something a little odd.

A key thing you’ll notice, of course, is that the varchar2() column is storing my input string as one byte per character but, because I used a multibyte characterset as my database characterset in all three cases, Oracle has stored the LOB equivalents of the string using a two-byte fixed-width characterset that has effectively doubled the storage (and then has to include the “Lob Locator” overhead – hence the 20 character string turning into 70+ bytes).

This highlights two important issues. First, trying to work out how to “tune” your table storage based on the avg_col_len / avg_row_len is a lot harder to do if you’ve got LOB columns in the table, and your method of estimating storage will have to change to suit the version of Oracle. Secondly, and much more importantly, if you’re thinking of changing a varchar2() column (such as “description” or “comments”) from a character data type to a CLOB to allow for a small number of cases where people want to supply longer bits of text then, if you’re using a multibyte character set, there are two awful consequences:

  • the storage requirement for the column will more than double
  • the LOB will go out of line when the in-row storage requirement exceeds 4,000 bytes – which means when the original string content exceeds something between 1,938 and 1,985 characters depending on your version of Oracle and whether you are using basicfile or securefile LOBs. (And a “long” input string will exceed a single block in the LOB segment when it goes over roughly 4,000 characters.)

So if you want to handle “long comments” in multibyte charactersets you might want to consider making your code more complex so that up to 4,000 bytes you store the data as a varchar2(), and only use a CLOB column when the data goes over that limit.
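Here's a minimal sketch of that approach (the table, column and view names are entirely hypothetical, and you would need a little application logic to decide which column to populate on insert):

create table notes (
        id              number          constraint notes_pk primary key,
        comment_vc      varchar2(4000),         -- used when the text fits in 4,000 bytes
        comment_clob    clob                    -- used only for the rare long values
);

create or replace view notes_v as
select
        id,
        case
                when comment_vc is not null then to_clob(comment_vc)
                else comment_clob
        end     comment_text
from
        notes
;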

Footnote

If you’re wondering  what all the overheads are for the in-row LOB you’ll find that the 11g block dump gives you some extra details (though the extra details disappeared from my 12c and 19c block dumps).  Here’s the content that appeared immediately after the col1 and col2 data respectively in the 11g block dump.

LOB
Locator:
  Length:        84(76)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fb
  Flags[ 0x02 0x0c 0x80 0x00 ]:
    Type: CLOB
    Storage: BasicFile
    Enable Storage in Row
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  Inode:
    Size:     56
    Flag:     0x09 [ Valid DataInRow ]
    Future:   0x00 (should be '0x00')
    Blocks:   0
    Bytes:    40
    Version:  00000.0000000001
    Inline data[40]
Dump of memory from 0x00007FB4244BED8D to 0x00007FB4244BEDB5
7FB4244BED80                            00780001              [..x.]
7FB4244BED90 00780078 00780078 00780078 00780078  [x.x.x.x.x.x.x.x.]
        Repeat 1 times
7FB4244BEDB0 00780078 54004678                    [x.x.xF.T]



LOB
Locator:
  Length:        84(70)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fc
  Flags[ 0x02 0x0c 0x80 0x80 ]:
    Type: CLOB
    Storage: SecureFile
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  SecureFile Header:
    Length:   50
    Old Flag: 0x48 [ DataInRow SecureFile ]
    Flag 0:   0x90 [ INODE Valid ]
    Layers:
      Lengths Array: INODE:44
      INODE:
        00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78

Footnote 2

Oracle allows for an empty_clob() to populate a CLOB column with a zero length CLOB, and an empty_clob() is not the same as a null.

Many years ago, when I wrote “Practical Oracle 8i” I pointed out that if you were using a LOB column it would be better to use empty_[bc]lob() to represent an absence of data to avoid the need for checking the column in two different ways for “no data”.

There are two details to remember about this, though. First, empty_clob() takes 36 bytes of storage for a basicfile and 30 bytes for a securefile (reported at 37/31 in 19c, 135/131 in 12cR2 and 103/97 in 11gR2). You may decide that’s an expensive way of saying “nothing to see here”.

Secondly, while PL/SQL will allow you to compare a PL/SQL clob variable with empty_clob() you cannot use a simple “column = empty_clob()” predicate in SQL and will need something like:

where length(s1) = 0
where dbms_lob.getlength(s1) = 0
where dbms_lob.compare(s1, empty_clob()) = 0

As a final closing comment about the oddity of avg_col_len and LOBs – my first test to check the lengths reported for an empty_clob() also inserted NULL into the v1 column. In this special case (all the columns in the row were empty_clob() or null) 11g reported the avg_col_len of s1 and b1 as zero!

November 26, 2020

Row sizes 3

Filed under: Oracle,Problem Solving — Jonathan Lewis @ 12:08 pm GMT Nov 26,2020

Several years ago I published a couple of examples of scripts that I’d been using since Oracle 6 (with minor changes over time) to analyse the content of tables in some detail. The first was a breakdown of the lengths of the rows in the table, the second was a map showing the distribution of nulls in the rows. There used to be a third component of the analysis which produced a report of the number of non-null entries for each column in the table,  but by the time I published the first two parts there didn’t seem to be much point in publishing the third since you could get a sufficiently accurate picture by querying view user_tab_columns after gathering stats:


break on num_rows

select
        tab.num_rows, col.column_name , col.num_nulls
from 
        user_tables             tab
join
        user_tab_columns        col
on
        col.table_name = tab.table_name
where
        tab.table_name = 'T1' 
order by 
        col.num_nulls desc, col.column_id
/


  NUM_ROWS COLUMN_NAME           NUM_NULLS
---------- -------------------- ----------
     56777 EDITION_NAME              56777
           CREATED_APPID             56777
           CREATED_VSNID             56777
           MODIFIED_APPID            56777
           MODIFIED_VSNID            56777
           SUBOBJECT_NAME            56570
           DATA_OBJECT_ID            55353
           DEFAULT_COLLATION         51058
           EDITIONABLE               40216
           OWNER                         0
           OBJECT_NAME                   0
           OBJECT_ID                     0
           OBJECT_TYPE                   0
           CREATED                       0
           LAST_DDL_TIME                 0
           TIMESTAMP                     0
           STATUS                        0
           TEMPORARY                     0
           GENERATED                     0
           SECONDARY                     0
           NAMESPACE                     0
           SHARING                       0
           ORACLE_MAINTAINED             0
           APPLICATION                   0
           DUPLICATED                    0
           SHARDED                       0


In this example t1 is a copy of view all_objects and I’ve listed the columns in order of most frequently null with the table’s num_rows reported at the top for comparison purposes. If you were going to do something that made it a good idea to rebuild the table you could use this report as a guide to re-arranging the column order to position the most frequently null columns towards the end of the row (saving the “trailing nulls take no space” length bytes).
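
If you did decide on a rebuild, a minimal sketch of the idea (a simple CTAS – I’ve abbreviated the column list, and in real life you’d also have to deal with indexes, constraints, grants and so on) might look like this:

create table t1_new
nologging
as
select
        owner, object_name, object_id, object_type,    -- never null: keep at the front
        editionable, default_collation,
        data_object_id, subobject_name,
        edition_name                                    -- always null here: keep at the end
from
        t1
;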

This approach depends on the stats being up to date and accurate, of course, so if you didn’t have them, and didn’t want to collect them, another strategy would be to run a query like the following:


select
        to_char(count(OWNER),'999,999,999,999,999') OWNER,
        to_char(count(OBJECT_NAME),'999,999,999,999,999') OBJECT_NAME,
        to_char(count(SUBOBJECT_NAME),'999,999,999,999,999') SUBOBJECT_NAME,
        to_char(count(OBJECT_ID),'999,999,999,999,999') OBJECT_ID,
        to_char(count(DATA_OBJECT_ID),'999,999,999,999,999') DATA_OBJECT_ID,
        to_char(count(OBJECT_TYPE),'999,999,999,999,999') OBJECT_TYPE,
        to_char(count(CREATED),'999,999,999,999,999') CREATED,
        to_char(count(LAST_DDL_TIME),'999,999,999,999,999') LAST_DDL_TIME,
        to_char(count(TIMESTAMP),'999,999,999,999,999') TIMESTAMP,
        to_char(count(STATUS),'999,999,999,999,999') STATUS,
        to_char(count(TEMPORARY),'999,999,999,999,999') TEMPORARY,
        to_char(count(GENERATED),'999,999,999,999,999') GENERATED,
        to_char(count(SECONDARY),'999,999,999,999,999') SECONDARY,
        to_char(count(NAMESPACE),'999,999,999,999,999') NAMESPACE,
        to_char(count(EDITION_NAME),'999,999,999,999,999') EDITION_NAME,
        to_char(count(SHARING),'999,999,999,999,999') SHARING,
        to_char(count(EDITIONABLE),'999,999,999,999,999') EDITIONABLE,
        to_char(count(ORACLE_MAINTAINED),'999,999,999,999,999') ORACLE_MAINTAINED,
        to_char(count(APPLICATION),'999,999,999,999,999') APPLICATION,
        to_char(count(DEFAULT_COLLATION),'999,999,999,999,999') DEFAULT_COLLATION,
        to_char(count(DUPLICATED),'999,999,999,999,999') DUPLICATED,
        to_char(count(SHARDED),'999,999,999,999,999') SHARDED,
        to_char(count(CREATED_APPID),'999,999,999,999,999') CREATED_APPID,
        to_char(count(CREATED_VSNID),'999,999,999,999,999') CREATED_VSNID,
        to_char(count(MODIFIED_APPID),'999,999,999,999,999') MODIFIED_APPID,
        to_char(count(MODIFIED_VSNID),'999,999,999,999,999') MODIFIED_VSNID,
        to_char(count(*),'999,999,999,999,999') row_count 
from
         t1
;

You don’t really need the to_char() function, but it’s a convenience for what I’m going to do with the SQL. Obviously it would be a bit tedious to create this statement by hand so, as I did in the previous “rowsize” notes, I’ve written some code to generate it for me:


rem
rem     row_size_2b.sql
rem     
rem     Generate SQL to report counts of 
rem     non-null columns in a table.
rem

set linesize 32000
set feedback off
define m_table = '&1'

declare
        m_string        varchar2(32000) := 'select ';
        m_cursor        sys_refcursor;

begin
        for r in (
                select 
                        column_name, data_type
                from    user_tab_columns
                where   table_name = upper('&m_table')
        ) loop
                m_string := m_string ||  
                                ' to_char(count(' || trim(r.column_name) || '),''999,999,999,999,999'') ' ||
                                trim(r.column_name) || ',' || chr(10) ;
        end loop;

        m_string := m_string || ' to_char(count(*),''999,999,999,999,999'') row_count from &m_table';

--      dbms_output.put_line(m_string);
        print_table(m_string);

end;
/

set linesize 156

The script accepts a table name from the user’s schema (you could edit this to query dba_tab_columns), constructs a string (as shown above – though I’ve done a little cosmetic work on it), and then passes the resulting string to a version of Tom Kyte’s print_table() routine which produces the following output:


OWNER                         :               56,777
OBJECT_NAME                   :               56,777
SUBOBJECT_NAME                :                  207
OBJECT_ID                     :               56,777
DATA_OBJECT_ID                :                1,424
OBJECT_TYPE                   :               56,777
CREATED                       :               56,777
LAST_DDL_TIME                 :               56,777
TIMESTAMP                     :               56,777
STATUS                        :               56,777
TEMPORARY                     :               56,777
GENERATED                     :               56,777
SECONDARY                     :               56,777
NAMESPACE                     :               56,777
EDITION_NAME                  :                    0
SHARING                       :               56,777
EDITIONABLE                   :               16,561
ORACLE_MAINTAINED             :               56,777
APPLICATION                   :               56,777
DEFAULT_COLLATION             :                5,719
DUPLICATED                    :               56,777
SHARDED                       :               56,777
CREATED_APPID                 :                    0
CREATED_VSNID                 :                    0
MODIFIED_APPID                :                    0
MODIFIED_VSNID                :                    0
ROW_COUNT                     :               56,777

-----------------

1 rows selected

As with many of the scripts I’ve published, the task is one I rarely do, so I haven’t bothered to add in all the extra little refinements that might make it foolproof and totally self-contained. In this case, for example, I’d have to go through a final couple of manual steps to edit the output by deleting the last few lines then (since I use vi) issuing “:%!sort -k3 -n -r” which would sort the results in reverse order, numerically, according to the third field.

 

print_table()

Filed under: Oracle,Problem Solving — Jonathan Lewis @ 9:40 am GMT Nov 26,2020

Many years ago Tom Kyte published a small PL/SQL procedure to output each row in a table as a list of (column_name , value). I often use the original version, even though Tom refined it several times over the years. From time to time the links in my blog that I’ve set up to point to Tom’s code have simply stopped working so I’ve decided to publish a reference copy on my blog in case newer readers need to find a copy and the current link above stops working.

rem
rem     Script:         print_table.sql
rem     Dated:          Nov 2001
rem     Author:         Tom Kyte - from Expert One-on-One Oracle.
rem
rem     Purpose:
rem     Prints the results of a query one column per line.
rem
rem     Created as an 'authid current_user' so that it can
rem     access any table that the caller is allowed to access
rem     from the SQL*Plus prompt.
rem
rem     A suitably privileged user could also make this publicly
rem     available through the public synonym and grant at the end
rem     of the script
rem
rem     Minor modifications by Jonathan Lewis to separate rows
rem     in the result set, and report the number of rows returned
rem

create or replace procedure print_table( p_query in varchar2 )
authid current_user
is
        l_theCursor     integer default dbms_sql.open_cursor;
        l_columnValue   varchar2(4000);
        l_status        integer;
        l_descTbl       dbms_sql.desc_tab;
        l_colCnt        number;
        l_rowcnt        number := 0;
begin
        execute immediate 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss'' ';

        dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
        dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

        for i in 1 .. l_colCnt loop
                dbms_sql.define_column(l_theCursor, i, l_columnValue, 4000 );
        end loop;

        l_status := dbms_sql.execute(l_theCursor);

        while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
                for i in 1 .. l_colCnt loop
                        dbms_sql.column_value( l_theCursor, i, l_columnValue );
                        dbms_output.put_line(
                                rpad( l_descTbl(i).col_name, 30 )
                                || ' : ' || l_columnValue
                        );
                end loop;
                dbms_output.new_line;
                dbms_output.put_line( '-----------------' );
                dbms_output.new_line;
                l_rowcnt := l_rowcnt + 1;
        end loop;

        dbms_output.put_line(l_rowcnt || ' rows selected');

        execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';

exception
        when others then
                execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';
                raise;
end;
/

-- create public synonym print_table for print_table;
-- grant execute on print_table to public;
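
A minimal usage example – remember that dbms_output has to be enabled, and that the define_column() call above means column values longer than 4,000 bytes would be truncated:

set serveroutput on
execute print_table('select table_name, num_rows, blocks from user_tables where rownum <= 2')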

The reason I’m publishing this now is that I’m about to publish the third variant of a piece of code I wrote many years ago, and first blogged about in 2012.

November 12, 2020

rowsets

Filed under: Execution plans,Infrastructure,Oracle,Performance — Jonathan Lewis @ 12:35 pm GMT Nov 12,2020

Here’s a little demonstration of the benefit of rowsets. It started with a very simple question that arrived in my in-box from Kaley Crum.

  • “Why does the query with the UNION ALL take so much longer than the query without the UNION ALL?”

Here are the two queries – surely they should take virtually the same amount of time.

select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me);


select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me
      --
      union all
      --
      select 1 from dual);

Look closely – the difference between the two queries is just the addition through a UNION ALL of “select 1 from dual”. The first query took about 3.5 seconds to complete (there were 70M rows in the table), and the second took nearly 69 seconds.

Here are the execution plans showing the rowsource execution statistics (and Query Block // Object Alias information) – first the baseline query:

SQL_ID  0ph1vfuuxkbqb, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me)
 
Plan hash value: 3609429292
 
--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        | 26465 (100)|          |      1 |00:00:03.34 |   95896 |  95891 |
|   1 |  SORT AGGREGATE     |         |      1 |      1 |            |          |      1 |00:00:03.34 |   95896 |  95891 |
|   2 |   VIEW              |         |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.32 |   95896 |  95891 |
|   3 |    TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.27 |   95896 |  95891 |
--------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$2 / from$_subquery$_001@SEL$1
   3 - SEL$2 / DROP_ME@SEL$2

And then the UNION ALL query:

SQL_ID  0chdajr28y0ub, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me       --       union all       --       select 1 from
dual)
 
Plan hash value: 3408528233
 
---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |         |      1 |        | 26467 (100)|          |      1 |00:01:18.58 |   95896 |  95891 |
|   1 |  SORT AGGREGATE      |         |      1 |      1 |            |          |      1 |00:01:18.58 |   95896 |  95891 |
|   2 |   VIEW               |         |      1 |     70M| 26467   (2)| 00:00:02 |     70M|00:01:10.84 |   95896 |  95891 |
|   3 |    UNION-ALL         |         |      1 |        |            |          |     70M|00:00:53.13 |   95896 |  95891 |
|   4 |     TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:19.28 |   95896 |  95891 |
|   5 |     FAST DUAL        |         |      1 |      1 |     2   (0)| 00:00:01 |      1 |00:00:00.01 |       0 |      0 |
---------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SET$1 / from$_subquery$_001@SEL$1
   3 - SET$1
   4 - SEL$2 / DROP_ME@SEL$2
   5 - SEL$3 / DUAL@SEL$3

This is not a trick question – I’ve got the SQL to create the data set and run the test; and I’ve run the test through 19.3.0.0 and 12.2.0.1 with results very similar to the above. Here’s a slightly modified version of the script – you’ll notice the column name has changed because I’ve used my usual “large data set” generator rather than scaling up a clone of all_objects:

rem
rem     Script:         rowset_kaley.sql
rem     Author:         Kaley Crum / JP Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem

create table drop_me
pctfree 0
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum  n1
from
        generator
cross join
        generator
where rownum <= 7e7
;

alter session set statistics_level = all;

set timing on
set serveroutput off

prompt  ===========================================
prompt  Baseline: Simple query without a union all.
prompt  Runs in 3-4 seconds
prompt  ===========================================

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

prompt  ==========================================
prompt  Add a UNION ALL for one record from dual.
prompt  Runs in over a minute
prompt  ==========================================

pause Press return

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me
      --
      union all
      --
      select 1 from dual);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

There are two factors involved in the massive variation in performance. The first factor is one that the test case itself will report (though I didn’t show it in the output above); the second is that the test case has enabled rowsource execution statistics.

Here’s the big clue – from the Column Projection Information, which is one of the things that appears with the “advanced” format option (or when you add the “projection” format option) in the call to dbms_xplan.display_xxx().

First for the fast query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]
   2 - (rowset=1019)
   3 - (rowset=1019)

The rowsource passes 1,019 rows at a time from the tablescan operation to the view operation and from the view operation to the sort operation, for a total of roughly 70,000 calls for each of those two steps of the plan.

Compare this to the projection information for the slow UNION ALL query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]

I haven’t missed a bit in copying – this really is it. The rows move up the stack one at a time, not in a rowset array. That’s 70,000,000 subroutine calls for each of the two steps of the plan.

I’m sure most of us have heard the mantra “row by row is slow by slow” (or some variant on the theme). This is true all the way down to the internal levels of an execution plan.

Of course 70M calls vs. 70K calls shouldn’t really explain a difference of more than 60 seconds – but when the calls to the O/S for timing information for rowsource execution stats are repeated for each call (or even if it’s only a 1% sample of the calls) down the stack that’s where a lot of time can disappear.

On my laptop, running Oracle 19.3 in a VM, this is how my timing went:

  • Fast query: 1.94 seconds, dropping to 1.79 seconds when I disabled rowsource execution stats.
  • Slow query: 65.91 seconds, dropping to a far less astounding 3.26 seconds when I disabled rowsource execution stats.

So there really is a noticeable difference between row-by-row and array-processing but most of the difference in the original test came from using rowsource execution statistics to measure how much of a difference there would be.

It’s also worth mentioning that this is probably the most extreme case you could produce to show the difference – using the largest possible rowset size with the smallest possible rows when you want to maximise the gap between (internal) array processing and single row processing – and then processing a very large number of rows. [But isn’t processing ridiculously large numbers of rows exactly what you do with Exadata?]

If you want further evidence that the difference is due to the rowset size you can always alter session set “_rowsets_enabled”=false; and watch the fast query slow down. It will take about half the time of the slow query as it only has to pass 70M rows up one step of the plan rather than the two steps that are in the UNION ALL plan.

In theory it looks as if you could also restrict the size of the rowset by setting _rowsets_max_rows or _rowsets_target_maxsize, but since the default value for the former is 256 the results above suggest that the settings might be ignored, and when I tried adjusting them at the session level nothing changed.

Another test you could run is to adjust (alter session) the parameter “_rowsource_statistics_sampfreq”. When I set this to 100 the time for the slow query dropped to about 21 seconds (and the fast query – with rowsets disabled – dropped to about 11 seconds).
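
Pulling those two experiments together as a sketch – these are hidden parameters, so treat this as something for a sandbox only:

alter session set "_rowsets_enabled" = false;
-- re-run the baseline query: it should slow down noticeably

alter session set "_rowsets_enabled" = true;
alter session set "_rowsource_statistics_sampfreq" = 100;
-- re-run with statistics_level = all: the timing overhead should drop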

Footnote

It’s always easy to hit an accidental special case without realising it, so when you’re testing something it’s important to think about what’s special in the example. I came up with two possibilities in this case – dual is always going to be a very special case in any circumstances, and I’ve got a table in my query that is guaranteed to return exactly one row. So I repeated the test with:

  • a real table with one row and primary key in case the reference to dual was disabling rowsets
  • a real table with 100 rows in case this enabled a rowset size of 100

In neither case did Oracle start using array processing.

Footnote 2

One of the patterns I see occasionally follows the form of the slow query above. Some front-end tool executes a select statement that starts with a select {list of desired column headings} from dual so that the front-end code doesn’t have to be programmed to create headings as it receives the data.
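
The pattern looks something like the following sketch (table and column names are made up):

select 'EMP_ID', 'EMP_NAME' from dual
union all
select to_char(emp_id), emp_name from employees;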

Although the impact this would have on rowsets looks like a large percentage of the work done in the database in this case, in more realistic cases it would probably be irrelevant, until someone starts testing with the statistics_level set to all, or _rowsource_execution_statistics explicitly enabled.

There is a related effect, though, from the SQL Monitor feature (which activates automatically for a statement if it’s executing in parallel, or if it’s predicted to take more than 5 seconds to complete). When I added the /*+ monitor */ hint to the two queries (and didn’t fiddle with the rowsets parameter) the fast query averaged 2.26 seconds instead of 1.79 seconds, and the slow query averaged 3.51 seconds instead of 3.26 seconds. I have to say, though, that the variation between executions of the same statement was quite significant compared with the variation produced by adding or eliminating the hint.

November 11, 2020

Indexing partitions

Filed under: Indexing,Oracle,Partitioning,Statistics — Jonathan Lewis @ 11:35 am GMT Nov 11,2020

This is one of those notes on the “thinking about the data / indexes” theme that I like to write; it’s a draft I wrote over two and a half years ago that I’ve just rediscovered and polished slightly and refers to a “recent” question that came up on the ODC Forum. It’s not intended as “The Answer” to the question, it’s just a list of ideas and observations you’d want to think about if you had to address the problem:

There is a table t1 that is range partitioned by date, each partition represents one day, and the partitioning column c3_dt is constrained to be “date-only” (i.e. the time component is always 00:00:00) so there’s only one value per partition for the column. The table is a transaction table, and currently a single partition holds about 200 million rows. [Ed: And there seem to be a little over 200 partitions, with some variations in rows per partition.]

There are two local indexes on the table: idx1 is a unique index defined on (c1, c2, c3_dt) and idx2 is non-unique, defined on (c1, c2).

Could we drop one of these two indexes, and if so which one ?

There are three immediate considerations here:

  • One index is a prefix (i.e. same columns in the same order) of the other so, from a purely mechanical perspective, we could drop the shorter index. We would, however, have to be a little cautious because the optimizer might not want to use the longer index in places where it would use the shorter because (a) the index would physically be larger and (b) in the general case the clustering_factor of the shorter index is likely to be smaller than the clustering_factor of a longer index with a matching prefix, and the estimated cost of using an index is based largely on the leaf_block count and the clustering_factor.
  • The c3_dt column adds no value in terms of precision for accessing the table; any predicate that allows the optimizer to apply partition elimination will have the same effect whether or not the column is part of the index. The only queries that could have better performance because of the extra column would be queries involving only the columns in the index – even then you’d have to ensure that at least one column in the index was declared not null (and that might have been the c3_dt column). On the down-side the date column is adding 8 bytes per row (7 data bytes, 1 length byte) to the size of the index. So it would be nicer to drop the longer index if possible. On the up-side, since the date column is always the same value within a partition its presence at the end of the index should mean that the clustering_factor of idx1 should match that of idx2 (with, perhaps, some small variation due to random sampling).
  • You can’t drop the unique index unless it has been created “by accident”. It seems reasonably likely that the index is supporting a unique (or primary key) constraint. For a local index to be unique (or, more accurately, for a unique/primary key constraint to be supported by a local index) the constraint/index has to contain the partition key, and c3_dt is the partitioning column.

Bottom line, then, is that you (probably) can’t drop idx1 because you need the uniqueness that comes with the c3_dt column but it should be possible (and would be sensible and probably fairly safe) to drop idx2 – the two-column index. Of course it would be sensible to make the index invisible for a while to see if any plans that were using idx2 refused to use idx1 instead; and it’s probably a good idea to create (or be prepared to create) column group statistics on the (c1, c2) combination so that the optimizer doesn’t lose the information about the number of distinct combinations.
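
As a sketch of those two protective steps (using the names from the question):

alter index idx2 invisible;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for columns (c1, c2) size 1'
        );
end;
/

-- after a suitable period, either back out or commit to the change:
-- alter index idx2 visible;
-- drop index idx2;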

It’s a pity that the index that has to be kept is the one that has a “useless” column at the end, adding 8 bytes per row (totalling roughly 1.6GB per partition) to the size of the index – however the fact that the column holds the same value for every row in the partition offers an interesting possibility. If you re-arrange the column order to (c3_dt, c1, c2) and compress the index on the first column, would all the queries that currently use the index still be happy to use it with an “index skip scan”? Before pursuing this idea, of course, you would want to check that the leading c1 column wasn’t there to protect a foreign key constraint from the “foreign key locking” problem.
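
A sketch of that re-arrangement (hypothetical index name – and remember you’d have to sort out the supporting unique/primary key constraint before dropping the original index):

create unique index t1_dt_c1_c2 on t1(c3_dt, c1, c2)
        local
        compress 1      -- c3_dt repeats for every row in a partition
;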

The OP supplied some statistics about the three columns, which prompt a few more thoughts:

 
Partition level stats:- 
column_name data_type num_distinct                density
----------- --------- ------------   --------------------
C1          VARCHAR2          8754    0.00127388535031847 
c2          NUMBER         1398016    7.1529939571507E-7 
C3_dt       DATE                 1    2.43469886548297E-9 

Global statistics:-  
column_name data_type num_distinct                density 
----------- --------- ------------   --------------------
c1          VARCHAR2       1597649    6.25919710775020E-7 
c2          NUMBER         1996800    5.00801282051282E-7 
C3_dt       DATE               211    0.004739336492891 

Looking at these numbers we can make the following observations:

  • C3_DT: global num_distinct = 211 suggests 211 partitions and, at the global level, we see 1/num_distinct = density so there’s no global histogram. But at the partition level we see the density = 2.4e-9, which suggests there’s a (silly) histogram and there are 205 million rows in the partition (reverse engineering the density = 1/(2*num_rows) formula for “non-existent values”). We can’t be sure without doing some experimentation what effect this inconsistency might have on the optimizer’s choice of access path when we have to handle range-based (cross-partition) queries.
  • C1: num_distinct = 8754 for the current partition and 1,597,649 for the table, and the global figure happens to be reasonably close to 1,847,094 ( = 211 * 8754): so perhaps there’s very little (or NO) overlap in c1 values between partitions – and that’s probably going to mess up the optimizer’s estimates for queries that cross partitions anyway. It’s also an indication that whichever index we keep could be compressed on the first column, and if we move to an index (c3_dt, c1, c2) the index could be compressed on the first two columns.
  • C2: has similar values for num_distinct at both the partition and the global level, which is nice – any anomalies introduced by variation between partitions are likely to be relatively small and may be nearly harmless. I often say that the optimizer likes boring data and can do a good job with it – c2 looks like it might be a nice boring column; though it’s possible that it’s hiding a nasty secret in the low / high values that we haven’t been told.

As I said at the outset, I’m not trying to answer the question “which index should I drop?”, I’m supplying some more questions that need answers before the original question can be answered sensibly. Along the way I’ve also demonstrated the way in which we can extract further information by looking at a few numbers and doing a little arithmetic.

I’d also like to point out that it’s a draft from 30 months ago, and I haven’t spent a lot of time trying to complete it and polish it, so E. & O.E. – but do feel free to chip in with other points that you think should be considered when making this decision.

 

November 10, 2020

Quiz time

Filed under: Infrastructure,Oracle,redo — Jonathan Lewis @ 9:28 am GMT Nov 10,2020

Here’s a fun little item that I’ve just rediscovered from 2016.

There’s a fairly commonly cited query that aggregates and tabulates the first_time column from the v$log_history view by hour of day, reporting the hour across the page and the date down the page. It often appears in the responses to the question “how big should my online redo log files be?” but I’m not going to supply the query here because it’s irrelevant; instead I’m going to show you a line of the output it produced (on a real production system) then ask the important question that might prompt you to decide to resize your log files:

DAY        00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
--------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
06-MAY-16   4   3   4   5   3   2   0   1   3   4   1   5  12   8   2   1   7   3   0   1   5   5   2   4


The important question is this:

  • when was most time recorded for events relating to log file switches?

I’ll supply the answer some time tomorrow. The only comment I’ll make in advance is that there’s a huge history of variation in the way you (and Oracle) might view redo log file switches, and it would be easy for two highly competent DBAs to have dramatically different view points if all you gave them was a set of numbers and no context.

Answer

There is no way you could have deduced the right answer from the line of numbers above, and it’s gratifying to see that all the comments understood the rules of this “no-win” game. The correct answer to the question was:

The point at which the most significant redo waits appeared was from 20:58 to 21:02 when there were 10 log file switches in just over 3 minutes.

There are plenty of arguments that you might put forward to justify guessing some other time period, and I think they were all explained (or, rather, explained away) in the comments.

One person volunteered to highlight the “obvious” answer – the 12:00 – 13:00 interval, which recorded the single highest number. Clearly there may have been a bit of a peak in the rate of redo generation around that interval (from 11:30 to 13:30, say) but that doesn’t mean the load on the system was high enough to cause any waits.

There are a couple of places where the count shows zero, and that might be because there was no redo generated, but it might be because the log writer got stuck on (e.g.) a CI enqueue wait and everything else stopped in a wait for logfile switch.

Then, of course, if the actual workload peak had occurred just a few minutes later another of the intervals would have been recording 12 switches leaving the less experienced DBA a choice of time periods to investigate.

There are two problems with this report: first there’s the fundamental understanding – the rate at which redo is generated doesn’t tell you anything about how much time is lost due to contention related to redo. Secondly the report has an inherent scoping issue – as one of the comments put it – there’s a big difference between 12 logfile switches in 5 minutes and 12 logfile switches in 60 minutes. (Almost a prescient comment since the right answer was 10 logfile switches in just over 3 minutes – a burst that straddled the hour boundary, so the report never actually showed 10 logfile switches in any single hour.)

As with so many of the “popular” monitoring queries that have been around for the last quarter of a century – the only use of a report like this is to give you a little history so that you can look at the last couple of weeks and wonder what happened around 9:00 last night that was different from usual, because that spike hasn’t been there before.

November 9, 2020

I wish

Filed under: CBO,Execution plans,Oracle,sorting,Wishlist — Jonathan Lewis @ 12:01 pm GMT Nov 9,2020

Here’s a lovely little mechanism new to Postgres 13 that can minimise sorting costs: the “incremental sort”. It would be nice to see it in Oracle as well, as it could make an enormous difference to “fetch first N” queries.

The concept is simple – if a rowsource moving up a plan is known to be in “partially sorted” order when it reaches a sort operation, the optimizer can choose whether to sort the entire rowsource in one go or to sort it in batches as it arrives.

For example if you have a query which has “order by cola, colb” as its final clause and the plan has used an index range scan on an index on just (cola) then the “sort order by” operation can accept the rows for the first value of cola, sort them by colb and pass them on, then accept the rows for the second value of cola, sort them by colb and pass them on, and so on.

Better still, if you have an “order by table1.colA, table2.colB” and the optimizer used an indexed access path on table1.colA and a nested loop into table2, then the optimizer will still recognise that the generated data is already partially sorted, and sort batches for table1.colA to order them by table2.colB. (Oracle has a mechanism for dealing with sorted hash clusters that is roughly similar.)

Obviously the benefit is that you avoid doing a very large sort that might spill to disc; slightly less obviously is that you might avoid sorting the whole data set if you have a “fetch first N” query. Here’s an Oracle model setting up a demonstration of the principle:

rem
rem     Script:         fetch_first_postgres_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem

create table t1 as
with generator(id) as (
        select 1 from dual
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id,10)             small_vc,
        rpad('x',50,'x')        padding
from
        generator
/

alter table t1 modify n1 not null;

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(n1);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

drop index t1_i1;
create index t1_i1 on t1(n1, n2);

explain plan for
select  * 
from    t1
where   n1 between 25 and 30
order by
        n1, n2
;

select * from table(dbms_xplan.display);

I’ve created a table with 100,000 rows where each value of n1 appears 10 times in a well-clustered pattern, and for each value of n1 the n2 column holds 10 distinct values (not necessarily in order thanks to the mod(,13)). Using this data set I’ve executed the same query three times – selecting the rows for 6 consecutive values of n1, ordering by n1, n2.

The first test will have to do a tablescan, the second can use the index I’ve created on (n1) but will have to do a sort (after visiting the table), the third can walk the index I’ve created on (n1,n2) and complete without sorting.

Here, in order, are the three execution plans:

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    70 |  4830 |   145   (9)| 00:00:01 |
|   1 |  SORT ORDER BY     |      |    70 |  4830 |   145   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T1   |    70 |  4830 |   144   (8)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N1"<=30 AND "N1">=25)


----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                       |       |    70 |  4830 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    70 |  4830 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1">=25 AND "N1"<=30)


-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    70 |  4830 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    70 |  4830 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |    70 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1">=25 AND "N1"<=30)

Having created the Oracle model I asked Phil Florent – who had prompted this note by emailing me with a question about why Postgres 13 was executing a “Fetch First” so much faster than Oracle 19 (the basic answer is at this URL) – if he would run it under Postgres and send me the execution plans.

The code required two changes – the first to handle the change in dialect, the second to supply a change in scale because my model produced such a small output that Postgres didn’t bother to use the new feature. Here’s the modified SQL to generate the original test data:

create table t1 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 1e5
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)       small_vc,
        rpad('x',50,'x')        padding
from
        generator;

And the three plans (so that you can compare the content and style of output with Oracle plans) that went with this small data set – first the no-index plan, then the plan for the (n1) index, then the plan with (n1,n2) indexed:

                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
Sort  (cost=2835.85..2836.00 rows=62 width=74) (actual time=19.534..19.600 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Seq Scan on t1  (cost=0.00..2834.00 rows=62 width=74) (actual time=0.067..19.417 rows=60 loops=1)
         Filter: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))
         Rows Removed by Filter: 99940

Planning Time: 0.351 ms
Execution Time: 19.703 ms


                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Sort  (cost=11.50..11.66 rows=62 width=74) (actual time=0.224..0.289 rows=60 loops=1)
   Sort Key: n1, n2
   Sort Method: quicksort  Memory: 33kB
   ->  Index Scan using t1_i1 on t1  (cost=0.42..9.66 rows=62 width=74) (actual time=0.024..0.113 rows=60 loops=1)
         Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.665 ms
Execution Time: 0.391 ms



                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Index Scan using t1_i1 on t1  (cost=0.42..114.66 rows=62 width=74) (actual time=0.022..0.155 rows=60 loops=1)
   Index Cond: ((n1 >= '25'::double precision) AND (n1 <= '30'::double precision))

Planning Time: 0.690 ms
Execution Time: 0.259 ms
 

As you can see, there’s very little difference between Oracle’s plans and Postgres’ plans in this example. (Though it’s rather nice to see what extra details appear in the Postgres plans, which were generated with the equivalent of Oracle’s “statistics_level = all” setting.)

The middle plan shows us that Postgres didn’t use the “incremental sort” – but it’s useful to see it anyway so that we can compare the structure of the plan when we increase the volume of data – which is what the following script is for:

create table t2 as
with recursive generator(id) as (
        select 1
        union all
        select id + 1 from generator where id < 10000000
)
select
        ceil(id/10)             n1,
        mod(id,13)              n2,
        lpad(id::text,10)             small_vc,
        rpad('x',50,'x')        padding
from
        generator;
 
create index t2_i1 on t2(n1);

explain analyze
select  *
from    t2
where   n1 between 25000 and 30000
order by
        n1, n2
;

analyze t2;

Now we have 10M rows, still with 10 rows per value of n1, and our query requests 5,001 values of n1, so 50,010 rows in total. With that much data the optimizer decided to use the incremental sort rather than sorting the whole result set in one go:

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Incremental Sort  (cost=0.49..4532.57 rows=52677 width=74) (actual time=0.160..164.125 rows=50010 loops=1)
   Sort Key: n1, n2
   Presorted Key: n1
   Full-sort Groups: 1251  Sort Method: quicksort  Average Memory: 30kB  Peak Memory: 30kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.055..61.663 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.197 ms
Execution Time: 204.490 ms

Note, particularly, the “Presorted Key” line leading to the “Sort Key” line. We can also check the plan without the incremental sort with a set command to disable the feature:

set enable_incremental_sort = off;

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Sort  (cost=8812.64..8944.33 rows=52677 width=74) (actual time=181.487..227.735 rows=50010 loops=1)
   Sort Key: n1, n2
   Sort Method: external merge  Disk: 4128kB
   ->  Index Scan using t2_i1 on t2  (cost=0.43..2339.97 rows=52677 width=74) (actual time=0.070..74.975 rows=50010 loops=1)
         Index Cond: ((n1 >= '25000'::double precision) AND (n1 <= '30000'::double precision))
Planning Time: 0.239 ms
Execution Time: 266.325 ms

Comparing the times for these two queries there isn’t really much difference – but you will notice that the old approach has had to dump 4MB to disc while the incremental sort gets the job done in 30KB of memory, which may be an important difference in other circumstances.

More importantly than a tiny time difference in this trivial example is the fact that Phil Florent’s original question was:

“Why is this two-table join with ‘fetch first 5’ taking 28 seconds to complete in Oracle when it takes less than one millisecond to complete in Postgres?”

The answer was:

  • Oracle is doing a hash join that produces a 25M row result set, sorting it to do an analytic row_number() call to get the first 5 rows.
  • Postgres is doing a nested loop join, fetching the first row(s) from the first table in the right order by index range scan then fetching the matching rows from the second table – then doing an incremental sort on those rows; then fetching the next row(s) and repeating the process – stopping after it’s got through a total of 5 rows instead of generating and sorting 25M rows.

Here’s the plan (with obfuscated table names):

                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Limit  (cost=284.36..287.21 rows=5 width=10)
   ->  Incremental Sort  (cost=284.36..14261803.18 rows=25000000 width=10)
         Sort Key: table1.col1 DESC, table2.col2 DESC
         Presorted Key: table1.col1
         ->  Nested Loop  (cost=1.00..13294209.39 rows=25000000 width=10)
               ->  Index Scan using table1_col1_idx on table1  (cost=0.56..1300593.47 rows=25000000 width=8)
               ->  Index Scan using table2.pk_col on table2 (cost=0.43..0.48 rows=1 width=6)
                     Index Cond: (id = table1.id)

You’ll notice in this example that the incremental sort can take advantage of the optimizer’s knowledge of the index definitions whether the sort is ascending or descending.

You’ll also notice that Postgres has the same problem as Oracle when it comes to coping with Fetch First (or, in Oracle’s case, rownum <= N and optimizer_mode = first_rows_N). Even when it “knows” that a query is going to stop fetching data very early the plan still reports 25M rows as the expected volume of data.

Summary

Postgres 13 has a wonderful mechanism for optimising sorts that can make a huge difference to “first rows” queries and even basic “order by” clauses involving join query result sets.

Footnote

Browsing the Internet for documentation and comment on the incremental sort I found the following. Given my level of ignorance about Postgres I can’t comment on the completeness or correctness of the information, but it looked good to me, and I found its comments about this feature very informative.

November 3, 2020

FBI or Virtual

Filed under: Function based indexes,Indexing,Oracle — Jonathan Lewis @ 10:56 am GMT Nov 3,2020

This note has been sitting with the other 800 drafts since some time in May 2019, and started with a comment about following on from “a recent talk on how to engineer indexes properly”. Unfortunately I don’t remember when I wrote it, or why it came about. I mention this only because the note shows you how you can run into irritating limitations when you’re trying to do things properly.

First, a little script to generate some highly skewed data:

rem
rem     Script:         fbi_or_virtual_desc.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem
rem     Tested on:
rem       19.2.0.0
rem       18.3.0.0
rem       12.2.0.1

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id1,
        rownum                          id2,
        chr(65+trunc(ln(rownum)))       flag,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

If you check the way I’ve defined the flag column you’ll see that it generates uppercase letters of the alphabet (chr(65) = ‘A’), and the use of the ln() (natural logarithm) function with rownum means the earlier letters in the alphabet have far fewer rows than later letters. Here’s a quick query to show the skew:


select
        flag, count(*)
from
        t1
group by
        flag
order by
        count(*)
/

FLAG   COUNT(*)
---- ----------
A             2
B             5
C            13
D            34
E            94
F           255
G           693
H          1884
I          5123
J         13923
K         37848
L        102880
M        279659
N        557587

14 rows selected.

Imagine I have a requirement to access the rows where flag = ‘D’ very efficiently, and I’d like to do that in descending order of id1. I could create an index on (flag, id1) to help, of course – but that’s going to produce a very large index that (with help from a histogram and literal parameter usage, combined with the excellent clustering pattern) the optimizer would probably use only for the values from ‘A’ to ‘H’.

Clearly the solution is to take advantage of a function-based index here to create index entries for just the rows we are interested in and “hide” the rest. Better still, we could create a virtual column and index it, because we would then have a simple column name to use in our SQL – and we could even make it invisible (from 12c) so that it won’t cause problems with sloppy code that does a “select *” or “insert values()” without a list of columns.
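
For example, a 12c declaration along these lines should (I believe) do it – just a sketch; the test below keeps the column visible for simplicity:

alter table t1 add (
        id1_c invisible
                generated always as (case when flag = 'D' then id1 end)
                virtual
);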

Which is the better choice? Let’s add the virtual column and then create indexes. You’ll notice I created id1 and id2 with identical values – this allows me to test both strategies in a single pass of the script. If I tried to create the virtual column and the function-based index using id1 in both cases I’d get an error message that I was using the same expression twice – though the actual error would depend on the order of creation.

If I created the virtual column first the error on attempting to create a function-based index would be: “ORA-54018: A virtual column exists for this expression”, and if I created the function-based index first the error on trying to create the virtual column would be “ORA-54015: Duplicate column expression was specified”.

So I’m using id1 to test the virtual column approach and id2 to test the function-based index approach.


alter table t1 add (
        id1_c generated always as
                (case when flag = 'D' then id1 end)
                virtual
        )
;

create index t1_i0 on t1(id1);

create index t1_i1  on t1(id1_c);
create index t1_i1d on t1(id1_c desc);

create index t1_i2  on t1((case when flag = 'D' then id2 end));
create index t1_i2d on t1((case when flag = 'D' then id2 end) desc);

And here’s is what we see if we “set echo on” during the index creation script:


SQL>
SQL> create index t1_i0 on t1(id1);

Index created.

SQL>
SQL> create index t1_i1  on t1(id1_c);

Index created.

SQL> create index t1_i1d on t1(id1_c desc);
create index t1_i1d on t1(id1_c desc)
                          *
ERROR at line 1:
ORA-54034: virtual columns not allowed in functional index expressions

SQL>
SQL> create index t1_i2  on t1((case when flag = 'D' then id2 end));

Index created.

SQL> create index t1_i2d on t1((case when flag = 'D' then id2 end) desc);

Index created.

We can’t create a descending index on the virtual column – even though we can create a descending function-based index on exactly the expression that we wanted to use for the virtual column – so it seems we can get what we want, provided we are prepared to forgo the niceness of naming (and perhaps making invisible) a virtual column.

So are we happy?

No, we’re not – and here’s why:

select
        index_name, num_rows, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;

INDEX_NAME             NUM_ROWS LEAF_BLOCKS
-------------------- ---------- -----------
T1_I0                   1000000        2226
T1_I1                        34           1
T1_I2                        34           1
T1_I2D                  1000000        1812

To make a column in an index “descending” Oracle takes the “one’s complement” of each byte in turn then appends 0xFF to the result. The effect of this is to turn a NULL into 0xFF which, as a non-null value, has to be stored in the index with a rowid pointing to the table. So an index holding 34 rows and avoiding the waste of 999,966 redundant entries turns into an index using (wasting) space on those 999,966 entries when you try to create a descending version of the index. The index we end up with is nearly as big as the index on (flag, id1) would have been.
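
You can see the transformation with the (undocumented, so just for investigation) sys_op_descend() function that Oracle uses to generate descending column values:

select dump(sys_op_descend('AB'), 16) from dual;
-- expect something like: Typ=23 Len=3: be,bd,ff
-- i.e. the one's complement of 0x41,0x42 with the 0xff marker appended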

Conclusion

We can’t create the descending index we want by first creating a virtual column. On the other hand, if we create it as a simple function-based index it’s going to be just as big as the index we were trying to avoid creating.

Footnote 1

The ‘0xFF’ is why you can’t create a unique, descending index unless at least one of the columns is declared not null – a two-column unique index would allow multiple rows to hold (null, null), but try to recreate it descending and those rows now hold the duplicate value (0xFF, 0xFF) and index creation fails. (Worse still, you may have no problem data when you create the index, and then discover that processes break because they introduce problem data.)
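
Here’s a little sketch of that failure mode:

create table t_null (c1 number, c2 number);

insert into t_null values (null, null);
insert into t_null values (null, null);
commit;

create unique index t_null_uk  on t_null(c1, c2);
-- succeeds: entirely null entries don't go into the index

create unique index t_null_ukd on t_null(c1 desc, c2 desc);
-- expect a failure like: ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found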

Footnote 2

There’s no such thing as a “descending index” – there are only indexes where some of the columns are stored descending. If you create an index with every single column marked as descending then all you’ve done is increase the size of the index, while confusing the optimizer and disabling some of the clever tricks it can do. The optimizer is perfectly able to read a normal index in descending order. (Which means that this example is a simplified demonstration of the problem – if I wanted the ‘D’ rows in descending order of id1 then I’d have an “order by id1 desc” in my query and the optimizer would produce a plan to walk the ascending (t1_i1) index in descending order.)

October 21, 2020

Column Groups

Filed under: extended stats,Oracle,Statistics — Jonathan Lewis @ 12:14 pm BST Oct 21,2020

Here’s an odd little detail about the statistics of column groups. At first glance it’s counter-intuitive but it’s actually an “obvious” (once you’ve thought about it for a bit) consequence of the approximate_ndv() algorithm for gathering stats.

I’ll present it as a question:

I have a table with two columns: flag and v1. Although the columns are not declared as non-null, neither holds any nulls. If there are 26 distinct values for flag, and 1,000,000 distinct values for v1, what’s the smallest number of distinct values I should see if I create the column group (flag, v1) ?

The question is, of course, a little ambiguous – there’s the number of distinct values that the column (group) holds and the number that a fresh gather of statistics reports it as holding. Here are the stats from a test run of a simple script that creates, populates and gathers stats on my table:

select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

COLUMN_NAME                      NUM_DISTINCT
-------------------------------- ------------
FLAG                                       26
ID                                    1000000
V1                                     999040
SYS_STUQ#TO6BT1REX3P1BKO0ULVR9         989120

There are actually 1,000,000 distinct values for v1 (it’s a varchar2() representation of the id column), but the approximate_ndv() mechanism can have an error of (I believe) up to roughly 1.3%, so Oracle’s estimate here is a little bit off.

The column group (represented by the internal column definition SYS_STUQ#TO6BT1REX3P1BKO0ULVR9) must hold (at least) 1,000,000 distinct values – but the error in this case is a little larger than the error in v1 (989,120 is about 1.1% below the true figure where 999,040 is only about 0.1% below), with the effect that the number of combinations appears to be less than the number of distinct values for v1!

There’s not much difference in this case between actual and estimate, but the test demonstrates the potential for a significant difference between the estimate and the arithmetic that Oracle would do if the column group didn’t exist. Nominally the optimizer would assume there were 26 million distinct values (though in this case I had created only 1M rows in the table, so the optimizer’s sanity check would cap that 26M at the 1M rows).

So, although the difference between actual and estimate is small, we have to ask the question – are there any cases where the optimizer will ignore the column group stats because of a sanity check that “proves” the estimate is “wrong” – after all it must be wrong if the num_distinct is less than the num_distinct of one of the components. Then again maybe there’s a sanity check that only ignores the column group if the estimate is “wrong enough”, but allows for small variations.

I mention this only because an odd optimizer estimate has shown up recently on the Oracle-L mailing list, and the only significant difference I can see (at present) is that a bad plan appears for a partition where this column group anomaly shows up in the stats, but a good plan appears when the column group anomaly isn’t present.

Footnote:

If you want to recreate the results above, here’s the model I’ve used (tested on 19.3.0.0 and 11.2.0.4):

rem
rem     Script:         column_group_stats_5.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             11.2.0.4
rem 

execute dbms_random.seed(0)

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        chr(65 + mod(rownum,26))        flag,
        rownum                          id,
        lpad(rownum,10,'0')             v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6   -- > comment to avoid WordPress format issue
order by
        dbms_random.value
/


select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns(v1, flag) size 1'
        );
end;
/
 
select  column_name, num_distinct 
from    user_tab_cols 
where   table_name = 'T1'
/

Footnote 2:

As an interesting little statistical quirk, if I defined the column group as (flag, v1) rather than (v1, flag) the estimate for the column group num_distinct was 1,000,000.
