Oracle Scratchpad

August 6, 2021

Sequence Accelerator

Filed under: 19c,Infrastructure,Oracle,Performance — Jonathan Lewis @ 11:22 am BST Aug 6,2021

Connor McDonald has just published a blog note about a tweak to sequences that appeared in recent versions of Oracle (19.10 – see tweet from Timur Akhmadeev).

To address the problems caused by people leaving the sequence cache size at the default of 20 (leading to contention on very busy sequences – see footnote), Oracle’s internal code will now check the rate at which a sequence’s nextval is being called and “ignore” the cache definition, using larger and larger values to bump the sequence highwater in the updates to the seq$ table.

Connor pointed out that if you really wanted to see how big the jump might get you could crash your instance in the middle of a run and see how large the gap in the sequence was at the next startup. But if you want to experiment a little further with the feature, here’s a less painful way of doing it: enable SQL trace for just the sequence update statement – which in current versions has an SQL_ID of 4m7m0t6fjcs5x:

alter system  set events 'sql_trace[SQL:4m7m0t6fjcs5x] wait=false, bind=true';

-- wait a bit

alter system  set events 'sql_trace[SQL:4m7m0t6fjcs5x] off';

I’ve shown how to set the trace at the system level, but it’s also possible to set it at the session level, and I’ve requested bind variables to be dumped on every execution of the statement. After you’ve got some trace files you can examine them to pick out the relevant values. (In a unix environment I’d use grep and awk to automate this.)

Here’s a little script to create a table and sequence, enable tracing, then hammer the sequence. I’ve left everything to default, so the sequence cache will be 20 and on older versions of Oracle we’d see the highwater mark of the sequence incremented by 20 on each update to seq$. I’m running 19.11.0.0.

rem
rem     Script:         trace_seq_update.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2021
rem
rem     Last tested 
rem             19.11.0.0
rem

define m_sql_id = '4m7m0t6fjcs5x'

drop table t1;
drop sequence s1;

create table t1 (n1 number (10,0));
create sequence s1;

insert into t1 values(0);
commit;

spool trace_seq_update.lst

select  cache 
from    user_sequences 
where   sequence_name = 'S1'
;


alter session  set events 'sql_trace[SQL:&m_sql_id] wait=false, bind=true';

insert into t1 select s1.nextval 
from    all_objects
where   rownum <= 5000
/

alter session  set events 'sql_trace[SQL:&m_sql_id] off';

And here’s the first extract from the trace file:

PARSING IN CURSOR #140658757581416 len=129 dep=1 uid=0 oct=6 lid=0 tim=338002229669 hv=2635489469 ad='77638310' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
BINDS #140658757581416:

To find the values used to update highwater from this point onwards I just kept searching for “BINDS #140658757581416:”, stepping down to “Bind#6”, and reporting the “value=” line that was 4 lines beyond that.

If you want to repeat the tests you’ll (probably) find that your cursor number (BINDS #nnnnnnnnnnnn) is different. If you’ve done a system-wide trace, of course, you might have multiple sequences updated in the same trace file, in which case you’ll also need to report the value for “Bind#9” to separate the different sequences. Moreover, just to make automation harder, you may find that the update cursor closes and re-opens with a new cursor number from time to time.
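For the record, here’s the sort of awk sketch I have in mind – driven by a tiny two-entry sample in the trace-file layout (the assumption, taken from my trace files, is that the “value=” line sits 4 lines after each “ Bind#6” marker; real files will be bigger and messier):

```shell
# Build a small sample in the trace-file layout, then print the "value="
# line that appears 4 lines after each " Bind#6" marker.
cat > /tmp/seq_trace_sample.trc <<'EOF'
 Bind#6
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=02  flg=09
  value=21
 Bind#6
  oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=03  flg=09
  value=221
EOF

awk '/Bind#6/ { mark = NR }
     mark && NR == mark + 4 && /value=/ { print; mark = 0 }' /tmp/seq_trace_sample.trc
```

This prints the two “value=” lines from the sample; for a system-wide trace you’d extend the script to pair each value with its “Bind#9” entry.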

Here’s the complete list of Bind#6 entries for my test:

 Bind#6
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=02  flg=09
  value=21

 Bind#6
  oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=03  flg=09
  value=221

 Bind#6
  oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=03  flg=09
  value=2221

 Bind#6
  oacdty=02 mxl=22(04) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=04  flg=09
  value=22221

As you can see, the highwater jumps by 20, then 200, then 2,000, then 20,000. As a preliminary hypothesis it looks as if Oracle is going to take the cache size (Bind#5) as a base, but escalate that value internally by powers of 10 until the frequency of updates to seq$ drops to an acceptable value. (The value of cache – Bind#5 – isn’t changed by this mechanism, however.)
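As a sanity check on that hypothesis, here’s a tiny shell model – my conjecture about the arithmetic, not Oracle’s actual code: start the highwater at 1, add the cache size on each seq$ update, and multiply the step by 10 each time.

```shell
# Conjectured escalation: highwater starts at 1, each seq$ update adds
# cache * 10^n. This models the observed trace values only.
hw=1
step=20                         # the sequence cache size
for update in 1 2 3 4
do
        hw=$((hw + step))
        echo "$hw"
        step=$((step * 10))
done
```

The script echoes 21, 221, 2221 and 22221 – matching the four Bind#6 values in the trace.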

Connor supplies a link to the 21c documentation on the feature – but it’s a bit thin, so there’s some scope for checking whether RAC or scalable (extended) sequences produce any unexpected or puzzling side-effects.

Timur’s tweet also supplied a reference to MOS Doc ID: 2790985.1 Sequence dynamic cache resizing feature which has some helpful technical details.

Footnote

I’ve written a 4-part series on sequences for Simpletalk (final part awaiting publication), starting at this URL.

July 22, 2021

SQL Macro

Filed under: 19c,Oracle — Jonathan Lewis @ 10:18 am BST Jul 22,2021

A question came up recently on the Oracle Developer forum that tempted me into writing a short note about SQL Macro functions – a feature that was touted for 20c but which has been back-ported [ed: in part, just the “table macro” feature – see comment 1] to the more recent releases of 19c. Specifically I set up this demo using 19.11.0.0.

The OP supplied a script to prepare some data. I’ll postpone that to the end of this note and start with variations of the query that could be used against that data set. I’ll be looking at the original query, a variant of the query that uses a pipelined function, then a variant that uses an SQL Macro function.

The requirement starts with a query to turn a pair of dates into a date range – which can be done in many ways but the OP had used a recursive “with subquery” (CTE/common table expression).

with calendar ( start_date, end_date ) as (
        select date '2021-07-01', date '2021-07-30' from dual
        union all
        select start_date + 1, end_date
        from   calendar
        where  start_date + 1 <= end_date
)
select start_date as day
from   calendar
;

Getting on to the full requirement we can use this subquery as if it were a table (or inline view) and join it to any other tables where we want data from a date range, for example:

select
        e.employee_id, c.day
from
        employees e
inner join
        (
                with calendar ( start_date, end_date ) as (
                        select date '2021-07-01', date '2021-07-30' from dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
        ) c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day

If we want a report for a different month we just have to supply a different pair of dates, and we can probably work out a way of making it easy for the end-users to supply those dates as parameters to a report.

The pipelined function

However, we may want to use the same little “recursive CTE” (or similar) pattern in many different reports, and in ad hoc queries that users might want to write for themselves. To avoid wasting time on logic errors or basic typing errors, is it possible to hide some of the complexity of the subquery structure? The answer is yes, and for a long time we could have used a “pipelined function” to do this – though we have to create a simple object type and an object table type to do so. For example:

create or replace type obj_date is object (day date);
/

create or replace type nt_date is table of obj_date;
/

create or replace function generate_dates_pipelined(
        p_from  in date,
        p_to    in date
)
return nt_date 
pipelined
is
begin
        for c1 in (
                with calendar (start_date, end_date ) as (
                        select trunc(p_from), trunc(p_to) from dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
        ) loop
                pipe row (obj_date(c1.day));
        end loop;

        return;

end generate_dates_pipelined;
/

I’ve started by creating an object type with a single attribute called day of type date, and an object table type of that object type. This means I can use the object type and the object table type to pass data between SQL and PL/SQL. Then I’ve created a pl/sql function that returns the object table type, but in a pipelined fashion using the pipe row() mechanism to supply the data one object at a time.

In my final SQL I can now use the table() operator to cast the result of the function call from an object table to a relational table, implicitly mapping the object attributes to their basic Oracle data types.

select
        e.employee_id, c.day
from
        employees e
inner join
        table(generate_dates_pipelined(date '2021-07-01', date '2021-07-30')) c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day
;

I’ve replaced the 9 lines of the inline “with subquery” by a single line call:

        table(generate_dates_pipelined(date '2021-07-01', date '2021-07-30')) c

In fact the table() operator hasn’t been needed since some time in the 12c timeline, but it might be useful as a little reminder of what’s going on behind the scenes. It’s also a reminder that the data really will behave as if it’s coming from a relational table rather than a PL/SQL loop.

Although this pipelined function approach can be very effective, another member of the forum pointed out that behind the scenes it is based on a PL/SQL loop walking through a cursor which, in this example, was doing row-by-row processing (though it could be changed to bulk collect with a limit to improve performance a little). We might want to look at options for doing things differently.

The SQL Macro function

In many programming languages a “macro” is a symbol that is used as shorthand for a longer piece of code. Even in your favourite shell environment you can usually set up shorthand for longer texts that you use frequently, for example:

alias otr="cd /u01/app/oracle/diag/rdbms/or19/or19/trace"

The Oracle equivalent is a PL/SQL function (declared as a “SQL_Macro” function) that you include in your SQL statement, and at run-time Oracle will execute the function and use the text it returns to modify your statement [ed: see comment 1: this expansion is done at hard parse time, not at run-time] . Here’s the macro strategy applied to the date range generation:

create or replace function generate_dates_macro(
        p_from  in date,
        p_to    in date
)
return varchar2
sql_macro
is
        v_sql varchar2(4000) := q'{
                with calendar (start_date, end_date ) as (
                        select
                                to_date('xxxx-xx-xx','yyyy-mm-dd'),
                                to_date('yyyy-yy-yy','yyyy-mm-dd')
                        from    dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
                }'
        ;

begin
        v_sql := replace(v_sql,'xxxx-xx-xx',to_char(p_from,'yyyy-mm-dd'));
        v_sql := replace(v_sql,'yyyy-yy-yy',to_char(p_to  ,'yyyy-mm-dd'));

--      dbms_output.put_line(v_sql);
        return v_sql;

end generate_dates_macro;
/

I’ve created a function, flagged as a sql_macro, that returns a varchar2; it has two input parameters, declared as dates. The initial value of the variable v_sql looks very similar to the CTE I used in the original query, except that the two “dates” it uses are the placeholders “xxxx-xx-xx” and “yyyy-yy-yy”; in the body of the function I replace those with the text forms of the two incoming date parameters. There’s a commented-out call to dbms_output.put_line() that will show you that the final text returned by the function is:

                with calendar (start_date, end_date ) as (
                        select
                                to_date('2021-07-01','yyyy-mm-dd'),
                                to_date('2021-07-30','yyyy-mm-dd')
                        from    dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar

So now we can rewrite the original statement as follows (with just a minor change from the pipelined version):

select
        e.employee_id, c.day
from
        employees e
inner join
        generate_dates_macro(date '2021-07-01', date '2021-07-30') c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day
;

When we first call this statement (i.e. assuming a suitable child cursor does not yet exist) Oracle evaluates the function, slots the generated text into place, then optimises and executes the resulting text instead. Interestingly the text reported by a call to dbms_xplan.display_cursor() shows the original text even though the plan clearly includes references to the table(s) in the SQL macro. A search of the library cache also shows the original text, but reveals an anonymous PL/SQL block calling the SQL Macro function (in a style reminiscent of the way that row-level security (RLS, FGAC, VPD) calls a security predicate function) that is invisibly folded into the query:

declare
begin 
        :macro_text := "GENERATE_DATES_MACRO"(
                TO_DATE(' 2021-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),
                TO_DATE(' 2021-07-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
        );
end;

Here’s the execution plan for the query using the SQL Macro:

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                     | Name       | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                              |            |      1 |        |    41 (100)|     83 |00:00:00.01 |     130 |       |       |          |
|   1 |  SORT ORDER BY                                |            |      1 |      1 |    41   (5)|     83 |00:00:00.01 |     130 |  9216 |  9216 | 8192  (0)|
|*  2 |   FILTER                                      |            |      1 |        |            |     83 |00:00:00.01 |     130 |       |       |          |
|*  3 |    HASH JOIN ANTI                             |            |      1 |      1 |    39   (3)|     84 |00:00:00.01 |      46 |  1744K|  1744K| 1542K (0)|
|   4 |     NESTED LOOPS                              |            |      1 |      1 |    21   (0)|     88 |00:00:00.01 |      23 |       |       |          |
|   5 |      TABLE ACCESS FULL                        | EMPLOYEES  |      1 |      1 |    17   (0)|      4 |00:00:00.01 |      23 |       |       |          |
|*  6 |      VIEW                                     |            |      4 |      1 |     4   (0)|     88 |00:00:00.01 |       0 |       |       |          |
|   7 |       UNION ALL (RECURSIVE WITH) BREADTH FIRST|            |      4 |        |            |    120 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   8 |        FAST DUAL                              |            |      4 |      1 |     2   (0)|      4 |00:00:00.01 |       0 |       |       |          |
|   9 |        RECURSIVE WITH PUMP                    |            |    120 |        |            |    116 |00:00:00.01 |       0 |       |       |          |
|  10 |     TABLE ACCESS FULL                         | HOLIDAYS   |      1 |      2 |    17   (0)|      1 |00:00:00.01 |      23 |       |       |          |
|* 11 |    INDEX UNIQUE SCAN                          | TIMEOFF_PK |     84 |      1 |     1   (0)|      1 |00:00:00.01 |      84 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter( IS NULL)
   3 - access("START_DATE"="H"."HOLIDAY_DATE")
   6 - filter(SUBSTR("E"."WORK_DAYS",TRUNC(INTERNAL_FUNCTION("START_DATE"))-TRUNC(INTERNAL_FUNCTION("START_DATE"),'fmiw')+1,1)='Y')
  11 - access("T"."EMPLOYEE_ID"=:B1 AND "T"."TIMEOFF_DATE"=:B2)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

As you can see, even though the query as written didn’t include the recursive CTE, the recursive query against DUAL appears in the plan. In fact the plan is exactly the same as the plan for the original query with the embedded CTE, though there is one interesting little difference – the generated query block names differ between plans.

Pros and Cons

Given that this is a lightweight example of a simple use of the SQL macro there’s not really a lot that can be said when comparing pipelined functions with macro functions. Both hide complexity and give you the opportunity to optimise an awkward piece of the code that might be (in effect) a common sub-routine.

The pipelined function does have to deal with the PL/SQL to SQL interchange – but that’s not a significant feature in this example. The main benefits, perhaps, of the macro are that the plan shows you the table(s) that would be hidden by the pipelined function, and that it may allow the optimizer to get better estimates of data sizes because it will be examining real tables with real statistics rather than taking a guess at a “pickler fetch” from a collection returned by a black box function.

Update (pre-publication)

There is some pleasure to be had from making mistakes in public, because that’s when you can learn something new. In my example to the OP on the Developer forum I used a much messier piece of code to embed the date values into the macro string, with lots of doubled and trebled quotes, to_char() calls, and concatenation all over the place.

Alex Nuijten replied to my suggestion, pointing out that this degree of complexity was not necessary and that you could reference the function’s parameters to construct the string. The only problem was that it hadn’t worked when I tried it. Alex’s comment, however, also mentioned the problem and supplied the explanation: Bug 32212976: USING SCALAR ARGUMENTS IN WITH CLAUSE IN SQL TABLE MACRO RAISES ORA-06553 PLS-306. This was exactly the problem I had been getting (the error message was “wrong number or types of arguments in call to ‘GENERATE_DATES_MACRO’”) and I hadn’t thought about searching for known bugs or patches; I just hacked my way around the problem.

Here’s an alternative macro function supplied by Alex (edited slightly to be consistent with the function and column names in my example):

create or replace function generate_dates_macro(
    p_from in date,
    p_to  in date
)
return varchar2
sql_macro
is
    v_sql varchar2(4000);
begin
  v_sql := 'select trunc (generate_dates_macro.p_from) - 1 + level as day
       from dual
       connect by level <= (generate_dates_macro.p_to - generate_dates_macro.p_from) + 1';

--  dbms_output.put_line(v_sql);
    return v_sql;

end generate_dates_macro;
/

Test Code

If you want to experiment further, here’s the code to create the tables used in this demo:

rem
rem     Script:         19c_macro_2.sql
rem     Author:         Jonathan Lewis / "BeefStu"
rem     Dated:          July 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem
rem     Notes:
rem     A Macro solution to a problem that might
rem     otherwise be solved with a pipelined function
rem


drop table holidays;
drop table employees;
drop table timeoff;
drop table  emp_attendance;    
drop table absences;

drop function generate_dates_pipelined;
drop type nt_date;
drop type obj_date;

drop function generate_dates_macro;

-- @@setup

create table holidays(
        holiday_date    date,
        holiday_name    varchar2(20)
)
;

insert into holidays (holiday_date, holiday_name)
values ( to_date('2021/07/21 00:00:00', 'yyyy/mm/dd hh24:mi:ss'), 'July 21 2021') ;

create table employees(
        employee_id     number(6), 
        first_name      varchar2(20),
        last_name       varchar2(20),
        card_num        varchar2(10),
        work_days       varchar2(7)
)
;

alter table employees
        add constraint employees_pk primary key (employee_id)
;

insert into employees(employee_id, first_name, last_name, card_num, work_days)
with names as ( 
select 1, 'Jane', 'Doe', 'f123456', 'NYYYYYN' from dual 
union all 
select 2, 'Madison', 'Smith', 'r33432','NYYYYYN' from dual 
union all 
select 3, 'Justin', 'Case', 'c765341','NYYYYYN' from dual 
union all 
select 4, 'Mike', 'Jones', 'd564311','NYYYYYN' from dual 
) 
select * from names
;

create table timeoff(
        seq_num         integer generated by default as identity (start with 1) not null,
        employee_id     number(6),
        timeoff_date    date,
        timeoff_type    varchar2(1),
        constraint timeoff_chk check (timeoff_date = trunc(timeoff_date, 'dd')),
        constraint timeoff_pk primary key (employee_id, timeoff_date)
)
;

insert into timeoff (employee_id,timeoff_date,timeoff_type) 
with dts as ( 
select 1, to_date('20210726 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual union all 
select 2, to_date('20210726 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual union all 
select 2, to_date('20210727 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual  
) 
select * from dts
;

create table  emp_attendance(    
        seq_num         integer  generated by default as identity (start with 1) not null,
        employee_id     number(6),
        start_date      date,
        end_date        date,
        week_number     number(2),
        create_date     date default sysdate
)
;

create table absences(
        seq_num         integer  generated by default as identity (start with 1) not null,
        employee_id     number(6),
        absent_date     date,
        constraint absence_chk check (absent_date=trunc(absent_date, 'dd')),
        constraint absence_pk primary key (employee_id, absent_date)
)
;

insert into emp_attendance (employee_id, start_date,end_date,week_number)
with dts as ( 
select 1, to_date('20210728 13:10:00','yyyymmdd hh24:mi:ss'), to_date('20210728 23:15:00','yyyymmdd hh24:mi:ss'), 30  from dual 
union all 
select 2, to_date('20210728 12:10:10','yyyymmdd hh24:mi:ss'), to_date('20210728 20:15:01','yyyymmdd hh24:mi:ss'), 30  from dual
)
select * from dts
;


July 20, 2021

Hex tip

Filed under: 19c,Bugs,Oracle,Troubleshooting — Jonathan Lewis @ 5:40 pm BST Jul 20,2021

A surprising amount of the work I do (or used to do) revolves around numbers; and once I’m outside the realm of the optimizer (i.e. getting away from simple arithmetic), one of the bits of playing with numbers that I do most often is conversion – usually decimal to hexadecimal, sometimes decimal to binary.

Here’s an example of how this helped me debug an Oracle error a few days ago. We start with someone trying to purge data from aud$ using the official dbms_audit_mgmt package, first setting the package’s db_delete_batch_size parameter to the value 100,000 then calling dbms_audit_mgmt.clean_audit_trail.

In theory this should have deleted (up to) 100,000 rows from aud$ starting from the oldest data. In practice it tried to delete far more rows, generating vast amounts of undo and redo, and locking up resources in the undo tablespace for ages. The SQL statement doing all the work looked like the following (after a little cosmetic work):

DELETE FROM SYS.AUD$ 
WHERE  DBID = 382813123 
AND    NTIMESTAMP# < to_timestamp('2020-12-17 00:00:00', 'YYYY-MM-DD HH24:MI:SS.FF')
AND    ROWNUM <= 140724603553440

That’s a rather large number in the rownum predicate, much larger than the expected 100,000. Whenever I’m puzzled by a very large number appearing somewhere I’m not expecting to see it, one of the first things I do to poke it around is convert it to hexadecimal. (Although it seems a fairly random thing to do, it doesn’t take long and it produces an interesting result fairly frequently.)

140724603553440 (dec) = 0x7FFD000186A0

You may not think that the resulting hex number is very interesting – but there’s a string of zeros in the middle that is asking for a little extra poking. So let’s convert the last 8 digits (starting with those 3 zeros) back to decimal.

0x000186A0 = 100,000 (dec)

There’s an interesting coincidence – we’ve got back to the 100,000 that the OP had set as the db_delete_batch_size. Is this really a coincidence or does it tell us something about a bug? That’s easy enough to test, just try setting a couple of different values for the parameter and see if this affects the rownum predicate in a consistent fashion. Here are the results from two more test values:

1,000,000 ==> 140733194388032 (dec) = 0x7FFF000F4240 .... 0x000F4240 = 1,000,000 (dec)
   50,000 ==> 140728898470736 (dec) = 0x7FFE0000C350 .... 0x0000C350 =    50,000 (dec)
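If you don’t have a programmer’s calculator to hand, bash arithmetic will do both the conversion and the masking (a quick sketch using the three values above):

```shell
# Show each rownum limit in hex and decode its low 4 bytes, which should
# come back as the requested db_delete_batch_size.
for n in 140724603553440 140733194388032 140728898470736
do
        printf '%d = 0x%X, low 4 bytes = %d\n' "$n" "$n" "$((n & 0xFFFFFFFF))"
done
```

The low 4 bytes come back as 100000, 1000000 and 50000 respectively.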

The top 4 digits (2 bytes) have changed, but the bottom 8 digits (4 bytes) do seem to hold the db_delete_batch_size requested. At this point I felt that we were probably seeing some sort of pointer error in a C library routine. If you examine the file $ORACLE_HOME/rdbms/admin/prvtamgt.plb you’ll find that one of the few readable lines says:

CREATE OR REPLACE LIBRARY audsys.dbms_audit_mgmt_lib wrapped

My guess was that there were probably a couple of external C routines involved, with PL/SQL wrappers in the public package; and that there was a mismatch between the declarations in C and the declarations in the PL/SQL.

It turns out that I wasn’t quite right, but I was in the right Olympic stadium. This is now (unpublished) bug 33136016, and if you’ve been seeing unexpected work patterns when purging the audit trail after upgrading to 19c or later then there may be a patch for you in the not too distant future.

January 20, 2021

Hint Errors

Filed under: 19c,dbms_xplan,Execution plans,Hints,Ignoring Hints,Oracle — Jonathan Lewis @ 11:06 am GMT Jan 20,2021

This is a list of possible explanations of errors that you might see in the Hint Report section of an execution plan. It’s just a list of the strings extracted from a chunk of the 19.3 executable around the area where I found something I knew could be reported, so it may have some errors and omissions – but there are plenty of things there that might give you some idea why (in earlier versions of Oracle) you might have seen Oracle “ignoring” a hint:

internally generated hint is being cleared
hint conflicts with another in sibling query block
hint overridden by another in parent query block
conflicting optimizer mode hints
duplicate hint
all join methods are excluded by hints
index specified in the hint doesn't exist
index specified in hint cannot be parallelized
incorrect number of indexes for AND_EQUAL
partition view set up
FULL hint is same as INDEX_FFS for IOT
access path is not supported for IOT
hint on view cannot be pushed into view
hint is discarded during view merging
duplicate tables in multi-table hint
conditions failed for array vector read
same QB_NAME hints for different query blocks
rejected by IGNORE_OPTIM_EMBEDDED_HINTS
specified number must be positive integer
specified number must be positive number
specified number must be >= 0 and <= 1
hint is only valid for serial SQL
hint is only valid for slave SQL
hint is only valid for dyn. samp. query
hint is only valid for update join ix qry
opt_estimate() without value list
opt_estimate() with conflicting values spec
hint overridden by NO_QUERY_TRANSFORMATION
hinted query block name is too long
hinted bitmap tree wasn't fully resolved
bitmap tree specified was invalid
Result cache feature is not enabled
Hint is valid only for select queries
Hint is not valid for this query block
Hint cannot be honored
Pred reorder hint has semantic error
WITH_PLSQL used in a nested query
ORDER_SUBQ with less than two subqueries
conflicting OPT_PARAM hints
conflicting optimizer_feature_enable hints
because of _optimizer_ignore_parallel_hints
conflicting JSON_LENGTH hints

Update August 2021 – New items in 21.3

Hint id larger than number of union groups
ORDER_KEY_VECTOR_USE with less than two IDs
ORDER_SUBQ referenced query block name, which cannot be found
Same table referenced in both lists
