Oracle Scratchpad

August 21, 2021

GTT LOBs

Filed under: Bugs,Infrastructure,Oracle — Jonathan Lewis @ 1:53 pm BST Aug 21,2021

Searching my blog recently for some details about a temporary space problem I came across a draft I’d written a few years ago about LOBs in global temporary tables. It was a summary of an exchange from OTN that did a good job of capturing my general air of skepticism when reviewing database results. The basic problem came from a user who had discovered that when they included a LOB column in a global temporary table the LOBINDEX segment was created in the SYSTEM tablespace. It’s probably not going to be of much practical benefit to many people – but it does demonstrate a principle and a pattern of thinking, so here it is – 4 years late.


Like the “Earn $50M by helping me steal $100M” email, the claim seemed a little suspect. My basic approach to Oracle is: “if it looks unreasonable, test it”, so I did. Here’s the first bit of SQL (which I ran on 12.1.0.2 in an empty schema):


rem
rem     Script:         gtt_lobs.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4

create global temporary table gtt1(
        id      number          not null,
        v1      varchar2(10)    not null,
        c1      clob,
        constraint gtt1_pk primary key(id)
)
-- tablespace gtt_temp
;

select
        table_name, index_name, tablespace_name
from
        user_indexes
;

TABLE_NAME           INDEX_NAME                       TABLESPACE_NAME
-------------------- -------------------------------- ------------------------------
GTT1                 GTT1_PK
GTT1                 SYS_IL0000168694C00003$$         SYSTEM

Sure enough, the query against user_indexes says the LOBINDEX for the LOB in the global temporary table will be created in the SYSTEM tablespace! This is clearly ridiculous – so I’m not going to believe it until I’ve actually confirmed it. I tend to trust the data dictionary rather more than I trust the manuals and MOS – but the data dictionary is just a bunch of tables and a bit of SQL written by someone else, so even the data dictionary can be wrong. It’s easy enough to test:

insert into gtt1 values(1,'a',rpad('x',4000,'x'));

select
        username, tablespace, segtype, segfile#, segblk#
from
        v$tempseg_usage
;

USERNAME                       TABLESPACE                      SEGTYPE     SEGFILE#    SEGBLK#
------------------------------ ------------------------------- --------- ---------- ----------
TEST_USER                      TEMP                            DATA             201     261632
TEST_USER                      TEMP                            INDEX            201     261760
TEST_USER                      TEMP                            LOB_DATA         201     262016
TEST_USER                      TEMP                            INDEX            201     261888

Whatever the data dictionary says, the actual segment (at file 201, block 261888 – it would be nice if the type were LOB_INDEX, but that still hasn’t been fixed, even in 21.3.0.0) has been created in the temporary tablespace. Checking the definition of the dba_indexes view, I came to the conclusion that the property column of the sys.ind$ table hadn’t had the bit set to show that it’s an index associated with a temporary table; as a result the SQL in the view definition reports tablespace 0 (SYSTEM) rather than decoding the zero to a null.

You’ll note that there’s a commented reference to “tablespace gtt_temp” in my original table creation statement. It’s not terribly well known but in recent versions of Oracle you can associate a global temporary table with a specific temporary tablespace – this gives you some scope for easily monitoring the impact of particular global temporary tables on the total I/O resource usage. After re-running the test but specifying this tablespace as the location for the GTT I got the following results:
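For clarity, the re-run simply uses the original creation statement with the tablespace clause uncommented – this assumes a temporary tablespace called gtt_temp already exists:

create global temporary table gtt1(
        id      number          not null,
        v1      varchar2(10)    not null,
        c1      clob,
        constraint gtt1_pk primary key(id)
)
tablespace gtt_temp
;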


TABLE_NAME           INDEX_NAME                       TABLESPACE_NAME
-------------------- -------------------------------- ------------------------------
GTT1                 GTT1_PK                          GTT_TEMP
GTT1                 SYS_IL0000262251C00003$$         GTT_TEMP

USERNAME                       TABLESPACE                      SEGTYPE     SEGFILE#    SEGBLK#
------------------------------ ------------------------------- --------- ---------- ----------
TEST_USER                      GTT_TEMP                        DATA             202        512
TEST_USER                      GTT_TEMP                        INDEX            202        384
TEST_USER                      GTT_TEMP                        LOB_DATA         202        128
TEST_USER                      GTT_TEMP                        INDEX            202        256

The anomaly disappears – everything is reported as being linked to the gtt_temp tablespace (and that includes the table itself, as reported in view user_tables, though I haven’t included that query in the test or results).

Footnote:

A few months after the first draft of this note I happened to rediscover it (but still failed to publish it) and after a quick search on MOS found the following bug (reported as fixed in 12.2, with a backport currently available to 11.2.0.3):

Bug 18897516 : GLOBAL TEMPORARY TABLE WITH LOBCOLUMNS CREATE LOB INDEX IN SYSTEM SCHE

Interestingly the description says: “lob index created in system tablespace” rather than “lob index incorrectly reported as being in system tablespace”. You do have to be very careful with how you describe things if you don’t want to cause confusion – this “problem” isn’t a space management threat, it’s just an irritating reporting error.


August 6, 2021

Sequence Accelerator

Filed under: 19c,Infrastructure,Oracle,Performance — Jonathan Lewis @ 11:22 am BST Aug 6,2021

Connor McDonald has just published a blog note about a tweak to sequences that appeared in recent versions of Oracle (19.10 – see tweet from Timur Akhmadeev).

To address the problems caused by people leaving the sequence cache size at the default of 20 (leading to contention on very busy sequences – see footnote) Oracle’s internal code will now check the rate at which a sequence’s nextval is being called and “ignore” the cache definition, using larger and larger values to bump the sequence highwater in the updates to the seq$ table.

Connor pointed out that if you really wanted to see how big the jump might get you could crash your instance in the middle of a run, and see how large the gap in the sequence was at the next startup. But if you want to experiment a little further with the feature here’s a less painful way of doing it – enable SQL trace for just the sequence update statement – which in current versions has an SQL_ID of 4m7m0t6fjcs5x:

alter system  set events 'sql_trace[SQL:4m7m0t6fjcs5x] wait=false, bind=true';

-- wait a bit

alter system  set events 'sql_trace[SQL:4m7m0t6fjcs5x] off';

I’ve shown how to set the trace at the system level but it is possible to use the session level, and I’ve requested bind variables to be dumped on every execution of the statement. After you’ve got some trace files you can examine them to pick out the relevant values. (In a unix environment I’d use grep and awk to automate this).

Here’s a little script to create a table and sequence, enable tracing, then hammer the sequence. I’ve left everything to default, so the sequence cache will be 20 and on older versions of Oracle we’d see the highwater mark of the sequence incremented by 20 on each update to seq$. I’m running 19.11.0.0.

rem
rem     Script:         trace_seq_update.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2021
rem
rem     Last tested 
rem             19.11.0.0
rem

define m_sql_id = '4m7m0t6fjcs5x'

drop table t1;
drop sequence s1;

create table t1 (n1 number (10,0));
create sequence s1;

insert into t1 values(0);
commit;

spool trace_seq_update.lst

select  cache_size
from    user_sequences 
where   sequence_name = 'S1'
;


alter session  set events 'sql_trace[SQL:&m_sql_id] wait=false, bind=true';

insert into t1 select s1.nextval 
from    all_objects
where   rownum <= 5000
/

alter session  set events 'sql_trace[SQL:&m_sql_id] off';

And here’s the first extract from the trace file:

PARSING IN CURSOR #140658757581416 len=129 dep=1 uid=0 oct=6 lid=0 tim=338002229669 hv=2635489469 ad='77638310' sqlid='4m7m0t6fjcs5x'
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
END OF STMT
BINDS #140658757581416:

To find the values used to update highwater from this point onwards I just kept searching for “BINDS #140658757581416:”, stepping down to “Bind#6”, and reporting the “value=” line that was 4 lines beyond that.

If you want to repeat the tests you’ll (probably) find that your cursor number (BINDS #nnnnnnnnnnnn) is different. If you’ve done a system-wide trace, of course, you might have multiple sequences updated in the same trace file, in which case you’ll also need to report the value for “Bind#9” (the object id of the sequence) to separate the different sequences. Moreover, just to make automation harder, you may find that the update cursor closes and re-opens with a new cursor number from time to time.

Here’s the complete list of Bind#6 entries for my test:

 Bind#6
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=02  flg=09
  value=21

 Bind#6
  oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=03  flg=09
  value=221

 Bind#6
  oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=03  flg=09
  value=2221

 Bind#6
  oacdty=02 mxl=22(04) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=76723ce7  bln=22  avl=04  flg=09
  value=22221

As you can see, the highwater jumps by 20, then 200, then 2,000, then 20,000. As a preliminary hypothesis it looks as if Oracle is going to take the cache size (Bind#5) as a base, but escalate that value internally by powers of 10 until the frequency of updates to seq$ drops to an acceptable value. (The value of cache – Bind#5 – isn’t changed by this mechanism, however.)
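If you don’t want to wade through trace files there’s a rough cross-check available in the data dictionary: user_sequences.last_number reports the current highwater for a cached sequence, so re-running a query like the following (a sketch) at intervals during the test will show the size of the jumps while cache_size stays at its declared 20:

select  sequence_name, cache_size, last_number
from    user_sequences
where   sequence_name = 'S1'
;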

Connor supplies a link to the 21c documentation on the feature – but it’s a bit thin, so there’s some scope for checking if RAC or scaled/expanded sequences produce any unexpected or puzzling side-effects.

Timur’s tweet also supplied a reference to MOS Doc ID: 2790985.1 Sequence dynamic cache resizing feature which has some helpful technical details.

Footnote

I’ve written a 4-part series on sequences for Simpletalk (final part awaiting publication), starting at this URL.

August 5, 2021

Memory_target

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 1:10 pm BST Aug 5,2021

When defining memory usage for an Oracle instance you can specify an “SGA target” and a separate “PGA target”; alternatively you could specify a single “Memory target”. I’ve not seen many people using the second option and there are reasons why it’s not a good idea – like the side effect it has on the use of large/huge pages – but a thread appeared on the Oracle Developers’ forum recently asking why the Buffer Cache Advisory section of an AWR report would suggest increasing the buffer cache when the Memory Statistics section showed the PGA+SGA usage to be just 6GB out of a declared memory_target of 8GB – why was Oracle “wasting” (or losing) 2GB of memory that the report said could be put to good use?

My first thought was that there was a reporting error – maybe a coding error, maybe a problem of inconsistent definition, so my first piece of advice was to poke around for a little more data; and while that was going on I thought I’d run up an instance specifying a memory_target to see what funny numbers might appear.

My first attempt at instance startup failed because I was using a small machine and had allocated more than half the memory to huge pages – Oracle didn’t want to use huge pages when the memory target was set, ran into various resource problems, and let me know that I would need to change a few O/S settings – so I reduced the memory_target to something small enough to allow the instance to start.

This is where my memory/sga/pga settings ended up when the instance finally started (cut-n-paste with some of the output deleted):

SQL> show parameter target

NAME                                 TYPE        VALUE                          
------------------------------------ ----------- ------------------------------ 
memory_max_target                    big integer 1808M                          
memory_target                        big integer 1808M                          
pga_aggregate_target                 big integer 0                              
sga_target                           big integer 0                              

The 1,808MB was what I’d set in the startup file, and here’s a little corroboration of the figures:

SQL> show sga

Total System Global Area 1895823408 bytes
Fixed Size                  9136176 bytes
Variable Size            1107296256 bytes
Database Buffers          771751936 bytes
Redo Buffers                7639040 bytes

The top line is the sum of the next 4 figures, and the numbers add up to 1,808MB (with an error of 2,000 bytes) – but isn’t it interesting that the total is reported as the Total System Global Area, apparently leaving nothing for the PGA.

How does this compare with the information from the next available AWR report?

Memory Statistics
~~~~~~~~~~~~~~~~~                       Begin          End
                                 ------------ ------------
                  Host Mem (MB):      3,692.4      3,692.4
                   SGA use (MB):      1,072.0      1,072.0
                   PGA use (MB):        334.9        381.0
    % Host Mem used for SGA+PGA:        38.10        39.35

Suddenly the SGA is only 1,072MB, while the PGA is now reporting just 334MB – which means that somewhere something is failing to report roughly 400MB. Maybe the “Dynamic Memory Components” section of the AWR will help (I’ve deleted all the zero lines here):

                 Begin Snap     Current         Min         Max   Oper Last Op
Component         Size (Mb)   Size (Mb)   Size (Mb)   Size (Mb)  Count Typ/Mod
--------------- ----------- ----------- ----------- ----------- ------ -------
DEFAULT buffer       704.00      704.00      704.00      752.00      0 SHR/IMM
PGA Target           736.00      736.00      736.00      736.00      0 STA/
SGA Target         1,072.00    1,072.00    1,072.00    1,072.00      0 STA/
Shared IO Pool        48.00       48.00         .00       48.00      0 GRO/IMM
java pool             16.00       16.00       16.00       16.00      0 STA/
large pool            16.00       16.00       16.00       16.00      0 STA/
shared pool          272.00      272.00      272.00      272.00      0 STA/
                          ------------------------------------------------------

That seems to be a “Current Size” total of 2,864MB – but luckily we can spot that the presence of the SGA Target in the list means we’re double counting, and the total excluding that line is 1,792MB. That’s a shortfall of 16MB from our declared 1,808MB, but that’s okay because with this small memory target Oracle is operating in 16MB granules and one granule has been reserved for the fixed memory and public redo log buffers.

So we haven’t lost a big chunk of the memory_target – what we (or the OP) are seeing is a variation in the purpose of the two different parts of the AWR. The Memory Statistics tell us about PGA being used (at the moment of the snapshot) by current sessions, not about the memory that is reserved as an “internal” PGA target. Maybe some investigation of the Memory Resize Operations would show us that Oracle has reached this distribution of memory between SGA and PGA over a period of time as a fairly stable position with some movement of granules back and forth between the two. We could also check the summary of Process Memory that appears in the AWR.
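For a live cross-check between AWR snapshots, the instance-wide PGA figures that feed this part of the report are visible in v$pgastat – a minimal sketch:

select  name, round(value/1048576) mb
from    v$pgastat
where   name in (
                'aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse'
        )
;

The first figure is the (internally derived) PGA target, the other two show how much memory the processes have actually acquired and how much of it is currently in use.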

Geek stuff

I had overlooked the possibility of simply looking at the Dynamic Memory Components section of the AWR to check whether the “missing” memory was allocated to the PGA, so in the original thread on the developer forum I supplied a query that I wrote many years ago to run as SYS against x$ksmge (the granule map) in the days when no suitable v$ view existed. Just for completeness, here’s that query with its results:

set linesize 156
set trimspool on
set pagesize  60
set tab off

column indx             format 999
column component        format a20
column cursize          format 9,999
column gransize         format 999,999,999,999
column grantype         format 999
column granstate        format a10

column ct               format 9,999
column total_memory     format 999,999,999,999

break on report
compute sum of total_memory on report

select
        sct.indx, sct.component, sct.cursize, 
        ge.gransize, ge.grantype, ge.granstate, ct,
        ge.gransize * sct.cursize total_memory
from
        x$kmgsct        sct,
        (
        select
                ge.grantype, ge.granstate, ge.gransize,
                count(*) ct
        from
                x$ksmge         ge
        group by
                ge.grantype, ge.granstate, ge.gransize
        )       ge
where
        ge.grantype(+) = sct.grantype
and     sct.cursize != 0
order by
        sct.indx, sct.component, ge.granstate
;

INDX COMPONENT            CURSIZE         GRANSIZE GRANTYPE GRANSTATE      CT     TOTAL_MEMORY
---- -------------------- ------- ---------------- -------- ---------- ------ ----------------
   0 shared pool               18       16,777,216        1 ALLOC          18      301,989,888
   1 large pool                 1       16,777,216        2 ALLOC           1       16,777,216
   2 java pool                  1       16,777,216        3 ALLOC           1       16,777,216
   5 SGA Target                67
   7 DEFAULT buffer cache      43       16,777,216        9 ALLOC          43      721,420,288
  15 Shared IO Pool             3       16,777,216       17 ALLOC           3       50,331,648
  20 PGA Target                46
                                                                              ----------------
sum                                                                              1,107,296,256


As you can see, this reports the SGA and PGA targets in terms of granules available, but neither reports any granules actually allocated. However, adding the 46 PGA Target granules of 16MB to the total memory of (approx) 1.1GB we can see that Oracle is clearly aware of the full 1,808 MB (less the one fixed granule), and the oddity of “lost memory” was simply about the choice of what to report.
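In any recent version, of course, you no longer need the x$ksmge query – the granule map is summarised in v$memory_dynamic_components, so a sketch like the following gives much the same picture:

select  component, current_size/1048576 current_mb, granule_size
from    v$memory_dynamic_components
where   current_size != 0
order by
        component
;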

Memory Target and O/S

Just for completeness – here’s a pair of extracts from the alert log of the instance startup. The first with memory_target set, the second using the sga_target and pga_aggregate_target and leaving the memory_target unset. I’ve removed a load of lines which were simply the timestamps (separated by microseconds):

Starting ORACLE instance (normal) (OS id: 2025)
****************************************************
 /dev/shm will be used for creating SGA
Large pages will not be used. Only standard 4K pages will be used
****************************************************
**********************************************************************
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

 Per process system memlock (soft) limit = 128G
 Expected per process system memlock (soft) limit to lock
 instance MAX SHARED GLOBAL AREA (SGA) into memory: 1808M

 Available system pagesizes:
  4K, 2048K

 Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
        4K       Configured          462853          462853        NONE

 Reason for not supporting certain system pagesizes:
  2048K - Dynamic allocate and free memory regions

Note the warning: “Large pages will not be used. Only standard 4K pages will be used”. That’s half the machine memory unavailable, and lots of memory used by per-process memory maps.

Starting ORACLE instance (normal) (OS id: 2968)
****************************************************
 Sys-V shared memory will be used for creating SGA
 ****************************************************
**********************************************************************
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

 Per process system memlock (soft) limit = 128G
 Expected per process system memlock (soft) limit to lock
 instance MAX SHARED GLOBAL AREA (SGA) into memory: 1122M

 Available system pagesizes:
  4K, 2048K

 Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
        4K       Configured               4               4        NONE
     2048K             1200             561             561        NONE

Summary

While I’ve highlighted a couple of details about the differences between memory target and sga/pga target, the main point of this note comes back to something I’ve said many times in the past. When you’re looking at reports (AWR / Statspack / home-grown) you do need to understand the meaning of the figures in the report, and when they don’t seem to make sense you need to cross-check with other sources so that you can be confident that you understand what the figures are showing.

In this case the “Host Memory” usage figures for the PGA probably reflect the current (and instantaneous) usage by Oracle processes, not the granules allocated to the instance by the operating system. There are other parts of the AWR report that tell you how the granules are split between the SGA and PGA, and while I haven’t shown it, there’s a section of the report (Memory Resize Operations) that tells you if the instance has been under pressure to move memory granules between the SGA and PGA.

July 11, 2021

Quiz Night

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 11:41 pm BST Jul 11,2021

How do you explain the apparent inconsistency between the two outputs from this tiny fragment of an SQL*Plus script (last tested 19.11.0.0):

describe t1
create table t2 as select * from t1;
describe t2

The results of the two describe commands are as follows (cut-n-paste, with no editing, including the feedback from the CTAS):

 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                            NOT NULL NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(10)
 PADDING                                VARCHAR2(100)


Table created.

 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                                     NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(10)
 PADDING                                VARCHAR2(100)

Answer, and comments on why it’s worth knowing, some time tomorrow (Monday)

Update (Monday morning)

Well, that didn’t take very long at all (31 minutes according to the timestamps). The effect is due to Oracle’s implementation of declared primary key constraints. A primary key means “unique and not null”, and when you add it Oracle flags the column as NOT NULL in the data dictionary (col$.null$ > 1) but doesn’t create a “not null” check constraint.

When you drop the primary key constraint the implicit “not null” is dropped; when you copy the table with a CTAS the implicit “not null” is not copied.

Sample code:

rem
rem     Script:         pk_overhead_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem             11.2.0.4
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          n1,
        rownum                          n2,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
;

alter table t1 add constraint t1_pk primary key(n1);
alter table t1 modify n2 not null;

set echo on

describe t1

create table t2 as select * from t1;
describe t2

alter table t1 drop primary key;
describe t1

set echo off

And the output, cut-n-paste:

SQL> describe t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                            NOT NULL NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(40)
 PADDING                                VARCHAR2(100)

SQL> 
SQL> create table t2 as select * from t1;

Table created.

SQL> describe t2
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                                     NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(40)
 PADDING                                VARCHAR2(100)

SQL> 
SQL> alter table t1 drop primary key;

Table altered.

SQL> describe t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                                     NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(40)
 PADDING                                VARCHAR2(100)

SQL> 
SQL> set echo off

Why does it matter?

The reason why I wrote and posted this note is a recent request for comments on the Oracle-l mailing list about deleting half the data from a 222GB table with 900M rows. Since this is running on 11.2.0.4 none of the really nice options that appeared in 12.2 is available (and even then there might be time, space, or performance costs that would make the newer options unacceptable).

One of the suggestions (probably the best option in the circumstances) was to use CTAS to create a partitioned table with one partition, selecting only the data that was needed from the original table, add the primary key and any indexes needed, then exchange the original table with the one partition (there’s a sketch of the idea after the list below).

The benefits of this approach are:

  • no massively expensive delete
  • minimal undo and redo on the CTAS
  • option for maximum parallel processing on the CTAS and any index creation
  • no worries about making mistakes with access privileges on the table (thanks to the exchange “trick”)
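Stripped to its bare essentials the mechanics of the suggestion look something like the following sketch – the table names and the “keep” predicate are hypothetical, and I’ve omitted the index creation, constraints, and parallel options that a production version would need:

rem     Sketch only - hypothetical names and predicate;
rem     indexes, constraints and parallelism omitted.

create table big_t_part
partition by range (id) (
        partition p_all values less than (maxvalue)
)
as
select  *
from    big_t
where   status = 'KEEP'         -- the half of the data to be retained
;

-- add the primary key and any indexes to big_t_part here,
-- then swap the segments:

alter table big_t_part
        exchange partition p_all with table big_t
;

After the exchange the original table name points at the segment holding only the retained rows, while the partition holds the old full data set, ready to be dropped at leisure.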

And this is where the blog note comes in. The OP started asking questions about the cost of creating the primary key. When you’ve got a table with 450M rows (half the original) of 111GB (half the original) you probably care about Oracle doing a full tablescan to add a NOT NULL constraint as step 1 of adding the primary key constraint. So you need to know how the primary key was created on the original table – was the column explicitly declared NOT NULL, or did its NOT NULL appear as a side effect of adding its primary key?

To be on the safe side, of course, you can write the CTAS to list the columns and explicitly include a NOT NULL on the primary key columns. Here’s one last fragment of my demo script:

create table t3(n1 not null, n2, v1, padding) 
as 
select * from t1
;

describe t3

 Name                          Null?    Type
 ----------------------------- -------- --------------------
 N1                            NOT NULL NUMBER
 N2                            NOT NULL NUMBER
 V1                                     VARCHAR2(40)
 PADDING                                VARCHAR2(100)

There are several other details that would need to be addressed (and tested) before applying this mechanism, e.g. (but probably incomplete):

  • any indexes you create on the partitioned table have to be local, and if that’s not possible you have to create them on the standalone table after the exchange;
  • since it’s 11.2.0.4 there may be problems if the original table has had columns marked as unused but not dropped (and you don’t want to drop a couple of columns from a 900M row table), or if there were virtual columns, or function-based indexes with their hidden columns – you might need to create the partitioned table empty, using the exact steps that got the original table to its current state, otherwise you could run into the error “ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION”
  • At this type of scale you probably want to be certain that all the operations take place at the maximum degree of parallelism that the machine (or your ration of the machine) can handle.
  • Other … give me a little time and I’ll probably think of something.
  • And here’s another – which appeared at comment 3 while I was typing in the update: CTAS loses any default declarations you’ve made for columns.

ORA-14097: Oracle addressed the problem of getting the right table definition in 12c with the “create table for exchange” option – but we’re taking advantage of an exchange “the wrong way round” so that wouldn’t help in this particular case.

Footnote

There are a couple of other posts on my blog that might also be relevant to the OP: one is about the effect of dropping a primary key constraint, the other about the cost of adding a primary key to a table.

June 1, 2021

Distributed Sequences

Filed under: distributed,Infrastructure,Oracle — Jonathan Lewis @ 5:55 pm BST Jun 1,2021

This is another detail to add to the note I wrote recently about the effects of adding a sequence number to a remote select when moving data from one database to another. It probably shouldn’t have much impact on performance unless it runs into code that does a lot of single row processing to handle a large batch of data.

We start with the database link, tables and sequence I created in the previous article (link will open in a separate tab/window for easy viewing), but this time I’m going to do nothing but a simple select from a single remote table – with, or without, a call to my local sequence.

alter system flush shared_pool;

select  /*+ id_only */
        id 
from    test_user.t1@&m_target
where   rownum < 4;

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate +remote -plan_hash'));

select  /*+ seq_only */
        s1.nextval 
from    test_user.t1@&m_target
where   rownum < 4;

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate +remote -plan_hash'));

select  /*+ both */
        id, v1, s1.nextval 
from    test_user.t1@&m_target
where   rownum < 4;

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate +remote -plan_hash'));


I’ve flushed the shared pool before running my three queries because I’m going to want to do a trivial search of the library cache to pick up some run-time information as easily as possible.

A key feature of this set of queries is the “rownum” predicate – and the surprise you get when sequences and remote queries don’t treat it nicely. (It’s also demonstrating a point about checking which bits of Oracle code run at what stage of a query.)

Here’s the execution plan output from the three queries:

select /*+ id_only */  id from test_user.t1@orclpdb@loopback where rownum < 4

NOTE: cannot fetch plan for SQL_ID: a3jkr0hqbyccp, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)



select  /*+ seq_only */  s1.nextval from test_user.t1@orclpdb@loopback where rownum < 4

-------------------------------------------------
| Id  | Operation        | Name | Inst   |IN-OUT|
-------------------------------------------------
|   0 | SELECT STATEMENT |      |        |      |
|   1 |  SEQUENCE        | S1   |        |      |
|*  2 |   COUNT STOPKEY  |      |        |      |
|   3 |    REMOTE        | T1   | ORCLP~ | R->S |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<4)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT 0 FROM "TEST_USER"."T1" "T1" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )



select /*+ both */  id, s1.nextval from test_user.t1@orclpdb@loopback where rownum < 4

-------------------------------------------------
| Id  | Operation        | Name | Inst   |IN-OUT|
-------------------------------------------------
|   0 | SELECT STATEMENT |      |        |      |
|   1 |  SEQUENCE        | S1   |        |      |
|*  2 |   COUNT STOPKEY  |      |        |      |
|   3 |    REMOTE        | T1   | ORCLP~ | R->S |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(ROWNUM<4)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "ID" FROM "TEST_USER"."T1" "T1" (accessing
       'ORCLPDB.LOCALDOMAIN@LOOPBACK' )

The first query (id only) runs entirely at the remote site, so the call to dbms_xplan.display_cursor() couldn’t find a plan for it in the local library cache, hence the Note.

The second query (sequence number only) has the sort of plan you’d expect – a fetch with stop key. Looking at the “Remote SQL Information”, though, you can see that the “rownum” predicate has not reached the remote site – you might wonder what effect this will have.

The final query (id and sequence number) seems to have an identical plan, again without passing a rownum predicate to the remote.

But let’s search for the remote SQL in the library cache – we know from experience that it will be sent in all capitals (with identifiers double quoted). Here’s a suitable query with its results:

column sql_text format a60


select
        sql_id, parse_calls, fetches, executions, rows_processed, sql_text
from    v$sql
where   sql_text like 'SELECT%T1%'
;


SQL_ID        PARSE_CALLS    FETCHES EXECUTIONS ROWS_PROCESSED SQL_TEXT
------------- ----------- ---------- ---------- -------------- ------------------------------------------------------------
71bq0j03wtsmz           2          0          0              0 SELECT /*+ FULL(P) +*/ * FROM "TEST_USER"."T1" P
1kzgwm8cj1t1x           1          2          1              3 SELECT 0 FROM "TEST_USER"."T1" "T1"
dkxd06sct6y76           1          2          1              3 SELECT "A1"."ID" FROM "TEST_USER"."T1" "A1" WHERE ROWNUM<4
c6cavnnps7kn8           1          2          1          10000 SELECT "ID" FROM "TEST_USER"."T1" "T1"


There are 4 statements that have been parsed at the remote site. I think the first statement above is probably sent from the local site to the remote site to allow semantic checking at the local site – there’s a little oddity in the timing of when the parse_calls column is incremented: the statement is actually sent for all three of my queries, but always seems to report one less parse than the number of statements I actually test with.

The other three statements are the different effects we get from the three statements I executed. You’ll notice that they’ve all done 2 Fetch calls – this is the standard “get the first row, get more rows” effect that you usually see from SQL*Plus. The interesting points lie in the differences between the texts sent and in the numbers of rows fetched.

  • Selecting the id only results in the (3rd) text: SELECT “A1″.”ID” FROM “TEST_USER”.”T1″ “A1” WHERE ROWNUM<4 which fetches exactly the 3 rows demanded by its rownum predicate.
  • Selecting the sequence only results in the (2nd) text: SELECT 0 FROM “TEST_USER”.”T1″ “T1” which has no rownum predicate, but manages to fetch only the 3 rows required.
  • Selecting both id and sequence results in the (4th) text: SELECT “ID” FROM “TEST_USER”.”T1″ “T1” which again has no rownum predicate, but in this case fetches all 10,000 rows from the table. This isn’t quite as bad as it appears at first glance: the data fetched is one SDU (session data unit), so Oracle isn’t deliberately shipping the entire table, and if we select id and v1 the rows_processed drops, in my case, to 3,879. It’s worth remembering, of course, that this could still be a lot of work at the remote server, and from 12.2 onwards the SDU has a maximum value of 2MB. (There’s a sketch of a possible damage-limitation rewrite below.)
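If the extra remote fetching matters, one variation that might be worth testing (a sketch only – I haven’t verified the outcome in this note) is to isolate the remote table in an inline view so that the rownum predicate belongs to the remote query block, calling the sequence only in the outer query block:

select  /*+ both_inline */
        v.id, v.v1, s1.nextval
from    (
        select  id, v1
        from    test_user.t1@&m_target
        where   rownum < 4
        )       v
;

A repeat of the query against v$sql at the remote site would tell you whether the rownum predicate had actually travelled with the remote SQL.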

tl;dr

The bottom line on this little note, though, is simply this: even in the very simplest cases, sequences make a difference to the way that Oracle handles remote/distributed queries – so look very closely at how you’re using them in these circumstances.

May 29, 2021

Partitioning existing data

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 10:35 am BST May 29,2021

I started drafting this note 3 years ago and never got round to finishing it, but I rediscovered it today while looking for something I had written about the “alter table … modify” command options that you can use to convert a non-partitioned table into a partitioned table, or change the partitioning of an already partitioned table (which may need extra preparatory work). I’ve left the opening comments and list of thoughts in place, but there are two particular points I want to examine in a little more detail because it’s possible that they may turn into relatively commonly occurring issues that will stop you using this mechanism.

So let’s go back three years.


When someone asks the question, as happened on the ODC (OTN) database forum a little while ago [ed: March 2018]: “How do I partition a 90GB table?” there’s really only one correct (though somewhat cruel) answer: “If you have to ask the question, you probably shouldn’t.”

There are so many possible side effects to partitioning that if you’ve managed to work out what column(s) to partition on, what partitioning strategy is appropriate, what partitioning maintenance operations you will have to handle, what problems you might have with referential integrity, and how you’re going to re-engineer the indexes then the question of how to do the physical re-arrangement is trivial by comparison. Let’s assume we have done all the other homework, though, and consider possibilities.

  • Would partitioning “from now on” be acceptable, or do you have to partition the past?
  • Is there down-time for the operation, or does it have to be done online?
  • Whether online or “offline” you need a lot of space somewhere to duplicate the data (and, possibly, the indexes)
  • Could you benefit from different storage attributes for different (e.g. aged) partitions: pctfree 0, basic compression?
  • Could you take advantage of new features as you partition (partial indexes, attribute clustering)?
  • Have you allowed time for re-creating indexes on the copy?
  • Will you need, or could you benefit from, changes to indexes – local / global / removing the partitioning column?
  • Do you need to allow time for re-validating referential integrity (both ways, perhaps)?
  • Do you have a library of scripts that could handle all the referential integrity, privileges, etc.?
  • Do you have a standby, and what are the implications for refreshing it or keeping it in synch?
  • Do you have any columns marked unused that you would like to see dropped?
  • Do you have any columns that were “fast add default” columns that could be cleaned up?

Today (May 2021) I had a reason to consider the last two points in that list and how well the options in “alter table XXX modify” for partitioning might handle them. I’ll take the unused columns first because their implementation is the more commonly known.

Unused Columns

When you set a column unused, it gets flagged and renamed in the data dictionary but any data stored in that column stays in place (invisibly) in the segment until you subsequently drop the column – and dropping a column is a very expensive thing to do, dropping multiple columns is even worse as Oracle drops each column for each row as a separate pair of undo and redo vectors. I wrote an article about this for Simpletalk a few years ago – the figures in the article came from an instance running 12.1.0.2 but the behaviour is the same in 19.11.0.0.

So marking a column unused is quick and cheap, dropping it is hideously slow and expensive. However it would be good to have a mechanism for getting rid of the column without the awful overheads, because if you don’t the day will come when someone, someday, will accidentally “drop column” when they really meant “set unused” – and then your production system will effectively stop until the whole operation is complete.
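For reference, the two commands in question – the checkpoint clause on the second is a damage-limitation option that checkpoints after every N rows processed, limiting the undo requirement:

alter table t1 set unused column padding;               -- quick: a data dictionary update only

alter table t1 drop unused columns checkpoint 10000;    -- slow: visits every row in the table

If the drop is interrupted after a checkpoint the table is left in a part-processed state and you have to finish the job with “alter table t1 drop columns continue”.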

This is where I thought the 12.2 option for converting a non-partitioned table into a partitioned table online, or even the simple “alter table XXX move online”, might help. These commands will copy every single row in the table from a source copy to a destination copy (dealing with all the indexes etc.) so they could (in principle) rewrite the rows without the unused columns and then fix up the data dictionary at the end of the move/modify.

They don’t.

They get a little way towards cleaning the table up: all the unused columns are set to null while the rows are copied, so you get the space back – and it’s more likely to be usable and safe to use than it would be with the “drop column” command – but the column is still there, with an obscure name in the data dictionary telling you when it was set unused (e.g. SYS_C00002_21052719:24:01$ was once the second column in the table, marked unused at 19:24:01 on 27th May 2021). So the table is still exposed to the risk of someone coming along and issuing a system-threatening “drop column”. And, of course, it’s still sitting there contributing to the 255 “real-column” limit where the row has to start splitting into multiple row-pieces.
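For reference, the move/modify commands I was testing were of this general shape – the partitioning details here are hypothetical:

alter table t1 move online;

alter table t1 modify
partition by range (id) interval (100000) (
        partition p0 values less than (100000)
)
online
;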

Fast Add Nullable (FAN) Default

What about tables where you’ve added a column with a default value? In 11g Oracle introduced a mechanism that stored the value in the data dictionary if the column was declared as not null, so that it didn’t have to update every row in the table. In 12c the mechanism was extended to include nullable columns by adding an extra column to the table to flag rows that were inserted, or had the default value updated, since the column was added. (Each bit in this extra column corresponds to one FAN column in the table.)

Again you might think that “alter table … move” and “alter table … modify” would be able to tidy things up by filling in the default value as they rewrite the table. And, just as we’ve seen with unused columns, they nearly do the job – but they still leave the spare (and now apparently redundant) mapping column in place, with all the infrastructure that would allow Oracle to work out whether or not a null meant null.

But there’s more to this than just a little bit of infrastructure, because the optimizer and run-time engine carry on doing all the work that they did when the mapping column was needed. Here’s a little code to demonstrate the point:


rem
rem     Script:         fast_default_add_col.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem             18.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
where
        rownum <= 1e3   -- > comment to avoid WordPress format issue
;

alter table t1 add (def_val varchar2(10) default 'XXX' /* not null */);

column data_default format a20

select
        column_name, column_id, segment_column_id, internal_column_id, avg_col_len, data_default
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_id
;

insert into t1(id, v1, padding, def_val) values(-1, 'x', 'x', 'xz');
insert into t1(id, v1, padding, def_val) values(-2, 'x', 'x', null);
update t1 set def_val = 'yy' where id = 1;
commit;

column sys_nc00004$ format a32

select distinct sys_nc00004$, count(*) 
from t1 
group by sys_nc00004$
;

I’ve created a table with 1,000 rows, added a (nullable) column with a default value, then inserted two rows and updated the new column of a pre-existing row. I’ve run a query to show that there’s a “spare” system generated column which is a real (stored in the segment) column. I’ve then reported the aggregate information about this system generated column. Here are the results of the two queries:

COLUMN_NAME           COLUMN_ID SEGMENT_COLUMN_ID INTERNAL_COLUMN_ID AVG_COL_LEN DATA_DEFAULT
-------------------- ---------- ----------------- ------------------ ----------- --------------------
ID                            1                 1                  1           4
V1                            2                 2                  2          11
PADDING                       3                 3                  3         101
DEF_VAL                       4                 5                  5           4 'XXX' /* not null */
SYS_NC00004$                                    4                  4           0


SYS_NC00004$                       COUNT(*)
-------------------------------- ----------
01                                        3
                                        999

If you cared to query t1 where sys_nc00004$ is not null you’d see that the three rows from the aggregate report were the two I’d inserted plus the one I’d updated.

So let’s run a couple of queries against the data and check their execution plans – first a simple aggregate query on the def_val column:


set serveroutput off

set feedback only
select def_val, count(*) from t1 group by def_val;
set feedback on
select * from table(dbms_xplan.display_cursor(format=>'+projection'));

Plan hash value: 136660032

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     5 (100)|          |
|   1 |  HASH GROUP BY     |      |     1 |     4 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  1000 |  4000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00004$",0)),NULL,NVL("DEF_VAL
       ",'XXX'),'0',NVL("DEF_VAL",'XXX'),'1',"DEF_VAL")[10], COUNT(*)[22]
   2 - DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00004$",0)),NULL,NVL("DEF_VAL
       ",'XXX'),'0',NVL("DEF_VAL",'XXX'),'1',"DEF_VAL")[10]

The Column Projection Information tells us that Oracle is not just looking at the def_val column to produce a result for this query. The optimizer has used the data dictionary information to construct a decode (not case!) expression that will be evaluated for every row. The expression examines the system generated column to check the bit relevant to the def_val column and uses it to decide whether to report the def_val column itself, or to report the default value from the data dictionary. (It is possible to get the same result using a decode with just two components, but I think the Oracle developer may have decided that the longer expression would be decoding “most likely first”.)

The same complicated expression appears as the projection for operation 1, but I don’t think this means Oracle is actually re-evaluating the expression – after all def_val isn’t being projected upwards from operation 2 so isn’t available for slotting into the expression.

Here’s another example that might be used – find all the rows where null was explicitly supplied:

set feedback only
select * from t1 where def_val is null;
set feedback on
select * from table(dbms_xplan.display_cursor(format=>'+projection'));

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     4 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |   116 |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00004$",0)),NULL,NVL("
              DEF_VAL",'XXX'),'0',NVL("DEF_VAL",'XXX'),'1',"DEF_VAL") IS NULL)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "T1"."ID"[NUMBER,22], "T1"."V1"[VARCHAR2,40],
       "T1"."PADDING"[VARCHAR2,100], DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00004$
       ",0)),NULL,NVL("DEF_VAL",'XXX'),'0',NVL("DEF_VAL",'XXX'),'1',"DEF_VAL")[
       10]

As you can see, the same decode() expression appears in both the Predicate Information and as a column projection. Again, this probably doesn’t mean it has to be evaluated twice per row but there’s clearly a CPU overhead for taking advantage of the fast add default mechanism – though it might be possible to find cases where this is offset by the improved data packing due to shorter rows.

Eventually you might decide to tidy things up a bit with a simple “alter table … move” (or, as for the rest of this note, “alter table … modify”) – so what changes if you do this?

The actual default value will get written into the row, but apart from that nothing changes – the mapping column stays exactly where it is, none of its bits are altered, and (possibly most importantly) the complex decode() expression doesn’t go away. It’s another case where there seems to be an opportunity for the code to do a more thorough job.
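Checking this for yourself is just a matter of re-running the earlier dictionary and aggregate queries after the rewrite – a minimal sketch:

alter table t1 move online;

select  column_name, segment_column_id, internal_column_id
from    user_tab_cols
where   table_name = 'T1'
order by
        internal_column_id
;

select  sys_nc00004$, count(*)
from    t1
group by
        sys_nc00004$
;

The mapping column (SYS_NC00004$ in my example) is still reported, and the bit values are unchanged.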

tl;dr

From 12c onwards, when “alter table … move” and “alter table … modify” are going to rewrite the entire table you might expect them to tidy up all the mess introduced by setting columns unused or introducing nullable columns with default values. On the plus side they do clean the data, but they don’t clean the metadata as thoroughly as they could do, and this does leave processing overheads continuing into the future.

Footnote

You will often hear the warning “The execution plans from Explain Plan may be telling lies”. The new mechanism for adding columns with default values gives it a lovely opportunity to do so. Here’s what explain plan does with the last query above:


explain plan for
select * from t1 where def_val is null;

select * from table(dbms_xplan.display(format=>'projection'));

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |   116 |     4   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |   116 |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("DEF_VAL" IS NULL)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "T1"."ID"[NUMBER,22], "T1"."V1"[VARCHAR2,40],
       "T1"."PADDING"[VARCHAR2,100], "DEF_VAL"[VARCHAR2,10]


May 22, 2021

255 columns

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 1:19 pm BST May 22,2021

Here’s a little note about column counts that I’ve not mentioned before now in the various articles I’ve written about the side effects of declaring tables with more than 255 columns. The count is about “real” columns and you should ignore virtual columns – but there are some real columns you might not notice unless you look carefully. Here’s a little quiz, starting with a table definition:

rem
rem     Script:         column_count.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem 

create type jpl_row as object (n1 number, v1 varchar2(10));
/

create table t1 (
        n1      number, 
        ot      jpl_row, 
        n2      number
)
;

How many “real” columns are there in table t1?

Let’s ask the data dictionary:

select  column_name, column_id, data_type
from    user_tab_columns
where   table_name = 'T1'
order by 
        column_id
/

COLUMN_NAME           COLUMN_ID DATA_TYPE
-------------------- ---------- -------------------------
N1                            1 NUMBER
OT                            2 JPL_ROW
N2                            3 NUMBER

3 rows selected.

It looks like three, just as we declared, except we’re looking at the wrong bit of the data dictionary:

select
        column_name, 
        column_id,
        segment_column_id,
        data_type,
        hidden_column,
        virtual_column
from
        user_tab_cols
where   table_name = 'T1'
order by 
        column_id
/

COLUMN_NAME           COLUMN_ID SEGMENT_COLUMN_ID DATA_TYPE                 HID VIR
-------------------- ---------- ----------------- ------------------------- --- ---
N1                            1                 1 NUMBER                    NO  NO
OT                            2                 2 JPL_ROW                   NO  NO
SYS_NC00003$                  2                 3 NUMBER                    YES NO
SYS_NC00004$                  2                 4 VARCHAR2                  YES NO
N2                            3                 5 NUMBER                    NO  NO

5 rows selected.

The correct answer is five, and they’re all “real” – check the segment_column_id. I was slightly surprised at this because I had been assuming that the two attributes of the jpl_row type would be real but hidden and the jpl_row type itself to be virtual.

Conclusion

When checking the number of columns in a table, make sure you look at the xxx_tab_cols view – not xxx_tab_columns – to get complete definitions of all the columns: real, virtual, and hidden.
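As a quick sanity check against the 255-column limit, counting only the columns that are actually stored in the segment is a one-liner (a sketch):

select  count(*)  real_column_count
from    user_tab_cols
where   table_name = 'T1'
and     segment_column_id is not null
;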

Lagniappe

How many “real” columns does the following statement add to the table definition:

alter table t1 add n3 number default 999;

Here’s the result of repeating my query against user_tab_cols:

COLUMN_NAME           COLUMN_ID SEGMENT_COLUMN_ID DATA_TYPE                 HID VIR
-------------------- ---------- ----------------- ------------------------- --- ---
N1                            1                 1 NUMBER                    NO  NO
SYS_NC00003$                  2                 3 NUMBER                    YES NO
OT                            2                 2 JPL_ROW                   NO  NO
SYS_NC00004$                  2                 4 VARCHAR2                  YES NO
N2                            3                 5 NUMBER                    NO  NO
N3                            4                 7 NUMBER                    NO  NO
SYS_NC00006$                                    6 RAW                       YES NO

7 rows selected.

The mechanics of Oracle’s “fast add” of a default column actually add two columns, one hidden, if the column being added is nullable. [See comment #1 below from Adam Leszczyński – you get just the one “extra” column no matter how many nullable columns with defaults you subsequently add to the table.]

April 13, 2021

CSV

Filed under: Infrastructure,Oracle,Problem Solving — Jonathan Lewis @ 3:00 pm BST Apr 13,2021

An old question with a little extra twist came up on the Oracle Developer forum recently – someone wanted to export a table into a CSV file but wanted to do a character set conversion at the same time because various “non-printable” characters were getting corrupted.

Looking at the code they had supplied they were running from SQL*Plus and were trying to use a select statement to generate a script that, for each row in a table, produced a statement of the form:

insert into tableX({list of columns}) values({list of values});

This worked reasonably well when they didn’t attempt to get a UTF8 output, but when they modified their client O/S environment to make SQL*Plus produce the desired UTF8 output the operation was much slower, increasing from about 1 minute to more than half an hour. (This may simply be the overhead of SQL*Net translating each line of output from the server character set to the client character set.)
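I don’t have the OP’s exact code, but the general shape of the approach is easy to sketch – the table and column names here are hypothetical:

set pagesize 0
set linesize 32767
set trimspool on
set feedback off

spool inserts.sql

select
        'insert into target_t(v1, n1) values ('''      ||
        replace(v1, '''', '''''')                      ||
        ''', '                                         ||
        to_char(n1)                                    ||
        ');'
from
        source_t
;

spool off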

Since the system in question was running on 12.2 I suggested a change of strategy so that they could at least take advantage of a built-in CSV generator to see how this behaved. Oracle extended the “set markup” command in 12c to include a CSV option:

set markup csv on delimiter {character} quote on

If this behaves nicely with the user’s attempt to get the output in their chosen characterset then a change of strategy – generating a “proper” CSV file and reloading it through an external table, rather than generating and executing a list of “insert ….” statements – might do a lot less work. (Unfortunately they had a requirement to generate the insert statements rather than adopting this bulk strategy – but I wasn’t going to let that stop me finishing this note.)

One detail to note with this markup option is that, unlike the “delimiter” character, the “quote” character cannot be specified and will always be the double-quote (ASCII character 34). If this character appears in any of the reported strings Oracle will “escape” it by duplicating it. Here’s a little example to demonstrate the effects – first a little data with a basic output (running 12.2.0.1):

rem
rem     Script:         markup.csv.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2020
rem

create table t1 (v1 varchar2(10), n1 number, n2 number, v2 varchar2(10));

insert into t1 values('abcdef',1,1,'qwer');
insert into t1 values('abc"def',2,null,'asdf');
insert into t1 values (null,3,3,null);

commit;

prompt  ==================
prompt  markup not yet set
prompt  ==================

select * from t1;


V1                 N1         N2 V2
---------- ---------- ---------- ----------
abcdef              1          1 qwer
abc"def             2            asdf
                    3          3

3 rows selected.

I’ve included one row with a double-quote in one of the columns, and a couple of rows with NULLs in various columns. Now I just switch markup to CSV using its default options:

prompt  ==================
prompt  Default CSV markup
prompt  ==================

set markup csv on
select * from t1;


"V1","N1","N2","V2"
"abcdef",1,1,"qwer"
"abc""def",2,,"asdf"
,3,3,

3 rows selected.


As you can see this has behaved nicely – every column is present (even when empty – judging by the number of delimiters), character strings have been quoted with the double-quote character, and the stored double-quote in the second row has been duplicated. The default option for CSV should be nearly perfect as input for an external table – though we probably want to get rid of column headings, page breaks, feedback, etc.

Before creating a suitable external table, let’s play around with a couple of variations for the options:

prompt  ===========
prompt  delimiter =
prompt  ===========

set heading off
set pagesize 0
set feedback off

set markup csv on delimiter =
select * from t1;

"abcdef"=1=1="qwer"
"abc""def"=2=="asdf"
=3=3=

prompt  ================================
prompt  quote off (delimiter remembered)
prompt  ================================

set markup csv on quote off
select * from t1;

abcdef=1=1=qwer
abc"def=2==asdf
=3=3=

Now to read back one of the outputs – I’ll pick the default output for this (but with headers etc. off to leave just three lines of data). Here’s how I can define my external table to read back the values from the file:

create or replace directory ext_tab as '/mnt/working/';
create or replace directory ext_log as '/mnt/working/';

create table ext(
        v1 varchar2(10), 
        n1 number, 
        n2 number, 
        v2 varchar2(10)
)
organization external(
        type oracle_loader
        default directory ext_tab
        access parameters (
                records delimited by newline 
                discardfile     ext_log:'exttab_dis.txt'
                badfile         ext_log:'exttab_bad.txt'
                logfile         ext_log:'exttab_log.txt'
        fields 
                csv 
                with embedded
        missing field values are null
        )
        location ('markup_csv.lst')
)
reject limit 10
;

select * from ext;

V1                 N1         N2 V2
---------- ---------- ---------- ----------
abcdef              1          1 qwer
abc"def             2            asdf
                    3          3

3 rows selected.

I’ve created a couple of directories to get started – linking a physical location to a logical name that I can use in the external table definition.

The key details that allow Oracle to read the file correctly appear in the “fields” specification: it’s a pity that the output from the default CSV markup doesn’t become the default expectation for the external table CSV option, but I needed to use “with embedded” to handle the duplicated “double-quote” characters, and then add “missing field values are null” to cope with a null column at the end of the line.

Summary

With a simple call to “set markup csv on” (and a handful of other set {formatting} commands) it’s very easy to turn a table into a flat CSV file that is comma-separated and (double-)quote-delimited.

It’s then very easy – but requires two non-default settings – to read this flat file back into an Oracle database using “fields CSV” as the basic formatting directive.

Footnote

I’ve kept the example very simple, but as soon as you start dealing with dates and timestamps – and even very large numbers – you’ll have to worry about formatting when generating the flat file (set nls_date_format, column n1 format 99999, etc.) to make sure you don’t lose precision on the output; and for dates and timestamps you’ll have to include the equivalent date_format options in the external table definition.
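A hedged sketch of the sort of precaution I mean – the column name d1 is hypothetical, and I haven’t tested this exact combination:

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

rem     and, in the external table access parameters, something like:
rem
rem     fields
rem             csv
rem             with embedded
rem     missing field values are null
rem     (
rem             d1 char(19) date_format date mask "yyyy-mm-dd hh24:mi:ss"
rem     )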

I’ve only demonstrated how to reload the file back into Oracle as an external table. If you wanted to use this markup option to prepare the data for an Excel spreadsheet all you have to do is spool to a file with the “.csv” extension and Excel will be able to open the output file and parse the data correctly into a multi-column display. (But see comment #1 below for a warning from Jason Bucata.)

February 1, 2021

data_default

Filed under: Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 4:50 pm GMT Feb 1,2021

Here’s a quirky little detail – probably totally irrelevant to virtually everyone – that came up in a question on the Oracle Developer Forum a couple of days ago. It concerns the way Oracle stores and records default values for columns, and it also does a hat-tip to the “commas at the start/end of the line” argument. Here’s a little script to create two identical tables:

create table t1 (
        t1 timestamp default '01-Jan-2021 12:00:00'
,       t2 timestamp default '01-Jan-2021 12:00:00'
,       t3 timestamp default '01-Jan-2021 12:00:00'
)
;


create table t2 (
        t1 timestamp default '01-Jan-2021 12:00:00',
        t2 timestamp default '01-Jan-2021 12:00:00',
        t3 timestamp default '01-Jan-2021 12:00:00')
;

Here’s a query to check that we’ve set the defaults correctly, followed by the output:

break on table_name skip 1
set linesize 180

spool default_length.lst

select
        table_name, column_name, default_length, data_default
from
        user_tab_columns
where
        table_name in ('T1', 'T2')
order by
        table_name,
        column_name
;

TABLE_NAME           COLUMN_NAME          DEFAULT_LENGTH DATA_DEFAULT
-------------------- -------------------- -------------- ----------------------------------
T1                   T1                               23 '01-Jan-2021 12:00:00'
                     T2                               23 '01-Jan-2021 12:00:00'
                     T3                               23 '01-Jan-2021 12:00:00'

T2                   T1                               22 '01-Jan-2021 12:00:00'
                     T2                               22 '01-Jan-2021 12:00:00'
                     T3                               22 '01-Jan-2021 12:00:00'

It would appear that we have the same default values set for the columns – but for table t1 the length of the default values is 23, while for table t2 it’s only 22. How strange – how do we investigate what’s going on?

A check of the view user_tab_columns tells us that data_default is a long column so we can’t dump() it, and we can’t substr() it. We could dump the relevant block from sys.col$, but rather than do that I’ll write a little PL/SQL block that reads the long into a PL/SQL varchar2() and outputs the last byte:

declare
        v1 varchar2(32);
begin
        for r in (
                select  table_name, column_name, default_length, data_default
                from    user_tab_columns
                where   table_name in ('T1','T2')
        ) loop
                v1 := r.data_default;
                dbms_output.put_line(
                        r.table_name || ' ' ||
                        r.column_name || ' ' ||
                        r.default_length || ' ' ||
                        ascii(substr(v1,r.default_length))
                );
        end loop;
end;
/

T1 T1 23 10
T1 T2 23 10
T1 T3 23 10
T2 T1 22 39
T2 T2 22 39
T2 T3 22 39

The last character of data_default for the t1 table (with length 23) is chr(10) – the line-feed, while the last character for the t2 table (with length 22) is chr(39) – the single-quote character.

The text stored in the data_default column is literally the text you supplied to Oracle (it’s not an expression that is stored and evaluated at table creation time); and the text that’s stored seems to be all the text that Oracle sees up to the point where a new token tells it to stop – in the case of t1 that’s the comma after the line-feed (if you’re running on Windows you might see the length as 24, since DOS uses “carriage return – line feed” compared to the UNIX line-feed only).

Here’s another variant, just to emphasise the point, showing another table declaration and the associated output from the PL/SQL:

create table t3 (
        t1 timestamp default '01-Jan-2021 12:00:00'    
,       t2 timestamp default '01-Jan-2021 12:00:00'     ,       t3 timestamp default '01-Jan-2021 12:00:00'
)
;


T3 T1 23 10
T3 T2 27 32
T3 T3 23 10

In this case there are 5 spaces between the declaration of column t2 and the comma that separates it from the declaration of column t3. As you can see, the default length is longer and the last stored byte is chr(32) – the space character.

Lagniappe

You could criticise me for not including a format string as part of my definition of the default value, so it should have been something like: to_timestamp('01-Jan-2021 12:00:00','dd-mon-yyyy hh24:mi:ss'). There is, however, one drawback to this – the expression is now 62 characters long (at least), which means the default value won’t be cached in the dictionary cache (v$rowcache) – and this might introduce a parsing overhead that you would prefer to avoid.
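A minimal sketch of the length point – the exact default_length reported is an assumption on my part (I make the expression 61 characters before any trailing bytes the parser keeps):

create table t4 (
        t1 timestamp default to_timestamp('01-Jan-2021 12:00:00','dd-mon-yyyy hh24:mi:ss')
);

select  column_name, default_length
from    user_tab_columns
where   table_name = 'T4'
;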

January 18, 2021

Supplemental Defect

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 5:22 pm GMT Jan 18,2021

Here’s an anomaly that appeared in a question on the ODC recently about tweaking the output of dbms_metadata.get_ddl(). As the title suggests, though, this posting isn’t about the dbms_metadata package, it’s about supplemental logging and something that should not be possible and may have unfortunate side effects.

We start with a little script that creates a table, inserts some data, adds a couple of constraints, and then introduces some supplemental logging requirements. As the script header indicates I’ve only tested this on 19.3.0.0:

rem
rem     Script:         supplemental_defect.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2021
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1 (
        n1      number,
        n2      number,
        n3      number,
        n4      number
);

insert into t1 (n1, n2, n3, n4) values (1,2,3,4);
commit;

alter table t1 add constraint t1_pk primary key(n1, n2)
/

alter table t1 add constraint t1_uk unique(n3) using index (
        create index t1_uk on t1(n3, n4)
)
/

alter table t1 add supplemental log data (primary key, unique) columns
/

alter table t1 add supplemental log group t1_g1 (n1, n2) always
/

alter table t1 add supplemental log group t1_g2 (n1, n2) always
/

There’s nothing particularly special or complex about this code, and every statement runs successfully. You might notice that I’ve created two identical supplemental log groups, of course, and that’s clearly a mistake and I’d say that Oracle should have raised an error when I tried to create group t1_g2.

In fact my script is slightly worse than you might think at first sight because (n1, n2) is the primary key of the table, and I’ve added supplemental logging on the primary key already, so even supplemental log group t1_g1 really ought to raise an error as well.

Side note: you’ll notice that I’ve got a single-column unique constraint protected by a non-unique multi-column index, and part of my supplemental log data specifies unique columns. If you check the syntax this option can be expressed as “unique” or “unique indexes” – which leaves me wondering whether Oracle would treat this as a command relating to n3 and n4, or just to n3.
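One way to find out – a hedged sketch, since I haven’t chased this up and the contents of the system-generated groups are an assumption – would be to query the log group views after creating the table:

select
        log_group_name, column_name, position, logging_property
from
        user_log_group_columns
where   table_name = 'T1'
order by
        log_group_name, position
;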

So far all I’ve done is create a table in a way that’s left me with a couple of questions – let’s see what we get when we use dbms_metadata.get_ddl() to generate code to recreate the table:

begin
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES',false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE', false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'TABLESPACE',false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'CONSTRAINTS',true);
end;
/

set long 20000
set heading off

select dbms_metadata.get_ddl('TABLE','T1') from dual;

================================================================

  CREATE TABLE "TEST_USER"."T1"
   (	"N1" NUMBER,
	"N2" NUMBER,
	"N3" NUMBER,
	"N4" NUMBER,
	 CONSTRAINT "T1_PK" PRIMARY KEY ("N1", "N2")
  USING INDEX  ENABLE,
	 CONSTRAINT "T1_UK" UNIQUE ("N3")
  USING INDEX  ENABLE,
	 SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS,
	 SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS,
	 SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS,
	 SUPPLEMENTAL LOG GROUP "T1_G2" ("N1", "N2") ALWAYS
   ) ;

The “create table” statement includes 4 supplemental log clauses – notice how my original “two-part” clause for the primary key and unique constraints has been split into two, with the keyword “index” being added to the latter.

You might note that if you try to execute this statement to recreate the table you’re going to run into an immediate problem – Oracle will create a unique single-column index to protect the unique constraint; it doesn’t know that the constraint should be protected by a two-column non-unique index.

But there’s another problem that will protect you from the indexing issue. Here’s what I get when I try to execute the statement:

	 SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS,
	                                *
ERROR at line 12:
ORA-02261: such unique or primary key already exists in the table

The funniest thing about this error comes when you look at the error message file ($ORACLE_HOME/rdbms/mesg/oraus.msg in my case), where we can find the “Cause” and “Action” that Oracle supplies for the error:

02261, 00000, "such unique or primary key already exists in the table"
// *Cause: Self-evident.
// *Action: Remove the extra key.

It’s not too difficult to guess why the error has appeared but “Self-evident” seems a little optimistic. In fact your first guess about the error may be wrong. Obviously I could simply delete the lines that create the t1_g1 and t1_g2 logs since they appear to be redundant copies of the (primary key) supplemental log – but what if I just delete the line that creates the (primary key) supplemental log? Oracle still raises the ORA-02261 error. It’s only when I delete the declaration of the primary key (and I can leave the declaration of the (primary key) supplemental log in place) that the error stops appearing and I can recreate the table … even if I’ve left the declarations of the two supplemental log groups on (n1, n2) in place.

On the plus side – although the export/import utilities expdp and impdp make use of the dbms_metadata package they don’t run into the same problem. After I’d exported, dropped and imported the t1 table with the sqlfile=xxx.sql option this is how impdp reported the code it had run to recreate the table (edited to remove the space management bits):

CREATE TABLE "TEST_USER"."T1" 
   (    "N1" NUMBER, 
        "N2" NUMBER, 
        "N3" NUMBER, 
        "N4" NUMBER
   ) 
;

ALTER TABLE "TEST_USER"."T1" ADD CONSTRAINT "T1_PK" PRIMARY KEY ("N1", "N2")
;
ALTER TABLE "TEST_USER"."T1" ADD CONSTRAINT "T1_UK" UNIQUE ("N3")
;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG GROUP "T1_G2" ("N1", "N2") ALWAYS;

The underlying code for expdp/impdp generates constraints separately from the table and then adds the constraints to the table – and it views supplemental logs as a variant on constraints (you can stop the supplemental log clauses appearing by using the dbms_metadata.set_transform_param() procedure to disable the generation of constraints) and adds them later.
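For reference, a minimal sketch of that call – it uses the same mechanism as the transform settings shown earlier:

begin
        dbms_metadata.set_transform_param(
                dbms_metadata.session_transform, 'CONSTRAINTS', false
        );
end;
/

select dbms_metadata.get_ddl('TABLE','T1') from dual;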

You might notice in passing that the index created by impdp to support the unique constraint is NOT the index originally specified. This is a very old problem – though the nature of the problem has varied with time – so watch out if you move data from a production system to a test system using export/import.

Summary

The internal code to handle supplemental logging allows you to create redundant supplemental log groups that will cause errors if you use dbms_metadata.get_ddl() in the most straightforward way to generate code to recreate the table.

The error message that appears if you haven’t allowed for the anomaly is a little counter-intuitive.

January 13, 2021

Check Constraints

Filed under: constraints,Oracle — Jonathan Lewis @ 3:17 pm GMT Jan 13,2021

This is a note I drafted in 2018 but never got around to publishing. It’s an odd quirk of behaviour that I discovered in 12.2.0.1 but I’ve just checked and it’s still present in 19.3.0.0.

Here’s a funny little thing that I found while checking some notes I had on adding constraints with minimum service interruption – a topic I last wrote about a couple of years ago [ed. now nearly 5 years ago]. This time around I did something a little different, and here’s a cut-n-paste from the first couple of steps of my test – before starting it I had used another session to delete a row without committing (table t1 is a table I created as select * from all_objects).

Note that the first SQL statement uses disable while the second uses enable:


SQL> alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) disable novalidate
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;

At this point my session was hanging – and I find it a little surprising that the attempt to create the constraint disabled returns an immediate ORA-00054, while the attempt to create it enabled waits. A quick check of v$lock showed that my session was requesting a TX enqueue in mode 4 (transaction, share mode), waiting for the other session to commit or rollback.
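Here’s a hedged sketch of the sort of query I would have used for that check (the SID values come from the output below):

select
        sid, type, id1, id2, lmode, request, ctime, block, con_id
from
        v$lock
where
        sid in (16, 237)
order by
        sid, type
;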

In the following output from 12.1.0.2 my session is SID 16 and I’ve simply reported all the rows for the two sessions from v$lock:


       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
        16 TX     327704      12790          0          4        169          0          0
           TX      65550       9613          6          0        169          0          0
           TM     192791          0          2          0        169          0          0
           OD     192791          0          4          0        169          0          0
           AE        133          0          4          0        579          0          0

       237 TX     327704      12790          6          0        466          1          0
           TM     192791          0          3          0        466          0          0
           AE        133          0          4          0        582          0          0

You’ll notice my session is holding an OD enqueue in mode 4 and a TM lock in mode 2 – the value 192791 is the object_id of the table in question. The OD lock is described in v$lock_type as “Lock to prevent concurrent online DDLs”.

It would appear, therefore, that we are stuck until the other session commits – so I hit ctrl-C to interrupt the wait, and then tried to add the constraint again, still without committing (or rolling back) the other session. Here’s the cut-n-paste from that sequence of events:

{hits ctrl-C}
alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

SQL> alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate;
alter table t1 add constraint c1 check(owner = upper(owner)) enable novalidate
                              *
ERROR at line 1:
ORA-02264: name already used by an existing constraint

I’ve interrupted the command and (according to Oracle) cancelled the current operation – but it then becomes apparent that I have successfully added the constraint anyway!

SQL> select constraint_name, constraint_type, search_condition from user_constraints where table_name = 'T1';

CONSTRAINT_NAME      C SEARCH_CONDITION
-------------------- - --------------------------------------------------------------------------------
SYS_C0018396         C "OWNER" IS NOT NULL
SYS_C0018397         C "OBJECT_NAME" IS NOT NULL
SYS_C0018398         C "OBJECT_ID" IS NOT NULL
SYS_C0018399         C "CREATED" IS NOT NULL
SYS_C0018400         C "LAST_DDL_TIME" IS NOT NULL
SYS_C0018401         C "NAMESPACE" IS NOT NULL
C1                   C owner = upper(owner)

And this is what happened when I switched to the other session – where I had still not committed or rolled back – and tried to execute an update:


SQL> update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1;
update t1 set owner = lower(owner) where owner = 'SYSTEM' and rownum = 1
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.C1) violated

So the constraint really is present and is visible to other sessions – even though the attempt to add it hung and had to be interrupted!

I can’t think of any reason why this might cause a problem in the real world – but it is an oddity that might have echoes in other cases where it matters.

Update (next day)

When I posted a link to this post on twitter one of the replies referred me to a very similar post by Oren Nakdimon with a very similar date to my 2018 draft, which makes me wonder if I had decided not to publish it because I’d done a Google search on the topic after I had written this note.

I’ve just done a search on ODC (formerly OTN) for possible references (date range around the end of 2018, with text “enable novalidate”) and found this question. So maybe we had seen the same question and done the same tests at the same time.

December 21, 2020

Why Why Why Why?

Filed under: Infrastructure,Oracle,Troubleshooting — Jonathan Lewis @ 2:24 pm GMT Dec 21,2020

Here’s a little puzzle – and if you don’t see the answer almost immediately you did need to read this note. The question comes from a thread on the Oracle Groundbreakers’ Forum –

“Why I am getting 0020 instead of 2020 for below query?”

select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual.

Instead of sysdate if I give date like ‘5-dec-2020’ it’s giving correct. Any ideas why iso value for sysdate year is coming as 0020?

There are many things that people do wrong with dates and many ways in which Oracle tries to help, so I’ll start with a general-purpose reminder: Oracle supplies a “date” datatype; if you want to store dates, compare dates, or do date arithmetic, make sure you are using the date datatype.

(Technically, of course, Oracle’s date datatype is actually a “date with time to nearest second” type, so there are a couple of details you need to remember to avoid running into surprises that are due to the presence of the time component – for example you might use a constraint like “check(date_col = trunc(date_col))” to enforce date-only values for a column.)
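A minimal sketch of that constraint in action (the table name is hypothetical):

create table d1 (
        date_col        date check (date_col = trunc(date_col))
);

insert into d1 values (trunc(sysdate));         -- succeeds
insert into d1 values (sysdate);                -- ORA-02290, except at the stroke of midnight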

Sysdate is a date, and the function to_date() expects its first parameter to be a character string; so Oracle implicitly converts sysdate to a character type with the to_char() function before it does anything else and it uses the session’s nls_date_format parameter to supply the formatting string. On my instance this parameter has the value ‘DD-MON-RR’ (an option created in an attempt to work around the “Y2K” problem – which some of you will remember).

So sysdate was converted (on the day the question was asked) to the character string ’10-DEC-20′, and when the to_date() function call tried to convert this back to a date type using the explicitly supplied format dd-mm-yyyy Oracle used leading zeros to pad the 2-digit year to a 4-digit year which made the year 0020 which is exactly what the OP saw.

Solution

To correct this code, take out the call to to_date().

SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
0020

SQL> select to_char(sysdate,'iyyy') from dual;

TO_C
----
2020

If you can’t correct the code then you might be able to work around the error by setting the nls_date_format to a more appropriate value. In fact the nls_date_format is one of those parameters that you probably ought to change from its default value the moment you set up your database. It’s just rather scary to do so if you’re running a system that has been around for a few years and may (accidentally) include some code that depends on the default setting to get the right answers or best execution plans.

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL>  alter session set nls_date_format='dd-Mon-rrrr';
SQL> select to_char(to_date(sysdate,'dd-mm-yyyy'),'iyyy') from dual;

TO_C
----
2020

SQL> alter session set nls_date_format='dd-Mon-yyyy';
SQL> select to_date('01-Dec-20','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-2020

SQL> select to_date('01-Dec-50','dd-mm-rrrr') from dual;

TO_DATE('01
-----------
01-Dec-1950

I’ve included a couple of examples using the ‘rrrr’ (or ‘RRRR’) format for the year component. Oracle introduced the ‘RR’ format over 20 years ago as a best-guess workaround to the Y2K problem. The two-character ‘RR’ format means values between 50 and 99 imply the previous century, and values between 00 and 49 imply the current century (as shown in the last two simpler examples). It’s an option that should have been deprecated within a few years of its introduction and desupported soon after, but the terrible inertia of the IT juggernaut means it’s still around. If you’re worried about the impact of changing your nls_date_format from ‘dd-mon-RR’ to ‘dd-Mon-yyyy’ you may feel a little safer switching to ‘dd-Mon-RRRR’ – whatever you do, though, you’ll almost certainly find examples where the code misbehaves because of the side-effects of the change in formatting.

One tiny detail you might have noticed in the original posting is that the user tested their code with the literal value ‘5-Dec-2020’, and got the result they wanted even though the format they had used to convert from character to date was ‘dd-mm-yyyy’. Oracle tries quite hard to cope with date conversions, as Connor McDonald pointed out several years ago.
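For example – a behaviour I’m fairly confident about because the fallback matching rules are documented:

select to_date('5-dec-2020','dd-mm-yyyy') from dual;

-- succeeds: when the month name fails to match 'mm' Oracle
-- tries 'mon' and 'month' before raising an error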

While we’re on the topic of conversion it’s worth revisiting my comment about the date type including a time component. I’ve often seen expressions like to_date(to_char(sysdate,'dd-mon-yyyy')) being used to ensure that a date that might include a time component is reduced to a “date-only” value (although that really means the time component is “00:00:00”). If you need only the date component it’s far better to use trunc(date_expression) rather than this double type-conversion; there’s probably not much saving in terms of simple CPU usage, but (a) you might as well take it and (b) you might be able to give the optimizer a chance of getting a better cardinality estimate and hence a better execution plan.
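A minimal illustration that the two approaches produce the same value:

select
        to_date(to_char(sysdate,'dd-mon-yyyy'),'dd-mon-yyyy')  double_conversion,
        trunc(sysdate)                                         simple_trunc
from
        dual
;

-- both return today's date with the time component set to 00:00:00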

Summary

  • sysdate is a date type, don’t try to use to_date() on it.
  • to_char(date_expression) will use the nls_date_format value if you haven’t supplied an explicit format string so you should always include an explicitly chosen format expression in the call.
  • The nls_date_format defaults to a value that is bad on two counts: it expects a 2-digit year and uses RR rather than YY. You should be using four digits for the year, and the only argument for using RRRR is if you are temporarily in transition from RR to YYYY.

To misquote Napoleon (the pig, Animal Farm): “4 Y’s good, 2 Y’s bad”. And it’s not going to change to “4 Y’s good, 2 Y’s better”.

December 9, 2020

Edition-based Redefinition

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 12:00 pm GMT Dec 9,2020

This note is a list of links to the presentations that Bryn Llewellyn gave to the Swedish Oracle Users Group in a full-day seminar on EBR in 2017. I’ve had it as a draft note on my blog for some time, but have just revisited it and thought that others might like to be reminded about it.

There are 8 videos ranging from about 40 to 60 minutes each, and the last link is for the full playlist.

November 30, 2020

In-row LOBs

Filed under: Infrastructure,LOBs,Oracle — Jonathan Lewis @ 11:27 am GMT Nov 30,2020

If you’re like me there are probably lots of little things that you know but find terribly easy to forget. Here’s one of my bêtes noires – starting with a little quiz:

rem
rem     Script:         lob_in_row.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 (
        v1      varchar2(20),
        b1      clob,
        s1      clob
)
lob(b1) store as basicfile  b_lob,
lob(s1) store as securefile s_lob
;

insert into t1 values(
        rpad('x',20,'x'),
        rpad('x',20,'x'),
        rpad('x',20,'x')
)
/

commit;

execute dbms_stats.gather_table_stats(user,'t1');

select  column_name, avg_col_len
from    user_tab_columns
where   table_name = 'T1'
/

select  avg_row_len
from    user_tables
where   table_name = 'T1'
/

column rel_file_no new_value m_file_no
column block_no    new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;

I’ve created a table with a varchar2() column, a basicfile CLOB, and a securefile CLOB, and inserted the same value into all three. By default the CLOB columns will be defined as ‘enable storage in row’ and since the inserted value is very short it will be stored “in row” for both CLOBs. The question is:

Which column takes the most space in the data block?

To answer this question we need only gather stats and query user_tab_columns and/or user_tables – except the results may be a little surprising, so my script also uses the dbms_rowid package to find the file and block number where the row has been inserted, flushes the buffer cache to make sure that we don’t get confused by older versions of the block (you probably shouldn’t run this script on a big busy system), and then dumps the block into the session trace file.

You’ll see why it’s necessary to look at the block dumps when I show you the results from gathering stats. The three sets of results below come from 11.2.0.4, 12.2.0.1, and 19.3.0.0 in that order. A key feature the databases have in common is that they were all created with a multi-byte character set:


11.2.0.4
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           123
S1                           117 

AVG_ROW_LEN
-----------
        261

12.2.0.1
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           171
S1                           165

AVG_ROW_LEN
-----------
        357

19.3.0.0
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                            77
S1                            71

AVG_ROW_LEN
-----------
        169

There are three things that the three versions agree on.

  • First the row length is the sum of the column lengths.
  • Secondly the average length of the varchar2() column is 21 – remember that the avg_col_len includes the length byte(s) when we collect stats with the dbms_stats package.
  • Finally that the difference between the basicfile and securefile LOBs is 6.

But are the actual values reported for the LOB lengths in any way truthful, or are we seeing reporting errors (that vary with version)?

That’s why we need to look at the block dump, and this is what we find as the interesting bit of the dump – which is basically the same in all three cases:


block_row_dump:
tab 0, row 0, @0x1eec
tl: 172 fb: --H-FL-- lb: 0x1  cc: 3

col  0: [20]  78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78

col  1: [76]
 00 54 00 01 02 0c 80 00 00 02 00 00 00 01 00 02 75 f6 ff fb 00 38 09 00 00
 00 00 00 00 28 00 00 00 00 00 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00
 78

col  2: [70]
 00 54 00 01 02 0c 80 80 00 02 00 00 00 01 00 02 75 f6 ff fc 00 32 48 90 00
 2c 00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78

end_of_block_dump

When Oracle 19c reported the avg_col_len (and maybe the same would be true of 18c) it was correctly reporting the space taken up in the row by the entries for the two LOBs (and their overheads) that I had inserted. The earlier versions of Oracle are doing something a little odd.

A key thing you’ll notice, of course, is that the varchar2() column is storing my input string at one byte per character but, because I used a multibyte characterset in all three databases, Oracle has stored the LOB equivalents of the string using a two-byte fixed-width characterset that has effectively doubled the storage (and then has to include the “LOB Locator” overhead – hence the 20 character string turning into 70+ bytes).
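A quick (hedged) check of the one-byte-per-character storage of the varchar2() column – remember that dbms_lob.getlength() reports characters, not bytes, for CLOBs, so the two-byte storage of the LOBs shows up only in the block dump:

select
        length(v1)              v1_chars,
        lengthb(v1)             v1_bytes,
        dbms_lob.getlength(b1)  b1_chars
from
        t1
;

-- expect v1_chars = v1_bytes = 20, and b1_chars = 20 (characters, not bytes)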

This highlights two important issues. First, trying to work out how to “tune” your table storage based on the avg_col_len / avg_row_len is a lot harder to do if you’ve got LOB columns in the table, and your method of estimating storage will have to change to suit the version of Oracle. Secondly, and much more importantly, if you’re thinking of changing a varchar2() column (such as “description” or “comments”) from a character data type to a CLOB to allow for a small number of cases where people want to supply longer bits of text, then if you’re using a multibyte character set there are two awful consequences:

  • the storage requirement for the column will more than double
  • the LOB will go out of line when the in-row storage requirement exceeds 4,000 bytes – which means when the original string content exceeds something between 1,938 and 1,985 characters, depending on your version of Oracle and whether you are using basicfile or securefile LOBs.  (And a “long” input string will exceed a single block in the LOB segment when it goes over roughly 4,000 characters.)

So if you want to handle “long comments” in multibyte charactersets you might want to consider making your code more complex so that up to 4,000 bytes you store the data as a varchar2(), and only use a CLOB column when the data goes over that limit.
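A hedged sketch of what that hybrid strategy might look like – all the names here are hypothetical, and the application (or a trigger) would have to decide which column to populate:

create table comments_t (
        id              number          primary key,
        short_comment   varchar2(4000),         -- populated when the text fits
        long_comment    clob                    -- populated only on overflow
);

create or replace view comments_v as
select
        id,
        case
                when short_comment is not null then to_clob(short_comment)
                else long_comment
        end     comment_text
from
        comments_t
;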

Footnote

If you’re wondering what all the overheads are for the in-row LOB you’ll find that the 11g block dump gives you some extra details (though the extra details disappeared from my 12c and 19c block dumps). Here’s the content that appeared immediately after the col 1 and col 2 data respectively in the 11g block dump.

LOB
Locator:
  Length:        84(76)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fb
  Flags[ 0x02 0x0c 0x80 0x00 ]:
    Type: CLOB
    Storage: BasicFile
    Enable Storage in Row
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  Inode:
    Size:     56
    Flag:     0x09 [ Valid DataInRow ]
    Future:   0x00 (should be '0x00')
    Blocks:   0
    Bytes:    40
    Version:  00000.0000000001
    Inline data[40]
Dump of memory from 0x00007FB4244BED8D to 0x00007FB4244BEDB5
7FB4244BED80                            00780001              [..x.]
7FB4244BED90 00780078 00780078 00780078 00780078  [x.x.x.x.x.x.x.x.]
        Repeat 1 times
7FB4244BEDB0 00780078 54004678                    [x.x.xF.T]



LOB
Locator:
  Length:        84(70)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fc
  Flags[ 0x02 0x0c 0x80 0x80 ]:
    Type: CLOB
    Storage: SecureFile
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  SecureFile Header:
    Length:   50
    Old Flag: 0x48 [ DataInRow SecureFile ]
    Flag 0:   0x90 [ INODE Valid ]
    Layers:
      Lengths Array: INODE:44
      INODE:
        00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78

Footnote 2

Oracle allows for an empty_clob() to populate a CLOB column with a zero length CLOB, and an empty_clob() is not the same as a null.

Many years ago, when I wrote “Practical Oracle 8i” I pointed out that if you were using a LOB column it would be better to use empty_[bc]lob() to represent an absence of data to avoid the need for checking the column in two different ways for “no data”.

There are two details to remember about this, though. First, an empty_clob() takes 36 bytes of storage for a basicfile and 30 bytes for a securefile (reported as 37/31 in 19c, 135/131 in 12cR2 and 103/97 in 11gR2). You may decide that’s an expensive way of saying “nothing to see here”.

Secondly, while PL/SQL will allow you to compare a PL/SQL clob variable with empty_clob() you cannot use a simple “column = empty_clob()” predicate in SQL and will need something like:

where length(s1) = 0
where dbms_lob.getlength(s1) = 0
where dbms_lob.compare(s1, empty_clob()) = 0

As a final closing comment about the oddity of avg_col_len and LOBs – my first test to check the lengths reported for an empty_clob() also inserted NULL into the v1 column. In this special case (all the columns in the row were empty_clob() or null) 11g reported the avg_col_len of s1 and b1 as zero!

November 12, 2020

rowsets

Filed under: Execution plans,Infrastructure,Oracle,Performance — Jonathan Lewis @ 12:35 pm GMT Nov 12,2020

Here’s a little demonstration of the benefit of rowsets. It started with a very simple question that arrived in my in-box from Kaley Crum.

  • “Why does the query with the UNION ALL take so much longer than the query without the UNION ALL?”

Here are the two queries – surely they should take virtually the same amount of time.

select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me);


select count(*)
from (select /*+ no_merge */
             object_id
      from drop_me
      --
      union all
      --
      select 1 from dual);

Look closely – the difference between the two queries is just the addition through a UNION ALL of “select 1 from dual”. The first query took about 3.5 seconds to complete (there were 70M rows in the table), and the second took nearly 69 seconds.

Here are the execution plans showing the rowsource execution statistics (and Query Block // Object Alias information) – first the baseline query:

SQL_ID  0ph1vfuuxkbqb, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me)
 
Plan hash value: 3609429292
 
--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        | 26465 (100)|          |      1 |00:00:03.34 |   95896 |  95891 |
|   1 |  SORT AGGREGATE     |         |      1 |      1 |            |          |      1 |00:00:03.34 |   95896 |  95891 |
|   2 |   VIEW              |         |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.32 |   95896 |  95891 |
|   3 |    TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:03.27 |   95896 |  95891 |
--------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$2 / from$_subquery$_001@SEL$1
   3 - SEL$2 / DROP_ME@SEL$2

And then the UNION ALL query:

SQL_ID  0chdajr28y0ub, child number 0
-------------------------------------
select count(*) from (select /*+ no_merge */              object_id    
  from drop_me       --       union all       --       select 1 from
dual)
 
Plan hash value: 3408528233
 
---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name    | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |         |      1 |        | 26467 (100)|          |      1 |00:01:18.58 |   95896 |  95891 |
|   1 |  SORT AGGREGATE      |         |      1 |      1 |            |          |      1 |00:01:18.58 |   95896 |  95891 |
|   2 |   VIEW               |         |      1 |     70M| 26467   (2)| 00:00:02 |     70M|00:01:10.84 |   95896 |  95891 |
|   3 |    UNION-ALL         |         |      1 |        |            |          |     70M|00:00:53.13 |   95896 |  95891 |
|   4 |     TABLE ACCESS FULL| DROP_ME |      1 |     70M| 26465   (2)| 00:00:02 |     70M|00:00:19.28 |   95896 |  95891 |
|   5 |     FAST DUAL        |         |      1 |      1 |     2   (0)| 00:00:01 |      1 |00:00:00.01 |       0 |      0 |
---------------------------------------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SET$1 / from$_subquery$_001@SEL$1
   3 - SET$1
   4 - SEL$2 / DROP_ME@SEL$2
   5 - SEL$3 / DUAL@SEL$3

This is not a trick question – I’ve got the SQL to create the data set and run the test; and I’ve run the test through 19.3.0.0 and 12.2.0.1 with results very similar to the above. Here’s a slightly modified version of the script – you’ll notice the column name has changed because I’ve used my usual “large data set” generator rather than scaling up a clone of all_objects:

rem
rem     Script:         rowset_kaley.sql
rem     Author:         Kaley Crum / JP Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem

create table drop_me
pctfree 0
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum  n1
from
        generator
cross join
        generator
where rownum <= 7e7
;

alter session set statistics_level = all;

set timing on
set serveroutput off

prompt  ===========================================
prompt  Baseline: Simple query without a union all.
prompt  Runs in 3-4 seconds
prompt  ===========================================

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

prompt  ==========================================
prompt  Add a UNION ALL for one record from dual.
prompt  Runs in over a minute
prompt  ==========================================

pause Press return

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me
      --
      union all
      --
      select 1 from dual);

select *
from table(dbms_xplan.display_cursor(format => 'allstats last advanced'));

There are two factors involved in the massive variation in performance. The first factor is one that the test case will report (though I didn’t show it in the output above); the second is that the test case has enabled rowsource execution statistics.

Here’s the big clue – from the Column Projection Information, which is one of the things that appears with the “advanced” format option (or when you add the “projection” format option) in the call to dbms_xplan.display_xxx().

First for the fast query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]
   2 - (rowset=1019)
   3 - (rowset=1019)

The rowsource passes 1,019 rows at a time from the tablescan operation to the view operation and from the view operation to the sort operation, for a total of roughly 70,000 calls for each of those two steps of the plan.

Compare this to the projection information for the slow UNION ALL query:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]

I haven’t missed a bit in copying – this really is it. The rows move up the stack one at a time, not in a rowset array. That’s 70,000,000 subroutine calls for each of the two steps of the plan.

I’m sure most of us have heard the mantra “row by row is slow by slow” (or some variant on the theme). This is true all the way down to the internal levels of an execution plan.

Of course 70M calls vs. 70K calls shouldn’t really explain a difference of more than 60 seconds – but when the calls to the O/S for timing information for rowsource execution stats are repeated for each call (or even if it’s only a 1% sample of the calls) down the stack that’s where a lot of time can disappear.

On my laptop, running Oracle 19.3 in a VM, this is how my timing went:

  • Fast query: 1.94 seconds, dropping to 1.79 seconds when I disabled rowsource execution stats.
  • Slow query: 65.91 seconds, dropping to a far less astounding 3.26 seconds when I disabled rowsource execution stats.

So there really is a noticeable difference between row-by-row and array-processing but most of the difference in the original test came from using rowsource execution statistics to measure how much of a difference there would be.

It’s also worth mentioning that this is probably the most extreme case you could produce to show the difference – using the largest possible rowset size with the smallest possible rows when you want to maximise the gap between (internal) array processing and single row processing – and then processing a very large number of rows. [But isn’t processing ridiculously large numbers of rows what you do with Exadata?]

If you want further evidence that the difference is due to the rowset size you can always alter session set “_rowsets_enabled”=false; and watch the fast query slow down. It will take about half the time of the slow query as it only has to pass 70M rows up one step of the plan rather than the two steps that are in the UNION ALL plan.
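For reference, a minimal sketch of that test, reusing the baseline query from the script above (hidden parameters should only be fiddled with on test systems):

alter session set "_rowsets_enabled" = false;

select
         count(*)
from (select /*+ no_merge */
             n1
      from drop_me);

-- expectation (from the timings above): the elapsed time roughly doubles,
-- landing at about half the UNION ALL query's time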

In theory it looks as if you could also restrict the size of the rowset by setting _rowsets_max_rows or _rowsets_target_maxsize, but since the default value for the former is 256 the results above suggest that the settings might be ignored, and when I tried adjusting them at the session level nothing changed.

Another test you could run is to adjust (alter session) the parameter “_rowsource_statistics_sampfreq”. When I set this to 100 the time for the slow query dropped to about 21 seconds (and the fast query – with rowsets disabled – dropped to about 11 seconds).

Footnote

It’s always easy to hit an accidental special case without realising it, so when you’re testing something it’s important to think about what’s special in the example. I came up with two possibilities in this case – dual is always going to be a very special case in any circumstances, and I’ve got a table in my query that is guaranteed to return exactly one row. So I repeated the test with:

  • a real table with one row and primary key in case the reference to dual was disabling rowsets
  • a real table with 100 rows in case this enabled a rowset size of 100

In neither case did Oracle start using array processing.

Footnote 2

One of the patterns I see occasionally follows the form of the slow query above. Some front-end tool executes a select statement that starts with a select {list of desired column headings} from dual so that the front-end code doesn’t have to be programmed to create headings as it receives the data.
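A hedged sketch of the pattern I mean – the table and column names are hypothetical:

select  'EMPLOYEE_NAME', 'SALARY'
from    dual
union all
select  ename, to_char(sal)
from    emp
;

-- the first branch supplies the "headings" row, so the front-end just
-- prints every row it receives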

Although the impact this would have on rowsets looks like a large percentage of the work done in the database in this case, in more realistic cases it would probably be irrelevant, until someone starts testing with the statistics_level set to all, or _rowsource_execution_statistics explicitly enabled.

There is a related effect, though, from the SQL Monitor feature (which activates automatically for a statement if it’s executing in parallel, or if it’s predicted to take more than 5 seconds to complete). When I added the /*+ monitor */ hint to the two queries (and didn’t fiddle with the rowsets parameter) the fast query averaged 2.26 seconds instead of 1.79 seconds, and the slow query averaged 3.51 seconds instead of 3.26 seconds. I have to say, though, that the variation between executions of the same statement was quite significant compared with the variation produced by adding or eliminating the hint.
