Oracle Scratchpad

December 14, 2006

Constraints, Inserts and bind

Filed under: Infrastructure,Oracle,Performance,Troubleshooting,Tuning — Jonathan Lewis @ 10:25 pm GMT Dec 14,2006

Everyone knows that bind variables are an important aid to achieving scalability in OLTP systems. Here’s an example to show how important they are with insert statements – particularly if you believe in using constraints to enforce data correctness.


rem
rem     Script:         constraints.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2006
rem
rem     Last tested
rem             12.1.0.2  [Dec 2017]
rem             10.2.0.1
rem              9.2.0.6
rem

create table t1 (
        n1      number          check (n1 > 0),
        v1      varchar2(10)    check (v1 = upper(v1))
)
;       

begin
	for i in 1..1000 loop
		execute immediate
			'insert into t1 values(' || i || ',''XXX'')';
	end loop;
end;
/

The pl/sql loop is going to produce 1,000 statements like: insert into t1 values(1,'XXX');

When this has run, take a look in v$sql for a statement looking like: select condition from cdef$ where rowid=:1; you’ll find that it has been run 2,000 times. That’s twice for each statement you executed – once for each constraint.
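
A minimal sketch of a query to confirm this for yourself (you’ll need access to v$sql; the exact figures will vary with whatever else has been parsed on the system, and the stored text of the recursive statement may contain embedded newlines, hence the loose pattern):

select  sql_text, parse_calls, executions
from    v$sql
where   sql_text like 'select condition%cdef$%'
;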


select condition
from
 cdef$ where rowid=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     2000      0.01       0.00          0          0          0           0
Execute   2000      0.00       0.00          0          0          0           0
Fetch     2000      0.00       0.00          0       4000          0        2000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     6000      0.01       0.00          0       4000          0        2000

Oracle doesn’t keep LONG columns cached in the dictionary cache (rowcache), so every time it optimises a new statement that uses a check constraint (and the condition column of cdef$ is a LONG column) it has to re-read the constraint definition from the database – just as it does with the view definition when you optimise a statement that uses a view.
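
If you want to confirm the datatype, a minimal sketch against the dictionary (run as a suitably privileged user) is the following; on the versions listed above this should report the data_type as LONG:

select  column_name, data_type
from    dba_tab_columns
where   owner = 'SYS'
and     table_name = 'CDEF$'
and     column_name = 'CONDITION'
;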

So if you have lots of constraints in place, and do lots of separate insert statements – make sure you use bind variables.
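
For comparison, here’s a minimal sketch of the loop rewritten with bind variables – a single hard parse means the recursive cdef$ query should run just twice in total (once per constraint) instead of 2,000 times:

begin
        for i in 1..1000 loop
                execute immediate
                        'insert into t1 values(:b1, :b2)'
                        using i, 'XXX';
        end loop;
end;
/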

Update Dec 2017:

Following an incoming comment I rediscovered this observation about the “unexpected” impact of constraints on inserts, which I hadn’t thought about for more than 10 years; so I re-ran the test to check whether it was still true – it is.

Update 2 [same day]

After tweeting a link to this note, Mohamed Houri followed up with a reference to a similar problem that appears with column groups in 12cR1:

“A similar parsing issue occurred in 12cR1 due to bug 20907061 (HIGH # OF EXECUTIONS FOR RECURSIVE CALL ON COL$) linked with column group extension having default value > 32 bytes”

47r1y8yn34jmj  select default$ from col$ where rowid=:1
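
For background, a column group extension of the kind the bug describes is created through dbms_stats – a minimal sketch, borrowing the t1 table from the test above purely for illustration:

select dbms_stats.create_extended_stats(user, 'T1', '(N1,V1)') from dual;

The extension appears as a hidden virtual column whose defining expression is stored in the default$ column of col$, which is why that recursive statement shows up.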

A quick search on MoS (formerly Metalink) suggests that a patch for this issue is also needed on 11.2.0.4, and my search also found: Bug 26818146 : HIGHER EXECUTIONS FOR RECURSIVE CALL ON COL$ EVEN AFTER PATCHING 20907061.

At the same time the search reminded me that “as of SCN” queries parse on each call – which means they could see the same issue even if the text of the query doesn’t change! (See Bug 23189809 : AS OF SCN IS SLOWER AFTER UPGRADE TO 12C, which is marked as not a bug but references base bug 20907061.)
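
For illustration, the statements affected are flashback queries of this shape (just a sketch – the bind variable names are mine); under the bug, every execution optimises afresh even though the text never changes:

select  n1, v1
from    t1 as of scn :b_scn
where   n1 = :b_n1
;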

 

13 Comments »

  1. So does that imply that the parsed version of the SQL statement has to keep the information about all the constraints that that statement could possibly trip – if it doesn’t look at cdef$ again each time it’s run?

    For a table with a lot of constraints, couldn’t that get big?

    Comment by Jason Bucata — December 14, 2006 @ 11:03 pm GMT Dec 14,2006 | Reply

  2. Jason, Good question.

    create table t1 (n1 number check (n1 = 1), ...
    

    for 255 columns.

    insert /*+ tagged */ into t1 values (1,1,1...)
    

    for 255 columns.
    Track down the SQL statement in v$db_object_cache to check the shared memory it uses:

    select  sharable_mem, name 
    from    v$db_object_cache 
    where   name like '%tagged%'
    and     name not like '%db_object_cache%'
    ;
    

    A test with 255 constraints in place used 170KB of memory; repeated without the constraints the demand was 80KB.

    Worth knowing about the extra memory requirements – on the other hand I’d rather use the memory than allow the data to be wrong.

    Comment by Jonathan Lewis — December 15, 2006 @ 7:37 am GMT Dec 15,2006 | Reply

  3. support!

    Comment by globelovers — December 15, 2006 @ 6:00 pm GMT Dec 15,2006 | Reply

  4. Jonathan,

    A small confusion in your example: you define column v1 with datatype varchar2, but you say,

    >>Oracle doesn’t keep long columns cached in the dictionary cache (rowcache)

    Is this the same for columns with datatypes other than LONG?

    Jaffar

    Comment by Syed Jaffar Hussain — December 15, 2006 @ 6:22 pm GMT Dec 15,2006 | Reply

  5. Jaffar, the LONG column in question is the condition column of cdef$. It has nothing to do with the type of the column in the user’s table.

    Comment by Jonathan Lewis — December 15, 2006 @ 6:30 pm GMT Dec 15,2006 | Reply

  6. Thanks for clarification. I got it.

    Jaffar

    Comment by Syed Jaffar Hussain — December 15, 2006 @ 7:01 pm GMT Dec 15,2006 | Reply

  7. I generally agree that it’s a worthwhile use of memory. My current job has me thinking a lot about the shared pool, since we’ve got a lot of shared pool pressure, so that’s just something I’ve started glomming onto…

    Comment by Jason Bucata — December 18, 2006 @ 3:42 pm GMT Dec 18,2006 | Reply

  8. I tried a similar test with SELECT and saw the same thing going on with cdef$. I can understand needing the constraint during an insert, but why does Oracle need it during a query? I’m running 10.2.0.3 on Solaris SPARC.
    declare
            n1 number;
            v1 varchar2(10);
            v_cmd varchar2(256);
    begin
            for i in 1..1000 loop
                    v_cmd := 'select n1, v1 from t1 where n1 = ' || i;
                    execute immediate v_cmd into n1, v1;
            end loop;
    end;
    /

    Comment by Robert Winter — June 28, 2007 @ 2:30 am BST Jun 28,2007 | Reply

  9. Robert, since 9i the optimizer has been able to combine existing predicates with constraints to generate new predicates. I suspect you have a constraint on column n1 that is being read each time you optimise a new query. The scale of activity is much less for queries, though, as it only applies to the columns in the where clause, not to every column in the table.
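
    As a quick sketch of the mechanism (assuming the t1 from the post, with its check constraint n1 > 0), in recent versions the optimizer can combine the constraint with a contradictory predicate and short-circuit the query:

    select v1 from t1 where n1 = 0;

    Check the execution plan and you may see a “NULL IS NOT NULL” filter, with the table access eliminated entirely.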

    Comment by Jonathan Lewis — June 28, 2007 @ 10:06 am BST Jun 28,2007 | Reply

  10. […] As usual my master comes to the rescue with this note – Constraints, Inserts and bind […]

    Pingback by latch: row cache objects « Oracle — January 25, 2012 @ 10:07 pm GMT Jan 25,2012 | Reply

  11. […] How Hot Blocks in Oracle Cause Cache Buffers Chains Latches Contention Constraints, Inserts and bind […]

    Pingback by Hello! – LinSeoDam's House — December 29, 2017 @ 4:17 am GMT Dec 29,2017 | Reply

  12. […] on from a Twitter reference and an update to an old posting about a side effect of  constraints on the work done inserting data, I decided to have a closer […]

    Pingback by Defaults | Oracle Scratchpad — January 2, 2018 @ 8:43 am GMT Jan 2,2018 | Reply

  13. […] Lewis’ update on his post “Constraints, Inserts and bind” prompted another thought: is there a difference between a check constraint (IS NOT NULL) and a NOT […]

    Pingback by Parse overhead: NOT NULL vs CHECK(xyz IS NOT NULL) | Spot on Oracle — January 27, 2018 @ 3:06 pm GMT Jan 27,2018 | Reply

