Oracle Scratchpad

January 5, 2007

Bind Variables

Filed under: Performance,Statspack,trace files,Troubleshooting — Jonathan Lewis @ 9:50 pm GMT Jan 5,2007

I have made a few comments in previous articles about the use of bind variables and some of the peripheral details that can introduce surprises; and in the article on superfluous updates I made a throwaway comment about getting multiple child cursors for a single statement if you had columns of varchar2() or nvarchar2() defined to be longer than 32 bytes. It’s worth expanding on this point.

Here’s an extract from a 15-minute statspack report that I had to examine recently. It was from a system running Oracle 10.2.0.2, using snapshots at level 5, and the extract shows just the top entry from the “Version Count” section of the report:

 Version
   Count  Executions   Old Hash Value
-------- ------------ ---------------
     103       13,726    186168523
UPDATE tableX SET col1 = :1, col2 = :2, ...
col21 = :21 ...  where rowid = :61

Obviously I have changed the table and column names – essentially the statement was a simple update by rowid of about 60 columns in a single table.

Further checks on v$sql, v$librarycache, the data definitions and so on showed that the high version count (number of child cursors) was not related to name resolution issues, fine grained access control, cursor invalidation, or any of the other common structural or coding reasons for multiple child cursors. So where were the versions coming from ? The data itself.
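
If you want to run the same sort of check yourself, here's a sketch of a starting point, re-using the old_hash_value from the report above (the flag columns of v$sql_shared_cursor vary from version to version, so treat the column list as illustrative):

select  sql_text, version_count, executions
from    v$sqlarea
where   old_hash_value = 186168523
;

select  sc.child_number,
        sc.bind_mismatch,
        sc.optimizer_mismatch,
        sc.auth_check_mismatch,
        sc.translation_mismatch
from    v$sqlarea           sa,
        v$sql_shared_cursor sc
where   sa.old_hash_value = 186168523
and     sc.address        = sa.address
;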

There were 18 columns in the table declared as varchar2(4000), and the users’ inputs could be highly variable, so any one of these columns might actually be null, or hold data of any length up to the full 4,000 characters (although sizes in excess of 500 bytes were rare).

The problem is this – part of the identifying characteristic for a child cursor is the memory allocation for the input bind variables and, at some layer in the library code, character columns are allowed to have 4 different allocation sizes, namely 32, 128, 2000, or 4000 bytes. [Update Aug 2017 – actually 5, I forgot the possibility of zero lengths: see comment 36]

So, when the front-end code was calling this statement, part of the definition for the incoming bind variables was set by the actual values supplied by the end user. Since there were 18 columns declared as varchar2(4000), and each of these columns could (in principle) be declared with one of 4 different bind allocation sizes, then there could have been (in principle) power(4,18) – roughly 64 billion – different child cursors needed to cope with all possible variations in the data supplied.  Luckily we only got a few hundred in the worst possible cases.
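
If you want to picture that rounding, here's a trivial query that mimics the rule as I've just described it (my own sketch, not an official formula):

select
        len,
        case
                when len <=   32 then   32
                when len <=  128 then  128
                when len <= 2000 then 2000
                else                  4000
        end     bind_allocation
from    (
        select    3 len from dual union all
        select   33     from dual union all
        select  129     from dual union all
        select 2001     from dual
        )
order by
        len
;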

Does it matter if you get lots of (valid) versions of a single cursor ? Yes, it does, because all the versions are covered by the same library cache latch; and if you have to spend time checking which version is the right one for your specific data set, then you will be holding the latch and blocking other users for longer and longer periods as the version count grows. Indeed, one of the most significant performance problems on this system was latch contention for the library cache latches. (As I write this, it’s just occurred to me that this might also be a limiting issue with the new mutex strategy for 10gR2 – it’s all very well using mutexes to protect a child cursor, but if your session has to pick the right child cursor out of several dozen on the same latch, will there be any side effects?)
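
If you want to check whether the library cache latches are suffering on your own system, a first-pass check might look like the following (just a sketch, and remember the figures are cumulative since instance startup, so you really want the change over a short interval):

select  name, gets, misses, sleeps
from    v$latch
where   name = 'library cache'
;

select  child#, gets, misses, sleeps
from    v$latch_children
where   name = 'library cache'
order by
        sleeps desc
;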

Our solution: in this case the main issue was the “one statement fits all updates” strategy. By recoding to have several different statements (and this is not always possible, of course) we ended up spreading the updates across many different library cache latches, and that reduced the contention. We also included logic that minimised the total number of versions (i.e. the sum of all versions of all statements) because we split the 18 columns up in several different ways.

If you want to demonstrate the effect here’s a simple test for Oracle 9i (it doesn’t give the same results on 10gR2, possibly because of a change in some layer of the SQL*Plus code).

rem
rem     Script: bind_issue_2.sql
rem     Dated:  Dec 2006
rem     Author: J.P.Lewis
rem

drop table t1;
create table t1 (v1 varchar2(4000), n1 nvarchar2(2000));               

alter session set events '10046 trace name context forever, level 4';               

variable b1 varchar2(5);
variable b2 nvarchar2(5);
exec :b1 := 'abc'; :b2 := 'abc';
insert into t1 values (:b1, :b2);               

variable b1 varchar2(18);
variable b2 nvarchar2(18);
exec :b1 := 'abc'; :b2 := 'abc';
insert into t1 values (:b1, :b2);               

variable b1 varchar2(33);
variable b2 nvarchar2(33);
exec :b1 := 'abc'; :b2 := 'abc';
insert into t1 values (:b1, :b2);               

variable b1 varchar2(129);
variable b2 nvarchar2(129);
exec :b1 := 'abc'; :b2 := 'abc';
insert into t1 values (:b1, :b2);               

alter session set events '10046 trace name context off';               

I’ve included an nvarchar2() column in the example simply to demonstrate that the effect depends on byte count, not character count (on my system, nvarchar2() uses a two-byte fixed length format).
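
If you want to check the corresponding details on your own system, a couple of simple queries will do it (a sketch only, to be run after the script above has populated t1):

select  value
from    nls_database_parameters
where   parameter = 'NLS_NCHAR_CHARACTERSET'
;

select  length(n1) char_count, lengthb(n1) byte_count
from    t1
where   rownum = 1
;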

In the trace file, you will find sections like the following, which is the section relating to the insert where the two bind variables were declared as 18 characters long:

PARSING IN CURSOR #3 len=32 dep=0 uid=62 oct=2 lid=62 tim=1121585657476 hv=340431440 ad='6fd220d0'
insert into t1 values (:b1, :b2)
END OF STMT
PARSE #3:c=0,e=83,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1121585657468
BINDS #3:
 bind 0: dty=1 mxl=32(18) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=160 offset=0
   bfp=0344fb30 bln=32 avl=03 flg=05
   value="abc"
 bind 1: dty=1 mxl=128(36) mal=00 scl=00 pre=00 oacflg=03 oacfl2=10 size=0 offset=32
   bfp=0344fb50 bln=128 avl=06 flg=01
   value=""
EXEC #3:c=0,e=1125,p=0,cr=1,cu=1,mis=1,r=1,dep=0,og=1,tim=1121585659132

Note the declarations of maximum bind length followed by the actual byte count mxl=32(18) for the varchar2 and mxl=128(36) for my nvarchar2.

Note also the report of a miss in the library cache mis=1 that appears in the EXEC line that follows the BINDS section.

Each variation of the variable sizing in the test case will introduce another library cache miss – which means another child cursor.
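
You don't have to read the trace file to confirm the count, of course; something like the following will show the child cursors building up (a sketch, and in 10g you could also look at the bind_mismatch column of v$sql_shared_cursor):

select  sql_text, version_count, executions
from    v$sqlarea
where   sql_text like 'insert into t1 values%'
;

select  child_number, executions
from    v$sql
where   sql_text like 'insert into t1 values%'
order by
        child_number
;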

38 Comments

  1. Jonathan,

    I didn’t test this through all versions, but on 10.2.0.1/10.2.0.2 one of the possible workarounds seems to be “do a dummy insert with the largest possible lengths”.

    SQL> create table test (a varchar2(2000));
    
    Table created.
    
    SQL> variable a varchar2(1);
    SQL> exec :a:='1';
    
    PL/SQL procedure successfully completed.
    
    SQL> insert into test values (:a);
    
    1 row created.
    
    SQL> variable a varchar2(33);
    SQL> exec :a:='2';
    
    PL/SQL procedure successfully completed.
    
    SQL> insert into test values (:a);
    
    1 row created.
    
    SQL> select sql_text, version_count
      2   from v$sqlarea
      3   where lower(sql_text) like 'insert into test values%';
    
    SQL_TEXT                       VERSION_COUNT
    ------------------------------ -------------
    insert into test values (:a)               2
    
    note version count reported as 2 (as expected)
    
    now - if we flip inserts...
    
    SQL> alter system flush shared_pool;
    
    System altered.
    
    SQL> variable a varchar2(33);
    SQL> exec :a:='2';
    
    PL/SQL procedure successfully completed.
    
    SQL> insert into test values (:a);
    
    1 row created.
    
    SQL> variable a varchar2(1);
    SQL> exec :a:='1';
    
    PL/SQL procedure successfully completed.
    
    SQL> insert into test values (:a);
    
    1 row created.
    
    SQL> select sql_text, version_count
      2   from v$sqlarea
      3   where lower(sql_text) like 'insert into test values%';
    
    SQL_TEXT                       VERSION_COUNT
    ------------------------------ -------------
    insert into test values (:a)               1
    

    Note – the version count is now 1. It looks like varchar2(n) is able to share a cursor with varchar2(m) as long as (a) n is less than or equal to m and (b) the statement with m was submitted first.

    Comment by Alexander Fatkulin — January 6, 2007 @ 4:41 am GMT Jan 6,2007 | Reply

  2. Alexander, that certainly looks promising – picking up the trace file for the second run where you’ve dropped the bind variable size, this is what I see (10.2.0.1)

    PARSING IN CURSOR #10 len=28 dep=0 uid=0 oct=2 lid=0 tim=1920072669135 hv=595497328 ad='6d874808'
    insert into test values (:a)
    END OF STMT
    PARSE #10:c=0,e=127,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1920072669128
    BINDS #10:
    kkscoacd
     Bind#0
      oacdty=01 mxl=128(01) mxlc=00 mal=00 scl=00 pre=00
      oacflg=03 fl2=1000000 frm=01 csi=178 siz=128 off=0
      kxsbbbfp=07ae20ac  bln=128  avl=01  flg=05
      value="1"
    EXEC #10:c=0,e=326,p=0,cr=1,cu=1,mis=0,r=1,dep=0,og=1,tim=1920072669617

    Note we still have mxl=128, and since we have mis=0 we can feel confident that we haven’t kicked out the older cursor and replaced it – we really have re-used it.

    I see the same results in 9i.

    An alternative strategy to this is to set event 10503 (attrib: Julian Dyke) at level “N”, which seems to have the effect of fixing the bind allocation size at the smallest of the four levels that would be okay for a variable of length N. I’m not sure that this would be safe, of course, and it obviously needs a call back to Oracle Support for approval.

    Comment by Jonathan Lewis — January 6, 2007 @ 9:07 am GMT Jan 6,2007 | Reply

    • There’s been some chat on oracle-l recently about event 10503 not working at the session level – with a reference given, though, to Bug 10274265 : “EVENT 10503 NOT WORKING ON THE SESSION LEVEL” and followup reports that this bug is now fixed with various patches available.

      Comment by Jonathan Lewis — September 18, 2012 @ 10:58 am BST Sep 18,2012 | Reply

  3. Jonathan,

    There is an interesting presentation by Julian Dyke (http://julian.dyke.users.btopenworld.com/com/Presentations/Presentations.html#LibraryCacheInternals)

    One of the topics is the various reasons for having multiple child cursors, bind variables being one of them. Quoted from the presentation:


    Rounding of VARCHAR2 columns can be overridden by
    *) enabling event 10503
    *) setting level to minimum bind variable length e.g. 128

    So could using:

    ALTER SESSION SET EVENTS '10503 TRACE NAME CONTEXT FOREVER, LEVEL NNN';

    be a remedy for specific sqls like these ?

    I tried setting this to 4000 (the maximum permissible) for your exact same test case; unfortunately the EXEC line was still showing “mis=1” for all the SQLs, so maybe I did something incorrect with my limited understanding.

    p.s – Just started reading your excellent book on Cost Based Oracle and getting to grips with the complexity of the CBO !

    Vinay

    Comment by Vinay Pai — January 6, 2007 @ 12:14 pm GMT Jan 6,2007 | Reply

  4. Vinay, I had previously only checked it with 2,001 to tip the allocation over to 4,000. I’ve just checked 4,000 on 9.2.0.6 to see if it was a special case and, using my test case – cut and pasted from this very page – on my system, I got just one miss (on the first parse call) with maxl=4000().

    Did you re-run the whole test, including the drop/create ? If not, did you remember to flush the shared_pool before repeating the inserts ?

    Comment by Jonathan Lewis — January 6, 2007 @ 12:40 pm GMT Jan 6,2007 | Reply

  5. Building on Alexander’s test case:

    
    dellera@ORACLE10> create table test (a varchar2(2000));
    
    Table created.
    
    dellera@ORACLE10> variable a varchar2(1);
    dellera@ORACLE10> exec :a := 'x';
    
    PL/SQL procedure successfully completed.
    
    dellera@ORACLE10> insert into test values (:a);
    
    1 row created.
    
    dellera@ORACLE10> select sql_text, child_number, last_load_time, executions
      2    from v$sql
      3   where sql_text like 'insert into test values%'
      4  ;
    
    SQL_TEXT                       CHILD_NUMBER LAST_LOAD_TIME       EXECUTIONS
    ------------------------------ ------------ -------------------- ----------
    insert into test values (:a)              0 2007-01-06/20:18:49           1
    
    dellera@ORACLE10> -- sleep only to "mark" children by last_load_time
    dellera@ORACLE10> exec dbms_lock.sleep (3);
    
    PL/SQL procedure successfully completed.
    
    dellera@ORACLE10> variable a varchar2(33);
    dellera@ORACLE10> exec :a := 'x';
    
    PL/SQL procedure successfully completed.
    
    dellera@ORACLE10> insert into test values (:a);
    
    1 row created.
    
    dellera@ORACLE10> select sql_text, child_number, last_load_time, executions
      2    from v$sql
      3   where sql_text like 'insert into test values%'
      4  ;
    
    SQL_TEXT                       CHILD_NUMBER LAST_LOAD_TIME       EXECUTIONS
    ------------------------------ ------------ -------------------- ----------
    insert into test values (:a)              0 2007-01-06/20:18:49           1
    insert into test values (:a)              1 2007-01-06/20:18:51           1
    
    dellera@ORACLE10> variable a varchar2(1);
    dellera@ORACLE10> exec :a := 'x';
    
    PL/SQL procedure successfully completed.
    
    dellera@ORACLE10> insert into test values (:a);
    
    1 row created.
    
    dellera@ORACLE10> select sql_text, child_number, last_load_time, executions
      2    from v$sql
      3   where sql_text like 'insert into test values%'
      4  ;
    
    SQL_TEXT                       CHILD_NUMBER LAST_LOAD_TIME       EXECUTIONS
    ------------------------------ ------------ -------------------- ----------
    insert into test values (:a)              0 2007-01-06/20:18:49           1
    insert into test values (:a)              1 2007-01-06/20:18:51           2
    

    The last execution with “a varchar2(1)” picks the second child, that was parsed with “a varchar2(33)”.

    So – may it be that the first child is “logically obsolete”, and will eventually get aged out by the LRU algorithm ?

    Comment by Alberto Dell'Era — January 6, 2007 @ 7:28 pm GMT Jan 6,2007 | Reply

    Alberto, interesting test. We could hope so; however, in some conversations I’ve had with Julian he’s seen some evidence that child cursors may get slightly lost, so that only the last (highest-numbered) child can disappear. In your example, child 0 may be stuck until child 1 is aged out.

    This information may be out of date, though, it was some time since we last talked about it, and I haven’t looked at the latest stuff on the library cache on his website.

    Comment by Jonathan Lewis — January 6, 2007 @ 10:13 pm GMT Jan 6,2007 | Reply

  7. Jonathan,

    Have you come across Metalink Note 296377.1: Handling and resolving unshared cursors/large version_counts?

    It has a few good notes about the valid reasons why cursors are not being shared. I found the v$sql_shared_cursor view is a good point to start an investigation.
    The note also says that ‘In 10G it is possible to use CURSORTRACE to aid the investigation of why cursors are not being shared.’

    Syntax:

    alter system set events
    'immediate trace name cursortrace level 577, address hash_value';

    Comment by Jaffar — January 7, 2007 @ 7:13 am GMT Jan 7,2007 | Reply

    Jaffar, thanks – that’s a useful note, I hadn’t seen it before, and it’s worth tracking. I wonder if cursortrace is the “official” entry point for event 10277. If so, that 577 could be a bitmap (512 + 64 + 1) – with different things being exposed by different bits. Worth a little playtime in a sandbox, perhaps.

    I particularly like the lines:

    ROLL_INVALID_MISMATCH – rolling invalidation window exceeded

    I suspect this can occur when you see a mix of this reason and some other one in v$sql_shared_cursor, together with a library cache latch issue. My suspicion is this is as a result of that ‘other’ reason so you should address that first to relieve the latch.

    It’s so nice to see someone give a clue about what they’ve been working on and what they think it means. It gives other people a stepping-off point to work from … I wonder if the rolling invalidation thing relates in some way to the delayed cursor invalidation that you get from dbms_stats collecting statistics in 10g.

    Comment by Jonathan Lewis — January 7, 2007 @ 10:35 am GMT Jan 7,2007 | Reply

    I remember that a long time ago I asked a question about how to prevent cursor invalidation on associated tables when dbms_stats is run on tables or a schema (version 9iR2).
    I knew that in 10g we can delay cursor invalidation while collecting stats using dbms_stats.

    >>
    I wonder if the rolling invalidation thing relates in some way to the delayed cursor invalidation that you get from dbms_stats collecting statistics in 10g.
    >>

    I guess this could be the reason for it.

    Jonathan, without enabling tracing to find the bind variable length, as you have done for the example, we can use the v$sql_bind_metadata dynamic view (max_length column) to find the length of the bind variable. Of course, it doesn’t report the library cache miss the way the trace file does.
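
    For example, something like this (just a sketch, using the 10g sql_id column and joining the metadata to the child cursor address):

    select  s.child_number, m.position, m.datatype, m.max_length
    from    v$sql               s,
            v$sql_bind_metadata m
    where   s.sql_id  = '&m_sql_id'
    and     m.address = s.child_address
    order by
            s.child_number, m.position
    ;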

    Julian also stated in his library cache internals presentation that ‘One additional child cursor is created for each parent when trace is enabled’.

    Comment by Jaffar — January 7, 2007 @ 12:02 pm GMT Jan 7,2007 | Reply

  10. Jaffar, In case Julian didn’t explain why this happens – the sql_trace parameter is part of the optimizer environment, just like db_file_multiblock_read_count and dozens of others. If a session changes any of these optimizer-related parameters then that session has to generate new child cursors for its “private” use.
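
    (In 10g you can see the environment each child was built with by querying v$sql_optimizer_env; this is just a sketch, with &m_sql_id standing in for whichever sql_id you're investigating:)

    select  child_number, name, value
    from    v$sql_optimizer_env
    where   sql_id = '&m_sql_id'
    order by
            name, child_number
    ;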

    Comment by Jonathan Lewis — January 7, 2007 @ 10:43 pm GMT Jan 7,2007 | Reply

  11. Very true. I got it.

    Comment by Jaffar — January 8, 2007 @ 6:39 am GMT Jan 8,2007 | Reply

  12. Jonathan,

    I would appreciate it if you could help me understand the meaning of parse_calls and executions in v$sql.

    I always see that the value of executions is close to (if not equal to) parse_calls.

    Is this a count of soft parses?

    Thanks

    Rita,

    Comment by Rita — January 11, 2007 @ 10:56 pm GMT Jan 11,2007 | Reply

  13. Roughly speaking, a parse call (recorded as a parse_call in v$sql) is a program saying to the database “here’s an SQL statement, work out what you have to do to make it happen”, and the execute call is the call to “make it happen”.

    The work that Oracle has to do to “work out what has to be done” may be as little as searching through the library cache for a matching statement, or it may require a full-scale optimisation operation.

    But the very good, high-performance systems will be coded to hold cursors. This means that they make a single parse call for a statement, but tell Oracle to “protect” it and tell them where it’s been put. Thereafter, instead of saying “here’s a statement, see what you can do with it”, the program is coded to say “go and re-execute the statement that’s being held at location X”.

    If you have written an application that has a lot of re-usable SQL, but has not been written to hold cursors, the session cursor cache is an undercover feature (see parameter session_cached_cursors) that tries to identify the more popular statements and use the holding mechanism on your behalf. When this occurs, your program’s parse calls will still be noted and counted even though the cursor cache is making the program behave like a cursor-holding program.
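
    A quick way of watching the mechanism for your own session is something like the following (a sketch; the statistic names may vary slightly across versions). If the parse count keeps climbing but “session cursor cache hits” climbs with it, the cache is doing the holding for you:

    select  sn.name, ms.value
    from    v$statname sn,
            v$mystat   ms
    where   ms.statistic# = sn.statistic#
    and     sn.name in (
                    'parse count (total)',
                    'parse count (hard)',
                    'session cursor cache hits'
            )
    ;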

    Comment by Jonathan Lewis — January 12, 2007 @ 3:03 pm GMT Jan 12,2007 | Reply

  14. Hi Jonathan

    SHARED POOL LATCHING in LIBRARY CACHE.

    Why is it bad? Shared pool is where the SQL code and the data dictionary lives. Isn’t this structure ‘mostly’ READ_ONLY? Why latch/lock? Why do you care to latch? What will be corrupted in the library cache? Why does Oracle have to “protect” it, as you say above?

    To stop latching/parsing and the bad scalability which comes with it, Tom Kyte says we must use bind variables, and yes I agree, but why latch the shared pool? What will that other “nasty” user do to your SQL statement in the shared pool apart from sharing the SQL with you?

    The shared pool is not like the data buffer where the data lives. In the data buffer I understand why latching/locking (shared locks, exclusive locks during updates and selects) is bad and why it inhibits concurrency; consistent gets, the ACID theory of transactions, locks, transactions trying to modify the same data concurrently, phantom reads and dirty reads etc. all come to mind, fine.

    But why in the SHARED POOL? It is just SQL text, so what is going to be corrupted? Why does it have to use latches (serialization devices)?

    Many thanks

    Kubilay Cilkara

    Comment by Kubilay Cilkara — January 20, 2007 @ 11:08 am GMT Jan 20,2007 | Reply

  15. Hi Jonathan,

    I was going through this article. We have one issue in our system. We have defined a record, say for example:

    lrec_f_emp        emp%rowtype;
    

    But the table emp is in a different database and we access that table via a dblink. We have created a synonym for that table.

    We populate that record lrec_f_emp, say for example:

    lrec_f_emp.empno   := 1234;
    lrec_f_emp.empname := 'Mr. Jonathan';
    lrec_f_emp.deptno  := 10;
    

    then we have insert statement as

    insert into f_emp(empno,
                      empname,
                      deptno)
                values(lrec_f_emp.empno,
                       lrec_f_emp.empname,
                       lrec_f_emp.deptno);
    

    Now if we try to insert 20 records using 20 different sessions,

    we see a version count of 20 in v$sqlarea.
    Why? Are we not using bind variables, or is it because the table is in a different db that we are getting this behaviour? Any help appreciated.

    Thanks
    Akhilesh..

    Comment by Akhilesh Bharati — January 20, 2007 @ 7:19 pm GMT Jan 20,2007 | Reply

  16. Akhilesh, I tried (very briefly) to emulate this on a 9.2.0.8 database, but didn’t get the same behaviour. It could depend on version, type of synonym, type of database link, existence of logon triggers for sessions, whether the pl/sql is authid current user or definer, and so on. Try to strip it back to the simplest possible build script, and then send it in to Oracle as an SR.

    Comment by Jonathan Lewis — January 21, 2007 @ 11:21 pm GMT Jan 21,2007 | Reply

  17. […] Lewis @ 11:38 pm UTC Jan 21,2007 The following question appeared in response to my comments in a posting on Bind Variables: Shared pool is where the SQL code and the data dictionary lives. Isn’t this structure […]

    Pingback by Shared SQL « Oracle Scratchpad — January 21, 2007 @ 11:38 pm GMT Jan 21,2007 | Reply

  18. Kubilay, I’ve replied to your question through my latest posting.

    Comment by Jonathan Lewis — January 22, 2007 @ 2:00 am GMT Jan 22,2007 | Reply

    The synonym and database link are both public and there are no logon triggers. Probably we shall send it to Oracle as an SR.

    Comment by Akhilesh Bharati — January 22, 2007 @ 5:30 pm GMT Jan 22,2007 | Reply

    I would appreciate it if you could help me with the following:

    I have a pl/sql function called gu. It basically takes an input parameter and then returns a name.

    Using SQL*Plus I called this function multiple times. The SQL inside the function is using bind variables but every call to this function is getting parsed, as you can see from the trace.
    How can this be avoided ?

    Thanks

    Rita

    PARSING IN CURSOR #3 len=25 dep=0 uid=62 oct=3 lid=62 tim=1142261420255991 hv=2349859795 ad='3e3dff24'
    select gu(2000) from dual
    END OF STMT
    
    PARSING IN CURSOR #1 len=51 dep=1 uid=62 oct=3 lid=62 tim=1142261420256593 hv=275926857 ad='2b0ca244'
    SELECT emp_name  FROM AP_USER WHERE UID=:B1
    END OF STMT
    --
    WAIT #0: nam='SQL*Net message from client' ela= 4090548 driver id=1650815232 #bytes=1 p3=0 obj#=51784 tim=1142261424348725
    =====================
    PARSING IN CURSOR #4 len=26 dep=0 uid=62 oct=3 lid=62 tim=1142261424356869 hv=3554195286 ad='38cc3df0'
    select gu(3000) from dual
    END OF STMT
    
    PARSING IN CURSOR #2 len=26 dep=0 uid=62 oct=3 lid=62 tim=1142263172549585 hv=857662553 ad='3bd637e8'
    select gu(7000) from dual
    END OF STMT
    

    Comment by Rita — January 25, 2007 @ 11:52 pm GMT Jan 25,2007 | Reply

  21. Rita, as far as I can tell from the text you have supplied, it is the query “select gu(NNNN) from dual” that is being parsed every time – which is correct as every query is different. The SQL inside the function should be parsed only once. Here’s an example of the output from tkprof from a sample I wrote to emulate your requirement – my function gu() does a select from T1:


    SELECT V1 
    FROM
     T1 WHERE N1 = :B1 

    call     count       cpu    elapsed       disk      query    current       rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ---------
    Parse        1      0.00       0.00          0          0          0          0
    Execute      4      0.01       0.00          0          1          0          0
    Fetch        4      0.00       0.00          0         12          0          3
    ------- ------  -------- ---------- ---------- ---------- ----------  ---------
    total        9      0.01       0.00          0         13          0          3

    Misses in library cache during parse: 1
    Optimizer goal: ALL_ROWS

    Note how there is only one miss in the library cache for this statement – which is the one inside the function – even though it has been executed four times.
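
    A function along the following lines is all it takes to reproduce those figures (a minimal sketch; the names simply match the tkprof output above):

    create or replace function gu(i_n1 in number) return varchar2
    as
            m_v1    t1.v1%type;
    begin
            select  v1
            into    m_v1
            from    t1
            where   n1 = i_n1;
            return  m_v1;
    end;
    /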

    Comment by Jonathan Lewis — January 26, 2007 @ 2:31 am GMT Jan 26,2007 | Reply

    I believe that this – “it is the query ‘select gu(NNNN) from dual’ that is being parsed every time” – is also true for sequences when we write “select my_sequence.nextval id from dual”. I have a Java program which is getting the sequence number using the above query. The query is parsed almost 23,000 times in a single session.

    I am not sure whether this will be considered a soft parse or a hard parse. From your quote “which is correct as every query is different”, I am inclined towards hard parse.

    In PL/SQL we can use my_sequence.nextval inside the SQL statement and return the value using the returning clause, e.g. as shown below. Is there a similar technique in Java?
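
    (Just a sketch with made-up names, assuming a table t1(id number) and the sequence my_sequence:)

    declare
            m_id    number;
    begin
            insert into t1(id) values (my_sequence.nextval)
            returning id into m_id;
            dbms_output.put_line('New value: ' || m_id);
    end;
    /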

    with best
    CT

    Comment by CT — January 27, 2007 @ 3:29 am GMT Jan 27,2007 | Reply

  23. Alberto,

    I reproduced your test and I share your observation/assumption. Oracle does the work only when it absolutely has to, so the moment the child cursor gets incremented to #1, it doesn’t go back to #0 as that would probably entail extra work. Tracing your test case, it looks like the following:

    #1: child creation successful 5fc532ec 5ef27350 0
    PARSING IN CURSOR #1 len=28 dep=0 uid=47 oct=2 lid=47 tim=1146470615662120 hv=3312247303 ad='5fc53868'
    insert into test values (:a)
    BINDS #1:
     bind 0: dty=1 mxl=32(01) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=32 offset=0
    
    BEGIN dbms_lock.sleep (3); END;
    
    PARSING IN CURSOR #1 len=28 dep=0 uid=47 oct=2 lid=47 tim=1146470639920871 hv=3312247303 ad='5fc53868'
    insert into test values (:a)
    PARSE #1:c=0,e=448,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1146470639920860
    #1: checking for already pinned child
    #1: object is invalid
    #1: no valid child pinned
    #1: pinning parent in shared mode to search 5ef25610 5fc53868
    #1: kksfbc: calling kksscl outside while loop
    #1: kksscl: next child is #0
    #1: kksscl: pinning child #0 in shared mode 5ef26300 5fc532ec
    #1: kksscl: calling kksauc
    #1: kksscl: match == KKSCBTNOM
    #1: kksscl: releasing child
    #1: no suitable child found
    #1: no suitable child found
    #1: pinning parent in exclusive mode
    #1: creating new child object #1		--> there’s the new-born child
    #1: child creation successful 5fb732e8 5ef27350 0
    #1: downgrading child pin to share
    BINDS #1:
     bind 0: dty=1 mxl=128(33) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=128 offset=0
    
    #1: checking for already pinned child
    #1: no valid child pinned
    #1: pinning parent in shared mode to search 5ef25610 5fc53868
    #1: kksfbc: calling kksscl outside while loop
    #1: kksscl: next child is #1		--> “current” child is 1, so let’s use it if we can
    #1: kksscl: pinning child #1 in shared mode 5ef26300 5fb732e8
    #1: kksscl: calling kksauc
    #1: kksscl: unpinning the parent
    PARSING IN CURSOR #1 len=28 dep=0 uid=47 oct=2 lid=47 tim=1146470661087008 hv=3312247303 ad='5fc53868'
    insert into test values (:a)
    PARSE #1:c=0,e=432,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1146470661087000
    #1: checking for already pinned child
    BINDS #1:
     bind 0: dty=1 mxl=128(01) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=128 offset=0
    

    I think one case that Jonathan mentions where the above might not hold true is when heap 6 of the “current” child gets aged out (it’s not pinned at all times, is it?). If that happens, however, I would think that Oracle marks it as broken and simply creates a new version as opposed to going back to the “older children”, no? Isn’t that one of the reasons that cursor sharing fails sometimes?

    Comment by Boris Dali — March 15, 2007 @ 7:27 pm GMT Mar 15,2007 | Reply

  24. Boris,

    Can you tell us how you got this trace?

    Regards
    Michel

    Comment by Michel Cadot — March 16, 2007 @ 6:37 am GMT Mar 16,2007 | Reply

  25. Michel,

    To trace Alberto’s test-case I used event 10270 combined with a sql trace at level 4.

    (Usual disclaimers and caveats of using half-documented events apply here. Contacting support is advisable. In particular with regards to this specific event, be aware of the bug# 1306324 applicable for the early versions of 9i, where you get core dumps if it is set at the session level)
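
    The commands were along these lines (a sketch; the level I've shown for 10270 is only illustrative):

    alter session set events '10270 trace name context forever, level 10';
    alter session set events '10046 trace name context forever, level 4';

    rem     ... run the test ...

    alter session set events '10046 trace name context off';
    alter session set events '10270 trace name context off';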

    Comment by Boris Dali — March 16, 2007 @ 12:30 pm GMT Mar 16,2007 | Reply

  26. Thanks for your answer, Boris.

    Comment by Michel Cadot — March 16, 2007 @ 1:35 pm GMT Mar 16,2007 | Reply

  27. Boris,

    I think you know more than me about the library cache inner workings, as demonstrated by the fact that I don’t know the answers to your questions ;)

    Thanks for the very interesting test case, it is absolutely convincing that Oracle will try to use the “current” (last) child first if it is compatible instead of performing a full search to find the most compatible one – so (potentially?) using a less efficient plan but avoiding a lot of pinning. I guess (stress on guess) that this will make the first child colder and colder, so eventually, in a library cache really pressed for space, eligible for being removed – but look at Jonathan’s comment on my test case above.

    Comment by Alberto Dell'Era — March 17, 2007 @ 10:06 pm GMT Mar 17,2007 | Reply

  28. […] Jonathan Lewis answered my question. Check the link https://jonathanlewis.wordpress.com/2007/01/05/bind-variables/ […]

    Pingback by shared_pool latching « Database Systems — April 1, 2007 @ 7:31 pm BST Apr 1,2007 | Reply

  29. Cross-reference to a similar entry on tkyte-blog: http://tkyte.blogspot.com/2007/04/share-and-share-alike.html

    Comment by Martin — April 24, 2007 @ 9:00 am BST Apr 24,2007 | Reply

  30. I’ve 2 questions:

    1) Does Oracle ever come up with a different plan for child cursors when they were spawned because of their datatype lengths?
    2) If the answer to the above question is no, why is Oracle spawning different child cursors due to bind lengths? Is it to save on shadow process memory?

    Thanks.

    Comment by Saibabu Devabhaktuni — May 25, 2007 @ 6:20 am BST May 25,2007 | Reply

    In response to Akhilesh’s query about the table in a remote database and the query showing a high version count: a probable solution would be to create a view in the local database as “select * from emp@remote”. Now, when this local view is referenced, the version count should come down.

    Regards
    Vivek

    Comment by Vivek Sharma — May 25, 2007 @ 1:52 pm BST May 25,2007 | Reply

  32. Saibabu, it is possible, though perhaps not by design. Say you have query with a range predicate:
    char_col < :b1
    When you first run the statement your value is ‘aaa’, very short and close to the start of the alphabet. When I run the query some time later my value is ‘zzz…..zzz’ – very long and close to the end of the alphabet.
    Because of the differences in length I get a second child cursor; because of the differences in value I may get a different execution plan. But, as I said, this may be accident rather than design.
    Answer to question 2: I don’t know, I didn’t write the specification or the code; but it’s a reasonable guess. Maybe this is a hangover from the days when memory was very tightly limited and “wasting” 4KB was important. In 10g a lot of the code has apparently changed to try and reduce fragmentation of memory by sticking (where possible) to a small number of allocation sizes – maybe this strategy will also surface at some future time with this bind-variable issue.

    Comment by Jonathan Lewis — May 26, 2007 @ 1:33 pm BST May 26,2007 | Reply

  33. Hi,

    Question Reg Bind Variables.

    SQL> declare
        begin
         for i in 1..100000
         loop
          insert into x values(i);
         end loop;
        end;  2    3    4    5    6    7
      8  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:20.66
    SQL> declare
        begin
         for i in 1..100000
         loop
          execute immediate 'insert into t1 values(:x)' using i;
         end loop;
        end;  2    3    4    5    6    7
      8  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:04:45.08
    

    If I use bind variables it is taking nearly 4 minutes, whereas without bind variables it takes only 20 seconds to finish the job. Can you help me by explaining the concept in detail? That would be a great help.

    2. Will a soft parse still take latches?

    Regards

    Balu.

    Comment by Balu — July 23, 2007 @ 4:02 am BST Jul 23,2007 | Reply

  34. We are using VPD.
    As a result of this we are seeing plenty of child cursors (in some cases 600+). I checked the auth_check_mismatch column in v$sql_shared_cursor and it has a ‘Y’ value.
    I would really appreciate it if you could help us in understanding and resolving this issue.

    Thanks

    Comment by Rita — November 1, 2007 @ 10:35 pm GMT Nov 1,2007 | Reply

  35. Hello Jonathan,

    we see 2 namespaces in v$librarycache in 11.2 (‘SQL AREA BUILD’ and ‘SQL AREA STATS’); I have not found any details on these 2. What do these 2 new namespaces relate to?

    Comment by Uday — April 13, 2012 @ 5:13 am BST Apr 13,2012 | Reply

  36. “at some layer in the library code, character columns are allowed to have 4 different allocation sizes, namely 32, 128, 2000, or 4000 bytes”

    Perhaps there is one more allocation size, which is 0 (zero).

    The following test-case shows different child numbers for zero-length and non-zero-length :A2

    create table tab1 as select * from all_objects;
    create unique index tab1_obj_id on tab1(object_id);
    create index tab1_type_obj_id on tab1(object_type,object_name,object_id);
    
    exec dbms_stats.gather_table_stats(user, 'TAB1', cascade => true, method_opt => 'FOR ALL COLUMNS SIZE 1');
    
    variable a1 number
    variable a2 varchar2(10);
    
    begin
      :a1:=29;
      :a2:=null;
    end;
    /
    
    select count(1) from tab1 where object_id=:a1 and object_type=:a2;
    select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));
    
    begin
      :a1:=29;
      :a2:='CLUSTER';
    end;
    /
    
    select count(1) from tab1 where object_id=:a1 and object_type=:a2;
    select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));
    

    Comment by Yuri — August 29, 2017 @ 10:38 am BST Aug 29,2017 | Reply

    • Yuri,

      Thanks for pointing out that option.

      It’s a little surprising that it didn’t come up sooner but I guess it’s a measure of how easy it is to overlook something that’s not quite where you’re focusing.

      Comment by Jonathan Lewis — August 29, 2017 @ 2:31 pm BST Aug 29,2017 | Reply

