Oracle Scratchpad

March 30, 2012

Quiz Night

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 5:38 pm BST Mar 30,2012

Here’s a question prompted by a recent comment on OTN.

I have a table declared as follows (and the ellipsis means repeat the column definitions according to the simplest and most obvious pattern to give you 1,000 columns all of type number(1)):


rem     Script:         c_wide_table_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2012

create table t1 (
        col000  number(1),
        col001  number(1),
        col002  number(1),
...
        col997  number(1),
        col998  number(1),
        col999  number(1),
        constraint t1_pk primary key (col000)
)
;
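
If you don’t fancy typing 1,000 column definitions, a short piece of dynamic SQL will do the job. This is just a sketch of one way to generate the same table – it is not the original c_wide_table_2.sql:

declare
        m_sql   varchar2(32767) := 'create table t1 (';
begin
        for i in 0 .. 999 loop          -- col000 .. col999, all number(1)
                m_sql := m_sql || 'col' || to_char(i,'fm000') || ' number(1), ';
        end loop;
        -- the constraint clause follows the final column definition
        m_sql := m_sql || 'constraint t1_pk primary key (col000))';
        execute immediate m_sql;
end;
/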

I have one row in the table.

How many row pieces might that row consist of ?

Update (Saturday Morning)

We have some good responses so far, and a general conclusion that the row might be stored as 1, 2, 3, or 4 row pieces depending on the number of trailing null columns. This is true, but isn’t a complete list of the possibilities – so I’m looking for a little more.

A few highlights in the responses – it was definitely a good move from Jithin Sarath to do a block dump and view the results; and at one point he also created the row with just the first and last columns populated – which is also an interesting thing to dump.

VishalDesai also took an interesting approach, in effect following up an important related topic: if you have to read a single row that is chained across multiple row-pieces, what do the workload stats look like? (There’s an interesting oddity there as well – but there’s also a lot of variation across versions, and the results depend on the access path too.)

Answer:

For the rather staggering answer, please read through the comment trail, especially this comment and my response.

Update April 2017

See also this note by Sayan Malakshinov for further details, and follow the pingback for an observation about improvements (fixes) in 12.2.

45 Comments

  1. 4 row pieces

    Comment by dorobantun — March 30, 2012 @ 5:49 pm BST Mar 30,2012 | Reply

  2. it might be 1, 2, 3 or 4 row pieces depending on the number of trailing NULL columns.

    Comment by Jan-Marten Spit — March 30, 2012 @ 5:59 pm BST Mar 30,2012 | Reply

  3. I think Oracle starts creating row pieces after 255 columns so it should be 4 (assuming there are no trailing nulls).

    Created table and ran select on table and snapper from other session.

    @snapper ash,stats 15 1 192
    Sampling SID 192 with interval 15 seconds, taking 1 snapshots…

    -- Session Snapper v3.14 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

    --------------------------------------------------------------------
    SID, USERNAME, TYPE, STATISTIC,               HDELTA,
    --------------------------------------------------------------------
    192, VDESAI,   STAT, table scan rows gotten,  4,

    Comment by vishaldesai — March 30, 2012 @ 6:32 pm BST Mar 30,2012 | Reply

    • Joel,

      That’s a nice example of how the manuals can be sufficiently vague to cause incomprehension and misunderstanding. Why “typically”, and what happens in the atypical cases ?

      Comment by Jonathan Lewis — March 31, 2012 @ 8:52 am BST Mar 31,2012 | Reply

  4. Well, the right answer here is “it depends”, which shouldn’t come as a surprise to anyone. A single Oracle row piece can contain only 255 columns, which means there are potentially 4 row pieces. However, if the last 745 columns all contain NULL values – which reminds me of some designs I saw once upon a time – there will be only a single piece. That, too, can be changed. A block size of 1K is still legal for some editions. If the tablespace has manual segment space management and PCTFREE is large enough, we can even split the 255-column pieces. I am not sure what would happen in an ASSM tablespace with a 1K block size, or whether that is at all possible.

    Comment by Mladen Gogala — March 30, 2012 @ 7:05 pm BST Mar 30,2012 | Reply

    • Mladen,
      Thank you for the reminder about specifying the problem completely. I always find that every question can prompt more questions – although I try to restrict myself to the “more reasonable” ones, so I wouldn’t worry too much about 1KB block sizes or PCTFREE set to 99, though I might think about 2KB block sizes, and the odd effects of updates and ASSM with its quarter-block granularity.

      (In passing, I think that any version capable of creating a 1KB block size won’t be a version that handles 1,000 columns in a table.)

      Comment by Jonathan Lewis — March 31, 2012 @ 8:57 am BST Mar 31,2012 | Reply

  5. Uuuups! I haven’t read the other replies. I humbly apologize.

    Comment by Mladen Gogala — March 30, 2012 @ 7:05 pm BST Mar 30,2012 | Reply

  6. My previous test used a 16k block size; the one below uses 4k.

    VDESAI@csprod2_asm > @snapper ash,stats 5 1 192 (no nulls)
    Sampling SID 192 with interval 5 seconds, taking 1 snapshots…

    -- Session Snapper v3.14 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

    --------------------------------------------------------------------------
    SID, USERNAME, TYPE, STATISTIC,               HDELTA, HDELTA/SEC,
    --------------------------------------------------------------------------
    192, VDESAI,   STAT, table scan rows gotten,  8,      .53,

    Comment by vishaldesai — March 30, 2012 @ 7:09 pm BST Mar 30,2012 | Reply

    • Vishaldesai,

      There are some odd effects that appear (and the stats can be misleading) when you start using rows with more than 254 columns. My first tests didn’t reproduce the doubled count for the 4KB block size. But did you remember to collect stats on the table before querying it ? This might be a side effect of dynamic sampling. Interestingly I did an “alter table t1 move” as a test, and then saw the doubling effect you are reporting – on 4KB, 8KB and 16KB. (Using 10.2.0.3)

      Comment by Jonathan Lewis — March 31, 2012 @ 8:45 am BST Mar 31,2012 | Reply
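
      (For anyone repeating Vishal’s test: gathering stats first takes dynamic sampling out of the picture. A one-line sketch, using the standard dbms_stats call:)

      rem  collect stats before querying, to rule out dynamic sampling
      exec dbms_stats.gather_table_stats(ownname => user, tabname => 'T1')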

  7. Hi Jonathan,
    Pardon me if I am posting total rubbish.
    From the Oracle documentation it is clear that 11g can store only 255 columns in a row piece, so when an insert happens into a 1000-column table Oracle may use up to 4 row pieces.
    But it also says that a null column stores only a column length, and that trailing null columns don’t even store column lengths.

    So to get past the confusion, I created a dummy table as per your spec. The row I inserted was simply a value of 1 for the first column and null for all the others (the pk meant that col000 was mandatory).
    I followed your directions on “https://jonathanlewis.wordpress.com/2011/12/16/i-wish-3/” to dump the row and analyze it.
    I believe it was a simple chained row: the first piece with the H & F bits set, the middle pieces with no flags but an nrid, and the last one with the L bit set.
    So per this the number of row pieces is 4.

    I also tested with a row of all 1’s and the pieces were 4.

    The only doubt I have here is the layout of the dump: the H & F bits were present at the end of the block_row_dump and the L bit was present at the beginning.
    Does Oracle traverse from the last column to the first, or am I doing everything wrong?

    block_row_dump:
    tab 0, row 0, @0x1c70
    tl: 768 fb: -----L-- lb: 0x1  cc: 255
    col  0: [ 2]  c1 02
    col  1: [ 2]  c1 02
    col  2: [ 2]  c1 02
    ......
    col 252: [ 2]  c1 02
    col 253: [ 2]  c1 02
    col 254: [ 2]  c1 02
    tab 0, row 1, @0x196a
    tl: 774 fb: -------- lb: 0x1  cc: 255
    nrid:  0x00411621.0
    col  0: [ 2]  c1 02
    col  1: [ 2]  c1 02
    col  2: [ 2]  c1 02
    ......
    col 252: [ 2]  c1 02
    col 253: [ 2]  c1 02
    col 254: [ 2]  c1 02
    tab 0, row 2, @0x1664
    tl: 774 fb: -------- lb: 0x1  cc: 255
    nrid:  0x00411621.1
    col  0: [ 2]  c1 02
    col  1: [ 2]  c1 02
    col  2: [ 2]  c1 02
    ......
    col 252: [ 2]  c1 02
    col 253: [ 2]  c1 02
    col 254: [ 2]  c1 02
    tab 0, row 3, @0x139a
    tl: 714 fb: --H-F--- lb: 0x1  cc: 235
    nrid:  0x00411621.2
    col  0: [ 2]  c1 02
    col  1: [ 2]  c1 02
    col  2: [ 2]  c1 02
    ......
    col 232: [ 2]  c1 02
    col 233: [ 2]  c1 02
    col 234: [ 2]  c1 02
    

    Comment by Jithin Sarath — March 30, 2012 @ 7:13 pm BST Mar 30,2012 | Reply
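
    (For reference, the dump technique Jithin describes comes down to a sketch like this – commit first, checkpoint so the on-disk block is current, derive the file and block from the rowid, then dump. With intra-block chaining all the pieces appear in the one block:)

    commit;
    alter system checkpoint;

    column file_no  new_value m_file_no
    column block_no new_value m_block_no

    select
            dbms_rowid.rowid_relative_fno(rowid)    file_no,
            dbms_rowid.rowid_block_number(rowid)    block_no
    from    t1
    ;

    alter system dump datafile &m_file_no block &m_block_no;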

    • Jithin,

      It’s nice to see the description of what you’re doing and why. And I really appreciate the fact that you kept the dump to a minimum, showing just enough to make the point and not listing all 1,000 columns.

      What you see is exactly what I was expecting – I have a (private) note dated 2002, referencing back to 8i, which comments on the fact that in this case Oracle seems to store the last 255 columns as the last row piece, and work backwards, storing the row pieces in reverse order. It’s just one of the many strange things you see with chained rows.

      I note you also created a row with just the first and last columns populated – there’s an interesting observation that could be made about what Oracle stores as a result, and how this isn’t sensible unless there’s an error in the simple model that we infer from reading the manual.

      Comment by Jonathan Lewis — March 31, 2012 @ 8:49 am BST Mar 31,2012 | Reply

  8. Depends on block size – but I don’t understand why there is a pk. What does it mean?

    Comment by xtender — March 30, 2012 @ 7:53 pm BST Mar 30,2012 | Reply

    • xtender,

      I included a primary key because I always like to have a primary key.
      It’s also possible that it might make a difference to the way the row behaves (at run-time rather than anything to do with physical storage).

      Comment by Jonathan Lewis — March 31, 2012 @ 8:50 am BST Mar 31,2012 | Reply

  9. OK, the block dump shows 4 row pieces (no nulls) or 2 row pieces (with trailing nulls in columns 501-999). Please explain why “table scan rows gotten” is higher with a 4k block size than with a 16k block size.

    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    
    
    -- 16k blocksize all columns are populated
    nrow=4
    0xe:pti[0]      nrow=4  offs=0
    tab 0, row 0, @0x3c68
    tab 0, row 1, @0x3962
    tab 0, row 2, @0x365c
    tab 0, row 3, @0x3392
    
    TABLE SCAN ROWS GOTTEN - 4
    
    -- 16k blocksize first 500 columns are populated and 501-999 columns are set to null
    nrow=2
    0xe:pti[0]      nrow=2  offs=0
    tab 0, row 0, @0x3c80
    tab 0, row 1, @0x399b
    
    
    -- 4k blocksize all columns are populated
    nrow=4
    0xe:pti[0]      nrow=4  offs=0
    tab 0, row 0, @0xc68
    tab 0, row 1, @0x962
    tab 0, row 2, @0x65c
    tab 0, row 3, @0x392
    
    TABLE SCAN ROWS GOTTEN - 8
    
    -- 4k blocksize first 500 columns are populated and 501-999 columns are set to null
    nrow=2
    0xe:pti[0]      nrow=2  offs=0
    tab 0, row 0, @0xc80
    tab 0, row 1, @0x99b
    
    

    Thanks,
    Vishal

    Comment by vishaldesai — March 30, 2012 @ 8:04 pm BST Mar 30,2012 | Reply

    • I’m not sure, but I’ve seen data suggesting that “table scan rows gotten” should really be called “table scan row pieces gotten”, with the odd exception that for row pieces that exist because of the 255-column limit the number is counted as 2 no matter how many pieces there are.

      Comment by Jan-Marten Spit — March 30, 2012 @ 8:18 pm BST Mar 30,2012 | Reply

      • Jan-Marten,

        I agree – with multiple row-pieces Oracle seems to count row-pieces rather than rows. But the real anomaly here is that sometimes Oracle double-counts. Strangely, I found that when I set my data up with the primary key index and then dropped the index, double-counting didn’t happen; but if I created the data without the index then double-counting happened. I haven’t got to the bottom of that anomaly yet – I would have to check different versions of Oracle, different block sizes, and different ways of creating the data to try to figure it out.

        Comment by Jonathan Lewis — March 31, 2012 @ 9:02 am BST Mar 31,2012 | Reply
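
        (A sketch of the sort of before-and-after check you can run without snapper, if you want to test the double-counting on your own version – query the statistic, scan the table, query it again:)

        select  s.value
        from    v$mystat s, v$statname n
        where   n.statistic# = s.statistic#
        and     n.name = 'table scan rows gotten'
        ;

        select /*+ full(t1) */ count(*) from t1;

        rem  repeat the first query - the difference between the two
        rem  values is the figure snapper reported as the delta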

  10. Apologies for my earlier post — I made the mistake of not issuing a commit after inserting the single-column row – I was dumping from another session. The actual dump showed the following:

    block_row_dump:
    tab 0, row 0, @0x1f9a
    tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
    col  0: [ 2]  c1 02
    end_of_block_dump

    In my case, with an 8K block size on 11.2.0.2, I got 1 row piece when only the mandatory column value was inserted.
    So the number of row pieces depends on the number of trailing null columns and the number of populated columns, and can vary from 1 to a maximum of 4?

    Comment by Jithin Sarath — March 30, 2012 @ 8:22 pm BST Mar 30,2012 | Reply

  11. How many row pieces might that row consist of ?
    It might consist of just 1. The ‘I have one row in the table’ means there is at least one row piece.
    If all columns but the first are implicitly left NULL, as opposed to explicitly inserting NULL into them, then nothing will be stored in the row for them.

    This from ‘The Column Count’ in this paper ‘The Oracle Data Block’ by David Litchfield

    (Paper available as OracleForensicsDataBlock.pdf.)

    The Column Count
    The Column Count indicates the number of columns that can be found in the row data. Note that this does not mean the number of columns in the table, and so the column count may be less than the actual number of columns in the table. For example, assume there is a table with 3 columns. If an insert occurs into only the first column of this table then the row data will only have 1 column, with the 2nd and 3rd columns, by their absence, implicitly having a NULL value. If, however, an insert occurs in the 3rd column then an explicit NULL must be inserted for the 1st and 2nd columns.

    Comment by rpo428 — March 30, 2012 @ 8:46 pm BST Mar 30,2012 | Reply
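
    (The column-count point is easy to see in a block dump with a little sketch – t3 is just a scratch table for illustration, not part of the original test:)

    create table t3 (c1 number, c2 number, c3 number);

    insert into t3 (c1) values (1);     -- trailing nulls not stored: dump shows cc = 1
    insert into t3 (c3) values (1);     -- leading nulls are stored:  dump shows cc = 3
    commit;

    rem  dump the block (as in the earlier comments) and compare
    rem  the cc entries for the two rows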

  12. OK, now that the quiz question is answered, what are the prizes? Do we get at least a virtual beer?

    Comment by Mladen Gogala — March 30, 2012 @ 9:22 pm BST Mar 30,2012 | Reply

    • Mladen,

      Unfortunately I haven’t seen an answer that merits a prize yet. A couple of them are worth honorable mentions for method and presentation, but I’m still waiting for someone to point out the shock horror bit of the answer.

      Comment by Jonathan Lewis — March 31, 2012 @ 6:03 am BST Mar 31,2012 | Reply

  13. Jonathan,
    What if row piece(s) get migrated due to lack of space on update of column(s) from null value to some value?
    What if it happens many times?
    regards
    srivenu

    Comment by srivenu — March 31, 2012 @ 8:51 pm BST Mar 31,2012 | Reply

    • Srivenu,

      Excellent questions, exactly the right sort of things to think about when you start looking at rows with more than 255 columns.

      Comment by Jonathan Lewis — April 1, 2012 @ 10:51 am BST Apr 1,2012 | Reply

  14. There is a situation where intra-block chaining becomes real inter-block chaining. The nastiest effect of this chaining I can reproduce using this test case:

    insert into t1(col000) values(1);
    commit;
    begin
      for i in 255..999  loop
        execute immediate 'update t1 set col' ||i ||'=1';
      end loop;
      commit;
    end;
    /
    

    There will be 745 row pieces and 749 blocks below the HWM in an 8k MSSM tablespace. The “table fetch continued row” statistic grows during a table full scan. But “alter table move” resolves this chaining issue, unlike ordinary row chaining. I often see this problem in applications like Siebel, which use an enormous number of columns and update them heavily.

    Comment by Vyacheslav Rasskazov — April 1, 2012 @ 2:44 am BST Apr 1,2012 | Reply
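
    (A sketch of one way to verify this – one row, hundreds of blocks, and a “table fetch continued row” count that climbs on a full scan that has to reach the last column:)

    exec dbms_stats.gather_table_stats(user, 'T1')

    select  blocks, num_rows
    from    user_tables
    where   table_name = 'T1'
    ;

    select /*+ full(t1) */ count(col999) from t1;

    select  s.value
    from    v$mystat s, v$statname n
    where   n.statistic# = s.statistic#
    and     n.name = 'table fetch continued row'
    ;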

    • Vyacheslav,

      That’s the one I was looking for. In fact the demonstration I use starts by populating columns col000 and col254 before executing the loop, and (on my version of Oracle) this results in 746 row pieces spread across 746 blocks – the last piece is 255 columns, and the preceding 745 pieces hold one column each. (I think in your example you would end up with two columns in the first row-piece.)

      I’m glad you mentioned Siebel (or any application) that could suffer from this problem, because when I publish some of my wackier examples of things that can happen, people sometimes think that they’re not actually going to happen in real applications. I, too, have seen applications (and not just Siebel) that “insert” a row by inserting a stub and then updating it many times. Not only does this generate a huge amount of redo (and undo), but it can also lead to some very strange row-chaining effects.

      Comment by Jonathan Lewis — April 1, 2012 @ 10:59 am BST Apr 1,2012 | Reply

      • Don’t bother posting an SR, it already exists.
        Bug #8270070: 255+ columns row split in several blocks after add/update new columns
        I’ve had an SR outstanding on this since Sept 30th, 2010.
        It has been escalated, and de-escalated.
        On Aug 24th, 2011 the SR was updated with: The developer updated stating this is at the top of his backlog now.
        However, as of now, it is still outstanding. I’d almost forgotten about it.

        Comment by Alex White — April 11, 2012 @ 5:27 pm BST Apr 11,2012 | Reply

    • I had a hard time believing this reply, so I was about to test it – which is no longer necessary now :D

      This raises the question: why not open an SR for this bug? Or its equivalent: why is this unavoidable behaviour? I can’t easily see why it should be.

      As there is only 1 row in the table, and enough free space in the block, why create extra row pieces instead of merging into the existing ones?

      Comment by Jan-Marten Spit — April 1, 2012 @ 11:14 am BST Apr 1,2012 | Reply

      • Jan-Marten,

        In principle it’s not unavoidable – it’s just the way that it has been implemented. In general I don’t open SRs because the problem isn’t one I can attribute to my own CSI and I’m never on a client site long enough to see what happens if they open one.

        There are other anomalies with the way that Oracle handles such extreme table definitions that at least give you some idea about why they’ve managed to get to such a silly result from their basic implementation. I’ll probably add a couple more notes to the blog over the next few days to comment on this.

        Comment by Jonathan Lewis — April 1, 2012 @ 11:25 am BST Apr 1,2012 | Reply

      • This must have something to do with preferring intra-block chaining over inter-block chaining – Oracle would rather use small leftovers in the block than move larger row pieces out of it? But even then, if there is enough free space in the block, as per Vyacheslav’s example, why is Oracle not merging the pieces when it can see that it could?

        Heck, I’m going to test several scenarios anyway, because this whole subject is intriguing and fairly fundamental in performance analysis. After seeing a Siebel table with 352 columns, all neatly sorted on the number of null values, and someone adding a new column and updating all the rows with a value, I thought I understood the basics of the mechanics here. Clearly I don’t.

        Comment by Jan-Marten Spit — April 1, 2012 @ 11:33 am BST Apr 1,2012 | Reply

      • Jonathan,

        I mentioned the SR because I was really surprised to see this, and the behaviour appears to be “stupid” at first sight. I did not mean it literally :D

        thx for this fun example!

        Comment by Jan-Marten Spit — April 1, 2012 @ 11:39 am BST Apr 1,2012 | Reply

        • Jan-Marten,

          I’ve just done a quick search on MOS and found note 238519.1, dated August 2004: “Updating a Row with More Than 255 Columns Causes Row Chaining [ID 238519.1]” One of the suggested fixes is to ensure that the last column of the row is populated on insert. It’s easy to make a good case for not doing this, and it probably won’t stop all sorts of odd chaining and migration effects relating to lots of null columns being updated to non-null, but it might stop some of the extreme cases like my 746 block chain.

          Comment by Jonathan Lewis — April 1, 2012 @ 7:35 pm BST Apr 1,2012
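
          (In practice the suggested fix comes down to a sketch like this – give the last column a dummy value at insert time so that all the row pieces exist from the start:)

          insert into t1 (col000, col999) values (1, 0);
          commit;

          rem  all four row pieces are now created up front, so the later
          rem  column-by-column updates shouldn't extend the chain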

  15. If you consider a table with 1000 cols which is the result of a sequence of DDL and DML, it can have at least 10k row pieces (I haven’t tested for more yet). Moreover, it is sufficient to have only 1 column in the table.

    Comment by Valentin Nikotin — April 1, 2012 @ 4:23 am BST Apr 1,2012 | Reply

    • Valentin,

      Are you thinking of a table where every column is declared as number(1) (as per the original specification), or a generic table ?

      Given that in the more generic case a row can hold a long column, which can be up to 2GB (or has it gone up to 4GB recently?), a single column can span a little over 262,000 blocks even with an 8KB block size – and over a million with a 2KB block size.

      If you’ve found a way of getting a row with a single number(1) column to span 10,000 blocks I’d be interested to see it.

      Comment by Jonathan Lewis — April 1, 2012 @ 11:05 am BST Apr 1,2012 | Reply

      • My test was:

        drop table t2 purge;
        create table t2 (v0 varchar2(4000));
        insert into t2 values(lpad(1,4000));
        declare
          j number := 1e4;
        begin
          for i in 1 .. j loop
            execute immediate 'alter table t2 add v'||i||' varchar(4000)';
            execute immediate 'update t2 set v'||i||' = lpad(1,4000)';
            execute immediate 'alter table t2 drop column v'||(i-1);
          end loop;
          execute immediate 'alter table t2 add n number(1)';
          execute immediate 'alter table t2 drop column v'||j;
          execute immediate 'update t2 set n = 1';
        end;
        /
        desc t2
        select blocks from user_segments where segment_name = 'T2';
        begin
          for rec in 
          (
            select file_id fn, block_id minbn, block_id + blocks  - 1 maxbn
            from dba_extents 
            where owner = user and segment_name = 'T2'
          ) loop
            execute immediate 'alter system dump datafile '||rec.fn||' block min '||rec.minbn||' block max '||rec.maxbn;
          end loop;
        end;
        /
        set autot on
        select * from t2;
        

        As a result I had a table with 1 row and a column that had 10k pieces.
        When I checked it on 11.2.0.3 there was only one piece, i.e. Oracle fixed this.
        Here is the result for 11.2.0.1:

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
        With the Partitioning, OLAP and Data Mining options
        
        SQL> create table t2 (v0 varchar2(4000));
        
        Table created.
        
        SQL> insert into t2 values(lpad(1,4000));
        
        1 row created.
        
        SQL> declare
          2    j number := 1e4;
          3  begin
          4    for i in 1 .. j loop
          5      execute immediate 'alter table t2 add v'||i||' varchar(4000)';
          6      execute immediate 'update t2 set v'||i||' = lpad(1,4000)';
          7      execute immediate 'alter table t2 drop column v'||(i-1);
          8    end loop;
          9    execute immediate 'alter table t2 add n number(1)';
         10    execute immediate 'alter table t2 drop column v'||j;
         11    execute immediate 'update t2 set n = 1';
         12  end;
         13  /
        
        PL/SQL procedure successfully completed.
        
        SQL> desc t2
         Name  Null?  Type
         ----- ------ ---------
         N            NUMBER(1)
        
        SQL> select blocks from user_segments where segment_name = 'T2';
        
            BLOCKS
        ----------
               120
        
        SQL> begin
          2    for rec in
          3    (
          4      select file_id fn, block_id minbn, block_id + blocks  - 1 maxbn
          5      from dba_extents
          6      where owner = user and segment_name = 'T2'
          7    ) loop
          8      execute immediate 'alter system dump datafile '||rec.fn||' block min '||rec.minbn||' block max '||rec.maxbn;
          9    end loop;
         10  end;
         11  /
        
        PL/SQL procedure successfully completed.
        
        /***************************************************/
        block_row_dump:
        tab 0, row 0, @0x16d7
        tl: 9 fb: --H-F--- lb: 0x31  cc: 0
        nrid:  0x01002755.0
        tab 0, row 1, @0x16ce
        tl: 9 fb: -------- lb: 0x0  cc: 0
        nrid:  0x01002755.1
        
        ...
        
        tab 0, row 68, @0x169b
        tl: 9 fb: -------- lb: 0x0  cc: 0
        nrid:  0x01012ad5.44
        tab 0, row 69, @0x6ef
        tl: 6 fb: -----L-- lb: 0xa  cc: 1
        col  0: [ 2]  c1 02
        /***************************************************/
        
        SQL> set autot on
        SQL> select * from t2;
        
                 N
        ----------
                 1
        
        ...
        
        Statistics
        ----------------------------------------------------------
                  4  recursive calls
                  0  db block gets
              10181  consistent gets
                  0  physical reads
                  0  redo size
             100523  bytes sent via SQL*Net to client
                520  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed
        

        On 11.2.0.1, if I didn’t add the update statement for the last col I would get “ORA-03106: fatal two-task communication protocol error”, it also happens in the simpler situation (and I’ve seen similar bugs for 11.2.0.1 on MOS):

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
        With the Partitioning, OLAP and Data Mining options
        
        SQL> create table t1 (v0 varchar2(4000));
        
        Table created.
        
        SQL> insert into t1 values(lpad(1,4000));
        
        1 row created.
        
        SQL> alter table t1 add v1 varchar(4000);
        
        Table altered.
        
        SQL> update t1 set v1 = lpad(1,4000);
        
        1 row updated.
        
        SQL> alter table t1 drop column v0;
        
        Table altered.
        
        SQL> alter table t1 add n number(1);
        
        Table altered.
        
        SQL> alter table t1 drop column v1;
        
        Table altered.
        
        SQL> select * from t1;
        select * from t1
        *
        ERROR at line 1:
        ORA-03106: fatal two-task communication protocol error
        
        SQL> update t1 set n = 1;
        
        1 row updated.
        
        SQL> select * from t1;
        
                 N
        ----------
                 1
        

        Comment by Valentin Nikotin — April 1, 2012 @ 9:34 pm BST Apr 1,2012 | Reply

        • >When I checked it on 11.2.0.3 – there was only one piece, i.e. Oracle fixed this.
          It was db_block_size = 16K :-) But the case that raises ORA-03106 is probably fixed.

          Comment by Valentin Nikotin — April 1, 2012 @ 10:00 pm BST Apr 1,2012

        • Valentin,

          Very clever – and I like the way you’ve used the ‘collapse=”true”‘ on the results, it makes the whole thing tidier and easier to read.

          I ran your test on 11.2.0.3 and it still produced the 10,000 pieces – but I was using an 8KB block size rather than the 16KB you mentioned in your follow-up.

          I suspect, but haven’t tested yet, that you could reproduce the effect in the 16KB block size if you created the table with 4 populated columns (which would give you an immediate chain) then added, updated, and dropped 5 columns at a time, so that each update introduced a new chain and the drop had to apply to the previous block in the chain.

          Comment by Jonathan Lewis — April 2, 2012 @ 7:27 am BST Apr 2,2012

        • I ran your test on 10.2.0.5, 64-bit Linux; bs=8192 (2 node RAC, crs=11.2.0.3)

          112 blocks in segment;

          select * from t2;

          1 opened cursors cumulative
          3 user calls
          10113 session logical reads
          2 CPU used when call started
          2 CPU used by this session
          2 DB time
          1 global enqueue gets sync
          1 global enqueue releases
          10113 consistent gets
          10113 consistent gets from cache
          5 calls to get snapshot scn: kcmgss
          10109 no work - consistent read gets
          1 table scans (short tables)
          10095 table scan rows gotten
          109 table scan blocks gotten
          9999 table fetch continued row
          1 cursor authentications
          10000 buffer is not pinned count
          1 parse count (total)
          1 execute count
          508 bytes sent via SQL*Net to client
          273 bytes received via SQL*Net from client
          2 SQL*Net roundtrips to/from client

          grepping ^nrow= and adding up gives 10001 pieces.

          and on 11.2.0.3 64 bit linux, bs=8192 (2 node RAC, crs=11.2.0.3)

          112 blocks in segment;

          2 Requests to/from client
          1 opened cursors cumulative
          3 user calls
          10113 session logical reads
          2 CPU used when call started
          2 CPU used by this session
          2 DB time
          14 non-idle wait count
          2 global enqueue gets sync
          2 global enqueue releases
          10113 consistent gets
          10113 consistent gets from cache
          9933 consistent gets from cache (fastpath)
          82845696 logical read bytes from cache
          4 calls to kcmgcs
          1 calls to get snapshot scn: kcmgss
          10109 no work - consistent read gets
          1 table scans (short tables)
          1 table scan rows gotten
          109 table scan blocks gotten
          9999 table fetch continued row
          1 cursor authentications
          10109 buffer is not pinned count
          1 parse time cpu
          1 parse time elapsed
          1 parse count (total)
          1 execute count
          100523 bytes sent via SQL*Net to client
          298 bytes received via SQL*Net from client
          2 SQL*Net roundtrips to/from client

          so although it reports “1 table scan rows gotten”, if I grep ^nrow= on the 11.2.0.3 trace file I still see 894 pieces.
          Also, “table fetch continued row” is the same as on 10.2.0.5.

          So the fix seems to be just a change to the definition of “table scan rows gotten” – or am I missing a patch?

          Comment by Jan-Marten Spit — April 2, 2012 @ 12:41 pm BST Apr 2,2012

        • typo in the above;

          “so although it reports “1 table scan rows gotten”, if i grep ^nrow= on the 11.2.0.3 trace file, i still see 894 pieces.”

          should be

          “so although it reports “1 table scan rows gotten”, if i grep ^nrow= on the 11.2.0.3 trace file, i still see 8092 pieces.”

          Comment by Jan-Marten Spit — April 2, 2012 @ 12:46 pm BST Apr 2,2012

    • Valentin,
      Do you mean:

      “Moreover it is sufficient to have only 1 ROW in the table”
      ?

      Comment by Todor Botev — April 1, 2012 @ 2:55 pm BST Apr 1,2012 | Reply

  16. Reblogged this on Jithin's Oracle Tuning Corner and commented:
    Some interesting facts I never knew about rows in Oracle and how they are stored. In my quest to answer this question I learnt a lot about rows, blocks, how they are composed, and some Oracle anomalies.
    I’m glad that I looked at the question and got hooked. I have yet to reproduce and understand the final bit in this post, but will do it when I get a few hours’ time.

    Comment by Jithin Sarath — April 1, 2012 @ 7:43 pm BST Apr 1,2012 | Reply

  17. I’d like to know your thoughts on whether to trust the values of the columns chain_row_excess_total and chain_row_excess_delta (dba_hist_seg_stat, 11.2). How do they compare with the chained-row results from ANALYZE? Thanks very much.

    Comment by Johnny — July 19, 2013 @ 12:42 pm BST Jul 19,2013 | Reply

    • Johnny,

      I can’t give a good answer to that question since Oracle has made several changes in recent versions to the way that accesses to chained row pieces get counted – particularly when the chained rows are chained “intra-block” (i.e. multiple row pieces in the same block) or in pinned buffers.

      An important detail, though: the user_tables.chain_cnt column set by the analyze command counts the number of rows which are chained; the statistics about chained rows relate to the number of fetches of columns in the 2nd or later row pieces – and that’s what the dba_hist_seg_stat values are counting. You could have one chained row which is subject to millions of chained-row gets, or you could have millions of chained rows but never fetch the second row-piece of any of them.

      Comment by Jonathan Lewis — July 24, 2013 @ 12:56 pm BST Jul 24,2013 | Reply
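
      (A sketch of the distinction – chain_cnt comes from analyze and counts each chained row once; the session statistic counts every fetch that has to visit a 2nd or later row piece:)

      analyze table t1 compute statistics;

      select  chain_cnt
      from    user_tables
      where   table_name = 'T1'
      ;

      select  s.value
      from    v$mystat s, v$statname n
      where   n.statistic# = s.statistic#
      and     n.name = 'table fetch continued row'
      ;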

  18. […] If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each, plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment). […]

    Pingback by 255 columns | Oracle Scratchpad — February 19, 2015 @ 12:45 am GMT Feb 19,2015 | Reply

  19. […] trailing row-piece each time you populate a column past the current “highest” column.  In an extreme case I’ve managed to show an example of a single row consisting of 746 row pieces, each in a […]

    Pingback by Troubleshooting | Oracle Scratchpad — August 12, 2019 @ 5:02 pm BST Aug 12,2019 | Reply

  20. […] How many row-pieces (Mar 2012) – What’s the worst case scenario for a table with more th… […]

    Pingback by 255 column catalogue | Oracle Scratchpad — January 25, 2022 @ 12:17 pm GMT Jan 25,2022 | Reply

