Oracle Scratchpad

April 18, 2011

Consistent Reads

Filed under: Infrastructure,Oracle,Performance,Read Consistency,Troubleshooting,undo — Jonathan Lewis @ 11:08 am BST Apr 18,2011

Here’s a quick demo to make a point about consistent reads (prompted by a question on the Oracle-L mailing list):

SQL> drop table t1;

Table dropped.

SQL> create table t1 (n1 number);

Table created.

SQL> insert into t1  values(0);

1 row created.

SQL> begin
  2  for i in 1..1000 loop
  3  update t1 set n1 = i;
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

Note that I haven’t issued a commit in this session, and all I’ve got is a single row in the table (and because it’s my usual demo setup of locally managed tablespaces with uniform extents of 1MB using freelist management I know that that one row is in the first available block of the table).
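A quick sketch of one way to check that claim, using dbms_rowid (run it from the inserting session, since nothing has been committed yet):

select  dbms_rowid.rowid_relative_fno(rowid)    file#,
        dbms_rowid.rowid_block_number(rowid)    block#
from    t1
;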

How much work is a second session going to do to scan that table ?

SQL> alter system flush buffer_cache;
SQL> execute snap_my_stats.start_snap
SQL> select * from t1;
SQL> set serveroutput on size 1000000 format wrapped
SQL> execute snap_my_stats.end_snap

---------------------------------
Session stats - 18-Apr 11:33:01
Interval:-  2 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
session logical reads                                                      967
consistent gets                                                            967
consistent gets from cache                                                 967
consistent gets - examination                                              964
consistent changes                                                       1,001
CR blocks created                                                            1
data blocks consistent reads - undo records applied                      1,001
IMU CR rollbacks                                                            41

The snap_my_stats package is similar in concept to Tom Kyte’s “runstats” or Tanel Poder’s “snapper” program – it captures changes in the values of the dynamic performance views over short time periods. In this case I’ve deleted most of the output, leaving just a few of the larger changes and a couple of small ones.
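If you don’t have a snapshot package to hand, a rough substitute is easy to sketch. This assumes you have select access to v$statname and v$mystat and can create a small work table (my_stats_before is just a name invented for the example):

rem     capture a "before" image of this session's statistics

create table my_stats_before as
select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   ms.statistic# = sn.statistic#
;

rem     ... run the test query here (e.g. select * from t1) ...

rem     report the statistics that have changed since the snapshot

select  sn.name, ms.value - sb.value    delta
from    v$statname sn, v$mystat ms, my_stats_before sb
where   ms.statistic# = sn.statistic#
and     sb.name       = sn.name
and     ms.value     != sb.value
order by
        delta desc
;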

The figure that stands out (probably) is the “session logical reads” – we’ve done 967 logical I/Os to scan a table of just one block. The reason for this is that we’ve created a read-consistent copy of that one block (“CR blocks created” = 1), and it has taken a lot of work to create that copy. We’ve had to apply 1,001 undo records (“data blocks consistent reads – undo records applied” = 1001).

Most of those undo records come from individual accesses (which are of the cheaper “consistent gets – examination” type that only need a single get on the “cache buffers chains” latch) to undo blocks, following the “UBA (undo block address)” pointer in the relevant ITL entry of the table block, but since this is a 10g database the last few undo records come out of the “In-memory Undo” of the other session. Basically the cloning operation is something like this:

  1. Get current block
  2. Notice uncommitted transaction
  3. Clone current block – then ignore current block
  4. Get Undo block indicated by uncommitted ITL and apply undo change vector
  5. Repeat step four – 1,000 times
  6. Block is now clear of all uncommitted transactions
  7. There are no committed transactions with a commit SCN later than the start of query
  8. Display contents of block

It is an interesting point that as the first session created undo records it would pin and fill undo blocks – so it would only do a few current gets (one for each block) on the undo blocks it was using. As another process reverses out the changes in a CR clone it has to get and release each undo block every time it wants a single undo record … applying undo records introduces far more latch and buffer activity than the original generation of the undo did.
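If you want to see the ITL entries (with their xid and uba values) for yourself, one option is to dump the block to a trace file. A minimal sketch, assuming you can read the session trace files; the file and block numbers are placeholders you would replace with the values for your own table block:

rem     write the current block image to disc, then dump it from disc;
rem     the ITL entries appear near the top of the trace file
rem     (6 and 268 are placeholders - use your own file# and block#)

alter system checkpoint;
alter system dump datafile 6 block 268;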

Footnote

It’s worth knowing that there are three statistics relating to applying undo records:

transaction tables consistent reads - undo records applied        Estimating "old" commit SCNs during delayed block cleanout
data blocks consistent reads - undo records applied               Creating CR clones
rollback changes - undo records applied                           The work done by a real rollback

See this posting on comp.databases.oracle.server for a rough description of transaction table consistent reads; and this elderly posting highlighting a benefit of knowing about rollback changes. The text versions of the AWR report (and the Statspack report) show only the first 32 characters of the statistic name in the Instance Activity Stats, which means you don’t get to see the whole of the “undo records applied”; in fact, there will be two statistics reported as “transaction tables consistent re”, the larger will be the number of undo records applied, the smaller will be the number of transaction table blocks re-created.
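A minimal sketch for checking all three statistics for your own session (switch v$mystat to v$sysstat for instance-wide figures); the first column shows the 32-character truncation you would see in the text report:

select  substr(sn.name,1,32)    awr_label,
        sn.name,
        ms.value
from    v$statname sn, v$mystat ms
where   ms.statistic# = sn.statistic#
and     sn.name in (
                'transaction tables consistent reads - undo records applied',
                'data blocks consistent reads - undo records applied',
                'rollback changes - undo records applied'
        )
;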

Footnote 2

The second step in the list of actions is: “Notice uncommitted transaction”. It’s probably worth pointing out that another part of the ITL entry holds the transaction id (“xid”) which implicitly identifies the undo segment and the segment’s transaction table slot that was used to hold  the transaction state. The current contents of that slot allow Oracle to determine whether or not (and when, if necessary) the transaction was committed.
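As a small illustration (a sketch, not part of the original demo): the components of that xid - undo segment number, transaction table slot, and sequence (wrap) number - are visible for your own open transaction:

rem     run from the session holding the uncommitted change

select  t.xidusn, t.xidslot, t.xidsqn
from    v$transaction t, v$session s
where   s.taddr = t.addr
and     s.sid   = sys_context('userenv','sid')
;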

11 Comments »

  1. […] on from yesterday’s post on consistent reads, I thought I’d make the point that the way you work can make an enormous difference to the […]

    Pingback by More CR « Oracle Scratchpad — April 19, 2011 @ 6:35 pm BST Apr 19,2011 | Reply

  2. Very good article and I hope to re-read it in your next book. I have learned of another source of “consistent gets – examination”. The Oracle doc states: “subset of consistent gets. It is the number of consistent gets that could be immediately performed without pinning the buffer. Usually apply to indexes and require only one latch”. But here I learn that access to undo blocks in order to create consistent read blocks also increments this ‘examination’, so index block access is not the only source of it.

    Comment by Bernard Polarski — April 20, 2011 @ 7:08 am BST Apr 20,2011 | Reply

  3. Is there any way to capture PQ slave activity in the snap_my_stats procedure?

    I believe the PQ slaves get created at query parse time rather than at session creation time.

    Thanks
    Ananda

    Comment by Ananda — June 1, 2011 @ 8:03 am BST Jun 1,2011 | Reply

    • Ananda,

      Capturing PQ slave activity isn’t really possible – some of the slave numbers sum back to the co-ordinator, but some may be missing. You really need to run an extract against v$px_sesstat just before the query ends.
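      (For illustration, a minimal sketch of that sort of extract - run from a separate session while the parallel query is still executing; the statistic names are just examples:)

      select  sn.name, sum(ps.value)    slave_total
      from    v$px_sesstat ps, v$statname sn
      where   ps.statistic# = sn.statistic#
      and     sn.name in ('consistent gets', 'session logical reads')
      group by
              sn.name
      ;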

      PQ Slaves are allocated as the query starts to run – not at parse time; if parallel_min_servers is non-zero the processes can exist independently of any running query, and even if parallel_min_servers is zero there is an “idle time” before they die, so one query can pick up the slaves spawned by another.

      Comment by Jonathan Lewis — June 2, 2011 @ 8:48 pm BST Jun 2,2011 | Reply

  4. How do you tie the sysstat “data blocks consistent reads – undo records applied” (creating CR clones) back to the SQL that caused it?

    I have this issue and suspect one query is generating most of the ‘CR clones’ – but how do you make that join?

    Comment by John Fak — December 14, 2011 @ 4:49 am GMT Dec 14,2011 | Reply

    • John,

      I can’t think of any good way of doing this; although v$sql (and related views) report the number of buffer_gets against a child cursor, there’s nothing similar that allows you to identify the buffer gets that were for undo records, or the number of CR buffers created.

      There is event 10201 (trace undo applied for CR); if you enabled this and event 10046 for a session you could get a (very large) trace file that allowed you to count the number of undo applications preceding each execution or fetch of a statement.
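      (A minimal sketch of setting the two events for a single test session; levels and trace file handling are up to you:)

      rem     10201 - trace undo applied for CR;  10046 level 8 - SQL trace with waits

      alter session set events '10201 trace name context forever, level 1';
      alter session set events '10046 trace name context forever, level 8';

      rem     ... run the suspect statement ...

      alter session set events '10046 trace name context off';
      alter session set events '10201 trace name context off';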

      If you monitored all the sessions for a while, reporting the two stats (and the undos applied per CR created) every few minutes you might find a session responsible for a lot of the undo application – at which point you might be able to check v$open_cursor and v$sqlstats for the statements that the session was running.
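      (A sketch of the sort of periodic check that strategy needs; sessions doing a lot of undo application for CR clones float to the top:)

      select  ss.sid, sn.name, ss.value
      from    v$sesstat ss, v$statname sn
      where   ss.statistic# = sn.statistic#
      and     sn.name in (
                      'CR blocks created',
                      'data blocks consistent reads - undo records applied'
              )
      and     ss.value > 0
      order by
              ss.value desc
      ;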

      Neither strategy sounds particularly viable in a busy production system, though. The former will be more accurate, but introduce a lot of overhead to the critical statement – the latter is a bit hit and miss.

      Comment by Jonathan Lewis — December 14, 2011 @ 2:07 pm GMT Dec 14,2011 | Reply

  5. […] How Oracle rebuilds a consistent version of a data block that has been changed many times by open transactions (which shows up in the statistic data blocks consistent reads – undo records applied) is described excellently by Jonathan Lewis: […]

    Pingback by gc cr block busy when rebuilding a CR copy of a block (data blocks consistent reads – undo records applied) « Oracle mechanics — May 14, 2012 @ 9:38 pm BST May 14,2012 | Reply

  6. 2 CR blocks for 1 select operation.
    ———————————–

    Hello Jon,
    I was wondering if you could help me with this question. I have three terminals open – one as sysdba, and the other two as the SCOTT user (please see the SQL prompt names).
    —————————————————————————–

    In the first SCOTT session, I created a table, inserted a record, and committed.

    SQL_SCOTT_1> create table T1
      2  (
      3  C1 number
      4  );
    
    Table created.
    
    SQL_SCOTT_1> insert into T1 values (1);
    
    1 row created.
    
    SQL_SCOTT_1> commit;
    
    Commit complete.
    

    —————————————————————————–
    As sysdba, I got the file# and block# and also checked the current block.

    SQL_sysdba> select dbms_rowid.rowid_relative_fno(rowid) as file#, dbms_rowid.rowid_block_number(rowid) as block# from t1;
    
         FILE#     BLOCK#
    ---------- ----------
             6        268
    		 
    SQL_sysdba> select b.dbarfil, b.dbablk, b.class, 
    decode(state,0,'free',1,'xcur',2,'scur',3,'cr', 4,'read',5,'mrec',6,'irec',7,'write',8,'pi', 9,'memory',10,'mwrite',11,'donated') as state,cr_scn_bas,cr_uba_fil, cr_uba_blk
    from sys.x$bh b where dbarfil = 6 and dbablk = 268;
    
       DBARFIL     DBABLK      CLASS STATE   CR_SCN_BAS CR_UBA_FIL CR_UBA_BLK
    ---------- ---------- ---------- ------- ---------- ---------- ----------
             6        268          1 xcur             0          0          0
    
    1 row selected.
    

    —————————————————————————-

    Now, in the SCOTT session I updated the table and did NOT commit.

    SQL_SCOTT_1> update T1 set C1=2;
    
    1 row updated.
    

    —————————————————————————-

    And verified that a CR block was created.

    SQL_sysdba> /
    
       DBARFIL     DBABLK      CLASS STATE   CR_SCN_BAS CR_UBA_FIL CR_UBA_BLK
    ---------- ---------- ---------- ------- ---------- ---------- ----------
             6        268          1 xcur             0          0          0
             6        268          1 cr         3538422          0          0
    
    2 rows selected.
    ------------------------------------------------------------------------------
    
    Now, in the other SCOTT session, I selected the table.
    SQL_SCOTT_2> select * from T1;
    
            C1
    ----------
             1
    

    ——————————————————————————-

    Looked for CR blocks. The question is – why did it create 2 new CR blocks (SCN 3538449, 3538448) instead of one, though there was only one select?
    Subsequent selects created 1 CR block for each select; only the first select created 2. Please help me understand why!

    SQL_sysdba> /
    
       DBARFIL     DBABLK      CLASS STATE   CR_SCN_BAS CR_UBA_FIL CR_UBA_BLK
    ---------- ---------- ---------- ------- ---------- ---------- ----------
             6        268          1 cr         3538449          5       2270
             6        268          1 cr         3538448          5       2270
             6        268          1 xcur             0          0          0
             6        268          1 cr         3538422          0          0
    
    4 rows selected.
    

    Comment by Vikram — September 24, 2017 @ 9:30 am BST Sep 24,2017 | Reply

    • I think the first question I have to ask is why you haven’t bothered to get my name right.

      Then I’m going to go Socratic on you and ask you to work out what you haven’t done in your test and why you think there’s only one select and how you would test that claim.

      Comment by Jonathan Lewis — September 25, 2017 @ 6:49 pm BST Sep 25,2017 | Reply



Comments and related questions are welcome.
