Oracle Scratchpad

December 9, 2020

Edition-based Redefinition

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 12:00 pm GMT Dec 9,2020

This note is a list of links to the presentations that Bryn Llewellyn gave to the Swedish Oracle Users Group in a full-day seminar on EBR in 2017. I’ve had it as a draft note on my blog for some time, but have just revisited it and thought that others might like to be reminded about it.

There are 8 videos ranging from about 40 to 60 minutes each, and the last link is for the full playlist.

4 Comments »

  1. Hi Jonathan,

    Thank you very much for this playlist! Frankly, up to now I’ve had several occasions when I was tempted to use edition-based redefinition in production, but there were always more experienced colleagues with a story of how it had given them a huge headache and how it would have been better to go the old way. So in every case I was able to find a window to take the system down and upgrade it, and doing a production upgrade through EBR is still on the wish list :)

    Thank you,
    Viacheslav

    Comment by Viacheslav Andzhich — December 9, 2020 @ 1:59 pm GMT Dec 9,2020 | Reply

  2. Hi,
    I am not sure I fully understood this feature, but I see it as a poor man’s version of schemas.
    I wish Oracle would simply support schemas in a future version: you would just create a new schema for the PL/SQL objects, views etc. instead of a new “edition” and change the search path, as sketched below.
    Oracle has a “create schema” command, but it’s just a way to group a few DDL commands; it does not create a real schema.
    Oracle introduced CDB/PDB in 12c, a feature that other major RDBMSs had introduced many years earlier. It’s now usable with Standard Edition 2 and it’s great. I’m sure they could introduce schemas. It’s probably more complicated with transactional DDL/DCL, another key feature for safe application upgrades, but at least support schemas.
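
    A rough sketch of the idea in PostgreSQL syntax, since Oracle has nothing equivalent today (the schema and object names are made up purely for illustration):

    create schema app_v2;                -- the 'new edition' is just a new schema
    create view app_v2.customer_v as     -- new versions of the views / PL objects go here
      select id, first_name, last_name from app.customer;
    set search_path = app_v2, app;       -- switch the session to the new version
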
    Best regards,
    Phil

    Comment by Phil Florent — December 16, 2020 @ 11:06 am GMT Dec 16,2020 | Reply

  3. I’ve been watching these videos carefully, and though there are some disturbing details, like the admission that for a long time Oracle was shaping the database to perform best in synthetic OLTP tests rather than in the real world, it’s a very deep and insightful review. However, my attention was caught by this particular thought experiment: https://youtu.be/SdYWZO1RQ70?t=1515

    Bryn suggests an experiment where we create and populate a table. Then in one session we update a row without committing. At the same time, in another session, we create an update trigger that populates the second column of the table with values from the first one, and run a simple “update t set pk = pk;” to fire the trigger for every existing row.

    As per Bryn, this would produce an outdated version of the row modified in the first session, because this has something to do with whether the SQL statement was compiled to include the trigger call or not. But that doesn’t seem right to me: until you issue a commit/rollback in the first session, its TX lock will not disappear, and when you run the update in the other session it will block on that particular row whether the trigger is there or not (unless there was a bug I’m not aware of).
    I ran a test on a 19.5 cloud-based instance, and it seems to prove my point – the session that runs the big update to fire the trigger waits for the second session to resolve its transaction. I left it in this state for a while to make sure there were no delayed effects. Eventually the second session died and the first one was able to continue its update, while the changes of the dead session were obviously rolled back. So it was impossible for me to get inconsistent results from the experiment.

    Am I missing something here, or did Bryn just miss some details?
    The code I tested with:

    /*First session*/
    drop table t purge;
    
    create table t (
      pk  int primary key,
      c1  varchar2(10),
      c2  varchar2(10)
    );
    
    insert into t (pk, c1)
    select rownum, chr(rownum+32)
    from dual connect by level <= 50;
    commit;
    
    /*Second session*/
    update t set c1 = c1 || 'Rabbit' where pk = 42;
    
    /*Back into the first session*/
    create or replace trigger t_trg
    before insert or update on t
    for each row
    begin
      :new.c2 := :new.c1;
    end t_trg;
    /
    
    -- this update blocks until the other session's transaction ends
    update t set pk = pk;
    commit;
    
    select * from t where pk = 42;
    

    Output from the first session:

    Table T dropped.
    
    Table T created.
    
    50 rows inserted.
    
    Commit complete.
    
    Trigger T_TRG compiled
    
    

    Next it waits…
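
    While it waits, a third session can confirm that it is just a row-lock wait – a minimal check against v$session (the sid values will vary, of course):

    select sid, event, blocking_session
    from   v$session
    where  event = 'enq: TX - row lock contention';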

    Thank you,
    Viacheslav

    Comment by Viacheslav Andzhich — December 22, 2020 @ 11:49 am GMT Dec 22,2020 | Reply

    • Viacheslav,

      Your argument is correct.

      I think the point that Bryn was trying to make was that the person doing the 100% update to fire the trigger might have expected it to copy c1 as it was when their update started, but that, thanks to the “write consistency rollback”, the update would actually have picked up the modified value. There’s no way that he would have forgotten that the big update would have to wait for the other session to commit, so I think he just left out the bit that clarifies which version of the data the update would copy.
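
      A minimal sketch of what I mean, reusing the table from your test and assuming the locking session commits instead of being killed (the values shown are what I’d expect, not verified output):

      /* Second session: commit instead of letting the session die */
      commit;

      /* First session: the blocked "update t set pk = pk;" resumes, and
         thanks to the write-consistency restart the trigger copies the
         committed value of c1 */
      select * from t where pk = 42;

      /*
              PK C1         C2
      ---------- ---------- ----------
              42 JRabbit    JRabbit
      */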

      Regards
      Jonathan Lewis

      Comment by Jonathan Lewis — December 24, 2020 @ 1:29 pm GMT Dec 24,2020 | Reply


Comments and related questions are welcome.
