It’s always the combinations that catch you out.
Bigfile tablespaces have their uses – especially in big systems
Materialized views have their uses – especially in big systems
There’s absolutely no reason why the two technologies should interfere with each other … until you find a bug !
Here’s an example, stripped to the bare minimum, that does a couple of things I personally don’t like doing; tested on 184.108.40.206:
```sql
drop materialized view t1_mv;
drop table t1;

create table t1 (
        id      number constraint t1_pk primary key,
        n1      number,
        n2      number
)
tablespace bigfile_ts
;

create materialized view log on t1
with rowid              -- ugh!
;

create materialized view t1_mv
refresh fast on demand
with rowid              -- ugh!
enable query rewrite
as
select  id, n1
from    t1
where   n2 is not null
;

-- Another ugh coming up !

insert into t1 values(1, null, null);
update t1 set n1 = 2, n2 = 1 where id = 1;
commit;

begin
        dbms_mview.refresh(
                list           => 'T1_MV',
                method         => 'F',
                atomic_refresh => true
        );
end;
/
```
Things I don’t like:
- Doing anything that is strongly dependent on rowid – I’d rather do my materialized view stuff by primary key … but, as Mick Jagger once told us: “You can’t always get what you want”.
- Processes that insert an “empty” row and then update it – it’s very inefficient, generates excessive undo and redo, and often leads to row migration
Most significantly I don’t like operations that have worked for ages suddenly crashing when someone adds a new piece of code to the system.
If you run the fragment above, the refresh will fail with the following stack of errors:
```
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-00001: unique constraint (TEST_USER.I_SNAP$_T1_MV) violated
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2809
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3025
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2994
ORA-06512: at line 2
```
When you track down the unique constraint you find it’s based on a hidden column (M_ROW$$) of type rowid that has been added to the materialized view to emulate a primary key. And, for some reason, if your base table is in a bigfile tablespace and you insert a row and then update it before you’ve run a refresh, you can’t do a fast refresh again until you clean up the mess (e.g. with a complete refresh).
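If you want to confirm this for yourself, a couple of simple data dictionary queries (a sketch, assuming you’re connected as the schema that owns the objects) will show the hidden column and the index that implements the constraint:

```sql
select  column_name, data_type, hidden_column
from    user_tab_cols
where   table_name = 'T1_MV'
;

select  index_name, column_name
from    user_ind_columns
where   index_name = 'I_SNAP$_T1_MV'
;
```

The first query should report M_ROW$$ as a hidden column of type ROWID, and the second should show it as the only column in the I_SNAP$_T1_MV index.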
Note – if you complete all your inserts and then refresh, you can update a row as many times as you like without causing subsequent refresh problems. It looks as if it’s just the sequence “insert it, update it, refresh” (or “insert it, delete it, refresh”) that has a problem, because the only way to hit the issue is to get a correct rowid (from an update or delete) and an incorrect rowid (from an insert) for the same row in the log at the same time.
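In other words, a variation on the demo with a refresh between the insert and the update (a sketch of the same activity) completes without error:

```sql
insert into t1 values(1, null, null);
commit;

begin
        dbms_mview.refresh(list => 'T1_MV', method => 'F');
end;
/

update t1 set n1 = 2, n2 = 1 where id = 1;
commit;

-- this second fast refresh succeeds: the log now holds only the update
begin
        dbms_mview.refresh(list => 'T1_MV', method => 'F');
end;
/
```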
Workaround: if you move the base table to a smallfile tablespace “you just might find you get what you need” (MJ).
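As a sketch of that workaround (smallfile_ts is a hypothetical smallfile tablespace name – and remember that moving the table changes every rowid, so you have to rebuild any indexes on it and do a complete refresh of the rowid-based materialized view anyway):

```sql
alter table t1 move tablespace smallfile_ts;
alter index t1_pk rebuild;

-- the old rowids in the MV are now meaningless, so refresh completely
begin
        dbms_mview.refresh(list => 'T1_MV', method => 'C');
end;
/
```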
This shows up on MoS as bug 17603987, fixed in 12.2.
If you want to dig in a little bit more, you can start by comparing the contents of t1 and mlog$_t1 (the materialized view log table). Critically:
```
SQL> select rowid from t1;

ROWID
------------------
AAAXN2AAAAAAAJEAAA

1 row selected.

SQL> select m_row$$, dmltype$$ from mlog$_t1;

M_ROW$$                          D
-------------------------------- -
AAAXN2AQAAAAAJEAAA               I
AAAXN2AAAAAAAJEAAA               U

2 rows selected.

SQL> select rowidtochar(m_row$$) m_row$$, dmltype$$ from mlog$_t1;

M_ROW$$                          D
-------------------------------- -
AAAXN2AAAAAAAJEAAA               I
AAAXN2AAAAAAAJEAAA               U

2 rows selected.
```
If you compare (very carefully – it’s easy to miss) the rowid from t1 with the m_row$$ column of mlog$_t1 (which should hold the rowids of rows from t1), you’ll notice that the insert and the update have been given different values for the same source row. Somehow the logging code for the insert has generated the wrong rowid value; however, when you apply rowidtochar() to the wrong value the return value is the right value. The merge statement that transfers modified data from the source table into the materialized view uses rowidtochar() to join the materialized view log back to the source table, with the result that the same rowid can be inserted twice – except the attempt results in the “duplicate key” error.
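If you want to see exactly where the two logged rowids differ, you can decompose them with the dbms_rowid package – a sketch, bearing in mind that rowid_relative_fno() and rowid_block_number() take a tablespace-type parameter that defaults to 'SMALLFILE':

```sql
select
        dmltype$$,
        dbms_rowid.rowid_object(m_row$$)                        object_id,
        dbms_rowid.rowid_relative_fno(m_row$$, 'BIGFILE')       rel_fno,
        dbms_rowid.rowid_block_number(m_row$$, 'BIGFILE')       block_no,
        dbms_rowid.rowid_row_number(m_row$$)                    row_no
from    mlog$_t1
;
```

Comparing the character forms above, the two values agree everywhere except in characters 7 to 9 – the portion that holds the relative file number in a smallfile rowid, and the portion that a bigfile tablespace (which has only one datafile) can treat differently as part of an enlarged block address.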