I have a fairly strong preference for choosing simple solutions over complex solutions, and using Oracle-supplied packages over writing custom code – provided the difference in cost (whether that’s in human effort, run-time resources or licence fees) is acceptable. Sometimes, though, the gap between simplicity and cost is so extreme that a hand-crafted solution is clearly the better choice. Here’s an idea prompted by a recent visit to a site that makes use of materialized views and also happens to be licensed for the partitioning option.
The client has one materialized view that is particularly large but needs to be refreshed only once per day. Unfortunately the users may want to run queries while the MV is being refreshed, so a “complete refresh” using the “truncate / insert append” mechanism isn’t viable, and the call to dbms_mview.refresh results in a long and expensive (in terms of undo, redo and time) delete/insert cycle to replace all the data. Here’s a possible alternative that eliminates a huge amount of undo, redo and time, doesn’t seem to cost anything (beyond the – not so cheap – partitioning licence), and doesn’t introduce complicated code.
In outline, you can do the following:
- Create a materialized view on a prebuilt partitioned table with a single partition.
- Create a simple clone of the materialized view table
- Every 24 hours (or at any other interval you want), truncate and re-populate the clone, then exchange it with the single partition
Queries that are already running against the partitioned table will follow the data segment as the exchange takes place, so continuity should be no problem. You can use the append and nologging options for the materialized view and its clone, eliminating lots of undo and redo; it’s even safe to compress the table, and the index will be at its minimum size. Here’s some sample code – first we drop and create the relevant objects – note the compress, pctfree and nologging options used on the segments relating to the materialized view:
drop materialized view mv1;
drop table mv1;
drop table mv_clone;
drop table t1;

create table t1
as
with generator as (
	select	--+ materialize
		rownum	id
	from	dual
	connect by
		level <= 1e4
)
select
	mod(rownum-1, 1000)			n1,
	trunc(dbms_random.value(0,1000))	val,
	trunc(dbms_random.value(0,1000))	qty,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e4
;

create table mv1 ( n1, sum_val, sum_qty, count_rows )
compress pctfree 0
partition by range (n1) (
	partition p_only values less than (maxvalue)
)
as
select
	n1, sum(val) sum_val, sum(qty) sum_qty, count(*) count_rows
from
	t1
group by
	n1
;

create index mv1_i1 on mv1(n1) local pctfree 0;

create materialized view mv1
on prebuilt table
never refresh
enable query rewrite
as
select
	n1, sum(val) sum_val, sum(qty) sum_qty, count(*) count_rows
from
	t1
group by
	n1
;

create table mv_clone
compress pctfree 0 nologging
as
select	*
from	mv1
where	rownum < 1
;

create index mvc_i1 on mv_clone(n1) nologging pctfree 0;
Now we can do a quick check to see that the materialized view will be used – but we have to allow “stale” data to let the optimizer use a prebuilt MV [Update: wrong – there are alternatives, see comments 2 and 7 below].
alter session set query_rewrite_integrity = 'stale_tolerated';

set autotrace on explain

select
	n1, count(*) ct, sum(qty)/count(*) my_avg
from	t1
where	n1 in (150)
group by
	n1
;

set autotrace off
Now we can insert some new data to the base table, and run through the procedure of refreshing the materialized view.
insert into t1
select
	n1 + 50, val + 50, qty + 50, padding
from	t1
where	qty > 900
;

commit;

truncate table mv_clone reuse storage;

insert	/*+ append */
into	mv_clone
select	/*+ no_rewrite */
	n1, sum(val) sum_val, sum(qty) sum_qty, count(*) count_rows
from	t1
group by
	n1
;

commit;

begin
	dbms_stats.gather_table_stats(
		ownname		=> user,
		tabname		=> 'MV_CLONE',
		method_opt	=> 'for all columns size 1'
	);
end;
/

alter table mv1
	exchange partition p_only with table mv_clone
	including indexes
	without validation
;
A couple of important details:
- I’ve included the “reuse storage” option with the truncate call because I know the re-populated data will always need the same amount of space, so there’s no point in releasing the extents only to re-allocate them on the next insert.
- The “no_rewrite” hint on the SQL that refreshes the clone is necessary; without it Oracle would rewrite the query to use the (out of date) MV to populate the new copy of the MV
- The global (table-level) stats on the MV will be out of date after the exchange – you may want to address this, but they may be good enough. (In 12c you can set the preferences on the simple table to generate a synopsis so that if you also keep synopses on the partitioned table the global stats can be kept up to date.)
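If you want to take advantage of that 12c feature, the preference settings might look something like the following sketch. The preference names are the documented dbms_stats ones; the INCREMENTAL_LEVEL value of ‘TABLE’ is what tells Oracle to create a synopsis when gathering stats on the non-partitioned clone:

```sql
-- A sketch of the 12c preference settings: ask for incremental
-- (synopsis-based) global stats on the partitioned MV table, and for a
-- synopsis on the non-partitioned clone, so the exchange can keep the
-- global stats fresh.
begin
	dbms_stats.set_table_prefs(user, 'MV1',      'INCREMENTAL',       'TRUE');
	dbms_stats.set_table_prefs(user, 'MV_CLONE', 'INCREMENTAL_LEVEL', 'TABLE');
end;
/
```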
It’s interesting to note that 12c has enhanced the call to refresh materialized views with an “out of place” option. This does roughly what I’ve done above – it creates a copy of the materialized view in a separate table (with a name like RV$nnnnn, where nnnnn is the hexadecimal representation of the object_id of the original materialized view) and then swaps the object_ids of the old and new versions of the materialized view. Unfortunately it doesn’t seem to use an append strategy (though if you have indexes on the materialized view and set them to nologging they are recreated – after the new table has been populated – with the nologging option). Most significantly, though, you can’t combine this feature with the “atomic” option to refresh multiple materialized views to the same SCN.
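For reference, a call using the 12c out-of-place mechanism on a conventionally refreshable MV (not the “never refresh” prebuilt MV in the demo above) might look like this sketch – the parameter names are the documented ones for dbms_mview.refresh in 12c:

```sql
-- A sketch of a 12c out-of-place complete refresh; note that
-- atomic_refresh has to be false when out_of_place is true.
begin
	dbms_mview.refresh(
		list           => 'MV1',
		method         => 'C',		-- complete refresh
		atomic_refresh => false,
		out_of_place   => true
	);
end;
/
```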
As a general guideline – if you’re doing this kind of regular truncate/populate (or drop/recreate) processing it’s a nice idea to use a separate tablespace for the objects that are constantly allocating and releasing space. This probably won’t give you any performance benefit (though there may be some reduction in contention for space management blocks), but from a management perspective it’s nice to have a simple overview of the characteristic behaviour of different types of object; it also gives you the ability to set up different storage management – uniform vs. autoallocate extents, ASSM vs. freelists.
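By way of illustration, a dedicated tablespace for the clone and its index might be created along these lines – the tablespace name, file size and extent size here are pure invention, of course:

```sql
-- A sketch of a dedicated tablespace for the constantly truncated and
-- re-populated segments; name, datafile size and extent size are
-- invented and should be chosen to suit the actual data volume.
create tablespace mv_work
	datafile size 512m autoextend on next 128m
	extent management local uniform size 8m
	segment space management auto
;

alter table mv_clone move tablespace mv_work;
alter index mvc_i1 rebuild tablespace mv_work;
```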