Oracle Scratchpad

September 11, 2013

MV Refresh

Filed under: compression, Infrastructure, Oracle, Partitioning, Performance — Jonathan Lewis @ 8:47 pm BST Sep 11,2013

I have a fairly strong preference for choosing simple solutions over complex solutions, and using Oracle-supplied packages over writing custom code – provided the difference in cost (whether that’s in human effort, run-time resources or licence fees) is acceptable. Sometimes, though, the gap between simplicity and cost is so extreme that a hand-crafted solution is clearly the better choice. Here’s an idea prompted by a recent visit to a site that makes use of materialized views and also happens to be licensed for the partitioning option.

The client has one materialized view that is particularly large, but need only be refreshed once per day. Unfortunately the users may want to run queries while the MV is being refreshed, so a “complete refresh” using the “truncate / insert append” mechanism isn’t an option, and the call to dbms_mview.refresh results in a long and expensive (in terms of undo, redo and time) delete/insert cycle to replace all the data. Here’s a possible alternative that eliminates a huge amount of undo, redo and time, doesn’t seem to cost anything (beyond the - not so cheap - partitioning license), and doesn’t introduce complicated code.
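For reference, the refresh that causes the problem is just the standard complete refresh: with the default atomic_refresh => true the replacement is done as a delete followed by an insert, while setting atomic_refresh to false would switch to truncate plus insert append but leave the MV looking empty to any query that runs while the refresh is in progress. A sketch of the call, using the MV name from the example below:

begin
	dbms_mview.refresh(
		list           => 'MV1',
		method         => 'C',		-- complete refresh
		atomic_refresh => true		-- the default: delete + insert, hence the undo and redo
	);
end;
/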

In outline, you can do the following:

  • Create a materialized view on a prebuilt partitioned table with a single partition.
  • Create a simple clone of the materialized view table.
  • Every 24 hours (or any other time you want) truncate and re-populate the clone, then exchange it with the single partition.

Queries that are running against the partitioned table will follow the data segment as the exchange takes place, so continuity should be no problem. You can use the append and nologging options for the materialized view and its clone, eliminating lots of undo and redo; it’s even safe to compress the table, and the index will be the minimum size for the data. Here’s some sample code – first we drop and create the relevant objects – note the compress, pctfree and nologging options used on the segments relating to the materialized views:


drop materialized view mv1;
drop table mv1;
drop table mv_clone;
drop table t1;

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	mod(rownum-1, 1000)			n1,
	trunc(dbms_random.value(0,1000))	val,
	trunc(dbms_random.value(0,1000))	qty,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e4
;

create table mv1 (
	n1,
	sum_val,
	sum_qty,
	count_rows
)
compress
pctfree 0
partition by range (n1) (
	partition p_only values less than (maxvalue)
)
as
select
	n1,
	sum(val)	sum_val,
	sum(qty)	sum_qty,
	count(*)	count_rows
from
	t1
group by
	n1
;

create index mv1_i1 on mv1(n1) local pctfree 0;

create materialized view mv1
on prebuilt table
never refresh
enable query rewrite
as
select
	n1,
	sum(val)	sum_val,
	sum(qty)	sum_qty,
	count(*)	count_rows
from
	t1
group by
	n1
;

create table mv_clone
compress
pctfree 0
nologging
as
select	*
from	mv1
where	rownum < 1 ;

create index mvc_i1 on mv_clone(n1) nologging pctfree 0;

Now we can do a quick check to see that the materialized view will be used – but we have to allow “stale” data to let the optimizer use a prebuilt MV (this may prove to be a sticking point for some people).


alter session set query_rewrite_integrity = 'stale_tolerated';

set autotrace on explain;

select 	n1, count(*) ct, sum(qty)/count(*) my_avg
from 	t1
where 	n1 in (150)
group by 	n1
;

set autotrace off

Now we can insert some new data into the base table, and run through the procedure of refreshing the materialized view.


insert	into t1
select
 	n1 + 50, val + 50, qty + 50,
 	padding
from	t1
where	qty > 900
;

commit;

truncate table mv_clone reuse storage;

insert  /*+ append */ into mv_clone
select  /*+ norewrite */
        n1,
        sum(val)        sum_val,
        sum(qty)        sum_qty,
        count(*)        count_rows
from
        t1
group by
        n1
;

commit;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'MV_CLONE',
		method_opt 	 => 'for all columns size 1'
	);

end;
/

alter table mv1 exchange partition p_only with table mv_clone
including indexes
without validation
;

A couple of important details:

  • Setting “query_rewrite_integrity” to stale_tolerated allows the optimizer to use this pre-built MV for the rewrite. [Correction: see notes 2 and 7 below]
  • I’ve included the “reuse storage” option with the truncate call because I know the data will always reuse the same amount of space.
  • The “norewrite” hint on the SQL that refreshes the clone is necessary, otherwise Oracle would rewrite the query to use the (out of date) MV to populate the new copy of the MV.
  • The global (table-level) stats on the MV will be out of date after the exchange – you may want to address this, but they may be good enough. (In 12c you can set the preferences on the simple table to generate a synopsis so that, if you also keep synopses on the partitioned table, the global stats can be kept up to date – see the sketch below.)
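As a sketch of that 12c option – treat the details as something to check against your own version rather than a tested recipe – the preferences might be set like this:

begin
	-- keep synopses on the partitioned MV table so global stats can be
	-- maintained incrementally
	dbms_stats.set_table_prefs(user, 'MV1',      'INCREMENTAL',       'TRUE');

	-- 12c: make the simple clone build a table-level synopsis when its
	-- stats are gathered, so the synopsis travels with the segment on exchange
	dbms_stats.set_table_prefs(user, 'MV_CLONE', 'INCREMENTAL_LEVEL', 'TABLE');
end;
/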

It’s interesting to note that 12c has enhanced the call to refresh materialized views with an “out of place” option. This actually does roughly what I’ve done above – it creates a copy of the materialized view in a separate table (with a name like RV$nnnnn, where nnnnn is the hexadecimal representation of the object_id of the original materialized view) – and then swaps the object ids of the old and new versions of the materialized view. Unfortunately it doesn’t seem to use the truncate/append strategy, or create the new object nologging (though perhaps you could force the latter by choice of tablespace); most significantly, you can’t combine this feature with the “atomic” option to refresh multiple materialized views to the same SCN.
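For completeness, a sketch of the 12c call – note that atomic_refresh has to be false for the out_of_place option, which is why the multi-MV “refresh to the same SCN” behaviour isn’t available:

begin
	dbms_mview.refresh(
		list           => 'MV1',
		method         => 'C',
		atomic_refresh => false,	-- required when out_of_place is used
		out_of_place   => true		-- new in 12c
	);
end;
/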

As a general guideline – if you’re doing this kind of regular truncate/populate code (or drop/recreate) it’s a nice idea to use a separate tablespace for the objects which are constantly allocating and releasing space. Doing this probably won’t give you any performance benefits (though it’s possible that there may be some reduction in contention for space management blocks) but from a management perspective it’s nice to have a simple overview of the different characteristic behaviour of different types of object; it also gives you the ability to set up different storage management – uniform vs. autoallocate extents, ASSM vs. freelists.
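A sketch of that suggestion – the size, the uniform extent choice, and the OMF-style datafile clause are purely illustrative assumptions (supply an explicit file name if you’re not using OMF):

create tablespace mv_work
	datafile size 1g
	extent management local uniform size 8m
	segment space management auto
;

alter table mv_clone move tablespace mv_work nologging;

-- the move invalidates the clone's index, so rebuild it in the same tablespace
alter index mvc_i1 rebuild tablespace mv_work nologging pctfree 0;

In practice, of course, you would simply name the tablespace in the original create table and create index statements rather than moving the objects afterwards.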

13 Comments

  1. Jonathan,

    Thanks for sharing this problem and your interesting approach to solving it. A couple of questions, if you don’t mind:
    1. Is it safe to assume that the MV was not, or could not be, “fast refreshed”?
    2. I was wondering whether DBMS_REDEFINITION can be used in place of partitioning (and EXCHANGE PARTITION) in this approach. I suspect it may be more expensive (in terms of resource utilization) than the EXCHANGE PARTITION but partitioning is still an extra-cost option.

    Comment by Narendra — September 12, 2013 @ 10:08 pm BST Sep 12,2013

    • Narendra,

      1) Yes, I believe it wasn’t considered appropriate
      2) Without working through the details I can’t work out whether it would be possible to subvert the routines that create the replacement table – but it might be possible. In terms of work, though, it would be very similar because it’s “create table, insert/append” then modify the data dictionary. The main benefit would come, as you say, from not having to pay for the partitioning license.

      Comment by Jonathan Lewis — September 13, 2013 @ 7:51 pm BST Sep 13,2013

  2. Hi Jonathan,

    thanks for the interesting post. I want to add that you can use “ALTER MATERIALIZED VIEW mv1 CONSIDER FRESH” to be able to use query rewrite not only in stale_tolerated but also in trusted query_rewrite_integrity mode.

    Cheers Mathias

    Comment by Mathias Zarick — September 13, 2013 @ 3:49 pm BST Sep 13,2013

  3. Jonathan,

    your example already contains all the relevant information, but perhaps it’s worth noting that for use in a partition exchange the clone has to be a complete structural clone: different compression settings for an index, for example, may lead to “ORA-28665: table and partition must have same compression attribute” – so the tuning of segment design should be completed before the exchange is implemented (of course that’s always a good idea, I assume …).

    The article is a relief: some years ago I used a similar strategy (though not with a regular materialized view) to refresh dimension tables and was surprised that I could not find similar examples – and so I feared that there was a major flaw in the design that I hadn’t realized…

    Martin

    Comment by Martin Preiss — September 14, 2013 @ 12:01 am BST Sep 14,2013

    […] and, so that it doesn’t get forgotten, a link to Jonathan Lewis’s post MV Refresh, in which the maestro describes an economical (in terms of redo/undo) […]

    Pingback by 12c: on the benefits of Temporary Undo and the implementation of out-of-place materialized view refresh in 11g | Oracle mechanics — September 16, 2013 @ 9:24 pm BST Sep 16,2013

  5. Good idea. I had used the method of having two tables “behind” one synonym. Users query the synonym pointing to table A while table B is being refreshed. Once the B refresh is complete, the synonym switches from table A to table B. Obviously the refreshes can use TRUNCATE and APPEND – but not Query Rewrite, which is the advantage an MV has.
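    In outline the switch looks something like this – the object names are purely illustrative, borrowing t1 from the article:

    create table agg_a compress pctfree 0 nologging as
    select n1, sum(val) sum_val, sum(qty) sum_qty, count(*) count_rows from t1 group by n1;

    create table agg_b compress pctfree 0 nologging as
    select * from agg_a where rownum < 1;

    create or replace synonym agg_current for agg_a;

    -- refresh the idle copy ...
    truncate table agg_b reuse storage;

    insert /*+ append */ into agg_b
    select n1, sum(val) sum_val, sum(qty) sum_qty, count(*) count_rows from t1 group by n1;

    commit;

    -- ... then repoint the synonym so users pick up the new data
    create or replace synonym agg_current for agg_b;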

    Comment by Hemant K Chitale — September 18, 2013 @ 7:36 am BST Sep 18,2013

  6. Hi Jonathan,
    I can vaguely remember that the combination of truncate reuse storage with (parallel) insert append did cause problems, effectively not reusing the space and always adding new extents. Not sure if it was in Oracle 8 or 9.
    Tests show that in 11g this combination works fine – nice to know I can use it again :)
    Can you confirm?
    Thanks,
    Jaromir D.B. Nemec

    Comment by Jaromir D.B. Nemec — September 18, 2013 @ 8:31 am BST Sep 18,2013

    • Jaromir,

      I don’t recall seeing this bug – but if it was reported for 8 or 9 it’s likely that a report was raised and the bug fixed. There have been various anomalies with excess space usage on parallel insert/append, of course, because of the way that every parallel execution slave behaves as if it’s the sole creator of the data segment. Various people (including me) have written notes about that behaviour as it has evolved – I think Christian Antognini may be the most recent commentator on that issue.

      Comment by Jonathan Lewis — September 20, 2013 @ 9:30 am BST Sep 20,2013

  7. Hi Jonathan,
    just a brief correction:

    A Materialized View on Prebuilt Table does not need QUERY_REWRITE_INTEGRITY=stale_tolerated in general.
    See here for an example:

    SQL> create table t as select 1 as col1 from dual;
    
    Table created
    
    SQL> create table mv as select min(col1) as mincol1 from t;
    
    Table created.
    
    SQL>  create materialized view mv on prebuilt table enable query rewrite as  select min(col1) as mincol1 from t;
    
    Materialized view created.
    
    SQL> alter session set query_rewrite_integrity=enforced;
    
    Session altered.
    
    SQL> select /*+ rewrite_or_error */ min(col1) from t;
    select /*+ rewrite_or_error */ min(col1) from t
                                                  *
    ERROR at line 1:
    ORA-30393: a query block in the statement did not rewrite
    
    
    SQL>  alter session set query_rewrite_integrity=trusted;
    
    Session altered.
    
    SQL> select /*+ rewrite_or_error */ min(col1) from t;
    
     MIN(COL1)
    ----------
             1
    
    SQL> exec dbms_mview.refresh('MV','C')
    
    PL/SQL procedure successfully completed.
    
    SQL> alter session set query_rewrite_integrity=enforced;
    
    Session altered.
    
    SQL> select /*+ rewrite_or_error */ min(col1) from t;
    
     MIN(COL1)
    ----------
             1
    

    Kind regards
    Uwe

    Comment by Uwe Hesse — September 20, 2013 @ 11:44 am BST Sep 20,2013

    • Uwe,

      Thanks for the demonstration.

      Mathias made the same note in an earlier comment – but I was so distracted by his reminder of “consider fresh” for a specific view that I failed to make any comment about it.

      Comment by Jonathan Lewis — September 20, 2013 @ 1:51 pm BST Sep 20,2013

  8. Thank you for making my code look nice. I forgot to copy
    SQL> create table t as select 1 as col1 from dual;

    CONSIDER FRESH is also not generally needed for MVs on prebuilt tables. qed :-)

    Comment by Uwe Hesse — September 20, 2013 @ 2:03 pm BST Sep 20,2013

    • Uwe,

      I’ve added the “create table” to the original comment.
      I’ll have to add a comment at the end of the article summing up the two comments.

      Comment by Jonathan Lewis — September 20, 2013 @ 2:18 pm BST Sep 20,2013

