Oracle Scratchpad

August 27, 2014

In-memory Consistency

Filed under: 12c,in-memory,Oracle — Jonathan Lewis @ 7:00 pm GMT Aug 27,2014

A comment on one of my early blogs about the 12c in-memory database option asked how Oracle would deal with read-consistency. I came up with a couple of comments outlining the sort of thing I would look for in a solution, and this note is an outline of how I started to tackle the question, together with a couple of subsequent observations. The data is (nearly) the same as the data I generated for my previous article on the in-memory database (and I’m running 12.1.0.2, of course):


create table t1 nologging
as
select  *
from    all_objects
where   rownum <= 50000
;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

begin
        dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

alter table t1
        inmemory priority high memcompress for query low
        inmemory memcompress for query high (object_type)
;

In this case I’ve made the inmemory priority high and I haven’t set any column to “no inmemory” although I have made one column different from the rest (v$im_column_level doesn’t get populated unless there is some variation across columns). I have to say I couldn’t get very consistent behaviour in terms of when the data finally got into memory with this table creation – possibly something to do with using “alter table” rather than “create table” – but a second “alter table t1 inmemory;” seemed to do the trick if Oracle was playing hard to get.
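To check that the data had finally arrived in the column store I queried v$im_segments. Here’s a minimal sketch of the sort of check I mean (assuming select on the view has been granted): I was waiting for a populate_status of COMPLETED and a bytes_not_populated of zero.

select
        segment_name,
        populate_status,
        inmemory_size,
        bytes_not_populated
from
        v$im_segments
where
        segment_name = 'T1'
;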

Once I’d checked that the table was in memory I collected performance figures from v$mystat and v$session_event for the following query:


select
        /* Test Run */
        last_ddl_time
from
        t1
where   t1.created > trunc(sysdate)
and     t1.object_type = 'TABLE'
and     t1.subobject_name is not null
;

Once I was satisfied that the in-memory option was working correctly, I went through the following steps (sketched in SQL just after the list):

  • Session 1: set transaction read only;
  • Session 1: run the query and collect performance figures
  • Session 2: do several small, committed, updates, modifying a total of 30 or 40 random rows
  • Session 2: flush the buffer cache – so that we can see future block acquisition
  • Session 1: re-run the query and collect performance figures – compare and contrast
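The sketch below shows roughly what those steps look like in SQL. The updates I actually ran were a handful of small random changes rather than the single statement shown here, and the predicate is purely hypothetical:

-- Session 1
set transaction read only;
-- run the test query, capture v$mystat and v$session_event

-- Session 2: repeat a few times with different (hypothetical) predicates
update t1 set object_name = object_name
where  object_id = 99
and    rownum <= 5;

commit;

alter system flush buffer_cache;

-- Session 1
-- re-run the test query and capture the figures again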

The effect of the “set transaction read only;” was to force the session to do some extra work in the second execution of the query to make the data read-consistent back to the start of the “transaction”. The results were as follows (don’t forget that some of the numbers will relate to the action of collecting the performance figures):


First execution
===============
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,680
DB time                                                                      1
non-idle wait count                                                          4
consistent gets                                                              3
consistent gets from cache                                                   3
consistent gets pin                                                          3
consistent gets pin (fastpath)                                               3
logical read bytes from cache                                           24,576
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
table scans (long tables)                                                    1
table scans (IM)                                                             1
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,677
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
SQL*Net message to client                             9           0           0.00        .000           0
SQL*Net message from client                           9           0           0.44        .049       8,408

Second Execution
================
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,728
DB time                                                                      1
non-idle wait count                                                         35
enqueue requests                                                             2
enqueue releases                                                             2
physical read total IO requests                                             29
physical read total multi block requests                                    24
physical read total bytes                                            6,987,776
cell physical IO interconnect bytes                                  6,987,776
consistent gets                                                             92
consistent gets from cache                                                  92
consistent gets pin                                                         44
consistent gets pin (fastpath)                                               5
consistent gets examination                                                 48
logical read bytes from cache                                          753,664
physical reads                                                             853
physical reads cache                                                       853
physical read IO requests                                                   29
physical read bytes                                                  6,987,776
consistent changes                                                          48
free buffer requested                                                      894
CR blocks created                                                           41
physical reads cache prefetch                                              824
physical reads prefetch warmup                                             713
shared hash latch upgrades - no wait                                        43
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
file io wait time                                                        3,861
data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41
table scans (long tables)                                                    1
table scans (IM)                                                             1
table scan rows gotten                                                   2,803
table scan blocks gotten                                                    41
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,636
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan rows cache                                                          48
IM scan blocks cache                                                        41
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150
bytes received via SQL*Net from client                                   1,772
SQL*Net roundtrips to/from client                                            4

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
Disk file operations I/O                              2           0           0.01        .003           0
db file sequential read                               5           0           0.01        .001           0
db file scattered read                               24           0           0.38        .016           0
SQL*Net message to client                            10           0           0.01        .001           0
SQL*Net message from client                          10           0           0.76        .076       8,408

There are quite a lot of stats which probably aren’t interesting – and there’s one important detail that doesn’t appear (at least not clearly): the table in question had about 6,800 blocks below its highwater mark.

So, what do the stats tell us? The most obvious change, of course, is that we had to do some physical reads to get a result set: 24 multiblock reads and 5 single block reads (the latter from the undo tablespace). This is echoed in the session stats as 853 “physical reads cache” from 29 “physical read IO requests”. We can then see the specific read-consistency work (in two ways – with a third close approximation):

consistent changes                                                          48
CR blocks created                                                           41

data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41

IM scan rows cache                                                          48
IM scan blocks cache                                                        41

We applied 48 undo change vectors to fix up 41 blocks to the correct point in time and used them to read 48 rows – the last pair of figures won’t necessarily match the first two pairs, but they do give us a measure of how much data we had to acquire from the cache when trying to do an in-memory scan.

The number 41 actually appears a couple more times: it’s “table scan blocks gotten” (which might seem a little odd since we got far more than 41 blocks by multiblock reads – but we only really wanted 41), and it’s also the change (downwards) in “session logical reads – IM”. Even when Oracle does a pure in-memory query it calculates the number of blocks it would have been reading and reports that number as “session logical reads” and “session logical reads – IM” – so there’s another way to get confused about buffer visits and another statistic to cross-check when you’re trying to work out how to calculate “the buffer hit ratio” ;)
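If you want to pull these particular figures out without a full snapshot, a query along the following lines against v$mystat and v$statname does the job. It’s just a sketch; in practice I use a snapshot package that reports the before/after difference rather than raw values:

select
        sn.name,
        ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     (   sn.name like 'IM scan%'
         or sn.name like 'session logical reads%'
         or sn.name in ('consistent changes', 'CR blocks created')
        )
and     ms.value != 0
order by
        sn.name
;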

After the first read the scattered reads all seemed to be 32 blocks of “intermittent” tablescan – perhaps this is a measure of the number of blocks that are compressed into a single in-memory chunk (for query low), but perhaps it’s a side effect of the “physical reads prefetch warmup” that Oracle may do when the cache has a lot of empty space. I’ll leave it as an exercise to the reader to refine the test (or think of a different test) to determine whether it’s the former or latter; it’s quite important to find this out because if Oracle is tracking change at the “in-memory chunk” rather than at the block level then a small amount of high-precision change to an in-memory table could result in a relatively large amount of “redundant” I/O as a long-running query tried to stay read-consistent.

August 24, 2014

In-memory Aggregation

Filed under: 12c,in-memory,Oracle,Performance — Jonathan Lewis @ 8:05 pm GMT Aug 24,2014

The title of this piece is the name given to a new feature in 12.1.0.2, and since I’ve recently blogged about a limitation of the in-memory option I thought I’d pick this feature as the next obvious thing to blog about. This is a bit of a non sequitur, though, as the feature seems to have nothing whatsoever to do with the in-memory option; instead it’s a cunning mechanism combining aspects of the star-transformation (but without the bitmap indexes), Bloom filters, and “group-by” placement to minimise the cost of aggregation over high-volume joins.

Here’s a small data set I’ll use to demonstrate the feature:

create table towns
as
select
        rownum                                          id,
        trunc(dbms_random.value(1,51))                  id_state,
        rpad(dbms_random.string('U',3),12)              name,
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        all_objects
where
        rownum <= 2000
;

alter table towns add constraint to_pk primary key(id);
create index to_i1 on towns(name);

create table people(
        id_town_work    number(6,0)     not null
                constraint pe_fk_wo references towns,
        id_town_home    number(6,0)     not null
                constraint pe_fk_ho references towns,
        dummy1          varchar2(10),
        dummy2          varchar2(10),
        padding         varchar2(110)
);

insert /*+ append */  into people
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(dbms_random.value(1,2001)),
        trunc(dbms_random.value(1,2001)),
        lpad(rownum,10),
        lpad(rownum,10),
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'Towns',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'People',
                method_opt       => 'for all columns size 1'
        );
end;
/

I have a “large” table of people, and people can live in one town and work in another. Towns are in states and I’m interested in a report about people who live in one specific state but work in another (e.g. New Hampshire vs. Massachusetts). There are a couple of “padding” columns to represent the data associated with each town and person that I might want in a report. To keep things simple I haven’t extended the query out to select the name of the state. Here’s the query I might use to get the report I want:

select
        wt.padding,
        ht.padding,
        max(pe.padding)
from
        towns   wt,
        towns   ht,
        people  pe
where
        wt.id_state     = 1
and     pe.id_town_work = wt.id
and     ht.id_state     = 2
and     pe.id_town_home = ht.id
group by
        wt.padding,
        ht.padding
;

You might expect something like the following as the execution plan:

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |    40 |  7600 |   179   (6)| 00:00:01 |
|   1 |  HASH GROUP BY       |        |    40 |  7600 |   179   (6)| 00:00:01 |
|*  2 |   HASH JOIN          |        |    40 |  7600 |   178   (6)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL | TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|*  4 |    HASH JOIN         |        |  2000 |   248K|   173   (6)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL| TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| PEOPLE |   100K|  6250K|   165   (4)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PE"."ID_TOWN_HOME"="HT"."ID")
   3 - filter("HT"."ID_STATE"=2)
   4 - access("PE"."ID_TOWN_WORK"="WT"."ID")
   5 - filter("WT"."ID_STATE"=1)

The order of operation (row source generation) is: 3,5,6,4,2,1 – we build a hash table from the towns in state 2; build a hash table from the towns in state 1; scan the people table and probe the state 1 hash table, any row that survives is used to probe the state 2 hash table, and the rows that survive the second probe are aggregated to produce the answer.

When you do this type of thing with very large data sets one of the potential performance threats comes from the volume of data you have to aggregate. As we’ve joined the three tables the row length grows significantly before we finally aggregate (admittedly my data set is small, and the number of rows we’re going to aggregate also appears to be very small according to the predictions). There’s also (in the early stages at least) the potential for passing a very large number of rows from the fact table through the first (and possibly subsequent) hash join, doing a lot of work to eliminate the rows you don’t need.

In 12c the optimizer can choose to minimise both these threat points using “vector transformation”. (The name may also reflect the possibility that the code path will take advantage of vector processing (SIMD) operations if they’re available in the CPU.) Here’s the execution path I got when I added the /*+ vector_transform(@sel$1) */ hint to my query – it’s not sensible for this tiny data set, of course, but the hint is a way of learning what Oracle can do:

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                           |    71 | 15975 |   184   (6)| 00:00:01 |

|   1 |  TEMP TABLE TRANSFORMATION    |                           |       |       |            |          |

|   2 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6661_31399B |       |       |            |          |
|   3 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED| :KV0000                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|   6 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6662_31399B |       |       |            |          |
|   7 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   8 |     KEY VECTOR CREATE BUFFERED| :KV0001                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  9 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|  10 |   HASH GROUP BY               |                           |    71 | 15975 |   172   (6)| 00:00:01 |
|* 11 |    HASH JOIN                  |                           |    71 | 15975 |   171   (5)| 00:00:01 |
|  12 |     TABLE ACCESS FULL         | SYS_TEMP_0FD9D6662_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|* 13 |     HASH JOIN                 |                           |    71 | 11218 |   169   (5)| 00:00:01 |
|  14 |      TABLE ACCESS FULL        | SYS_TEMP_0FD9D6661_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|  15 |      VIEW                     | VW_VT_C444E4CB            |    71 |  6461 |   167   (5)| 00:00:01 |
|  16 |       HASH GROUP BY           |                           |    71 |  5112 |   167   (5)| 00:00:01 |
|  17 |        KEY VECTOR USE         | :KV0000                   |    71 |  5112 |   167   (5)| 00:00:01 |
|  18 |         KEY VECTOR USE        | :KV0001                   |  2000 |   132K|   167   (5)| 00:00:01 |
|* 19 |          TABLE ACCESS FULL    | PEOPLE                    |   100K|  6250K|   165   (4)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("WT"."ID_STATE"=1)
   9 - filter("HT"."ID_STATE"=2)
  11 - access("ITEM_10"=INTERNAL_FUNCTION("C0") AND "ITEM_11"="C2")
  13 - access("ITEM_8"=INTERNAL_FUNCTION("C0") AND "ITEM_9"="C2")
  19 - filter(SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_HOME",:KV0001) AND
              SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_WORK",:KV0000))

There are three critical components to this plan: first, we create a couple of “Key Vectors” from the towns table, then we use those key vectors while scanning the people table and aggregate a minimal data set, finally we join back to the data associated with the key vectors. Reprising my introductory paragraph: the creation and use of the key vectors is similar to the Bloom filter approach; the final join-back is similar to the strategy used in Star Transformations (especially the ones where temp tables appear), and the key vector allows the high-volume fact data to be aggregated as much as possible before adding extra row-length from the dimensions.

In outline Oracle does the following:

  • scan the towns table to extract the id and padding columns for id_state = 1 / work town – this produced 50 rows with my data set
  • manipulate the result to extract the distinct values of padding, and give each value a unique numeric identifier – this is the information that goes into the temp table (with one extra column) – this produced 10 rows
  • manipulate the result again to produce an in-memory array of (town.id, temp_table.identifier) – this is the key vector, containing 50 elements.

The second temp table and key vector for (id_state = 2 / home town) will be created in the same way.
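Conceptually the content of the first temp table is close to what the following (purely illustrative) query would produce – one short numeric identifier for each distinct padding value – while the key vector is an in-memory mapping from each qualifying towns.id to one of those identifiers. This isn’t what Oracle actually runs internally, it’s just a picture of the data involved:

select
        dense_rank() over (order by padding)    padding_id,
        padding
from    (
        select  distinct padding
        from    towns
        where   id_state = 1
        )
;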

As the fact table is scanned Oracle can apply the key vectors very efficiently (we hope) to pick out the people rows that would be involved in the final aggregate and associate with each relevant row the two padding identifiers that belong to that row (this step is a bit like doing 2 hash joins – but presumably much more efficient; Bloom filtering does something very similar). After selecting the minimum number of rows we can aggregate them on the  two padding identifiers (an example of the “aggregate early”/”place group by” principle – aggregate before joining); finally we join back to the two temporary tables to translate the short padding identifiers into the long padding values (just as we do in star transformations with temporary table transformation).

Strangely we aggregate again after the join-back. I don’t think it’s necessary in this case because I’m fairly sure that the join back is on a unique set of columns – but perhaps this is a generic strategy allowing for variations in the mechanism, including such things as cases where the vector transform is only applied to a subset of the dimension tables.

Technically you could almost emulate this strategy in any version of Oracle (and I probably have at various times over the last few years) with the major limitation that the “KEY VECTOR USE” operations at lines 17 and 18 would have to be replaced with hash joins; no doubt, though, the major CPU saving of this approach is the difference between consecutive hash joins and what appears to be (from the execution stats) concurrent vector filtering. At some point – if a client needs the extra performance edge before they get to 12c – I’ll have to see if I can engineer an example in 11g that emulates the whole plan but uses Bloom filtering to approximate the key vector filtering.
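For anyone who wants to experiment, the shape of the manual rewrite I have in mind is something like the following. This is emphatically a sketch rather than a tested equivalent: the factored subqueries stand in for the temp tables, the aggregation of people on the two town ids stands in for the “aggregate early” step, and ordinary joins replace the key vector operations:

with wt as (
        select  id, padding
        from    towns
        where   id_state = 1
),
ht as (
        select  id, padding
        from    towns
        where   id_state = 2
),
agg as (
        select
                pe.id_town_work,
                pe.id_town_home,
                max(pe.padding)         max_padding
        from
                people  pe
        where
                pe.id_town_work in (select id from wt)
        and     pe.id_town_home in (select id from ht)
        group by
                pe.id_town_work,
                pe.id_town_home
)
select
        wt.padding,
        ht.padding,
        max(agg.max_padding)
from
        agg, wt, ht
where
        agg.id_town_work = wt.id
and     agg.id_town_home = ht.id
group by
        wt.padding,
        ht.padding
;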

 

August 15, 2014

In-memory limitation

Filed under: 12c,in-memory,Oracle — Jonathan Lewis @ 8:51 pm GMT Aug 15,2014

I’ve been struggling to find time to have any interaction with the Oracle community for the last couple of months – partly due to workload, partly due to family matters and (okay, I’ll admit it) I really did have a few days’ holiday this month. So making my comeback with a bang – here’s a quick comment about the 12.1.0.2 in-memory feature, and how it didn’t quite live up to my expectation; but it’s also a comment about assumptions, tests, and inventiveness.

One of the 12.1.0.2 manuals tells us that the optimizer can combine the in-memory columnar storage mechanism with the “traditional” row store mechanisms – unfortunately it turned out that this didn’t mean quite what I had hoped; I had expected too much of the first release. Here’s a quick demo of what doesn’t happen, what I wanted to happen, and how I made it happen, starting with a simple definition (note – this is running 12.1.0.2 and the inmemory_size parameter has been set to enable the feature):


create table t1 nologging
as
select	*
from	all_objects
where	rownum <= 50000
;

alter table t1 inmemory
no inmemory (object_id, object_name)
inmemory memcompress for query low (object_type)
-- all other columns implicitly inmemory default
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

rem
rem	Needs select on v$im_column_level granted
rem

select
	table_name,
	column_name,
	inmemory_compression
from
	v$im_column_level
where	owner = user
and	table_name = 'T1'
order by
	segment_column_id
;

explain plan for
select
	last_ddl_time, created
from
	t1
where	t1.created > trunc(sysdate)
and	t1.object_type = 'TABLE'
and	t1.subobject_name is not null
;

select * from table(dbms_xplan.display);

All I’ve done at this point is create a table with most of its columns in-memory and a couple excluded from the columnar store. This is modelling a table with a very large number of columns where most queries are targeted at a relatively small subset of the data; I don’t want to have to store EVERY column in-memory in order to get the benefit of the feature, so I’m prepared to trade lower memory usage in general against slower performance for some queries. The query against v$im_column_level shows me which columns are in-memory, and how they are stored. The call to explain plan and dbms_xplan then shows that a query involving only columns that are declared in-memory could take advantage of the feature. Here’s the resulting execution plan:

-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |     1 |    27 |    73   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| T1   |     1 |    27 |    73   (9)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - inmemory("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))
       filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

Note that the table access full includes the inmemory keyword; and the predicate section shows the predicates that have taken advantage of in-memory columns. The question is – what happens if I add the object_id column (which I’ve declared as no inmemory) to the select list.  Here’s the resulting plan:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    32 |  1818   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    32 |  1818   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T1"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1"."OBJECT_TYPE"='TABLE' AND "T1"."CREATED">TRUNC(SYSDATE@!))

There’s simply no sign of an in-memory strategy – it’s just a normal full tablescan (and I didn’t stop with execution plans, of course, I ran other tests with tracing, snapshots of dynamic performance views etc. to check what was actually happening at run-time).

In principle there’s no reason why Oracle couldn’t use the in-memory columns that appear in the where clause to determine the rowids of the rows that I need to select and then visit the rows by rowid but (at present) the optimizer doesn’t generate a plan to do that. There’s no reason, though, why we couldn’t try to manipulate the SQL to produce exactly that effect:


explain plan for
select
        /*+ no_eliminate_join(t1b) no_eliminate_join(t1a) */
        t1b.object_id, t1b.last_ddl_time, t1b.created
from
        t1 t1a, t1 t1b
where   t1a.created > trunc(sysdate)
and     t1a.object_type = 'TABLE'
and     t1a.subobject_name is not null
and     t1b.rowid = t1a.rowid
;

select * from table(dbms_xplan.display);

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    64 |    74   (9)| 00:00:01 |
|   1 |  NESTED LOOPS               |      |     1 |    64 |    74   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |     1 |    31 |    73   (9)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1   |     1 |    33 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))
       filter("T1A"."SUBOBJECT_NAME" IS NOT NULL AND
              "T1A"."OBJECT_TYPE"='TABLE' AND "T1A"."CREATED">TRUNC(SYSDATE@!))

I’ve joined the table to itself by rowid, hinting to stop the optimizer from getting too clever and eliminating the join. In the join I’ve ensured that one reference to the table can be met completely from the in-memory columns, isolating the no inmemory columns to the second reference to the table. It is significant that the in-memory tablescan is vastly lower in cost than the traditional tablescan – and there will be occasions when this difference (combined with the knowledge that the target is a relatively small number of rows) means that this is a very sensible strategy. Note – the hints I’ve used happen to be sufficient to demonstrate the method but I’d be much more thorough in a production system (possibly using an SQL baseline to fix the execution plan).
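For completeness, the sort of thing I mean by fixing the plan with a baseline is the call below. It’s only a sketch: the sql_id and plan_hash_value are placeholders you’d pick up from v$sql after running the hinted query in the normal way.

declare
        n_plans pls_integer;
begin
        n_plans := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&m_sql_id',         -- placeholder
                plan_hash_value => &m_plan_hash_value,   -- placeholder
                fixed           => 'YES'
        );
        dbms_output.put_line('Plans loaded: ' || n_plans);
end;
/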

Of course, this method is just another example of the “visit a table twice to improve the efficiency” strategy that I wrote about a long time ago; and it’s this particular variant of the strategy that allows you to think of the in-memory columnar option as an implementation of OLTP bitmap indexes.

August 3, 2014

Analogy – 2

Filed under: 12c,in-memory,Oracle — Jonathan Lewis @ 1:41 pm GMT Aug 3,2014

I suggested a little while ago that thinking about the new in-memory columnar store as a variation on the principle of bitmap indexes was quite a good idea. I’ve had a couple of emails since then asking me to expand on the idea because “it’s wrong” – I will follow that one up as soon as I can, but in the meantime here’s another angle for connecting old technology with new technology:

It is a feature of in-memory column storage that the default strategy is to store all columns in memory. But it’s quite likely that you’ve got some tables where a subset of the columns are frequently accessed and other columns are rarely accessed and it might seem a waste of resources to keep all the columns in memory just for the few occasional queries. So the feature allows you to de-select columns with the “no inmemory({list of columns})” option – it’s also possible to use different degrees of compression for different columns, of course, which adds another dimension to design and planning – but that’s a thought for another day.
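To give a flavour of the syntax, a declaration along the following lines mixes the options. The table and column names are hypothetical, and the particular compression choices are for illustration only, not a recommendation:

alter table big_table
        inmemory memcompress for query low                              -- table-level default
        inmemory memcompress for capacity high (audit_note, old_status) -- rarely queried columns
        no inmemory (huge_comment_col)                                  -- excluded from the column store
;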

So where else do you see an example of being selective about where you put columns? Index Organized Tables (IOTs) – where you can choose to put popular columns in the index (IOT_TOP) segment, and the rest in the overflow segment, knowing that this can give you good performance for critical queries, but less desirable performance for the less important or less frequent queries. IOTs allow you to specify the (typically short) list of columns you want “in” – it might be quite nice if the same were true for the in-memory option, I can imagine cases where I would want to include a small set of columns and exclude a very large number of them (for reasons that bring me back to the bitmap index analogy).
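For comparison, the IOT version of “choose your in columns” looks something like this – a generic sketch with hypothetical table and column names:

create table orders_iot (
        order_id        number(8,0)     not null,
        status          varchar2(10)    not null,
        order_date      date            not null,
        long_notes      varchar2(4000),
        constraint oi_pk primary key(order_id)
)
organization index
including order_date    -- columns up to and including this one stay in the index segment
overflow                -- the remaining columns go to the overflow segment
;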

 

July 27, 2014

Analogy

Filed under: 12c,in-memory,Oracle — Jonathan Lewis @ 8:02 am GMT Jul 27,2014

So 12.1.0.2 is out with a number of interesting new features, of which the most noisily touted is the “in-memory columnar storage” feature. As ever the key to making best use of a feature is to have an intuitive grasp of what it gives you, and it’s often the case that a good analogy helps you reach that level of understanding; so here’s the first thought I had about the feature during one of the briefing days run by Maria Colgan.

“In-memory columnar storage gives you bitmap indexes on OLTP systems without the usual disastrous locking side effects.”

Obviously the analogy isn’t perfect … but I think it’s very close:  for each column stored you use a compression technique to pack the values for a large number of rows into a very small space, and for each stored row you can derive the rowid by arithmetic.  In highly concurrent OLTP systems there’s still room for some contention as the session journals are applied to the globally stored compressed columns (but then, private redo introduces some effects of that sort anyway); and the “indexes” have to be created dynamically as tables are initially accessed (but that’s a startup cost, its timing can be controlled, and it’s basically limited to a tablescan).

Whatever technical hand-waving it introduces, thinking of the in-memory thing as enabling real-time bitmaps ought to help you think of ways to make good use of the feature.

 

 
