Oracle Scratchpad

November 6, 2013

12c In-memory

Filed under: 12c,compression,Indexing,Infrastructure,Oracle — Jonathan Lewis @ 6:53 pm GMT Nov 6,2013

I wrote a note about the 12c “In-Memory” option some time ago on the OTN Database forum and thought I’d posted a link to it from the blog. If I have I can’t find it now so, to avoid losing it, here’s a copy of the comments I made:

Juan Loaiza’s presentation is probably available on the Oracle site by now, but in outline: the in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.
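None of the syntax was public when this note was written, but for reference the interface that eventually shipped with the option in 12.1.0.2 looks like the sketch below (the table and column names are invented for illustration):

```sql
-- Reserve a dedicated area of the SGA for the column store.
-- This is a static parameter, so it takes effect on the next restart.
alter system set inmemory_size = 2G scope = spfile;

-- Nominate a table for the columnar copy, excluding a column that
-- the DSS queries don't need. The row-store copy on disc is unchanged.
alter table sales
  inmemory memcompress for query low
  no inmemory (order_notes);
```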

The intent is to help systems which are mixed OLTP and DSS – which sometimes have many “extra” indexes to optimise DSS queries that affect the performance of the OLTP updates. With the in-memory columnar copy you should be able to drop many “DSS indexes”, thus improving OLTP response times – in effect the in-memory stuff behaves a bit like non-persistent bitmap indexing.
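Again using the 12.1.0.2 interface that eventually appeared (object names invented): before dropping a "DSS index" you would want to confirm that the background processes have finished populating the columnar copy, which the release exposes through v$im_segments:

```sql
-- Check that the columnar copy of the table has been populated
-- before relying on it to replace an index.
select segment_name, populate_status, bytes_not_populated
from   v$im_segments
where  segment_name = 'SALES';

-- Once satisfied, drop a reporting-only index to cut the
-- maintenance overhead on OLTP updates.
drop index sales_dss_ix;
```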

Updated 18th Oct 2013:

I’ve been reminded that (I think) the presentation also included some comments about the way the code takes advantage of “vector” (SIMD) instructions at the CPU level, allowing it to evaluate predicates on multiple rows (extracted from the column store, not the row store) simultaneously; this contributes to the very high rates of data scanning that Oracle Corp. claims.

The presentation from Juan Loaiza was still unavailable at the time of publishing this blog note (3rd Nov 2013). If it does become available as part of the Open World set of presentations it should be at this URL.


  1. Hi Jonathan Lewis,

    any idea what the compression ratio is on the in-memory tables? The rumour is that compression is about 3 to 4x, not 10x as in Oracle Exadata HCC.

    Comment by Amir — January 4, 2014 @ 7:23 am GMT Jan 4,2014 | Reply

  2. Amir,

    I’ve signed a non-disclosure agreement, so I can’t tell you anything that isn’t already in the public domain.
    If you’ve heard a rumor about 3 to 4x, it’s possible that it came from someone who saw Juan Loaiza’s presentation – which, unfortunately, is still not available at the URL I’ve quoted.

    Comment by Jonathan Lewis — January 5, 2014 @ 10:22 am GMT Jan 5,2014 | Reply

  3. Hi Jonathan,
    Larry has created quite a buzz with his announcement of the “in-memory option”. It seems to me that this feature is still under development: technical information on how it operates and how to implement it is missing from the 12.1 documentation. Making matters worse, Oracle has put a 2009 IMDB Cache white paper on its site where it describes this option. Here is a link to that. As per your description, its being a component of the SGA means that it is quite different from TimesTen DB or its IMDB Cache.

    Can you give a hint about when it is coming or, if it is already there, how to use it?

    Comment by Gopesh Sharma — January 21, 2014 @ 9:46 pm GMT Jan 21,2014 | Reply

Comments and related questions are welcome.
