Oracle Scratchpad

March 20, 2009

Hotsos 2009

Filed under: CBO,Oracle,Statistics — Jonathan Lewis @ 7:43 am GMT Mar 20,2009

So much for my belief that I’d have some quiet time for catching up with a little internet gossip while attending Hotsos 2009.

The days were busy and I crashed out at about 7:00 pm (local time) each evening, was asleep by 7:30 pm, and up by 2:00 am – then I spent the morning (until breakfast) writing up the notes I had taken the day before. So hardly a moment for blogging or answering questions on OTN.

Best topic of Hotsos (for me, at any rate – there were lots of very good presentations): Amit Poddar’s presentation on the new “approximate NDV” mechanism that Oracle 11g uses to produce a one-pass, accurate estimate of the number of distinct values in a column (no more massive sorts for “count(distinct)”), and how it manages to keep a “synopsis” for each partition of a partitioned table so that there is no need to sample the entire table when you need to recalculate table-level statistics.

Amit Poddar’s website is no longer online, but he has given me permission to publish the material, so here are the links to a pair of pdf files: the presentation (1.6MB), and the white paper (3.45MB).

The mathematics is brilliant, and I’m going to have to review my previous strategy for stats collection as a consequence.

The upside to starting the day at 2:00 am in Dallas, by the way – no jet lag when I got home!

Update Dec 2010:

Just in from an OTN thread, via Greg Rahn: there is a bug relating to synopses and approximate NDV that shows up with partitioned tables and incremental stats, leading to very long stats collection times. The bug number is 8310339, and Greg Rahn recommends applying the fix for bug 8719831.

Update Jan 2012:

A recent OTN note highlights a problem with the Approximate NDV code when collecting statistics on external tables. MOS notes 1290722.1 or 1305127.1 are relevant.

 

38 Comments

  1. Jonathan,

    I also attended that presentation and liked it. I would also like to thank you for your presentations and training class. This was my first Hotsos attendance and the first time I had heard you give a presentation. Excellent is the only word I can use to describe it. Your topics were enjoyable and have started me thinking in other directions when it comes to the CBO, hints and Oracle as a whole.

    Thank you,
    Mark

    Comment by Mark Davidson — March 20, 2009 @ 6:49 pm GMT Mar 20,2009 | Reply

    • Mark,

      Thanks for the comment – it’s nice to know you enjoyed the sessions, and I like to hear that I’ve helped people view things differently.

      I have to say that I don’t often feel that I’ve done a good job with my presentations, but I felt really pleased with that “Hints on Hints” one I did at Hotsos. I’ll be doing it again at Collaborate in one of the two-hour ‘Expert Session’ slots – so it will be interesting to see if it goes just as well the second time around.

      Comment by Jonathan Lewis — March 22, 2009 @ 1:05 am GMT Mar 22,2009 | Reply

  2. I didn’t attend Amit’s presentation as I was speaking in the other room at the same time. But I read his paper later on and he was very, very thorough in his work! Very good stuff. And yes, there are some very clever people working in the Oracle development team :)

    Comment by Tanel Poder — March 21, 2009 @ 4:29 pm GMT Mar 21,2009 | Reply

  3. I also liked his presentation

    Comment by David — March 21, 2009 @ 9:35 pm GMT Mar 21,2009 | Reply

  4. Hi Jonathan,

    I think you meant

    “and how it manages to keep a “synopsis” for each partition of a partitioned table so that there is no need to scan the entire table when you need to recalculate statistics for a single partition.”

    “and how it manages to keep a “synopsis” for each partition of a partitioned table so that there is no need to scan the entire table when you need to calculate global NDV.”

    For recalculating statistics for a single partition Oracle scans the partition again. Only the global NDV (statistics) is maintained incrementally (via synopses merging).
    Even for tables with subpartitions Oracle does not seem to extend this synopsis-merging algorithm to gather partition NDV by merging subpartition synopses, but a similar incremental maintenance of partition statistics from subpartition statistics can easily be implemented using the dbms_sqltune_internal.gather_sql_stats procedure.
    By the way, both the paper and presentation can be downloaded from http://www.oraclegeek.net [Ed: no longer available]

    Comment by Amit Poddar — March 21, 2009 @ 10:02 pm GMT Mar 21,2009 | Reply

    • Amit,

      You’re correct – the comment was supposed to be highlighting the fact that the global NDV could be derived efficiently after you had recalculated the statistics (and NDV in particular) for a single partition. [Now corrected]

      Thanks for supplying the link to the paper and presentation.

      (Sorry that you had some trouble getting your reply into the blog – for reasons that I can’t guess, your attempts kept getting marked as spam.)

      Comment by Jonathan Lewis — March 22, 2009 @ 1:09 am GMT Mar 22,2009 | Reply
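To make the “synopses merging” idea in the exchange above a little more concrete, here is a minimal Python sketch of how per-partition synopses might be combined into a table-level NDV without rescanning the whole table. It assumes the representation discussed further down this thread and in the paper – a capped set of retained 64-bit hash values plus a count of how many times the hash domain has been split. The function names are mine, the 16,384 cap is the figure quoted later in the thread, and none of this is Oracle’s actual code:

    HASH_BITS = 64
    SYNOPSIS_CAP = 16384          # the synopsis size quoted later in this thread

    def in_domain(h, level):
        # A synopsis at split level `level` only retains hashes whose top `level`
        # bits are all 1, i.e. the top 1/2**level slice of the 64-bit domain.
        return level == 0 or (h >> (HASH_BITS - level)) == (1 << level) - 1

    def merge_synopses(partition_synopses, cap=SYNOPSIS_CAP):
        """partition_synopses: list of (hash_set, split_level) pairs, one per partition."""
        level = max(lvl for _, lvl in partition_synopses)    # adopt the deepest split level
        merged = set()
        for hashes, _ in partition_synopses:
            merged |= {h for h in hashes if in_domain(h, level)}
        while len(merged) > cap:                             # split further if the union overflows
            level += 1
            merged = {h for h in merged if in_domain(h, level)}
        return merged, level

    def ndv_estimate(hashes, level):
        # Scale the surviving hash count back up by 2**level.
        return len(hashes) * (2 ** level)

The point Amit makes still stands, of course: this only gets you the global (table-level) NDV cheaply; refreshing the statistics for a changed partition still means rescanning that partition to rebuild its own synopsis.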

  5. Amit Poddar’s paper above is indeed a great piece of work, very detailed and scientifically designed.

    I’m especially enjoying the statistical discussion and proof about the estimation of NDV by sampling. It looks like, interestingly and perhaps not surprisingly, yet another variation of the good old “select without replacement” scenario; I hope that I will be able to find the time in the near future to compare this estimation algorithm with the similar ones already implemented by the CBO…

    Comment by Alberto Dell'Era — March 27, 2009 @ 10:38 am GMT Mar 27,2009 | Reply

  6. Jonathan,

    Is it your usual practice to write up the notes you took the day before?

    This is an interesting hint to me about how to spend time at conferences. I guess it has something to do with how our brains learn.

    Comment by Todor Botev — March 31, 2009 @ 9:25 am BST Mar 31,2009 | Reply

    • Todor,

      Yes. When I listen to a presentation I jot down all sorts of notes; some are about what’s been said, some about what might also be true and some about related topics that might overlap.

      Writing the notes up shortly afterward means (a) I can still read the writing (and remember what I was thinking about), and (b) I can distribute the notes across the various documents, references, and scripts that I maintain so that the information is in the right place. It’s also a time when I run a few of the tests that the presentation has suggested as further points of investigation.

      Comment by Jonathan Lewis — April 15, 2009 @ 9:58 am BST Apr 15,2009 | Reply

  7. Alberto,

    I finally managed to read the full paper last night. The way Oracle scales up a sampled NDV by inverting the mechanism you explained for the ‘select count(distinct)’ concept is brilliant and (like so much of what they do) so obvious AFTER it’s been explained well.

    Comment by Jonathan Lewis — April 18, 2009 @ 7:21 am BST Apr 18,2009 | Reply

  8. Jonathan,

    I’ve read the two articles re. NDV estimation. I surely must be missing something in the following situation:

    1. Let’s assume an ideal hash function without collisions
    2. Let’s assume that a column contains N+N/2 distinct values, where N is the hash table size.

    As I understand it from reading the algorithm description, after the hash table is full, half of the table is discarded, and on completion of the algorithm the HT size is multiplied by two to the power of the number of discards to obtain an NDV approximation.

    So, with n=N+N/2, after the first discard we have 2*(N/2), and after the algorithm completes we have 2*N (because the HT size is N), which is a 30% overestimate over 3N/2. What am I missing?

    Thanks.

    Comment by Val — April 21, 2009 @ 3:05 pm BST Apr 21,2009 | Reply

  9. Hi Val,

    When the synopsis is full Oracle splits the domain into two parts and only keeps the values that belong to the top half. Hash values that belong to the bottom half are rejected. Moreover, any new hash values that map to the lower half of the domain do not make it into the synopsis.

    Taking your example:

    Let’s say the synopsis size = N
    Number of distinct values = 3N/2

    Now we start reading column values and hashing them into our domain. When we have read N column values our synopsis is full.

    On reading the (N+1)th value we split our domain. Assuming a perfect hashing function we discard half the hash values, i.e. we have N/2 hash values in our synopsis.

    Now there are N/2 more values to be read. Assuming a perfect hashing function, out of these N/2 column values N/4 will map to the top half of the domain and N/4 will map to the bottom half. Since the domain has already been split once, only hash values that map to the top half are put in the synopsis, i.e. out of N/2 values only N/4 make it into the synopsis. So at the end of the scan we have

    N/2 + N/4 values = 3N/4 values in our synopsis

    so NDV = 3N/4 * 2**(number of splits) = 3N/4 * 2**1 = 3N/2

    Amit

    Comment by Amit Poddar — April 21, 2009 @ 10:53 pm BST Apr 21,2009 | Reply

  10. The point I think you are missing is that, after the first split, out of N/2 values only N/4 values will make it into the synopsis; half of them will map to the lower half of the domain, which is being rejected after the first split.

    Comment by Amit Poddar — April 22, 2009 @ 2:19 am BST Apr 22,2009 | Reply

  11. Amit,

    “out of N/2 values only N/4”

    Yes, that’s exactly what I missed when I took a look at the pseudo code. I should have read the white paper more carefully.

    So, essentially what happens is that ultimately (NDV – NDV/2^s) random unbiased bit hash strings are discarded where s is the number of arbitrarily chosen bits (they just choose a bit pattern starting with MSB=1, then MSB-1 = 1 and so on). If you could estimate s so that NDV/2^s fits in the synopsis, you could count in one pass without re-adjusting s.

    E.g. for a 10 million possibly unique values with a synopsis size of 16K you might have chosen to fix 18 bits (say 0xffc00000) to fit all the hashes.

    Comment by Val — April 22, 2009 @ 2:16 pm BST Apr 22,2009 | Reply
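Since the exchange between Val and Amit above spells the mechanism out quite precisely, here is a minimal Python sketch of the one-pass “approximate NDV” idea as I understand it from that discussion and the paper. The 64-bit hash and the 16,384-entry cap are the figures quoted in the thread; the hash function and everything else are illustrative stand-ins, not Oracle’s implementation:

    import hashlib

    HASH_BITS = 64
    SYNOPSIS_CAP = 16384               # synopsis size at which the domain is split

    def hash64(value):
        # Stand-in for Oracle's internal 64-bit hash; any uniform 64-bit hash will do here.
        return int.from_bytes(hashlib.sha256(repr(value).encode()).digest()[:8], "big")

    def approximate_ndv(values, cap=SYNOPSIS_CAP):
        level = 0                      # number of times the domain has been split
        synopsis = set()               # retained hash values

        def survives(h):
            # After `level` splits we only keep hashes in the top 1/2**level of the
            # domain, i.e. those whose top `level` bits are all 1.
            return level == 0 or (h >> (HASH_BITS - level)) == (1 << level) - 1

        for v in values:
            h = hash64(v)
            if not survives(h):
                continue               # maps to a slice of the domain already discarded
            synopsis.add(h)
            while len(synopsis) > cap: # synopsis full: split and drop the bottom half
                level += 1
                synopsis = {x for x in synopsis if survives(x)}

        return len(synopsis) * (2 ** level)   # scale the survivors back up

    # Amit's worked example: 3N/2 distinct values against a cap of N leaves roughly
    # 3N/4 survivors after one split, which scales back up to roughly 3N/2.
    print(approximate_ndv(range(3 * SYNOPSIS_CAP // 2)))

Val’s follow-up also drops out of this sketch: if you could pick a suitable split level up front you could apply the mask from the first row and never need to re-split.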

  12. Amit,

    Thanks for taking the time to respond to Val’s question.

    It occurred to me recently that not only does the strategy involve the “selection with replacement” formulae used for join selectivity, but it seems likely that the strategy for spilling partitions of the build table in a hash join also plays a part.

    It seems likely that when building the hash table, the hash results are handled in a way that allows Oracle to use the top N bits of the hash value to indicate the partition that a value belongs to when it has to overflow to disc (e.g. for 8 partitions, we track the top 3 bits as the partition id when spilling).

    In fact, a similar sort of trick probably comes into play when distributing data across a hash partitioned table – if the number of partitions isn’t a power of two and the partition you would use doesn’t exist you chop the top bits off the partition number until you get to a partition number that does exist.

    Comment by Jonathan Lewis — April 22, 2009 @ 3:28 pm BST Apr 22,2009 | Reply
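As a toy illustration of the last paragraph of the comment above (a sketch of the idea only, not Oracle internals): for a hash-partitioned table whose partition count is not a power of two, you can hash into the next power of two and, if the resulting bucket doesn’t exist, drop the top bit – a single chop is enough once you’ve only rounded up to the next power of two:

    def hash_partition(h, n_partitions):
        # Toy sketch of the "chop the top bit off" idea described above (not Oracle code).
        bits = max(1, (n_partitions - 1).bit_length())   # smallest power of two >= n_partitions
        bucket = h & ((1 << bits) - 1)                    # pick a bucket in 0 .. 2**bits - 1
        if bucket >= n_partitions:                        # that bucket doesn't exist ...
            bucket -= 1 << (bits - 1)                     # ... so clear the top bit instead
        return bucket

    # e.g. with 6 partitions we hash into 8 buckets; buckets 6 and 7 fold back to 2 and 3.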

  13. Hi Jonathan,

    The size of the hash table in the hash join case is variable, i.e. it completely depends on the data source being hashed. So it makes sense to follow a partition strategy, since the size is variable and the chances are high that something will spill to disk (or rather, Oracle does not know until runtime whether it has to spill or not).

    But in this case the hash table (synopsis) size is fixed at 16384 hash values. Moreover, each hash value is of a fixed size, i.e. 64 bits (8 bytes), so the hash table size is fixed at 16384 * 8 bytes (roughly 130KB).

    So the synopsis size (for a column) will be fixed at roughly 130KB (i.e. under no circumstances will the size change).
    That implies Oracle can easily keep this in memory, so I do not think that spilling to disk comes into the picture here.

    Yes, I think Oracle most probably keeps the synopsis as linked lists in memory, where each list corresponds to the number of leading bits set to 1, so that during a split it can easily find all the values that need to be discarded.

    For more hints on the physical implementation there is a white paper by P. B. Gibbons (as far as I can find, he holds the patent for this algorithm). In his paper he talks about some possible efficient physical implementations. You can search Google for (Gibbons + distinct sampling) to find the paper. You can also look for his patent at http://patentstorm.us

    amit

    Comment by Amit Poddar — April 22, 2009 @ 5:09 pm BST Apr 22,2009 | Reply

  14. Hi val,

    “If you could estimate s so that NDV/2^s fits in the synopsis, you could count in one pass without re-adjusting s. E.g. for a 10 million possibly unique values with a synopsis size of 16K you might have chosen to fix 18 bits (say 0xffc00000) to fit all the hashes”

    Yes, the domain size (number of bits in the hash value) can be anything. The only condition is that it should be big enough to avoid hash collisions.

    Oracle has chosen 64 bits for its hash values so that it can address 2^64 distinct hash values (theoretically) without collisions.

    amit

    Comment by Amit Poddar — April 22, 2009 @ 5:17 pm BST Apr 22,2009 | Reply

  15. “selection with replacement”

    Yes, Bernoulli trials are the basis of the scale-up of NDV in both cases (classic row sampling and distinct sampling). And when you perform N Bernoulli trials you are effectively doing selection with constant probability, i.e. each selection is independent of the others. So based on that definition, yes, it is effectively selection with replacement.

    amit

    Comment by Amit Poddar — April 22, 2009 @ 5:39 pm BST Apr 22,2009 | Reply

  16. “Oracle has chosen 64 bits for its hash values so that it can address 2^64 distinct hash values (theoretically) without collisions.”

    32 bits is not too bad. With a perfect hash and a million rows, the expected number of collisions is about a hundred. Of course, with 64 bits, the theoretical number is negligible.

    Comment by Val — April 23, 2009 @ 3:22 am BST Apr 23,2009 | Reply
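For reference, Val’s figure follows from the usual birthday-collision approximation: assuming a uniform 32-bit hash over n = 10^6 distinct values, the expected number of colliding pairs is roughly n(n-1)/(2*2^32) ≈ 5*10^11 / 4.3*10^9 ≈ 116.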

  17. […] – and you might argue that it’s better to make it a deliberate choice. Given the new ‘approximate NDV’ feature in 11.2 it might not be an expensive option to […]

    Pingback by I wish (2) « Oracle Scratchpad — June 18, 2010 @ 6:14 am BST Jun 18,2010 | Reply

  18. […] to note that space utilisation isn’t considered a threat in 11g when looking at the ‘synopsis’ approach of creating the ‘approximate NDV’ for columns. The difference may be due to the passage of time, of course, on the other hand the […]

    Pingback by Frequency Histogram 4 « Oracle Scratchpad — December 31, 2010 @ 3:38 pm GMT Dec 31,2010 | Reply

  19. Hi All,

    Is the paper by Amit Poddar available at any other site? The original site http://www.oraclegeek.net is having some issues.

    Regards,
    VS

    Comment by VS — July 4, 2011 @ 1:46 pm BST Jul 4,2011 | Reply

    • Jonathan,

      You can publish the paper on this site if you so desire; I am fine with it. I cannot maintain my own site due to personal reasons, so this would be the best public place for the paper to be published instead.

      amit

      Comment by Amit Poddar — December 14, 2011 @ 2:07 pm GMT Dec 14,2011 | Reply

  20. […] on the 11g Approximate NDV. There are links to a white paper and the presentation slides on my posting about Hotsos 2009 where I saw him give the […]

    Pingback by Approximate NDV « Oracle Scratchpad — December 14, 2011 @ 2:28 pm GMT Dec 14,2011 | Reply

  21. Thanks Jonathan ……

    … and Amit, of course! I mentioned during my SPM presentation that Amit Poddar’s excellent paper and presentation on One-Pass Distinct Sampling and Incremental Statistics had gone missing from the web, but Jonathan is now hosting the files on his ori…

    Trackback by Doug's Oracle Blog — December 15, 2011 @ 3:18 am GMT Dec 15,2011 | Reply

  22. […] ? In view of the room for error, it might be sensible to implement the counts only as part of the approximate NDV mechanism that only runs if you choose the auto sample size option, and always seems to do a full […]

    Pingback by I Wish « Oracle Scratchpad — December 19, 2011 @ 11:56 am GMT Dec 19,2011 | Reply

  23. […] instability – especially since Oracle will take automatic sample sizes. With 11g and the “approximate NDV”  the instability and workload is likely to be reduced – provided you’re not collecting […]

    Pingback by Usage Stats « Oracle Scratchpad — January 24, 2013 @ 7:00 pm GMT Jan 24,2013 | Reply

  24. […] it when the number of hash values reaches 16,384. When he closed his website he allowed me to publish the original presentation and white paper (64 pages) on my blog so I really should have remembered that the algorithm is still so […]

    Pingback by Webinar questions | Oracle Scratchpad — June 14, 2013 @ 4:41 pm BST Jun 14,2013 | Reply

  25. […] 11g Oracle gave us the option for using an “approximate NDV (number of distinct values)” for rapid and accurate collection of basic column stats. In 12c […]

    Pingback by 12c histograms | Oracle Scratchpad — July 14, 2013 @ 7:12 pm BST Jul 14,2013 | Reply

  26. […] NDV for a column in a nutshell (for a full description see Amit Poddar’s paper): for each row, hash the stored value to something between 0 and 2^64. When the number of different […]

    Pingback by 12c Histograms pt.2 | Oracle Scratchpad — July 30, 2013 @ 9:01 pm BST Jul 30,2013 | Reply

  27. […] sample – which is the whole table if you’re using 11g with the auto_sample_size (hence approximate NDV) enabled. In the case of the poster on OTN they’d added a function-based index – which […]

    Pingback by Virtual Stats | Oracle Scratchpad — January 16, 2014 @ 8:34 am GMT Jan 16,2014 | Reply

  28. […] the past I have enthused mightily about the benefits of the approximate NDV mechanism and the benefit of using auto_sample_size to collect statistics in 11g; however, as so […]

    Pingback by Auto Sample Size | Oracle Scratchpad — March 2, 2014 @ 6:39 pm GMT Mar 2,2014 | Reply

  29. […] 12c the introduction of the “approximate NDV” strategy to collecting frequency histograms, and the introduction of the “Top-frequency” […]

    Pingback by Histogram Tip | Oracle Scratchpad — September 4, 2015 @ 8:33 am BST Sep 4,2015 | Reply

  30. […] frequency and top-N histograms which can be generated “free of charge” while the basic “approximate NDV” scans are taking place to gather stats. Gathering histograms in 12c is much safer than it used to […]

    Pingback by Histogram Upgrade | Oracle Scratchpad — December 2, 2016 @ 3:02 pm GMT Dec 2,2016 | Reply

  31. […] big difference, of course, is that 12c is gathering on a 100% sample using the variation of the approximate_ndv mechanism – so it will always find the right number of values if a frequency histogram is […]

    Pingback by Histogram Upgrade – 2 | Oracle Scratchpad — June 6, 2017 @ 6:50 am BST Jun 6,2017 | Reply

  32. […] actually an “obvious” (once you’ve thought about it for a bit) consequence of the approximate_ndv() algorithm for gathering […]

    Pingback by Column Groups | Oracle Scratchpad — October 21, 2020 @ 12:14 pm BST Oct 21,2020 | Reply

  33. […] in the past (i.e. some time between September 2012 and Dec 2021) that with the appearance of the approximate NDV functionality Oracle could apply the same functionality to gathering index stats while calculating […]

    Pingback by Minimum stats | Oracle Scratchpad — December 7, 2021 @ 3:04 pm GMT Dec 7,2021 | Reply

