Oracle Scratchpad

September 18, 2012

Minimum stats

Filed under: Indexing,Oracle,Statistics — Jonathan Lewis @ 5:06 pm BST Sep 18,2012

Jump to the Dec 2021 update for “index auto_sample_size”

Occasionally I come across complaints that dbms_stats is not obeying the estimate_percent when sampling data and is therefore taking more time than it “should” when gathering stats. The complaint, when I have seen it, always seems to be about the sample size Oracle chose for indexes.

There is a simple but (I believe) undocumented reason for this: because indexes are designed to collate similar data values they are capable of accentuating any skew in the data distribution, which means a sample taken from a small number of leaf blocks can be highly misleading as a guide to the whole index – so Oracle aims for a minimum sample size for gathering index stats.

I’ve found remnants of a note I wrote on comp.databases.oracle.server in December 2004 which claims that this limit (as of Oracle 9.2) was 919 leaf blocks – and I have a faint memory of discovering this figure in an official Metalink (MOS) note. I can’t find the note any more, but it’s easy enough to set up a test to see if the requirement still exists and if the limit is still the same. Here’s a test I ran recently on 11.2.0.3 using an 8KB block size:

rem
rem     Script:         index_sample.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2012
rem
rem     Last tested
rem             21.3.0.0
rem             19.11.0.0       Now using approx_count_distinct()
rem             12.2.0.1
rem             11.2.0.3

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4  -- > comment to avoid wordpress format issue
)
select
        rownum                          id,
        trunc(dbms_random.value(1,101)) n1,
        lpad(rownum,12,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid wordpress format issue
;

create index t1_i0 on t1(n1, lpad('0',70,'0')) pctfree 00;
create index t1_i1 on t1(n1, lpad('1',70,'1')) pctfree 10;
create index t1_i2 on t1(n1, lpad('2',70,'2')) pctfree 20;
create index t1_i3 on t1(n1, lpad('3',70,'3')) pctfree 30;
create index t1_i4 on t1(n1, lpad('4',70,'4')) pctfree 40;
create index t1_i5 on t1(n1, lpad('5',70,'5')) pctfree 50;
create index t1_i6 on t1(n1, lpad('6',70,'6')) pctfree 60;
create index t1_i7 on t1(n1, lpad('7',70,'7')) pctfree 70;
create index t1_i8 on t1(n1, lpad('8',70,'8')) pctfree 80;
create index t1_i9 on t1(n1, lpad('9',70,'9')) pctfree 90;

alter session set events '10046 trace name context forever, level 4';

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                cascade          => true,
                estimate_percent => 1,
                method_opt       => 'for all columns size 1'
        );
end;
/


I’ve created a table with 100,000 rows, and then created 10 indexes, each with a different pctfree so that their leaf block counts vary quite dramatically. After enabling event 10046 (at level 4, in case I needed to check some bind values) I’ve called dbms_stats.gather_table_stats with the cascade option set to true to ensure that index stats are collected, and with an explicit sample size (estimate_percent) of 1%.
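
Two quick checks I find useful at this point (my own additions, not part of the original script) are finding the trace file that tkprof will have to process, and confirming that the leaf block counts of the ten indexes really do vary as dramatically as intended:

rem
rem     Sanity checks - not part of the original test
rem

-- where is the trace file for this session? (v$diag_info, 11g and later)
select  value
from    v$diag_info
where   name = 'Default Trace File'
;

-- do the leaf block counts vary as much as the pctfree settings suggest?
select  index_name, pct_free, leaf_blocks
from    user_indexes
where   table_name = 'T1'
order by
        leaf_blocks
;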

The content of the trace file (extracted by a call to tkprof) is very revealing. The index with pctfree set to zero shows the following:


select /*+  no_parallel_index(t, "T1_I0")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad
  no_expand index(t,"T1_I0") */ count(*) as nrw,count(distinct
  sys_op_lbid(65128,'L',t.rowid)) as nlb,count(distinct
  hextoraw(sys_op_descend("N1")||sys_op_descend("SYS_NC00005$"))) as ndk,
  sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
 "TEST_USER"."T1" t where "N1" is not null or "SYS_NC00005$" is not null

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.46       0.00       1065       1066          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.48       0.00       1065       1066          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT GROUP BY (cr=1066 pr=1065 pw=0 time=0 us)
    100000     100000     100000   INDEX FULL SCAN T1_I0 (cr=1066 pr=1065 pw=0 time=0 us cost=1066 size=8636378 card=100423)(object id 65128)

As you can see there is no sample clause applied to the index, and Oracle has examined all 100,000 rows. For all the other indexes, though, the reference to the table includes a sample clause. Stripping out just the sample clause, and reporting them in order of the setting of pctfree, this is what we see:

 "TEST_USER"."T1" sample block ( 95.3319502075,1)  t
 "TEST_USER"."T1" sample block ( 83.9729532164,1)  t
 "TEST_USER"."T1" sample block ( 73.7797045601,1)  t
 "TEST_USER"."T1" sample block ( 62.4320652174,1)  t
 "TEST_USER"."T1" sample block ( 52.2634212921,1)  t
 "TEST_USER"."T1" sample block ( 40.8953364187,1)  t
 "TEST_USER"."T1" sample block ( 29.5384417588,1)  t
 "TEST_USER"."T1" sample block ( 19.3164620817,1)  t
 "TEST_USER"."T1" sample block (  7.9547815248,1)  t

As the size of the index (as forced by pctfree) increases we see a sample clause being introduced, with the sample percentage decreasing as the index grows. What does this do to the number of blocks examined? The following list shows the Fetch line for each index, again in order of increasing index size:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Fetch        1      0.28       0.00       1202       1153          0           1
Fetch        1      0.18       0.00       1365       1145          0           1
Fetch        1      0.17       0.00       1553       1133          0           1
Fetch        1      0.14       0.00       1831       1164          0           1
Fetch        1      0.21       0.00       2180       1186          0           1
Fetch        1      0.17       0.00       2766       1163          0           1
Fetch        1      0.12       0.00       3797       1142          0           1
Fetch        1      0.10       0.00       5724       1129          0           1
Fetch        1      0.17       0.00       5352       1147          0           1

Reading down the list you can see that the number of blocks examined (query) is consistently within a few percent of 1,140. Based on these figures it seems likely that there is still a minimum requirement, but it’s a couple of hundred blocks higher than the figure I reported in 2004.
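
If you want a quick cross-check of that hypothesis, here's a query I might run (my own sketch, not part of the original test): it reports the sample percentage that would be needed to cover roughly 1,140 leaf blocks of each index. If the guess is right the figures should be close to the sample percentages that appeared in the trace, and should go over 100 only for the smallest index, which may explain why that index wasn't sampled at all:

-- implied sample percentage if the target really is about 1,140 blocks
select
        index_name,
        leaf_blocks,
        round(100 * 1140 / leaf_blocks, 2)      implied_sample_pct
from    user_indexes
where   table_name = 'T1'
order by
        leaf_blocks
;

Remember that after a 1% gather the leaf_blocks figures are themselves estimates, so don't expect a perfect match.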

You will have noticed, of course, the interesting feature that the number of buffer gets (query + current) is smaller than the number of blocks read (disk). This is a side effect of the execution plan Oracle has chosen to sample the index, and one of the cases where Oracle will be reporting various session statistics about “prefetched” blocks and “prefetched blocks aged out before use” in particular.

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT GROUP BY (cr=1129 pr=5724 pw=0 time=0 us)
     18683      18683      18683   INDEX SAMPLE FAST FULL SCAN T1_I8 (cr=1129 pr=5724 pw=0 time=0 us cost=174 size=1668228 card=19398)(object id 65136)

When sampling with a table scan, or index fast full scan, Oracle will do multi-block reads to pick up many adjacent blocks in a file, but may choose to examine just a few of them. The rationale for this is simple – if you want to collect (say) every fifth block in a segment – because of a 20% sample size – then a single read request of 16 blocks will get 3 of the blocks you need, and this might be faster than issuing three read requests for individual blocks – it’s a cost-based decision. If Oracle does a 16 block read to get three blocks the tkprof line would show: disk = 16, query = 3.
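
If you want to see the effects of this prefetching mechanism, the relevant session statistics are easy to pick out. This is just a quick check of my own (run from the session that did the gather), not something dbms_stats reports for you:

-- prefetch-related session statistics for the current session
select  sn.name, ms.value
from    v$statname      sn,
        v$mystat        ms
where   ms.statistic# = sn.statistic#
and     sn.name like '%prefetch%'
and     ms.value != 0
order by
        sn.name
;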

Update (Dec 2021)

I’ve mentioned in the past (on various occasions between September 2012 and Dec 2021) that, with the appearance of the approximate NDV functionality, Oracle could apply the same mechanism when gathering index stats to calculate the number of leaf blocks and distinct keys. In reply to a comment I made on an Oracle blog post “How does AUTO_SAMPLE_SIZE work in Oracle Database 11g?” Nigel Bayliss reported a one-off patch for 12.2.0.1: 27268249 ENHANCE INDEX NDK STATISTICS GATHERING, with the following observation:

Yes, there are cases where index sampling does not work well for composite indexes. The work-around is to create a column group on the same columns as the index and use gather_table_stats. Alternatively, there is a recent one-off fix – 27268249. This patch changes the way NDV is calculated for indexes on large tables. No column group is required. It’s available for 12.2.0.1 at the moment, but note that it cannot be backported. As you might guess, it’s significantly slower than an index block sample, but it’s still very fast. Nevertheless, you might start to ‘feel’ the difference if you need to gather stats on lots of very large indexes. It probably goes without saying: this is not a patch to apply to a production system without thorough testing because some SQL execution plans might change. Thanks for the question.
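
For anyone who hasn't used the column group workaround that Nigel mentions, this is roughly what it looks like. The table and column names here are purely hypothetical, just to show the shape of the calls; dbms_stats.create_extended_stats() is a function, so the easiest way to invoke it is in a select from dual:

-- hypothetical example: a composite index exists on orders(customer_id, order_date)
-- so we create a matching column group, then gather stats so that the combined
-- NDV is available without depending on an index sample

select
        dbms_stats.create_extended_stats(
                ownname         => user,
                tabname         => 'ORDERS',
                extension       => '(customer_id, order_date)'
        )       extension_name
from    dual
;

begin
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'ORDERS',
                method_opt      => 'for all columns size auto'
        );
end;
/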

I’ve finally got around to doing a little testing of this feature in 19.11 (though it’s probably available in earlier versions of 19c): the results from running the index_sample.sql script above change when you get to the two largest indexes. Here’s the SQL executed for t1_i8:

select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I8")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I8") */ count(*) as nrw,
  approx_count_distinct(sys_op_lbid(78184,'L',t.rowid)) as nlb,
  approx_count_distinct(sys_op_combined_hash("N1","SYS_NC00013$")) as ndk,
  sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
 "TEST_USER"."T1" t where "N1" is not null or "SYS_NC00013$" is not null

As you can see, Oracle has used the “new” approx_count_distinct() function to count the number of distinct keys and leaf blocks (sys_op_lbid). You might also notice that Oracle has switched from using concatenation of the “descending” (sys_op_descend()) values of the two columns in the index to using the sys_op_combined_hash() function that it uses for generating values for column groups – presumably that’s significantly more efficient than the string manipulation needed for concatenation.
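
In case you haven't met it, approx_count_distinct() became available as a documented SQL function in 12c; a quick comparison against the exact figure on the test table looks like this (on a column with only 100 distinct values the two results should be virtually identical):

-- exact NDV vs. approximate NDV for the n1 column
select
        count(distinct n1)              exact_ndv,
        approx_count_distinct(n1)       approx_ndv
from    t1
;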

We can also see that (at least for this index – and the larger t1_i9 index) Oracle did a fast full scan of the whole index rather than taking a sample:


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.10       0.11       5946       5964          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.11       0.12       5946       5964          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 112     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE APPROX (cr=5964 pr=5946 pw=0 time=117813 us starts=1)
    100000     100000     100000   INDEX FAST FULL SCAN T1_I8 (cr=5964 pr=5946 pw=0 time=99030 us starts=1 cost=759 size=8600000 card=100000)(object id 78184)


Note that the operation is no longer “INDEX SAMPLE FAST FULL SCAN”, the cr value is virtually the same as the pr value, and the Rows columns tell us that we’ve used all 100,000 entries in the index.

So, for this case at least, an improvement in the stats collection. This does suggest the need for further testing of course (left to the interested reader):

  • is the non-sampled operation used across the board, or is there a cost-based decision to switch back to sampling (a possible starting point for testing this is sketched after this list),
  • if it is cost-based, is it possible to infer the probable rules for picking one strategy over the other? Could it be affected by:
    • the segment size
    • the number and/or length of columns
    • the current estimate of number of rows
    • the current estimate of leaf blocks
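
As a possible starting point for the first of those questions, here's the sort of thing I'd try (just a sketch; the scale factor of 20 and the choice of pctfree 90 are arbitrary). Build a much larger copy of the data with one very large index, repeat the traced gather, and check whether the recursive SQL against the index still shows an INDEX FAST FULL SCAN with approx_count_distinct() or has reverted to an INDEX SAMPLE FAST FULL SCAN:

rem
rem     Sketch of a follow-up test - not a definitive experiment
rem

create table t2
as
select
        rownum          id,
        t1.n1,
        t1.v1,
        t1.padding
from
        t1,
        (select rownum r from dual connect by level <= 20)
;

create index t2_i9 on t2(n1, lpad('9',70,'9')) pctfree 90;

alter session set events '10046 trace name context forever, level 4';

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T2',
                cascade          => true,
                estimate_percent => 1,
                method_opt       => 'for all columns size 1'
        );
end;
/

alter session set events '10046 trace name context off';

rem     tkprof the resulting trace file and check the operation reported
rem     for the statement against t2_i9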

Update Aug 2022

I’ve just noticed a reference to bug 33427856 in my daily email from Oracle. In summary the bug says:  “Enhancement 27268249 to index stats collection is a bug.”

Bug 33427856 – Gathering statistics for indexes on large tables in 19c may be time-consuming (Doc ID 33427856.8)

The symptoms of the bug are:

  • Uses INDEX FAST FULL SCAN instead of SAMPLE FAST FULL SCAN
  • Stats gathering query select list includes “approx_count_distinct(sys_op_lbid(77198,’L’,t.rowid)) as nlb”

The bug note also reports that the select list includes “null as ndk” – but that just means it’s a single column index (or an index mirrored by a column group) where the table stats are up to date so the required value can be copied across from the user_tab_cols view.
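
If you want to see what "copied across" means in practice you can compare the two figures directly. The index name below (t1_n1, a simple single-column index on t1(n1)) is hypothetical, since all the indexes in the test above are two-column:

-- hypothetical single-column index t1_n1 on t1(n1):
-- if the ndk has been copied from the table stats the two figures should agree
select
        (select distinct_keys
         from   user_indexes
         where  index_name = 'T1_N1')           index_ndk,
        (select num_distinct
         from   user_tab_cols
         where  table_name  = 'T1'
         and    column_name = 'N1')             column_ndv
from    dual
;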

See this posting for a few more notes on the enhancement.

8 Comments »

  1. I’m curious as to how the same analysis might come out using AUTO_SAMPLE_SIZE

    Comment by Rich Headrick — September 19, 2012 @ 1:52 am BST Sep 19,2012 | Reply

  2. Hi Sir,
    Does having no stats or stale stats affect the performance of a insert statement?
    insert into a values()

    Comment by yasir — February 28, 2013 @ 11:09 am GMT Feb 28,2013 | Reply

    • Yasir,

      Interesting question.
      I wouldn’t have thought so (at least, as far as basic Oracle tables go).

      If you enable the 10053 trace against an “insert into table () values()” then you will see that the optimizer code path does get visited, but it doesn’t seem to say anything about object-level statistics.
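
      If you want to repeat the check yourself, this is roughly what I mean (the values in the insert are arbitrary):

      alter session set events '10053 trace name context forever, level 1';

      insert into t1 (id, n1, v1, padding)
      values (100001, 1, lpad(100001,12,'0'), lpad('x',100,'x'));

      alter session set events '10053 trace name context off';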

      Comment by Jonathan Lewis — February 28, 2013 @ 12:22 pm GMT Feb 28,2013 | Reply

  3. […] sample size is much bigger than expected – but that’s probably related to the normal “select 1,100 leaf blocks strategy”. Because of the skew in the pattern of deleted values it’s possible for the sample size in […]

    Pingback by Strange Estimates. | Oracle Scratchpad — October 30, 2019 @ 1:10 pm GMT Oct 30,2019 | Reply

  4. […] is that the execution plan shows us that the index fast full scan was a SAMPLE and only analyzed (a fairly typical) 1,132 blocks out of 16,000 and 400,000 rows out of 6 million This looks a bit of a threat, of course; but […]

    Pingback by Index Upgrade | Oracle Scratchpad — April 11, 2022 @ 4:55 pm BST Apr 11,2022 | Reply

  5. […] Minimum Stats (Sept 2012): auto_sample_size on an index tends to mean 1,140 blocks […]

    Pingback by Statistics catalogue | Oracle Scratchpad — April 11, 2022 @ 5:11 pm BST Apr 11,2022 | Reply

  6. […] Minimum Stats (Sept 2012): auto_sample_size on an index tends to mean 1,140 blocks […]

    Pingback by Indexing Catalogue | Oracle Scratchpad — April 11, 2022 @ 5:13 pm BST Apr 11,2022 | Reply

  7. […] my case the indexes were all fairly small – smaller than the critical number of blocks that trigger sampling methods – so Oracle examined every block and row in the index, leading to (a check in) index usage […]

    Pingback by Index Usage – 3 | Oracle Scratchpad — April 2, 2024 @ 4:35 pm BST Apr 2,2024 | Reply



Comments and related questions are welcome.

