Oracle Scratchpad

February 26, 2016

Partition Limit

Filed under: Infrastructure,Oracle,Partitioning — Jonathan Lewis @ 2:19 pm GMT Feb 26,2016

A tweet from Connor McDonald earlier on today reminded me of a problem I managed to pre-empt a couple of years ago.

Partitioning is wonderful if done properly but it’s easy to get a little carried away and really foul things up. So company “X” decided they were going to use range/hash composite partitioning and, to minimise contention and (possibly) reduce the indexing overheads, they decided that they would create daily partitions with 1,024 subpartitions.

This, in testing, worked very well, and the idea of daily/1024 didn’t seem too extreme given the huge volume of data they were expecting to handle. There was, however, something they forgot to test; and I can demonstrate this on 12c with an interval/hash partitioned table:


rem     Script:         pt_interval_hash_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2016

column start_date new_value m_start_date
select to_char(trunc(sysdate),'dd-mon-yyyy') start_date from dual;

create table interval_hash (
        d1      date            not null,
        n1      number(10,0)    not null,
        v1      varchar2(100)
)
tablespace test_8k_assm_2
partition by range (d1) interval (numtodsinterval(1,'DAY'))
subpartition by hash (n1) subpartitions 1024
        (
                partition p0 values less than (to_date('&m_start_date','dd-mon-yyyy'))
        )
;

insert into interval_hash values (trunc(sysdate) + 1023, 1, rpad('x',100));
insert into interval_hash values (trunc(sysdate) + 1022, 1, rpad('x',100));
insert into interval_hash values (trunc(sysdate) + 1021, 1, rpad('x',100));
commit;

I’ve started the partition range on trunc(sysdate), so it’s easy to pick a few very important future dates for insertion.

So what do you think is going to happen as I try to insert those three rows ?


insert into interval_hash values (trunc(sysdate) + 1023, 1, rpad('x',100))
            *
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions


insert into interval_hash values (trunc(sysdate) + 1022, 1, rpad('x',100))
*
ERROR at line 1:
ORA-14299: total number of partitions/subpartitions exceeds the maximum limit


1 row created.

One of them succeeds – the other two fail with different error messages.

The limit on the number of segments allowed for a single table (or index) is 2^20 – 1 (i.e. 1,048,575); so with 1,024 hash subpartitions I’m only allowed 1,023 partitions. I think that we get two different error messages because Oracle can work out that the first subpartition that it would create for sysdate + 1023 will exceed the segment limit, but it doesn’t “realise” that it’s going to go beyond the limit with sysdate + 1022 until it’s too late and it’s started working on creating the segments.
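Put another way, each new daily partition consumes 1,024 of the 1,048,575 available segment slots, so the 1,024th daily partition is the one that pushes the total over the limit. If you want to check how close an existing composite-partitioned table is getting, a count of its subpartitions against the limit is enough; here’s a minimal sketch using the demo table above:

select  table_name,
        count(*)                        subpartition_count,
        power(2,20) - 1 - count(*)      headroom
from    user_tab_subpartitions
where   table_name = 'INTERVAL_HASH'
group by
        table_name
;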

In fact, in its attempt to create the sysdate + 1022 partition Oracle actually temporarily allocated 1GB of space (I had a 1MB uniform extent size for the tablespace) and updated various bits of the data dictionary generating a total of 13,000 redo entries and 2.3MB of redo – for a single row insert!
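If you want to see that cost for yourself, a before-and-after snapshot of the session’s redo statistics is enough. Here’s a minimal sketch (the view and statistic names are standard, but the actual figures will depend on version and tablespace configuration):

rem     take a snapshot before the insert, repeat it afterwards,
rem     and compare the two sets of values

select  sn.name, ms.value
from    v$mystat        ms,
        v$statname      sn
where   sn.statistic# = ms.statistic#
and     sn.name in ('redo entries', 'redo size')
;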

So – if you want 1,024 hash partitions you’re only allowed 3 years of daily partitions. If you want a longer history you need fewer hash partitions (and don’t forget to stick to a power of 2) or longer durations for each partition. If 3 years is sufficient, though, make sure you test how long it takes to drop old partitions from the start of a table that is getting close to the limit.
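For reference, here’s a sketch of the sort of drop you’d be timing, using the “partition for” syntax to pick the (interval) partition that holds a given date; in this demo that’s the partition created by the one successful insert, and the update indexes clause only matters if you’ve created indexes on the table:

alter table interval_hash
        drop partition for (to_date('&m_start_date','dd-mon-yyyy') + 1021)
        update indexes
;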

Update Aug 2019

Here’s a link to a note by Martin Bach reminding us that you will see Oracle error ORA-14300 if you try to insert a row with its partition key column set to null.
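Here’s a minimal sketch of that case. The table name is hypothetical, and the demo table above declares d1 as not null so it would report ORA-01400 rather than ORA-14300:

create table interval_hash_nullable (
        d1      date,
        n1      number(10,0),
        v1      varchar2(100)
)
partition by range (d1) interval (numtodsinterval(1,'DAY'))
subpartition by hash (n1) subpartitions 4
        (
                partition p0 values less than (to_date('&m_start_date','dd-mon-yyyy'))
        )
;

insert into interval_hash_nullable values (null, 1, rpad('x',100));

rem     expected (per Martin Bach's note):
rem     ORA-14300: partitioning key maps to a partition outside
rem     maximum permitted number of partitions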

7 Comments

  1. Interesting case. Thanks for sharing.

    Comment by Nikolay Kovachev — February 27, 2016 @ 7:49 am GMT Feb 27,2016

  2. […] recently been reminded of a blog post I wrote a couple of years ago that discussed the issue of running into the hard limit of 2^20 -1 as the number of segments for a […]

    Pingback by Keeping Intervals | Oracle Scratchpad — March 15, 2018 @ 8:03 am GMT Mar 15,2018

  3. Hi Jonathan,

    In your experience, what is the maximum number of total partitions across all tables that you have seen before partition metadata starts impacting workload performance? Is there an approximate number where you might start seeing degraded performance due to partition metadata?

    Thanks,
    -Rohit

    Comment by Rohit — February 5, 2020 @ 4:54 am GMT Feb 5,2020

    • Rohit,

      That’s an interesting question, and I don’t think I can answer it.

      I have seen odd effects – probably due to bugs – in the past with older versions of Oracle where there’s been a parsing overhead with only a few thousand partitions in a single table, typically with Oracle trying to construct “average” statistics for a multi-partition query when some partitions are missing stats completely.

      I don’t think I’ve seen a case where there’s been a performance problem due purely to the volume of metadata in the dictionary cache created by the total number of (sub)partitions across all (sub)partitioned tables. Obviously a system with many partitioned tables, each with many partitions, ought to have a very large amount of memory, set up so that the shared pool is large enough to keep everything safely in the dictionary cache; but if the memory weren’t sufficient it would probably be possible to get into a state where the dictionary cache was constantly losing and re-reading (meta-)data. Checking v$rowcache and the dictionary cache latches would give you some clues that that was the problem.
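      For reference, here’s a sketch of the sort of check I mean. v$rowcache is cumulative since instance startup, so in practice you’d look at the change over an interval; a persistently high ratio of getmisses to gets on the partition-related rows would be the warning sign:

      select  parameter,
              gets,
              getmisses,
              round(100 * getmisses / nullif(gets,0), 2)      miss_pct
      from    v$rowcache
      where   gets > 0
      order by
              getmisses desc
      ;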

      UPDATE: And by coincidence, one of the bugs referenced in today’s email of Oracle Hot Topics from MOS was: Bug 28165271 : 12C UPGRADE : LARGE TABLE HAS ALL PARTITIONS SCANNED FOR MISSING HISTOGRAMS
      Not directly relevant to the question, but a good example of the things that can have an unexpected impact when the number of partitions in a table is high and you haven’t been a virtuous DBA.

      Comment by Jonathan Lewis — February 5, 2020 @ 1:10 pm GMT Feb 5,2020

      • Thanks Jonathan. I will share the findings after we have run all our tests (with a large enough shared pool).

        -Rohit

        Comment by Rohit — February 6, 2020 @ 12:00 am GMT Feb 6,2020

        • Update: With a total of ~20M hash partitions our test workload continues to perform within normal bounds. Another group reported that after ~5M list partitions performance started to degrade (I haven’t verified this myself).

          Comment by Rohit — February 28, 2020 @ 12:28 am GMT Feb 28,2020

        • Rohit,

          Thanks for the update.

          If there is a significant difference between your findings and the findings of the other group it would be interesting to see whether this is affected by the number of partitions per table rather than the absolute number of partitions in the system, and whether they have a large enough shared pool.

          Another thought that crossed my mind re-reading the reference to histograms: with lots of partitions you could end up with huge space requirements in SYSAUX for the historic stats for histograms and synopses for partitioned tables, and a lot of extra memory required in the dictionary cache for histograms.
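          A sketch of a quick check on that SYSAUX usage (v$sysaux_occupants is a standard view, and SM/OPTSTAT is the occupant that holds the optimizer stats history):

          select  occupant_name, schema_name, space_usage_kbytes
          from    v$sysaux_occupants
          where   occupant_name like 'SM/%'
          order by
                  space_usage_kbytes desc
          ;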

          A quick check on the v$rowcache statistics for the other group might be interesting.

          Comment by Jonathan Lewis — February 28, 2020 @ 12:21 pm GMT Feb 28,2020

