Oracle Scratchpad

June 14, 2012

PX and system allocation

Filed under: fragmentation,Infrastructure,Oracle,Parallel Execution — Jonathan Lewis @ 5:00 pm BST Jun 14,2012

A few years ago (2007) I wrote about a problem that could appear when you mixed parallel execution with system-managed extent allocation. A couple of years later I added a note that Christian Antognini had observed a patch that addressed the specific issue I had raised. Today, thanks to an email exchange with Christo Kutrovsky of Pythian, I can report that a variation of this issue is still available even in more recent versions.

The basic problem is that you can end up with a very large number of very small extents, leading to poor performance in parallel queries and a significant waste of space in a data segment. Here’s a simple, though not particularly realistic, way to demonstrate the problem.

rem     Script:         autoallocate_bug.sql
rem     Author:         J P Lewis / Christo Kutrovsky
rem     Dated:          June 2012
rem     Last Tested

drop table t3;
drop table t2;
drop table t1;

create table t1 as
select * from all_objects where rownum <= 50000;

create table t2 as
select * from all_objects where rownum <= 50000;

create table t3 as
select * from all_objects where rownum <= 5000;

alter table t1 parallel(degree 2);
alter table t2 parallel(degree 2);
alter table t3 parallel(degree 2);

alter session force parallel dml;

insert into t1 select * from t3;
commit;

insert into t2 select * from t3;
commit;

-- Now repeat the last 4 statements a couple of dozen times.
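The script assumes a locally managed tablespace with system-managed (autoallocate) extents and ASSM. If you want to confirm that the tablespace you're using matches, a query like the following should report EXTENT_MANAGEMENT = LOCAL, ALLOCATION_TYPE = SYSTEM and SEGMENT_SPACE_MANAGEMENT = AUTO (the tablespace name here is just a placeholder, substitute your own):

select
        tablespace_name, extent_management,
        allocation_type, segment_space_management
from
        user_tablespaces
where
        tablespace_name = 'TEST_8K'     -- your tablespace name here
;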

It’s not visible in the create table statement, but the tablespace I’m using has an 8KB blocksize, uses ASSM (automatic segment space management) and system-managed extent allocation. I’ve set the session to force parallel DML, but only because my objects are very small, and you might find that if you’ve previously enabled automatic parallel tuning then the optimizer will decide the objects are too small to bother running in parallel. (In Christo’s case the relevant SQL was handling large objects, but producing small result sets.) After executing the insert statements 15 times I ran a simple query against the view user_extents, with the following results:

break on segment_name skip 1

select
        segment_name, extent_id, blocks
from
        user_extents
where
        segment_name in ('T1','T2')
order by
        segment_name, extent_id
;

SEGMENT_NAME              EXTENT_ID     BLOCKS
------------------------ ---------- ----------
T1                                0          8
                                  1          8
                                  2         16
                                  3          8
                                  4         16
                                  5         16
                                  6         16
...
                                 58         16
                                 59         16
                                 60          8
                                 61         16

SEGMENT_NAME              EXTENT_ID     BLOCKS
------------------------ ---------- ----------
T2                                0          8
                                  1          8
                                  2          8
                                  3         16
                                  4         16
                                  5          8
                                  6         16
...
                                 58         16
                                 59         16
                                 60          8
                                 61          8

124 rows selected.
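If you don’t want to scan the full listing, an obvious aggregate rewrite of the query above makes the pattern easy to see – how many extents of each size have appeared in each segment:

select
        segment_name, blocks, count(*) extent_count
from
        user_extents
where
        segment_name in ('T1','T2')
group by
        segment_name, blocks
order by
        segment_name, blocks
;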

Normally you expect to stop generating very small (i.e. 64KB) extents after the first 16 and carry on at 1MB at a time. In my case, though, I’ve got a problem because I’ve inserted data in parallel. When I first raised this issue it related to the fact that even when inserting a large volume of data the parallel execution slaves still started by working through 16 small extents – and Christian Antognini’s article described the change that had appeared to address this issue. The difference in the latest example is that my parallel query slaves are only inserting a small amount of data anyway, so each slave generates a couple of small extents for that data.

The circumstances in which you might get into this state are, perhaps, fairly rare; but there are scenarios where you could end up with a very large number of these small extents. For example, if at the end of each day you merge a fairly large amount of data into a very big table you might run a parallel merge statement – because you’re processing a lot of data – that does a lot of updates and a small number of inserts. Every time you run the merge you could be adding another batch of small extents without realising what’s happening unless you happen to look.
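As a sketch of the sort of statement I mean – the table and column names here are pure invention for illustration, and the session needs parallel DML enabled for the insert portion to run in parallel:

alter session enable parallel dml;

merge   /*+ parallel(t 4) */
into    big_table       t
using   daily_load      d
on      (t.id = d.id)
when matched then
        update set t.balance = d.balance
when not matched then
        insert (id, balance)
        values (d.id, d.balance)
;

commit;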

Unfortunately there’s no (good) way to work around this problem – if you do a parallel insert you end up with each PX slave working on its own private extents, and when the query coordinator finally connects all those extents to the main data segment you end up with a certain amount of messiness and empty space below the high water mark. The degree to which you waste space and introduce small extents is then a matter of luck (and the degree of parallelism).
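Once the damage is done the only real cleanup (as opposed to prevention) is a serial rebuild of the segment, which packs the data back into a small number of larger extents – with the usual caveat that a move leaves any indexes on the table unusable until you rebuild them. (The index name below is hypothetical – the demo tables have no indexes.)

alter table t1 move;
alter index t1_i1 rebuild;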


The space management code behaves differently for partitioned tables – but I believe Christo will be writing something about this on the Pythian blog in the near future, and I will link to it when it appears.


  1. It looks like every slave adds space and “goes straight without talking to any other process”.

    Since in Oracle every query slave is a separate session, it looks like the space management code cannot tell when it is allocating “space” for a normal session and when it is working in a more complex (parallel) scenario.

    An interesting point of reflection when you see big tables for no apparent reason.

    Comment by lascoltodelvenerdi — June 18, 2012 @ 9:48 pm BST Jun 18,2012 | Reply

  2. Is this July 4, 2012 post by Christo Kutrovsky the one you had in mind regarding space management code for partitioned tables?
    “Interval Partitioning and Parallel Query Limit Access Paths”

    Comment by lonnyniederstadt — December 5, 2013 @ 3:44 am GMT Dec 5,2013 | Reply

    • lonnyniederstadt,

      Thanks for the link; interesting article, but not the one. I’ve tried doing a search on the Pythian site to see if he’s written it up and published it, but couldn’t find a match.
      It’s a problem where a repeated parallel merge that does a small insert (running parallel because of a large update) ends up generating a large number of level 3 bitmap blocks (the thing you usually find only in the segment header block).

      Comment by Jonathan Lewis — December 10, 2013 @ 10:21 am GMT Dec 10,2013 | Reply


Comments and related questions are welcome.
