Oracle Scratchpad

July 19, 2010

Fragmentation 3

Filed under: ASSM, fragmentation, Infrastructure, Oracle — Jonathan Lewis @ 7:05 pm BST Jul 19,2010

This note is part three of a four-part series, and covers table fragmentation. The whole series is as follows:

  1. Introduction – with links to parts 2 – 4
  2. Disk and Tablespace Fragmentation
  3. Table Fragmentation – this bit
  4. Index Fragmentation

3. Table “fragmentation”.

In the introduction we discussed one type of table fragmentation that doesn’t (usually) matter – the fragmentation of a table into multiple extents. Here’s a funny thought – ASSM (automatic segment space management) introduces another form of table fragmentation that usually doesn’t matter.

When a process checks the freespace bitmaps for an object and can’t find a block with enough free space for inserting new data, it will make some space available by formatting 16 blocks somewhere in the current extent (allocating a new extent if necessary). The location of those 16 blocks may be almost anywhere in the extent (limited by the fact that the starting point within the extent has to be a multiple of 16 blocks). The location of the chunk, like the initial choice of block to use, is dictated by the process id (pid). This means you could create a table in a tablespace using 1MB uniform extents and find that the first row you inserted formatted the last 16 blocks of the first extent and went into the last block of that extent. In general this doesn’t matter very much because (a) most I/O access is by block rather than tablescan, so the location within the extent is largely irrelevant; (b) the “messiness” is generally restricted to just the last extent in the table; and (c) if you are doing tablescans Oracle has a mechanism, dependent on the LHWM (low high water mark) and HHWM (high high water mark), for minimising the overhead of checking which 16-block chunks need special treatment.
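If you want to see this in action, here’s a minimal sketch (the tablespace and table names are my own choices, the datafile clause assumes db_create_file_dest is set, and the block number you get back will vary with your process id):

    create tablespace test_1m_uniform
        datafile size 12m
        extent management local uniform size 1m
        segment space management auto
    ;

    create table t1 (v1 varchar2(100)) tablespace test_1m_uniform;

    insert into t1 values (rpad('x',100,'x'));
    commit;

    -- With an 8KB block size a 1MB extent is 128 blocks, so a block number
    -- near the end of the first extent shows that the 16-block chunk was
    -- formatted a long way past the start of the segment.
    select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no
    from
        t1;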

The more significant, and common, type of table fragmentation comes about because data can be deleted – at which point we start to worry about “sparsely populated blocks”. When you create a table you specify how much space should be left in each block for rows to be updated and (explicitly under freelist management or implicitly under ASSM) how much space should become free in a block before it is considered a target for new data to be inserted.
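As a reminder of the syntax, a minimal sketch (t1 is a placeholder; pctused applies only under freelist management and is ignored under ASSM):

    create table t1 (
        id          number,
        padding     varchar2(100)
    )
    pctfree 10      -- keep 10% of each block free for future updates
    pctused 40      -- freelist management only: the block goes back on the
                    -- freelist when used space drops below 40%
    ;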

When you consider the possible scenarios that have to be addressed with data inserts, updates and deletes, you realise how difficult it can be for Oracle to write code that handles free space management at the block level in a timely and efficient manner; you also realise how difficult it can be (as a developer or DBA) to specify sensible space management limits that minimise performance problems in extreme cases. Ultimately it is possible for a table to end up with a significant amount of free space in each block – and then you have to start asking questions like: how did that free space get there? Is its presence causing a performance problem? Are you going to be able to re-use it, and in a timely fashion? And if you do manage to re-use it, will that introduce a different type of performance problem?

Consider the scenario where you delete one year’s worth of data after running your system for five years – this probably results in the first 20% of the blocks in the table becoming completely empty. They will all go on the freelist or (bugs excepted) be flagged in the ASSM bitmaps as having free space, so you will reuse the space eventually; moreover, if your application performance depends on your data being clustered by time of arrival, the way the empty space refills will, broadly speaking, keep that clustering intact. By a quirk of implementation you will probably use the blocks in reverse order under freelist management and in forward order under ASSM (and a thought crosses my mind at that point to wonder about the impact the former will have on “low-cardinality” non-unique indexes). But, for quite some time, any tablescans you do will take about 20% longer than they need to, and you could be backing up larger data files than you really need to: these considerations might make you decide to rebuild the table (and its indexes) in a new tablespace – they might also make you consider the benefits of partitioning.
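To put some numbers against that 20%, a sketch along the following lines (the orders table and its date column are hypothetical) would let you compare the number of blocks below the high water mark before and after a rebuild:

    delete from orders where order_date < date'2006-01-01';    -- the oldest year
    commit;

    execute dbms_stats.gather_table_stats(user,'ORDERS')
    select blocks, num_rows from user_tables where table_name = 'ORDERS';

    alter table orders move;    -- repacks the data - but leaves every index
                                -- on the table unusable until rebuilt

    execute dbms_stats.gather_table_stats(user,'ORDERS')
    select blocks, num_rows from user_tables where table_name = 'ORDERS';

Since a tablescan has to visit every block below the high water mark, the difference between the two blocks figures is a reasonable indication of the excess work the scans were doing.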

Imagine a scenario where you have a web-based sales system where users can fill baskets which ultimately become orders. Some users will inevitably fill baskets and then not place orders, so you could have some background code running that looks for old baskets that have not turned into orders and deletes them from the database. (There are two design flaws in this scenario – but accidents like this often happen in production systems.) In this case you could end up with a table that suffers a constant stream of deletions in the recent past, so that when you check the table you find that on average every block in the table has (say) 20% free space, except for the relatively small number of blocks that hold data about recent baskets.

If you are running ASSM the 20% of free space cannot be reused (unless you recreate the table) because the granularity of ASSM requires a minimum of 25% free space in a block before its status changes down from “full” – a block in the “0% – 25% free” range is still treated as full for the purposes of insertion. But if you are using freelist space management then you may have predicted the effect and set the pctused table storage parameter to a value a little above 80. With pctused above 80, blocks would go back on the freelist as their free space climbed to 20% and the space would be reused. This sounds like a good thing, except there’s a new problem to consider.
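Under ASSM you can check how the blocks of a table are spread across the “freeness” buckets by calling dbms_space.space_usage – a sketch (t1 is a placeholder, and you need serveroutput on to see the results):

    declare
        m_unf       number;     m_unf_b     number;
        m_fs1       number;     m_fs1_b     number;
        m_fs2       number;     m_fs2_b     number;
        m_fs3       number;     m_fs3_b     number;
        m_fs4       number;     m_fs4_b     number;
        m_full      number;     m_full_b    number;
    begin
        dbms_space.space_usage(
            segment_owner      => user,
            segment_name       => 'T1',
            segment_type       => 'TABLE',
            unformatted_blocks => m_unf,  unformatted_bytes => m_unf_b,
            fs1_blocks => m_fs1, fs1_bytes => m_fs1_b,      --  0% - 25%  free
            fs2_blocks => m_fs2, fs2_bytes => m_fs2_b,      -- 25% - 50%  free
            fs3_blocks => m_fs3, fs3_bytes => m_fs3_b,      -- 50% - 75%  free
            fs4_blocks => m_fs4, fs4_bytes => m_fs4_b,      -- 75% - 100% free
            full_blocks => m_full, full_bytes => m_full_b
        );
        dbms_output.put_line('Full blocks:     ' || m_full);
        dbms_output.put_line('0% - 25% free:   ' || m_fs1);
        dbms_output.put_line('25% - 50% free:  ' || m_fs2);
    end;
    /

In the basket scenario above you would expect the blocks with 20% free space to appear at the “full” / “0% – 25% free” end of the listing – space you can see, but can’t use for new rows.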

If your application used to run efficiently because the baskets for the last few days were stored in the last N blocks of the table, you’ve now got a table where the pattern of deletes and re-use means that that data is spread over 5 * N blocks (because you keep freeing up 20% of the blocks from the past) – which means you’re going to have to find some extra memory for buffering the data if you want to keep the same level of efficiency.

The general conclusion is this: once you’ve got past the (one-off) issue of setting extent sizes for a table, any thoughts of “fragmentation” should really be thoughts about “sparsely populated blocks” – and there are patterns of data deletion that can result in table blocks being less than perfectly utilised. In some cases it is not possible to reclaim the empty space without rebuilding the table (and, in all cases, partitioning may make such a rebuild a more viable option); but there are cases where the pattern of deletion allows space to be reused – and then you may find that you don’t always want to re-use that space, because using it can introduce a different kind of problem.

Update: I see that Nigel Noble has written an article about a bug in the way that ASSM checks free space maps that can result in significant space wastage in particular special cases.

[Further reading on Fragmentation]

18 Comments »

  1. > But, for quite some time, any tablescans you do will take about 20% longer than they need to

    I idly wondered whether this would still be true with OLTP compression, and from http://www.oracle.com/technology/products/database/oracle11g/pdf/advanced-compression-whitepaper.pdf I kinda got the idea the answer would be yes, since it says it only compresses on fullness. That seems strange (well, overly biased against purges) to me, but I have no way to test or find out if there would be a way to tell compression “do your thing on these blocks.”

    Comment by joel garry — July 19, 2010 @ 10:23 pm BST Jul 19,2010 | Reply

  2. It is worth mentioning that all these issues can be solved by running ALTER TABLE… SHRINK SPACE, which fixes this type of fragmentation online (without locking the table). Since 10g Release 2 Oracle even runs Segment Advisor automatically to advise you which segments will benefit from it and how much storage will be freed after shrinking.
    So, it’s not that bad…
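    For anyone who wants to experiment, the basic commands are as follows (t1 is a placeholder, the table has to be in an ASSM tablespace, and row movement must be enabled first):

        alter table t1 enable row movement;
        alter table t1 shrink space compact;    -- repack the rows, leave the HWM in place
        alter table t1 shrink space;            -- repack the rows and lower the HWM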

    Comment by Ofir Manor — July 20, 2010 @ 3:27 pm BST Jul 20,2010 | Reply

    • Ofir,

      The “shrink space” option for “alter table” is worth mentioning – but my final remark applies to that feature, too: “you may find that you don’t always want to re-use that space because using it can introduce a different kind of problem.”

      Apart from the volume of undo and redo generated, and the impact that shrinking can have on the indexes on the table, the “shrink space” option changes the data pattern quite significantly. If you were depending on the data cluster pattern matching the time of arrival, the row movement can erode the pattern in a similar (though nastier) fashion to the way that my example of setting a high value for pctused causes an increased data spread.

      Comment by Jonathan Lewis — July 24, 2010 @ 11:40 am BST Jul 24,2010 | Reply

  3. Thanks Ofir. However, you might want to search the docs for Restrictions on the shrink_clause

    Segment shrink is not supported for tables with function-based indexes, domain indexes, or bitmap join indexes…

    You cannot specify this clause for a compressed table.

    Of course, I haven’t tried it for those cases, and it is always possible the docs are wrong.

    Oddly enough, this morning I got an ETL project thrown at me by a sqlsomeone who apparently doesn’t know about “merge” and expects a large rolling window of data with a complex key to be deleted before inserts. But then again, some people think merge is bad, too. We live in interesting times.

    I had to say “uh, how about you just tell me what you want to wind up with.”

    Comment by joel garry — July 20, 2010 @ 11:48 pm BST Jul 20,2010 | Reply

  4. Thanks Jonathan, excellent post like always.

    Comment by Francisco Munoz Alvarez — July 22, 2010 @ 8:39 pm BST Jul 22,2010 | Reply

  5. […] 2-What is table level fragmentation? Jonathan Lewis-Fragmentation-3 […]

    Pingback by Blogroll Report 16/07/2010 – 23/07/2010 « Coskan’s Approach to Oracle — July 23, 2010 @ 3:17 pm BST Jul 23,2010 | Reply

  6. […] Jonathan also continues his fragmentation series with an explanation of table fragmentation and its causes. […]

    Pingback by Log Buffer #196, A Carnival of the Vanities for DBAs | The Pythian Blog — July 23, 2010 @ 9:09 pm BST Jul 23,2010 | Reply

  7. If your system design relies on your data being clustered, aren’t you just asking for trouble? Because, as you say, circumstances change.

    Comment by John Seaman — July 26, 2010 @ 12:30 am BST Jul 26,2010 | Reply

  8. John,

    If your system performance depends on the data being clustered and you don’t realise this then you haven’t designed your system, you’ve just let it happen – and that’s asking for trouble.

    If your system is designed to meet the requirements of the users, and that design is dependent on clustering the data in a certain way then that aspect of the design should be recognised, declared and the costs and benefits analysed properly.

    If the requirements change so much that a deliberately specified data clustering strategy becomes a liability then you’re right it can be hard work to re-engineer – but that’s not a good excuse for doing it wrong in the first place.

    The logical extreme of the concern you have voiced is the approach which says: “Let’s build a really bad system now so that we can change it any way we like later on.” (I’d like to think that this is a rare occurrence, but experience tells me it’s not – although I doubt if the designers expressed their intention the way I just did.)

    Comment by Jonathan Lewis — July 26, 2010 @ 11:50 am BST Jul 26,2010 | Reply

  9. […]

    Pingback by Fragmentation 1 « Oracle Scratchpad — September 6, 2010 @ 1:00 pm BST Sep 6,2010 | Reply

  10. Dear Sir,

    Nice article.

    I would like to understand the performance impact of having sparsely populated blocks. How can we recognise that our database is suffering from poor performance because of sparsely populated blocks? What will be the impact if we’re using Locally Managed Tablespaces?

    Thanks in Advance.

    Regards,
    Santosh Kumar

    Comment by Santosh Kumar — September 26, 2010 @ 3:56 pm BST Sep 26,2010 | Reply

    • Sparsely populated blocks can only be a performance problem if you have to read more blocks to get a given amount of data than you would if the blocks were better packed. So if you can determine that your system is basically properly configured and your SQL is as efficient as it should be, but your I/O load is still higher than it should be, then you may be able to infer that you are suffering from sparsely populated blocks.

      If that is the case then you need to identify the objects that are suffering from the problem and work out if there is a viable way to improve the packing of the blocks without disrupting the clustering of the data.

      Note that the sparsely populated blocks may be ones that stay in memory (causing a memory shortage for other data blocks that results in excess I/O on other objects), or they may be blocks that are constantly recycled through memory. A check for popular objects subject to a large amount of I/O may give you clues about the latter – the former may only be identifiable through knowledge of the data and application: in both cases checking row_length x row_count compared to block allocation may give you some idea of space wastage.
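      The sort of check described in that last sentence might look like the following sketch (the 8,000 is a rough allowance for the usable space in an 8KB block, and the figures are only as good as your most recent stats):

          select
              blocks, num_rows, avg_row_len, pct_free,
              ceil(num_rows * avg_row_len / (8000 * ((100 - pct_free)/100))) blocks_needed
          from
              user_tables
          where
              table_name = 'T1';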

      Comment by Jonathan Lewis — September 29, 2010 @ 3:09 pm BST Sep 29,2010 | Reply

  11. Jonathan Lewis,
    Will fragmentation occur within partitioned tables as well, because of massive updates, inserts and deletes? If yes, can your SQL below be used to calculate the actual blocks required under ASSM by replacing dba_tables with dba_tab_partitions?

    select
        blocks, num_rows, avg_row_len, pct_free,
        ceil(num_rows * avg_row_len / (8000 * ((100 - pct_free)/100))) blocks_needed
    from
        dba_tab_partitions      -- or all_tab_partitions
    where
        table_name = 'T1';
    

    Comment by Peter — January 26, 2014 @ 7:00 pm GMT Jan 26,2014 | Reply

    • Peter,

      That looks like a simple query I published here – so you’ve got a number of warnings that go with it.
      If you remember to bear those warnings in mind then the SQL is valid for each partition (or subpartition) separately.

      Comment by Jonathan Lewis — January 26, 2014 @ 10:45 pm GMT Jan 26,2014 | Reply

  12. […] below the high water mark, though – so we might assume that this indicated some degree of “fragmentation”, and if we were suffering from a compulsion to minimise “wasted” space this excess 21 […]

    Pingback by Fragmentation ? | Oracle Scratchpad — December 13, 2016 @ 7:37 am GMT Dec 13,2016 | Reply

  13. […] Fragmentation 3 […]

    Pingback by How to identify table fragmentation and remove it ? — December 18, 2018 @ 1:42 pm GMT Dec 18,2018 | Reply

  14. […] are further details on how ASSM operates in this blog note that I wrote a few years […]

    Pingback by ASSM | Oracle Scratchpad — January 29, 2020 @ 1:44 pm GMT Jan 29,2020 | Reply

