Here’s a simple piece of SQL that could, in theory, compare the current size of a table with the size it might need after a call to “alter table move” – followed by the results for a table that currently exists in the database I’m looking at:
select
        blocks, num_rows, avg_row_len, pct_free,
        ceil(num_rows * avg_row_len / (8000 * ((100 - pct_free)/100))) blocks_needed
from
        user_tables
where
        table_name = 'T1'
;

    BLOCKS   NUM_ROWS AVG_ROW_LEN   PCT_FREE BLOCKS_NEEDED
---------- ---------- ----------- ---------- -------------
        25       1000          22         10             4
The total space required (allowing, possibly, for odd details of row and column overheads) is the row length multiplied by the number of rows in the table. The code assumes the table is in a tablespace with a block size of 8KB, allowing about 200 bytes per block for general block overhead (hence the 8,000 in the formula). Finally we allow for the space that will be left free by the current setting of PCTFREE if we rebuild the table. It’s only intended as a rough guideline, but for this tiny table the arithmetic seems quite reasonable – we need about 22,000 bytes (give or take some small errors), so 4 blocks of 8KB is in the right ballpark.
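The arithmetic the query does can be reproduced in a few lines – this is just a sketch of the same estimate, assuming the 8KB block size and roughly 200 bytes of block overhead described above:

```python
from math import ceil

# Values reported by user_tables for T1 (see output above)
num_rows, avg_row_len, pct_free = 1000, 22, 10

# ~8KB block, minus ~200 bytes of overhead, minus the PCTFREE reservation
usable_per_block = 8000 * (100 - pct_free) / 100

blocks_needed = ceil(num_rows * avg_row_len / usable_per_block)

print(num_rows * avg_row_len)   # 22000 bytes of row data
print(blocks_needed)            # 4
```

22,000 bytes divided by about 7,200 usable bytes per block gives a shade over 3 blocks, which rounds up to the 4 reported by the query.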
The table is currently reporting 25 blocks allocated, though – so we might assume that this indicates some degree of “fragmentation”, and if we were suffering from a compulsion to minimise “wasted” space this excess of 21 blocks (or 525%, to give it a different perspective) might incline us to rush into rebuilding. (Note: this is only a tiny example, but the principle it demonstrates could apply on a much larger scale, making the discrepancy look as if it’s worth some action.)
Before rushing into action, though, you would probably do a few checks – so here’s some extra information: the tablespace is locally managed with uniform extents of 1MB and an 8KB blocksize, but isn’t using ASSM; I have only just created the table, inserted the rows, and run dbms_stats.gather_table_stats() to collect the statistics; the Oracle version is 188.8.131.52 (which I picked because this posting is about a recent question on OTN from someone who is using 8i and may get the urge to rebuild a 60GB table). So here’s the kicker:
SQL> set feedback off
SQL>
SQL> alter table t1 move;
SQL>
SQL> execute dbms_stats.gather_table_stats(user,'t1')
SQL>
SQL> select
  2          blocks, num_rows, avg_row_len, pct_free,
  3          ceil(num_rows * avg_row_len / (8000 * ((100 - pct_free)/100))) blocks_needed
  4  from
  5          user_tables
  6  where
  7          table_name = 'T1'
  8  ;

    BLOCKS   NUM_ROWS AVG_ROW_LEN   PCT_FREE BLOCKS_NEEDED
---------- ---------- ----------- ---------- -------------
        23       1000          22         10             4
So what’s the missing ingredient? The table has a CLOB column, with storage enabled in row, and dbms_stats (for this version of Oracle) hasn’t included the length of the CLOB (which I happen to have used to store just 100 bytes of data per row) in its calculation of row length. Switch to the (now deprecated) analyze command and Oracle gets the answer right – reporting the avg_row_len as 162 bytes.
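It’s worth re-running the earlier arithmetic with the 162 bytes per row reported by analyze rather than the 22 bytes reported by dbms_stats – a sketch under the same block-size and PCTFREE assumptions as before:

```python
from math import ceil

num_rows, pct_free = 1000, 10
usable_per_block = 8000 * (100 - pct_free) / 100   # same assumptions as before

# avg_row_len as reported by dbms_stats (CLOB length missing) vs analyze
for source, avg_row_len in (("dbms_stats", 22), ("analyze", 162)):
    print(source, ceil(num_rows * avg_row_len / usable_per_block))
# dbms_stats 4
# analyze 23
```

With the correct row length the estimate comes out at 23 blocks – exactly matching the 23 blocks reported after the move, so the table wasn’t “fragmented” at all.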
If you want to play about with the example, here’s the code to generate the data:
create table t1(
        v1      varchar2(10),
--      l       long,
        l       clob,
        v2      varchar2(10)
);

insert into t1
select
        lpad(rownum,10,'0'),
        rpad('x',100,'x'),
        lpad(rownum,10,'0')
from
        all_objects
where
        rownum <= 1000
;
You’ll notice that I’ve included the option for using a LONG instead of a CLOB – the OP was, after all, running 8i, so any large character columns they had were much more likely to be LONGs than LOBs – and the side effects of changing the column type and the version of Oracle are worth investigating. You might, for example, try repeating the exercise in 220.127.116.11, which still gets it wrong, but differently.
In that version, the avg_row_len on my system came out as 225 bytes (when the actual space used per row was 162 bytes), because the code Oracle uses to determine the length of the LOB column seems to include an estimate of the LOB locator component that is a few dozen bytes larger than it ought to be. If you’re trying to use things like avg_col_len and avg_row_len to work out whether or not you have a lot of available space below the HWM, then LOBs make life difficult – it looks like the only way to find out how much space your rows occupy in the table segment is still to use the (deprecated) analyze command (and even that doesn’t tell you about the LOBs that have ended up in the LOB segment).
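To put a number on how misleading the inflated figure can be, here’s a small sketch comparing the estimate you’d get from an avg_row_len of 225 with the one from the true 162 bytes, under the same 8KB block and PCTFREE 10 assumptions as the earlier examples:

```python
from math import ceil

num_rows = 1000
usable_per_block = 8000 * 0.9          # 8KB block, ~200 bytes overhead, PCTFREE 10

reported = ceil(num_rows * 225 / usable_per_block)   # using the inflated avg_row_len
actual   = ceil(num_rows * 162 / usable_per_block)   # using the real space per row

print(reported, actual)   # 32 23
```

In other words, a freshly rebuilt 23-block table would look as if it needed 32 blocks – the statistics overstate the space requirement by roughly 40%, inviting exactly the sort of pointless rebuild this post warns about.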
In passing, if you still have LONGs in the latest versions of Oracle, then dbms_stats still loses sight of them completely when calculating row and column lengths.