Here’s a little example of how technology catches you out.
You have a table of 1,000,000 blocks in your system and from time to time it is necessary to run a tablescan through it. It’s not nice, but it only happens once a day, and it’s not worth building an index to create an alternative access path. On the plus side, since the table is much larger than 2% of the number of buffers in the cache, the scans don’t cause much damage to the cache because Oracle performs large scans by constantly recycling a few buffers from the tail-end of the LRU (least recently used) list.
[Update: a note from Dion Cho (see comments) points out that "few" buffers is likely to be 25% of the default cache in any recent version of Oracle – the 2% has changed its meaning dramatically since 8i, and I haven't been keeping up.]
[Update again: some notes I wrote a couple of years later about small tables and the variation in treatment depending on size.]
Then one day you decide to treat yourself by licensing the partitioning option, and you partition this big table into 128 separate pieces. The next time the tablescan runs, it thrashes your buffer cache to death, because each partition is now only 8,000 blocks long, which just happens to be a little less than that critical 2% – so the blocks are loaded to the mid-point rather than the end-point of the LRU list. Bad luck – the critical 2% relates to the size of the individual segments, not to the size of the table.
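The arithmetic behind the surprise can be sketched in a few lines. The cache size below is an assumption, picked so that 8,000 blocks sits just under the 2% limit as in the story; the threshold logic is the point, not the exact figures:

```python
# Back-of-envelope sketch of the partitioning surprise.
# The 420,000-buffer cache is an assumed figure, chosen so that
# an 8,000-block segment falls just under the critical 2%.
cache_buffers = 420_000                    # assumed buffer cache size, in blocks
small_limit   = cache_buffers * 2 // 100   # the critical 2% threshold

table_blocks     = 1_000_000
partition_blocks = table_blocks // 128     # 7,812 blocks per partition

print(small_limit)                         # 8,400 blocks
print(table_blocks > small_limit)          # True  -> scan recycles tail-end buffers
print(partition_blocks > small_limit)      # False -> blocks loaded to the mid-point
```

The comparison is made against each segment scanned, so the whole table clears the threshold while every individual partition ducks under it.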
And another thing:
The fact that a “small” table essentially means one that is less than 2% of the size of the buffer cache is fairly well known. But is that 2% of:
- The number of blocks in the standard default cache
- The number of blocks in the keep, recycle and standard default caches combined
- The number of blocks in all the default caches combined – by number of blocks
- The combined memory size of all the default caches, divided by the size of the default block
- The number of blocks in all the caches combined – by number of blocks
- The combined memory size of all the caches, divided by the size of the default block
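To see why the distinction matters, here's a sketch of the six candidate figures under a purely hypothetical cache configuration (all the sizes below are invented for illustration); the spread between the smallest and largest answer is considerable:

```python
# Hypothetical cache configuration (all sizes are assumptions).
MB = 1024 * 1024
default_block = 8192                 # default block size: 8KB

db_cache      = 1600 * MB            # standard default cache
keep_cache    =  400 * MB            # keep cache (default block size)
recycle_cache =  200 * MB            # recycle cache (default block size)
cache_16k     =  800 * MB            # default cache for 16KB blocks

def pct2(blocks):
    """2% of a block count."""
    return blocks * 2 // 100

# (a) blocks in the standard default cache only
c1 = pct2(db_cache // default_block)
# (b) keep + recycle + standard default, by block count
c2 = pct2((db_cache + keep_cache + recycle_cache) // default_block)
# (c) all default caches, counting each cache's own block size
c3 = pct2(db_cache // default_block + cache_16k // 16384)
# (d) memory of all default caches, divided by the default block size
c4 = pct2((db_cache + cache_16k) // default_block)
# (e) all caches combined, counting each cache's own block size
c5 = pct2((db_cache + keep_cache + recycle_cache) // default_block
          + cache_16k // 16384)
# (f) memory of all caches, divided by the default block size
c6 = pct2((db_cache + keep_cache + recycle_cache + cache_16k)
          // default_block)

print(c1, c2, c3, c4, c5, c6)   # 4096 5632 5120 6144 6656 7680
```

With these made-up numbers, a segment of 6,000 blocks is "small" under four of the definitions and "large" under the other two – which is exactly the kind of boundary case that catches you out.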
And if you are using ASMM (automatic shared memory management), does the 2% vary as the buffer cache resizes, or is it fixed at the size set by the hidden parameter __db_cache_size when the database starts up, or at some figure dictated by the setting of the sga_target parameter?
It’s amazing how many little details you have to check when you start looking a little more closely – and one day I may have to look that closely, but tonight’s not the night.