Some time back I was experimenting to understand the hash partitioning algorithm, and why we should use a "power of 2" number of partitions.

Please follow the link below:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:981835183841#692992200346988718

Regards

Saurabh

I am still confused.

Please explain more clearly.

Thanks

There are always nice things you teach us every day. My question regarding this: let's say we have a table with 5 hash partitions, which is inadequate, as I learned today. But if we run

select * from t1 partition(SYS_P2161);

is this going to touch other partitions to get the data? If yes, how can we find out which partitions it's touching?

I was pointing out that 7 is not a power of 2.

So I think Jonathan made a "running" example, in the sense that you have to track the changes if you really want to understand it.

Anyway, thank you for your long reply, and always pay attention when the count starts from 0 and not from 1! ;-)

Bye,

Antonio

The way I would summarize it: you only get an even distribution when you use a power-of-2 number of partitions. Every time you add a hash partition, you split ONLY one existing partition into two new ones.

Regardless of whether you add them one by one or create them all from scratch, the result will be the same. (Except that the high water mark may be higher?)

With 1 partition, add a new one (split), you get 2 (power of 2) with a 50%/50% split.

With 2 partitions, add a new one (split), you get 3 (not a power of 2) with a 50%/25%/25% split.

With 3 partitions, add a new one (split), you get 4 (power of 2) with a 25%/25%/25%/25% split.

With 4 partitions, add a new one (split), you get 5 (not a power of 2) with a 25%/25%/25%/12.5%/12.5% split.

etc…
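The splitting sequence above can be sketched in code. This is a minimal Python stand-in for the linear-hashing rule (hash into the next power of 2 of buckets; a bucket that doesn't exist yet falls back to its "parent"); `bucket_for` and the uniformly spread hash values are my illustrative assumptions, since `ora_hash` only exists inside the database:

```python
# Sketch of the power-of-2 ("linear hashing") bucket mapping described above.
# The stand-in hash values are assumed uniform; ora_hash itself is not
# available outside the database.
from collections import Counter

def bucket_for(key_hash: int, n_partitions: int) -> int:
    """Map a hash value to one of n_partitions using the linear-hashing rule."""
    # Smallest power of 2 >= n_partitions.
    p = 1
    while p < n_partitions:
        p *= 2
    b = key_hash % p
    # Buckets that don't exist yet fall back to their parent (clear the top bit).
    if b >= n_partitions:
        b -= p // 2
    return b

# Distribution over 5 partitions for uniformly spread hash values: three
# partitions end up with 25% of the rows each, two with 12.5% each.
counts = Counter(bucket_for(h, 5) for h in range(100_000))
for part in sorted(counts):
    print(part, counts[part] / 100_000)
```

Running this shows exactly the 25/25/25/12.5/12.5 skew for 5 partitions: the two half-size partitions are the pair produced by the most recent split.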

The idea behind this algorithm is that when you add a new partition, you don't copy data from ALL partitions; you just split one.

With this (Oracle) algorithm, if you have a 1000 GiB table with 30 partitions you would have:

1000 / 32 = 31.25

28 partitions of 31.25 GiB

2 partitions of 62.50 GiB (twice the “common” size)
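Those sizes fall straight out of the power-of-2 scheme; here is a small Python sketch (the function name is mine, and it assumes perfectly even hashing) that derives them:

```python
# Partition sizes under the power-of-2 scheme for a table of total_gib
# split into n hash partitions (assumes perfectly even hashing).
def partition_sizes(total_gib: float, n: int) -> list[float]:
    # Smallest power of 2 >= n.
    p = 1
    while p < n:
        p *= 2
    small = total_gib / p   # size of an already-split partition
    n_big = p - n           # partitions not yet split: twice the common size
    n_small = n - n_big
    return [small] * n_small + [2 * small] * n_big

sizes = partition_sizes(1000, 30)
print(sizes.count(31.25), "partitions of 31.25 GiB")   # 28
print(sizes.count(62.5), "partitions of 62.50 GiB")    # 2
```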

An add-partition command would result in:

Reading one of the big partitions (62.50 GiB) and writing out a new 31.25 GiB partition. (I guess the existing partition will have its rows deleted, but let's skip that part for now.)

So the total IO (ignoring the deletion) is 62.50 + 31.25 = 93.75 GiB.

With a “pure” hashing algorithm, the same table (1000 GiB) with 30 partitions would have:

30 partitions, each 33.3 GiB in size.

An add partition will have to:

Read ALL 30 partitions (1000 GiB) and write out a new 32.2 GiB partition.

Total IO (ignoring the deletion) would be 1000 GiB + 32.2 GiB = 1032.2 GiB, more than ten times as much.
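Recomputing the two totals in Python (deletion of the old rows is ignored in both cases, as in the worked example) makes the gap concrete:

```python
# IO comparison for adding one partition to the 1000 GiB, 30-partition table.
total_gib = 1000.0

# Power-of-2 split: read one oversized 62.50 GiB partition, write its new
# 31.25 GiB half back out.
split_io = 62.50 + 31.25
print(split_io)                        # 93.75

# "Pure" rehash into 31 equal partitions: read everything, write out the
# new partition (1000 / 31 GiB).
rehash_io = total_gib + total_gib / 31
print(round(rehash_io, 2))             # 1032.26

print(round(rehash_io / split_io, 1))  # 11.0
```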

For this reason, ora_hash cannot be used "as is" to predict which partition data will go to. I am sure Tanel or Christian could write a bit-offset-edited version that would give the exact location.

A caveat for ora_hash: to split data into 8 buckets (a power of 2) you have to use ora_hash([id],7), because the buckets are numbered from 0 (not from 1).
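The 0-based bucket range is easy to trip over, so here is a tiny illustration; `fake_ora_hash` is my stand-in (a simple modulus), and only the inclusive 0..max_bucket range, which matches ora_hash's documented behaviour, is the point:

```python
# ora_hash(expr, max_bucket) returns buckets 0..max_bucket INCLUSIVE, so for
# 8 buckets (a power of 2) the second argument must be 7, not 8.
# fake_ora_hash is a stand-in using a modulus; only the 0-based range matters.
def fake_ora_hash(value: int, max_bucket: int) -> int:
    return value % (max_bucket + 1)   # buckets 0..max_bucket

buckets = {fake_ora_hash(v, 7) for v in range(100)}
print(sorted(buckets))  # prints [0, 1, 2, 3, 4, 5, 6, 7] -- eight buckets
```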
