Oracle Scratchpad

November 21, 2009

ora_hash function

Filed under: Oracle,Partitioning,Performance,Troubleshooting — Jonathan Lewis @ 10:59 am GMT Nov 21,2009

A few weeks ago I wrote a note demonstrating the way in which Oracle’s strategy for hash partitioning is engineered to give an even data distribution when the number of partitions is a power of two. In one of the comments, Christo Kutrovsky pre-empted my planned follow-up by mentioning the hashing function ora_hash() that appeared in 10g.

This function takes up to three parameters: the input value (which can be of any base type), the maximum hash bucket value (the minimum value is implicitly zero) and a seed value (default zero) for the hash function. So, for example, ora_hash(‘fred’,1023) will return an integer value between zero and 1023.

The ora_hash() function is similar to the packaged function dbms_utility.get_hash_value({string},{low value},{hash size}) that appeared in 8i, although the older function only handles character expressions (using an implicit conversion for dates and numbers), requires you to specify the minimum value and hash table size, and doesn’t allow you to introduce a seed value. [Corrected thanks to note 10 below].

In fact, for character inputs, the ora_hash() and get_hash_value() functions can be mapped to each other very easily by the relationship:

dbms_utility.get_hash_value(string, 1, N) = 1 + ora_hash(string, N – 1, 0)

So, for example, we can run a little query like:

select
	1 + ora_hash('fred',1023,0)			ora_hash_plus_one,
	dbms_utility.get_hash_value('fred',1,1024)	get_hash_value
from
	dual
;

ORA_HASH_PLUS_ONE GET_HASH_VALUE
----------------- --------------
              904            904

One important feature of the ora_hash() function is that it seems to be the function used internally – with a zero seed – to determine which partition a row belongs to in a hash partitioned table. Of course, as I pointed out in the previous article, you have to tailor the use of the function to the number of partitions you have declared – rounding the partition count up to the nearest power of two (and subtracting one) to get the “max bucket” input, then adding one to the result, and finally dropping the highest bit of the result if it exceeds the declared number of partitions (which can only happen when the partition count is not a power of two).
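That arithmetic is easy to sketch outside the database. The following is my reconstruction of the mapping (the function name and structure are mine, not Oracle internal code), assuming partitions are numbered from 1 as they are in user_tab_partitions:

```python
# Sketch of mapping an ora_hash() bucket to a hash partition number.
# This reconstructs the arithmetic described above; it is not Oracle code.

def partition_number(bucket: int, n_parts: int) -> int:
    """bucket comes from ora_hash(key, next_pow2 - 1); returns 1..n_parts."""
    next_pow2 = 1
    while next_pow2 < n_parts:          # smallest power of two >= n_parts
        next_pow2 *= 2
    assert 0 <= bucket < next_pow2
    part = bucket + 1                   # partitions are numbered from 1
    if part > n_parts:                  # past the last declared partition:
        part -= next_pow2 // 2          # "drop the top bit"
    return part
```

With six partitions (next power of two = 8) the buckets 0 to 7 map to partitions 1, 2, 3, 4, 5, 6, 3, 4 – which is why partitions 3 and 4 in the demonstration below hold roughly twice as many rows as the others.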

Here’s a little demonstration, based on a table with six partitions (i.e. a poor choice of partition count):

create table t1 (
	v1
)
partition by hash(v1)
partitions 6
as
select
	lpad(rownum,10)	v1
from
	dual
connect by
	rownum <= 32768
;

execute dbms_stats.gather_table_stats(user,'t1')

select	partition_position, partition_name, num_rows
from	user_tab_partitions
where	table_name = 'T1'
order by
	partition_position
;

column partition_name new_value m_pt

select	partition_name
from	user_tab_partitions
where	table_name = 'T1'
and	partition_position = 3
;

select
	ora_hash(v1, 7)				ora_hash,
	dbms_utility.get_hash_value(v1,1,8)	get_hash_value,
	dbms_mview.pmarker(rowid)		pmarker,
	count(*)				ct
from
	t1 partition( &m_pt )
group by
	ora_hash(v1, 7),
	dbms_utility.get_hash_value(v1,1,8),
	dbms_mview.pmarker(rowid)
order by
	ora_hash(v1, 7)
;

You’ll see that I’ve collected stats on the table, then reported each partition’s position, name, and number of rows. This is the result I got from that query:

PARTITION_POSITION PARTITION_NAME         NUM_ROWS
------------------ -------------------- ----------
                 1 SYS_P608                   3989
                 2 SYS_P609                   4055
                 3 SYS_P610                   8356
                 4 SYS_P611                   8320
                 5 SYS_P612                   4083
                 6 SYS_P613                   3965

As you might expect from the previous article, two of my partitions (numbers 3 and 4) are twice the size of the rest. If I were to issue a ‘split partition’ call then partition 3 would split (roughly 50/50) into partitions 3 and 7, and on a second ‘split partition’ call partition 4 would split (again 50/50) into partitions 4 and 8.

To demonstrate the effect of the ora_hash() function and how it relates to hash partitioning I’ve then run a query that references partition 3 by name (using an earlier query to convert a partition position into a partition name).

You’ll notice that my call to ora_hash() uses the value seven for the maximum hash bucket – this is because (a) my ideal partition count is eight (the smallest power of two that is not less than six), and (b) although I want values between 1 and 8 the function will be generating values between zero and 7. Given the relationship between ora_hash() and dbms_utility.get_hash_value() – and given that I’m applying the functions to a character string – I’ve also called the get_hash_value() function to show that its behaviour really is consistent with ora_hash() across the entire data set. Here are the results from the final query:

  ORA_HASH GET_HASH_VALUE    PMARKER         CT
---------- -------------- ---------- ----------
         2              3      86154       4194
         6              7      86154       4162

You can see that both functions have split my set of values into two distinct sets in exactly the same way – values that would go into a new partition three on a “split partition”, and values that would go into partition seven. And the split is roughly 50/50 (to within 1%).

As a little bonus, I’ve also included a call to dbms_mview.pmarker() in the query. This was a function that Oracle introduced in 9i (I think) to allow more refresh options for materialized views on partitioned objects when you used partition maintenance operations on them. The function returns an identifier for the partition of a partitioned table that the rowid belongs to. (In fact, all it is effectively doing is returning the data_object_id from dba_objects for the containing partition.)

Once you’re aware of how ora_hash() works you may be able to find ways to take advantage of it. Occasionally I’ve been asked if it’s possible to work out which partition some data is going to go into – now you know that the answer is yes.

To be investigated: Does the ora_hash() function play any part in the distribution of data across hash clusters ? (Probably not – the manuals make some comments about rounding up the supplied number of hashkeys to the next larger prime number, which suggests that a different hashing algorithm is used.)

To be confirmed: Does the ora_hash() function play any part in parallel query distribution by hash ? One quick test suggests not – but the method of use in this case may be more subtle than the “obvious” guess that I used in my test.


  1. Jonathan,
    The ora_hash function doesn’t seem to be used for data distribution across partitions (hash function 1) in a hash join.

    Comment by ksrivenu — November 21, 2009 @ 8:51 pm GMT Nov 21,2009 | Reply

  2. Does Note 427803.1 still apply in 11g? Sorry, I have no 11g instance to test.
    In 10.2, ora_hash() is non-deterministic over LOBs, LONGs, and some UDTs (User-defined types).


    Comment by Flado — November 23, 2009 @ 8:45 am GMT Nov 23,2009 | Reply

  3. Test snippet:

    set long 10
    with src as (select to_clob('42') val from dual connect by level<=5)
    select val,ora_hash(val,7) from src;

    Comment by Flado — November 23, 2009 @ 9:02 am GMT Nov 23,2009 | Reply

    • Flado,
      Running your example produces different results in different rows, and different results across executions – possibly the code is hashing the address of the LOB locator rather than the content of the LOB.

      Interestingly, inserting the data into a table and then querying the table, the results for each execution will repeat for a second or so and then change – which makes it look as if there is a time-related component sliding into the activity as well.

      It’s lucky that you can’t hash partitions on longs, lobs and UDTs (user-defined types), isn’t it ? ;)

      Comment by Jonathan Lewis — November 23, 2009 @ 1:25 pm GMT Nov 23,2009 | Reply

  4. […] so often a question came up on the server news group that has an answer in a blog note I’d written just a few days earlier. The question was simply asking for a way of counting the number of […]

    Pingback by Counting « Oracle Scratchpad — November 25, 2009 @ 7:11 pm GMT Nov 25,2009 | Reply

  5. hi jo,
    I still do not understand how the hash algorithm divides the data into each partition, or how to calculate which partition each row will go to. Can you explain with a simple calculation?

    Comment by eko — December 3, 2009 @ 11:05 am GMT Dec 3,2009 | Reply

    • Eko,

      There are many hashing algorithms, and I don’t know which one Oracle uses (or even if they use one of their own invention).

      A simplistic (and often not very successful) algorithm would be to use the mod() function – so for numeric data and partitioning 4 ways Oracle could (but only in theory) be taking mod(partition_key, 4) to get a number between zero and three. Interestingly, Oracle used to have internal partition numbers 0 to N-1 in tabpart$ which were exposed as 1 to N in dba_tab_partitions.

      If you created a hash-partitioned table with 3 partitions (numbered from zero to two) using this scheme, then Oracle would still use mod(,4) but “drop the top bit” in cases where the result exceeded the highest defined partition.

      You might like to consider the possibility that my name is not Jo before you ask any more questions.

      Comment by Jonathan Lewis — December 12, 2009 @ 10:33 am GMT Dec 12,2009 | Reply

  6. […] does ora_hash function works ? Jonathan Lewis-ora_hash function 2-How to gather specific histogram size for a column? Hemant K Chitale-SIZE specification for […]

    Pingback by Blogroll Report 20/11/2009-27/11/2009 « Coskan’s Approach to Oracle — December 18, 2009 @ 12:15 am GMT Dec 18,2009 | Reply

  7. I was dearly hoping to see how this worked for numeric partitioning keys… I tried the equivalent query for a table that’s hash-partitioned on a NUMBER column and the ORA_HASH values had no relationship to the partitions.

    Comment by Jason Bucata — January 11, 2010 @ 10:21 pm GMT Jan 11,2010 | Reply

  8. Thanks for the pointer! I tried your script and it worked as advertised. I went back to my old test query and it turns out I’d botched it… I’m working with range-hash composite partitioning, but I was selecting from a partition when I should have been selecting from a subpartition.

    Comment by Jason Bucata — January 12, 2010 @ 5:34 pm GMT Jan 12,2010 | Reply

  9. Jonathan,

    some time back I was experimenting to understand hash partitioning and the ora_hash function – in particular, why we should use a “power of 2” number of partitions.

    I had posted my analysis @

    The central idea is as mentioned below; kindly let me know if it makes any sense to you.

    Consider, these 4 variables (part_col, no_of_part, bucket_size, prev_2_power)

    part_col be candidate partitioning column

    no_of_part is the number of partitions we are considering

    bucket_size is derived from no_of_part as follows:

    if no_of_part is a power of 2 (2, 4, 8, 16, …) then
    bucket_size = no_of_part – 1

    else if no_of_part is not a power of 2 then
    bucket_size = “next higher power of 2” – 1
    for eg, if you have 24 no_of_part then (32 – 1) = 31

    prev_2_power is no_of_part if it is a power of 2, else the previous power of 2; for ex. for 24 partitions, 16 would be prev_2_power

    Partition Number =

    case
    when ora_hash(part_col, bucket_size) + 1 > no_of_part
    then ora_hash(part_col, bucket_size) + 1 – prev_2_power
    else ora_hash(part_col, bucket_size) + 1 /* +1 since buckets start with 0 but partitions start with 1 */
    end


    Comment by svdixit — January 14, 2010 @ 11:40 am GMT Jan 14,2010 | Reply

  10. […]dbms_utility.get_hash_value({expression},{min},{max})[…]

    in fact the third parameter of get_hash_value is not maximum, but count of buckets, so there is a simpler equivalence between the two functions:

    ora_hash(string,buckets-1) = dbms_utility.get_hash_value(string,0,buckets)

    Comment by mik — February 2, 2010 @ 8:21 pm GMT Feb 2,2010 | Reply

    • mik,

      Thanks for the correction.
      As it says in $ORACLE_HOME/rdbms/admin/dbmsutil.sql where the function is defined:

        function get_hash_value(name varchar2, base number, hash_size number)
          return number;
        --  Compute a hash value for the given string
        --  Input arguments:
        --    name  - The string to be hashed.
        --    base  - A base value for the returned hash value to start at.
        --    hash_size -  The desired size of the hash table.
        --  Returns:
        --    A hash value based on the input string.

      Your comment also highlighted a typo I had in my original statement of equivalence – which I’ve now corrected so that it is consistent with the subsequent demonstration.

      Comment by Jonathan Lewis — February 3, 2010 @ 11:23 am GMT Feb 3,2010 | Reply

  11. […] script demonstrating that ora_hash works with numeric partition keys, published in response to this comment. Comments […]

    Pingback by Simple scripts « Oracle Scratchpad — February 6, 2010 @ 5:43 pm GMT Feb 6,2010 | Reply

  12. Jonathan,

    Can the ora_hash function be used for comparing two large & wide tables? If so, how reliable would that be? Is it possible to get a false positive/negative?

    select ora_hash(c1||c2||c3||to_char(c4)….cn) hv1
    from t1

    select ora_hash(c1||c2||c3||to_char(c4)….cn) hv2
    from t2

    and then compare hv1 and hv2 to see if rows are same or not?

    Comment by Dharmendra — February 27, 2010 @ 2:27 pm GMT Feb 27,2010 | Reply

    • Sorry to intrude, but I would question the utility of this approach. What stops you from comparing the tables directly? I mean,

      where t1.c1||t1.c2||…|| = t2.c1||t2.c2||…||

      should be faster than

      where f(t1.c1||t1.c2||…||) = f(t2.c1||t2.c2||…||)

      no matter what f() you use. And

      where t1.c1=t2.c1 and t1.c2=t2.c2 and ... and

      would be the best way to do it – no concerns about hash collisions or the maximum length of a varchar2 (4000 bytes, everything above will be converted to CLOB, which you don’t want to happen if using ora_hash()), no need to find a separator for the concatenation (‘AB’||’CD’=’ABC’||’D’ even though ‘AB’!=’ABC’ and ‘CD’!=’D’), no need for type conversion, a possibility to benefit from existing indexes and partition-wise joins. You’d have to take special care of NULLs, though.
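      To make the concatenation collision concrete, here’s a tiny sketch (row_key is just an illustrative helper of mine, not an Oracle function):

```python
# Without a separator, different column values can concatenate to the
# same string - so any hash of that string collides as well.

def row_key(cols, sep=""):
    """Concatenate column values the way c1||c2||... would."""
    return sep.join(str(c) for c in cols)

assert row_key(["AB", "CD"]) == row_key(["ABC", "D"])       # both are "ABCD"
assert row_key(["AB", "CD"], sep="|") != row_key(["ABC", "D"], sep="|")
```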

      That said, using pre-calculated hashes to quickly find potentially changed rows is a very sound idea. But that’s not “comparing two large & wide tables” in the general sense – you’ll need an additional column for the hash of the row in one of the two tables (the “old version”). Then, as new data comes in, you compute the same hash and compare that to the stored value, as in (assuming ‘§’ has no chance of appearing in the data)

      where old_data.primary_key=new_data.primary_key and

      You might use the dbms_obfuscation_toolkit.MD5() function instead of ora_hash().
      And yes, hash collisions do happen, so you cannot treat the hash as a key of your row. Use it only in the first step of your algorithm to quickly ignore the rows that haven’t changed.

      Hope this helps.

      Comment by Flado — February 28, 2010 @ 10:28 am GMT Feb 28,2010 | Reply

      • Flado,

        Nice response.
        I think there’s only one topic to add – comparison with nulls: you can’t compare rows with old.col1 = new.col1 because of the problem of one or other value being null, so you’d probably want to use sys_op_map_nonnull(old.col1) != sys_op_map_nonnull(new.col1).

        Comment by Jonathan Lewis — February 28, 2010 @ 1:11 pm GMT Feb 28,2010 | Reply

        • Thanks, Jonathan. I tried to keep the post as succinct as possible, that’s why I only mentioned the problem with NULLs in passing.
          Thanks for the pointer to sys_op_map_nonnull() – is it a magic function that could still use an index on old.col1?


          Comment by Flado — February 28, 2010 @ 2:42 pm GMT Feb 28,2010

        • Flado,

          I can’t answer your last question – I’ve not yet come across a system where I needed to test the effects of mixing sys_op_map_nonnull() and indexes.

          I appreciate succinctness by the way – one of the hardest things about writing is the difficulty of drawing the line between so short you miss the detail and so long that the point gets lost in the detail. I think your comment got the right amount of detail in the right amount of text.

          Comment by Jonathan Lewis — February 28, 2010 @ 2:53 pm GMT Feb 28,2010

    • Dharmendra.

      I think Flado has covered all the high points on this topic – note in particular his comment about collisions because ‘ab’ || ‘cd’ would collide with ‘abc’|| ‘d’, and the need to introduce a separator.

      Comment by Jonathan Lewis — February 28, 2010 @ 1:07 pm GMT Feb 28,2010 | Reply

  13. Jonathan,
    No magic, it appears. I ran some quick tests in SQL Workshop and found that

    where sys_op_map_nonnull(c1)=sys_op_map_nonnull('42')

    will only use a function-based index on sys_op_map_nonnull(c1) and not a normal one. The result type appears to be RAW (based on HEXTORAW() and RAWTOHEX() calls in the plan predicates).


    Comment by Flado — February 28, 2010 @ 3:57 pm GMT Feb 28,2010 | Reply

    Thanks Flado & Jonathan for your answers.

    Regarding sys_op_map_nonnull(old.col1) != sys_op_map_nonnull(new.col1), can’t I use simple NVL function instead?

    Comment by Dharmendra — March 9, 2010 @ 9:32 pm GMT Mar 9,2010 | Reply

    • Dharmendra,
      If you want to use nvl() in this case, what value are you going to use when the column is null ?
      Will you (and everyone who ever uses an Oracle database) always be able to find a value that isn’t also a legal value for the column ?

      sys_op_map_nonnull() is designed to return a value that cannot legally appear in the column, so you can’t compare a null with a real column value and get a match by accident.

      Comment by Jonathan Lewis — March 10, 2010 @ 10:17 pm GMT Mar 10,2010 | Reply

      • It appears that sys_op_map_nonnull() has its own limitations:

        select 'fail' from dual where sys_op_map_nonnull(rpad('x',4000))!=sys_op_map_nonnull(null)
        ERROR at line 1:
        ORA-01706: user function result value was too large

        It appears to work by appending a zero byte (0x00) to whatever the binary representation of its argument is, and by returning hextoraw(‘FF’) in the case of a NULL argument:

        SQL> select sys_op_map_nonnull('') somn_null from dual;

        SOMN_NULL
        ----------
        FF

        SQL> select
          2     dump('A')      dump_a,  sys_op_map_nonnull('A')       somn_a,
          3     dump(chr(255)) dump_ff, sys_op_map_nonnull(chr(255))  somn_ff
          4  from dual;

        DUMP_A           SOMN_A     DUMP_FF          SOMN_FF
        ---------------- ---------- ---------------- ----------
        Typ=96 Len=1: 65 4100       Typ=1 Len=1: 255 FF00
        SQL>set echo off

        I’ve checked this behaviour on values of several types (number, date, varchar2, interval day to second, and a user-defined type).
        Of course, sys_op_map_nonnull() needs to append that zero byte to avoid matching a NULL value to the non-NULL value chr(255). Unfortunately, this renders it unusable for columns of type varchar2(4000). It would be interesting to see how Oracle deals with this problem internally :-)
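        The observed behaviour is easy to model in a few lines (this models the observations above; it is not Oracle’s implementation):

```python
# Model of sys_op_map_nonnull() as observed: append 0x00 to the byte
# representation of a non-NULL value, return 0xFF for NULL.

def map_nonnull(val):
    if val is None:
        return b"\xff"        # NULL maps to the single byte 0xFF
    return val + b"\x00"      # non-NULL values get a zero byte appended

# chr(255) no longer collides with NULL because of the appended byte:
assert map_nonnull(b"\xff") != map_nonnull(None)
# ... but a 4000-byte value grows to 4001 bytes, hence ORA-01706 when
# the result must fit in a varchar2(4000):
assert len(map_nonnull(b"x" * 4000)) == 4001
```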

        @Jonathan: BTW, I liked your seminar on indexing this week. Hope you got rid of that streak of bad luck :-)



        Comment by Flado — March 11, 2010 @ 10:42 am GMT Mar 11,2010 | Reply

        • Flado,
          That was a nice little investigation.

          The answer to “how does Oracle deal with this problem internally” is simple – it doesn’t.

          I found the function originally while sorting out some client problems with materialized view refresh – so I’ve just re-created one of my test cases with a varchar2(4000) column – this is what happens when the refresh tries to run:

          ERROR at line 1:
          ORA-12008: error in materialized view refresh path
          ORA-01706: user function result value was too large

          Thanks for the comment about the seminar – you should have said hello ! My bad luck seems to be over (at least nothing’s gone wrong today, but that’s because a client phoned up last night and postponed a meeting because they had to rebuild their database).

          Comment by Jonathan Lewis — March 11, 2010 @ 12:17 pm GMT Mar 11,2010

  15. I thought I’d just add that converting NUMBER or DATE values to their internal representations with (for example) DBMS_STATS.PREPARE_COLUMN_VALUES and converting the result to VARCHAR2 with (for example) UTL_RAW.CAST_TO_VARCHAR2 appears to allow the demonstrated equivalence between ORA_HASH and DBMS_UTILITY.GET_HASH_VALUE to be extended to these data types.

    Comment by Padders — June 20, 2012 @ 10:52 am BST Jun 20,2012 | Reply

  16. […] more recently the topic of the sys_op_map_nonnull() function came up in some comments about the ora_hash function – and I made a passing comment about the function appearing in materialized view refreshes […]

    Pingback by Surprises « Oracle Scratchpad — August 30, 2012 @ 5:55 pm BST Aug 30,2012 | Reply

  17. […] Consider, as a thought experiment (and as a warning), a table of product_deliveries which is hash partitioned by product_id with ca. 65,000 distinct products that have been hashed across 64 partitions. (Hash partitioning should always use a power of 2 for the partition count if you want the number of rows per partition to be roughly the same across all partitions – if you don’t pick a power of two then some of the partitions will be roughly twice the size of others.) […]

    Pingback by Hash Partitions | Oracle Scratchpad — March 13, 2019 @ 1:14 pm GMT Mar 13,2019 | Reply

  18. I have a parallel enabled pipelined table function which takes an input cursor using parallel 16.
    The function has PARALLEL_ENABLE (PARTITION my_cur BY HASH (my_id)).

    All my artificial testing showed that this would have balanced processing of incoming data (at least for the shape of my test data).

    In production, it was generally good but occasionally it would be very unbalanced (e.g. logging showed that one particular process had threads 1 to 15 working with between 90 and 19,000 rows, whilst thread 16 processed 100,000).
    So, I played with ORA_HASH – ORA_HASH(my_id,15) – against the input data to see whether it might explain the big skew, but ora_hash produced a nicely balanced distribution from that input, which suggests that this mechanism doesn’t use ORA_HASH (unless my test was wrong).
    The ids are generally a 2 character code + : + a 9 digit number, no nulls and an id is unique within the input dataset.

    Just mentioning …

    Comment by Dom Brooks — December 18, 2019 @ 11:26 am GMT Dec 18,2019 | Reply

    • P.S I’m probably going to add a random number to the incoming resultset and hash on that instead and see how that goes.

      Comment by Dom Brooks — December 18, 2019 @ 11:28 am GMT Dec 18,2019 | Reply

      • Dom,

        Thanks for the comment.

        If you run the query for exactly the same input data set do you get exactly the same distribution every time ? If not, that might indicate that this is a case where Oracle introduces a seed value (like the number of seconds since 1/1/1970).

        Jonathan Lewis

        Comment by Jonathan Lewis — December 18, 2019 @ 7:56 pm GMT Dec 18,2019 | Reply

        • It’s tricky :) because it’s prod, I can’t run the function there and I can’t get the data out elsewhere easily. Trying…

          Comment by Dom Brooks — December 19, 2019 @ 10:08 am GMT Dec 19,2019

  19. […] ora_hash and dbms_utility.get_hash_value can be mapped to each other (see […]

    Pingback by What is the algorithm used by the ORA_HASH function? – — April 26, 2023 @ 7:17 pm BST Apr 26,2023 | Reply


Comments and related questions are welcome.
