If you’ve read my note on free buffer waits you will have seen a comment that, in x$bh, objects with obj (data object id) values in the region of 4 billion were undo segments. There is another numbering anomaly you need to be aware of.
Objects with obj values in the region of N * power(2,22) – i.e. multiples of roughly 4 million – are the physical instantiation of global temporary tables (GTTs) and their indexes. Oracle cannot use a real data_object_id because there isn’t one, and several sessions may each be using their own private copy of the same global temporary table definition.
For global temporary tables and their indexes the obj value is derived from the file id and block number of the start of the extent used to hold the data. Specifically: obj = power(2,22) * tempfile id + block number.
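To make the arithmetic concrete, here is a minimal sketch of the encoding and its inverse. The function names and the example tempfile/block values are my own invention for illustration; only the formula itself comes from the observation above.

```python
# Sketch of the pseudo data_object_id encoding for GTT segments,
# as described in the text: obj = 2**22 * tempfile_id + block_number.
# Function names and example values are hypothetical.

def gtt_obj(tempfile_id: int, block_number: int) -> int:
    """Encode a tempfile id and starting block number into the
    pseudo obj value that x$bh reports for a GTT segment."""
    return (tempfile_id << 22) + block_number   # 2**22 == 4194304

def decode_gtt_obj(obj: int) -> tuple:
    """Recover (tempfile_id, block_number) from the pseudo obj value."""
    return obj >> 22, obj & ((1 << 22) - 1)

# e.g. tempfile 3, block 128:
#   obj = 3 * 4194304 + 128 = 12583040
print(gtt_obj(3, 128))          # 12583040
print(decode_gtt_obj(12583040)) # (3, 128)
```

The shift by 22 bits is just multiplication by power(2,22), which is why the resulting obj values cluster around multiples of roughly 4 million.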
This leads to a worrying thought. I have already seen sites which regularly, and frequently, do DDL against partitioned tables, and a couple of sites I have visited are already using data_object_ids in the range of 1.2M to 1.8M – what happens when a real data_object_id hits the critical power(2,22) limit? Is there any code in the buffer handler that will automatically treat it as a temporary object?