Oracle Scratchpad

June 20, 2011

Optimisation

Filed under: Oracle,Performance,Tuning — Jonathan Lewis @ 6:20 pm BST Jun 20,2011

A question came up on Oracle-L recently about the difference in work done by the following two queries:

SELECT /*+ RULE */
	DOM_NAME
FROM
	DOMAINS,
	TABLE(CAST(:B1 AS DOMAIN_LIST)) DL
WHERE
	DOM_NAME = DL.COLUMN_VALUE
;

SELECT
	DOM_NAME
FROM
	DOMAINS
WHERE
	DOM_NAME IN (
		SELECT	COLUMN_VALUE
		FROM	TABLE(CAST(:B1 AS  DOMAIN_LIST))
	)
;

Before saying anything else, I should point out that these two queries are NOT logically equivalent unless you can guarantee that the table() operator returns a unique set of values – and Oracle doesn’t allow uniqueness to be enforced on collections.

However, the author of the question had ensured that the results from the table() operator contained no duplicates, so the two versions of the query returned the same result set. The question is: why does one query do fewer buffer visits than the other, as evidenced by the results from sql_trace and tkprof?
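
For reference, a minimal setup consistent with the shape of these queries might look like the following – the names domains, dom_name, dom_name_idx and domain_list all come from the queries and row source output above, but the column datatype and length are assumptions:

```sql
-- Assumed setup: object names are taken from the post,
-- but the VARCHAR2 length is a guess.
CREATE TYPE domain_list AS TABLE OF VARCHAR2(100);
/

CREATE TABLE domains (
        dom_name        VARCHAR2(100) NOT NULL
);

CREATE INDEX dom_name_idx ON domains(dom_name);
```

The bind variable :B1 would then be a domain_list instance populated from PL/SQL before the query runs.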

Rule-based Join
===============

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0          0          0           0
Fetch        1      0.86       0.86          0     200047          0      115195
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.87       0.87          0     200047          0      115195

Rows     Row Source Operation
-------  ---------------------------------------------------
 115195  NESTED LOOPS  (cr=200047 pr=0 pw=0 time=6355 us)
  99704   COLLECTION ITERATOR PICKLER FETCH (cr=0 pr=0 pw=0 time=284 us)
 115195   INDEX RANGE SCAN DOM_NAME_IDX (cr=200047 pr=0 pw=0 time=3191 us)(object id 54309)

CBO with subquery
=================

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0          0          0           0
Fetch        1      0.78       0.78          0     157986          0      115195
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.80       0.80          0     157986          0      115195

Rows     Row Source Operation
-------  ---------------------------------------------------
 115195  NESTED LOOPS (cr=157986 pr=0 pw=0 time=4720 us cost=536 size=6141 card=267)
  99704   SORT UNIQUE (cr=0 pr=0 pw=0 time=417 us)
  99704    COLLECTION ITERATOR PICKLER FETCH (cr=0 pr=0 pw=0 time=259 us)
 115195   INDEX RANGE SCAN DOM_NAME_IDX (cr=157986 pr=0 pw=0 time=3353 us cost=2 size=21 card=1)(object id 54309)

Notice how the optimizer has unnested the subquery and introduced a “sort unique” – this demonstrates the optimizer’s recognition that the two queries are not logically identical. However, the number of rows from the collection is the same (99,704) and the number of rows after joining is the same (115,195) – the data sets have been rigged so that it is a fair comparison. So why does the explicit join take 200,047 buffer visits when the transformed subquery approach takes only 157,986?

The answer is that you can visit buffers without doing “logical I/Os” – and this benefit accrues most frequently to indexed access paths. Because Oracle has sorted the collection before doing the join, the nested loop follows the second index (dom_name_idx) in order – this increases the chance that index blocks used in one cycle of the loop will be re-used in the next cycle, which means that Oracle need not release the pins on those index blocks but can revisit them, incrementing the statistic “buffer is pinned count”.

I have to say that I haven’t proved that this is exactly what was happening, but when I suggested to the person who had asked the question that he re-run the test and check whether the decrease in “session logical reads” was balanced by an increase in “buffer is pinned count”, the answer seemed to confirm the hypothesis. [Correction (see comment #1 from Randolf): the results didn’t confirm the hypothesis – but I think that’s because of the change in “pinning” that Randolf describes, so I’ll have to find a way to confirm the hypothesis some other time.]
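
If you want to run that check yourself, one approach is to snapshot the two session statistics before and after each query and compare the deltas – a sketch using the standard v$mystat and v$statname views (you need the relevant select privileges on the v$ views):

```sql
-- Run this before and after each version of the test query,
-- then compare the deltas of the two statistics.
SELECT
        sn.name, ms.value
FROM
        v$statname      sn,
        v$mystat        ms
WHERE
        ms.statistic# = sn.statistic#
AND     sn.name IN (
                'session logical reads',
                'buffer is pinned count'
        )
;
```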

7 Comments

  1. Hi Jonathan,

    unfortunately the original content is mangled and hard to read, but I believe that if the stats posted in one of the replies are checked carefully it becomes obvious that the “buffer is pinned count” statistic actually does NOT increase, but decreases along with the logical I/O figures.

    That is something I have been meaning to blog about for months – I believe the original example was run on 11g, and 11g obviously introduced a new “buffer pinning” mechanism that seems not to be reflected properly in the available statistics. It has nothing to do with the new “fastpath” consistent gets either; those are obviously about a new codepath used to pin the buffers (_fastpin_enable – “enable reference count based fast pins”), but this new “buffer pinning” can be seen even with the “fast pinning” disabled.

    The new optimisation only kicks in when the driving row source allows the inner row source to re-visit the same block again, which is the case here because of the introduced sorting. Basically, as far as I can see, in such cases you end up with statistics that do not explain the buffer visits necessary to complete the statement.

    I hope to publish the posts sometime soon, but I still need to write everything up – the test cases and results have been ready for quite a while.

    Randolf

    Comment by Randolf Geist — June 20, 2011 @ 8:04 pm BST Jun 20,2011

    • Randolf,

      You’re right, of course. I knew what I was expecting to see, so I managed to read the results the wrong way round.
      So, yet another new operation where instrumentation has disappeared or been implemented incorrectly.

      Thanks for the correction.

      Comment by Jonathan Lewis — June 20, 2011 @ 8:26 pm BST Jun 20,2011

    • Hi Randolf,

      just want to say I’d really like to see your post. I’ve seen this behavior too.

      Comment by Timur Akhmadeev — June 21, 2011 @ 7:02 am BST Jun 21,2011

  2. What if you sorted the original collection in PL/SQL code before using it in the query for the rule-based one? That would make the comparison fair, as it would visit the index blocks in the same order, and then the stats should be the same for both queries. I guess that still wouldn’t prove the theory, but it does add some weight to it.

    Comment by Stephen — June 20, 2011 @ 9:44 pm BST Jun 20,2011

    • Stephen,

      That would certainly add weight to the hypothesis that the order matters. In fact I have, in the past, replaced a reference to a table with an inline view that does a “select … from table … order by” to make that type of benefit appear – though from 10g onwards you have to include a /*+ no_eliminate_oby */ hint to keep the “order by”.
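
      A sketch of that rewrite, using the object names from the post above (illustrative only, not the original poster’s code):

```sql
-- Illustrative only: replace the direct table() reference with an
-- inline view that sorts the collection; the no_eliminate_oby hint
-- (10g onwards) stops the optimizer from discarding the "order by".
SELECT
        d.dom_name
FROM
        (
        SELECT  /*+ no_eliminate_oby */
                column_value
        FROM    TABLE(CAST(:b1 AS domain_list))
        ORDER BY
                column_value
        )       dl,
        domains d
WHERE
        d.dom_name = dl.column_value
;
```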

      Comment by Jonathan Lewis — June 21, 2011 @ 5:41 am BST Jun 21,2011

  3. I’ll be glad to see that write-up Randolf…. I’ve been writing a chapter for a PL/SQL book, and was all ready to prove an improved result in a demo with “buffer is pinned count”, and was most “disappointed” to see that whilst I still got the improvement, the pinned count was strangely absent in its contribution :-)

    Comment by Connor — June 21, 2011 @ 1:29 am BST Jun 21,2011

