Oracle Scratchpad

August 29, 2010

Fair Comparison

Filed under: Performance — Jonathan Lewis @ 6:19 pm BST Aug 29,2010

From time to time someone will post a question about query performance on the OTN database forum asking why one form of a query returns data almost immediately while another form of the query takes minutes to return the data.

Obviously there are all sorts of reasons – the optimizer is not perfect, and different transformations may take place that really do result in a huge difference in the work done by two queries that return the same result set – but a very simple reason that can easily be overlooked is the front-end tool being used to run the query.

Because of its popularity, Toad is a common “culprit”. You type in your query, you hit the execute key or icon, the query runs, and the bottom of the screen shows a nice spreadsheet representation of your result set – but not necessarily immediately.

It’s important to remember that when you do this, Toad (or whatever your favourite GUI is) may only be showing you the first few rows of the result set. If you see a massive difference in apparent performance between two versions of a query, it may be that Oracle has used two different strategies to get the entire data set. For example – imagine your query is supposed to return 5,000 rows and there are two execution plans that the optimizer likes –

  • Plan A unnests an aggregate subquery, does complex view merging, then aggregates late. No data appears for three minutes and then the first 20 rows appear in the spreadsheet.
  • Plan B runs the same aggregate subquery as a filter subquery; the first 20 rows appear almost immediately.

Which is the faster query? If your end-user is expected to look at nothing but the first 20 rows then Plan B seems like a good bet. But if your end-user is expecting a report of the whole 5,000 rows you don’t know yet: you have to check how long it takes to get to the last row. When you do this you may find that Plan A takes another 5 seconds to return the rest of the data, while Plan B takes 20 minutes.
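As an aside (my illustration, not from the original post): the two behaviours described roughly correspond to the optimizer’s response-time and throughput goals, which you can steer with hints. A hedged sketch – the query itself is just a placeholder:

```sql
-- "Plan B" style: optimise to return the first rows quickly
select /*+ first_rows(20) */ *
from   all_objects
where  owner = 'SYS';

-- "Plan A" style: optimise for total throughput (the usual default)
select /*+ all_rows */ *
from   all_objects
where  owner = 'SYS';
```

Either hint can make the optimizer pick a plan that looks dramatically better or worse in a GUI that only fetches the first screenful.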

Of course if you’re driving a GUI tool to check for “last row” it may be a bit difficult to do a fair test of getting to the last row. If I want to do this sort of check I usually take advantage of SQL*Plus and autotrace with a framework like the following:

set autotrace traceonly statistics
set timing on
set arraysize N  -- where N is anything up to 5,000
{execute query}
set autotrace off

Picking this variant of autotrace from SQL*Plus runs the query to completion at maximum speed across the network (the arraysize is large) with no resources wasted formatting the output for the screen (traceonly).

It’s a reasonable, though not perfect, way of isolating the time it takes for the database to complete the query but the usual warnings about second execution and buffering effects apply, of course.
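On a private test system you can reduce (though not eliminate) those buffering effects between timed runs. A hedged sketch – this assumes 10g or later, suitable privileges, and most definitely not a shared or production database:

```sql
-- Empty the data cache between runs so both queries start "cold"
alter system flush buffer_cache;

-- More drastic: invalidates cursors and forces re-parsing as well
alter system flush shared_pool;
```

Even then the second execution of a statement can benefit from cached dictionary information, so comparing like-for-like executions (first with first, second with second) is still the safer habit.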

The output you get from an autotrace session will have the following appearance (the example is from just after startup):

SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> set arraysize 5000
SQL> select * from all_objects;

67786 rows selected.

Elapsed: 00:00:32.17

     101302  recursive calls
          3  db block gets
     193059  consistent gets
       1360  physical reads
        732  redo size
    2899998  bytes sent via SQL*Net to client
        558  bytes received via SQL*Net from client
         15  SQL*Net roundtrips to/from client
       1143  sorts (memory)
          0  sorts (disk)
      67786  rows processed
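A small sanity check on those figures (my arithmetic, not part of the original output): at an arraysize of 5,000 the 67,786 rows need ceil(67786/5000) = 14 fetch roundtrips, and allowing one more roundtrip for the execute call this is consistent with the 15 reported:

```sql
-- 14 fetches + 1 for the execute call = 15
select ceil(67786/5000) + 1 as expected_roundtrips from dual;
```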


  1. I also found it useful to compare the elapsed time I get using “autotrace traceonly” to the elapsed time reported by my testview.sql script (below). The script gives me pure database time (first row and last row), and the comparison to “autotrace traceonly” yields the network transfer time.
    It might be nitpicking, but I think it’s always worth pointing out that “last row” timing includes the time needed to check that there are no more rows to return.
    Here’s the script. It’s pretty old (8i), which explains the ‘manual’ interval formatting. The only argument is the name of a file containing the query under test. “@&viewname” *must* be on a line by itself, with no leading whitespace.

    define viewname=&1
    set serverout on size 100000
    set timi on
    declare
        i   integer := 0;
        stt number;
        rsp number;
    begin
        -- dbms_utility.get_time counts in centiseconds
        stt := dbms_utility.get_time;
        for bz in (
@&viewname
    ) loop
            if rsp is null then rsp := dbms_utility.get_time - stt; end if;
            i := i + 1;
        end loop;
        dbms_output.put_line(substr('Count: '||i,1,255));
        dbms_output.put_line('Elapsed:   '||(dbms_utility.get_time - stt)/100||' sec');
        dbms_output.put_line('First row: '||rsp/100||' sec');
    end;
    /

    Comment by Flado — August 30, 2010 @ 7:34 am BST Aug 30,2010 | Reply

    • Flado,

      Good idea. In addition to timing, this is a good way to run the query to completion in tools other than SQL*Plus. I do this a lot in SQL Developer so I can run dbms_xplan.display_cursor afterwards. I know there are equivalents to display_cursor in SQL Developer, but they can’t be pasted easily into blog comments / asktom questions :)

      Just a detail: Jonathan mentioned using different array sizes. With your method, the array size will be either 1 (in 9i or before) or 100 (in 10g or later).

      Comment by Stew Ashton — August 30, 2010 @ 12:25 pm BST Aug 30,2010 | Reply

      • Stew,
        Yeah, re-working the script to use SYSTIMESTAMP, better interval formatting, and BULK COLLECT has been on my list for some time. On the other hand, a precision of 0.01 seconds is more than enough for the types of queries I test, the formatting works “as is” for elapsed times of 1 day or less, and nowadays PL/SQL does the array fetches for me, so…
        It is by no means a clean experimental set-up, but it is nevertheless able to detect significant network throughput problems (or latency problems, if you do the AUTOTRACE TRACEONLY with a low array size), and that’s the only purpose I use it for.
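        For reference, the BULK COLLECT re-work Flado mentions would also make the fetch array size explicit instead of leaving it at the implicit cursor-for-loop default of 100; a minimal sketch (the table and the limit of 500 are arbitrary choices of mine):

```sql
declare
    type t_tab is table of all_objects%rowtype;
    l_tab t_tab;
    cursor c1 is select * from all_objects;
begin
    open c1;
    loop
        -- fetch in batches of 500 rows: the explicit array size
        fetch c1 bulk collect into l_tab limit 500;
        exit when l_tab.count = 0;
    end loop;
    close c1;
end;
/
```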

        Comment by Flado — August 30, 2010 @ 2:03 pm BST Aug 30,2010 | Reply

    • Flado,

      Nice idea. It gets around the problem of the optimizer removing redundant joins and column projections and so producing different plans from the ones you’d see in production.

      With the PL/SQL optimizer constantly getting smarter, though, it would be interesting to see if PL/SQL eventually notices that you’re not using data from the cursor inside the loop and starts rewriting the cursor statement.

      Comment by Jonathan Lewis — August 30, 2010 @ 2:22 pm BST Aug 30,2010 | Reply


Comments and related questions are welcome.
