Oracle Scratchpad

April 27, 2008

Manual Optimisation

Filed under: Execution plans,Performance,Tuning — Jonathan Lewis @ 5:21 pm BST Apr 27,2008

[Forward to Manual Optimisation part 2]

Warning: The following note supplies an example of a mechanism that I believe to be safe in Oracle 8i, 9i, and 10g (except when hidden parameter _optimizer_ignore_hints is set to true) and is probably safe in 11g.

However, the SQL shown is expected to return data in a given order without a final “order by” clause, and it is Tom Kyte’s opinion that it would be dangerous to use this code on a production system.

I agree with Tom’s argument – to the degree that any hinted code or any code that depends on side-effects of the current mechanics should be treated with great caution and its use should be documented, kept to a minimum, and checked regularly.

The following question appeared in a recent post on the Oracle DBA Forum.

We have a developer here who thinks that DBAs can make things happen. His query is against a single table and his result set is around 300,000 records and he wants to sort this by two columns (transaction_id desc, id desc) where id is the primary key in the table. The query does not take a long time but the sort part is the problem. Is this doable or not? His query is:

select  *
from    event
where   log_id = 1
and     date_posted > trunc(sysdate-1)
order by
        transaction_id desc,
        id desc

Before you get too busy trying to speed up such a query, one of the important questions you have to ask in response is: “Why does anyone want to retrieve 300,000 rows from the database?”

There are valid reasons for doing so, of course. You may, for example, be driving a statistical analysis package that needs to acquire large volumes of raw data (though the order by then looks a little suspect); but no-one reads 300,000 lines of a report and if the front-end code is acquiring the data to summarise it then it’s probably better to summarise it in the database.

Let us assume, however, that (a) the front-end really does need to get this data, (b) it does need to appear in sorted order, and (c) it is a relatively small extract from a much larger dataset. Is there any way to make this more efficient?

The main problem we have to face is that we have a range-based predicate that gets a lot of data combined with an order by clause that uses a different set of columns from the predicate list. This makes it difficult to optimize the query through a simple index. (If we could change the second predicate to date_posted = trunc(sysdate-1) then an index on (log_id, date_posted, transaction_id desc, id desc) would allow the optimizer to do an index range scan, access the table, and return the data in order without sorting).
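As a sketch of that hypothetical index (the index name is invented; the table and columns come from the question), the DDL would be along these lines:

```sql
-- Hypothetical: with date_posted as an equality predicate, this index
-- would let Oracle do an index range scan, visit the table, and return
-- the rows already ordered, with no sort operation.
create index event_sort_ix on event(
	log_id, date_posted, transaction_id desc, id desc
);
```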

Note (2nd May) – from this point onwards in the text I managed to change from ‘date_posted >’ to ‘transaction_date >=’ without spotting that I had done so. Given a conversation about it that appears in the comments, I have left this change in place.

We might be lucky, of course. If there is a strong correlation between log_id and transaction_date such that most of the data with log_id=1 also had transaction_date >= trunc(sysdate – 1), then we might get a reasonable response time by creating the index (log_id, transaction_id desc, id desc) – allowing Oracle to use an index range scan to access the data in the right order picking up a little more data (we hope) than we need, then discarding the (few, we hope) rows which fail the test on transaction_date.

Since the stated problem is one of sorting (rather than accessing) the data we won’t consider the options for creating a ‘fat index’ (see Book Reviews for an explanation of this term).

So what does that leave us with? We have been told that it is the sorting that is the problem – so possibly the number of columns in that “select *” is large, making the total volume of data (although “only” 300,000 rows) very large. Is there any way we could minimise the sorting if we can’t build a “perfect” index?

The answer is yes – although it may require a little lateral thinking.

In this case, we will need a slightly smaller and simpler index (no descending columns) than the four-column index shown above: (log_id, date_posted, transaction_id, id) – but we’re not going to use it in the standard way. Instead we’re going to take a two-pass approach to the problem. We start with the following SQL:

select
	/*+ index(t1(log_id, transaction_date, transaction_id, id)) */
	rowid	rid
from
	t1
where
	log_id = 1
and	transaction_date >= trunc(sysdate - 1)
order by
	transaction_id	desc,
	id		desc

This gets us the rowids of all the rows that our main query wants, and because of the order by clause in this query, if we visited the rows in the order of our result set, we would be visiting them in exactly the order that would satisfy our original query.

Although we are still performing a sort operation we are sorting the smallest possible set of values that we could get away with, and once we have the rowids in order we can use them to visit the table with no further sorting. So all we have to do is stick this initial query into a non-mergeable inline view:

	select
		/*+
			qb_name(main)
			leading (ptr@main t1@main)
			use_nl(@main t1@main)
			rowid(@main t1@main)
		*/
		t1.*
	from
		(
		select
			/*+
				qb_name(rowids)
				no_merge
				no_eliminate_oby
				index(@rowids t1(log_id, transaction_date, transaction_id, id))
			*/
			rowid	rid
		from
			t1
		where
			log_id = 1
		and	transaction_date >= trunc(sysdate - 1)
		order by
			transaction_id	desc,
			id 		desc
		)	ptr,
		t1
	where
		t1.rowid = ptr.rid

This query gets us the data we want, in the order we want, with the smallest possible sort. The cost/benefit analysis we have to do is simple: does the random access to the table that this execution path gives us result in less work than any other mechanism that gets the data and sorts entire rows? The answer to that question depends on the data volume and distribution.

You’ll notice the great stack of 10g hints – I have used this type of code on 8i and 9i in the past, but I’ve always put in a lot of hints to minimise the risk of Oracle doing something “clever” to spoil my plan. 10g introduces cost-based query transformation – and has far more options for unwrapping any clever tricks you introduce to SQL; so if you’re going to try messing about with “manual” optimisation you do need a lot of hints to block every new feature that might cause problems. (Highlighted following comment 18 and its link to AskTom – see also “Rules for Hinting”)

The most important hint in this example is probably the no_eliminate_oby hint, which I have described in the past. Without this hint you may find that the optimizer decides that the order by clause adds no value to the inline view, and eliminates it – while still honouring the no_merge hint.

The execution plan you get from this query (in my case, at least) is as follows:

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |  3400 |  1470K|  3420   (1)| 00:00:42 |
|   1 |  NESTED LOOPS               |       |  3400 |  1470K|  3420   (1)| 00:00:42 |
|   2 |   VIEW                      |       |  3400 | 40800 |    17   (6)| 00:00:01 |
|   3 |    SORT ORDER BY            |       |  3400 |   106K|    17   (6)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN        | T1_I1 |  3400 |   106K|    16   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY USER ROWID| T1    |     1 |   431 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

As you can see the view operator tells us that we’ve created an ordered list from the inline view and then, for each rowid in the list, accessed the t1 table by “user” rowid – just as we wanted.

So – when you can’t do what you want with simple SQL, remember that you can always decompose a query into steps to reduce the size of the ‘hard work’, then use an intermediate result set (possibly joined back to the original table, as in my example) to collect the rest of the data. It’s a technique I have used successfully to supply short-term fixes to performance issues that some of my clients have had on their web-based applications.

[Forward to Manual Optimisation part 2]


  1. Hi Jonathan,
    great idea of sorting only the rowids. Will the result still be correct if you are accessing the outer table in parallel? Also a nice example of how hints may change the order of a result.


    Comment by Wolfgang — April 27, 2008 @ 7:52 pm BST Apr 27,2008

  2. Wolfgang,

    Good question – parallelism would stop this from working the way I want to. I’ll have to see if I can make the driving data (i.e. the rowid list) distribute “serial to parallel” – which would return the results in an order I don’t want even if the access to the second occurrence of t1 used an indexed access path.

    I may have to add a no_parallel() hint to the code.

    Comment by Jonathan Lewis — April 27, 2008 @ 9:13 pm BST Apr 27,2008

  3. With the challenge of maintaining sets of hints to handle new optimizer features when upgrading, I suppose you could consider an “alter session” to temporarily downgrade the optimizer_features_enable to the “lowest common denominator” that gives you the features you need. It ought to be fairly future-proof.
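    A sketch of that session-level downgrade (the version string here is purely illustrative):

    ```sql
    -- Pin this session's optimizer to an older feature set; pick the
    -- lowest version whose features you actually need.
    alter session set optimizer_features_enable = '9.2.0';
    ```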

    It’s not the sort of thing I usually think of in the middle of a program though, more of a system-level thing.

    Come to think of it, I suppose that cursors must be tagged with the optimizer feature level to avoid a plan being reused by another session using a lower feature set.

    Comment by David Aldridge — April 27, 2008 @ 11:20 pm BST Apr 27,2008

  4. If the predicates
    log_id = 1
    and transaction_date >= trunc(sysdate – 1)

    returned a large portion of the table, then “walking the table” via the sorted ROWIDs
    might [would] actually slow things down because we’d be doing very many single block
    reads. However, that would reduce the total size of the data set to sort and, therefore,
    improve the sort performance.

    Comment by Hemant K Chitale — April 28, 2008 @ 2:40 am BST Apr 28,2008

  5. Hi,

    I have no real bad experience with Oracle sorting large sets of data.
    Although I can accept that it is the bottleneck, I would still ask, how the bottleneck was found.
    I often face situations where people say : “My query is fast without the sorting”, and usually they talk about the response time and never really measure and compare the total runtimes.


    Comment by Andras — April 28, 2008 @ 7:04 am BST Apr 28,2008

  6. Wolfgang,

    I’ve been trying to get this query to run parallel and spoil the effect, but so far I haven’t had any success. That surprises me, as I had assumed it would be fairly easy to achieve.

    David,

    Good point – one of the strongest arguments I make for not using hints is that you have to check every hinted SQL statement on every upgrade to see that it’s still going to do the same thing.

    I guess that’s why Oracle has taken to including the /*+ optimizer_features_enable( ‘n.n.n’ ) */ in every outline from 10g onwards. That seems like a good idea in manual hinting, too, especially since it (a) may protect you on the upgrade, and (b) forces you to make a deliberate choice about “upgrading” the hints when you upgrade the database.

    I did wonder whether using /*+ optimizer_features_enable( ‘9.2.0’ ) */ or setting the parameter to 9.2.0 at the session level would break this query, since it uses 10g hint formats (which would be syntactically wrong in 9i). Fortunately, setting the optimizer level doesn’t stop the optimizer from reading and recognising the 10g hints as hints.

    Andras,

    That’s always a point worth making in response to this type of question. To clarify – the point you’re making is that people tend to time a query to the appearance of the first row, not to the appearance of the last row.

    One of the strategies I use to make a fair comparison from SQL*Plus is simply to run the queries after issuing:

    set timing on
    set autotrace traceonly statistics

    This gets all the data from the database and throws it away without wasting time on formatting and displaying – so what you’re left with is the work done and time spent on the execute, fetch, and associated network traffic.

    Comment by Jonathan Lewis — April 28, 2008 @ 7:16 am BST Apr 28,2008

  7. Right, but I usually combine “autotrace traceonly statistics” with a PL/SQL loop to eliminate network traffic as well. That is, I do the PL/SQL first (server time + negligible client overhead), then autotrace (server time + network time + negligible client overhead) to measure network delays.
    Here’s my testview.sql. I find it very useful. It takes the name of a file with the “slow” SELECT-statement as its sole parameter:

    define viewname=&1
    set serverout on size 100000
    set timi on
    declare
        i integer:=0;
        stt number;
        rsp number;
    begin
        stt:=dbms_utility.get_time;
        for bz in (
            select * from &viewname
        ) loop
            if rsp is null then rsp:=dbms_utility.get_time-stt; end if;
            i:=i+1;
        end loop;
        stt:=dbms_utility.get_time-stt;
        dbms_output.put_line(substr('Count: '||i,1,255));
        dbms_output.put_line('Elapsed:	'||to_char(trunc(sysdate)+trunc(stt/100)/3600/24,'HH24:MI:SS')||'.'||ltrim(to_char(mod(stt,100),'00')));
        dbms_output.put_line('First row:	'||to_char(trunc(sysdate)+trunc(rsp/100)/3600/24,'HH24:MI:SS')||'.'||ltrim(to_char(mod(rsp,100),'00')));
    end;
    /

    Comment by Flado — April 28, 2008 @ 11:37 am BST Apr 28,2008

    Nice example. Transaction_date and date_posted seem to have been used almost interchangeably.

    Comment by krish — April 29, 2008 @ 10:44 am BST Apr 29,2008

  9. A very nice example. :-)

    Comment by Asif Momen — April 30, 2008 @ 7:23 am BST Apr 30,2008

  10. @Jonathan

    My bugged brain strikes again. I read this article a few times, and still missed the idea:

    “We might be lucky, of course. If there is a strong correlation between log_id and transaction_date such that most of the data with log_id=1 also had transaction_date >= trunc(sysdate – 1)”

    Not only that: I made a mistake too – you never write a condition like “date_posted >= trunc(sysdate-1)” but always “transaction_date >= trunc(sysdate – 1)”.

    Time for a brain update…or newer glasses…


    Comment by lascoltodelvenerdi — April 30, 2008 @ 11:57 am BST Apr 30,2008

  11. Well – I’ve been waiting all week for someone to point out that I’ve committed a cardinal sin here because I’m expecting the data to appear in the right order without using an order by clause.

    This morning I noticed a bunch of incoming hits from AskTom with a follow-up email from Tom Kyte.

    When Tom Kyte suggests you’re doing something which is “rather dangerous” you’ve got to listen – so there’s going to be another note (or two) on this topic covering: nlj_batching, nlj_prefetch, outlines losing hints, pagination strategies, network traffic, and sorted hash clusters.

    In the meantime – think carefully about comments 1 and 3 above: how hard it is to make sure that your hints cover 100% of the cases you need to consider, and what happens on the upgrade (or patch).

    Comment by Jonathan Lewis — May 2, 2008 @ 11:29 am BST May 2,2008

  12. “lascoltodelvenerdi”,

    I missed the change from date_posted to transaction_date as well – despite reading it through a couple of times. I’ve added a note by the original code to point out this change to avoid further confusion for new readers.

    To clarify the comments about the ‘strong correlation’ let’s change the example again. Imagine a table which includes columns (order_date, delivered_yn) with an index declared with that column order.

    Assume it usually takes one or two days to deliver orders; so if you run a query with the predicate:

    where delivered_yn = 'N' and order_date > trunc(sysdate) - 1

    then you’ll do a range scan through the index, starting at the first index entry with ({“yesterday’s date”},’N’). Just about every entry you walk through from that point onwards will have delivered_yn = ‘N’, but there may be a few rows where an item was delivered surprisingly quickly.
    On the whole, although the index is not perfect it’s good enough because the wasted effort is going to be small. In some cases we would make Oracle use the “wrong” index because it is “good enough”.
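    A sketch of that “good enough” index (table and index names invented for illustration):

    ```sql
    -- The index leads on order_date, so the range scan starts at yesterday's
    -- entries; the few already-delivered rows it walks past are discarded.
    create index orders_del_ix on orders(order_date, delivered_yn);
    ```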

    The example isn’t a perfect analogy – but I think it expresses the point I was trying to make well enough.

    Comment by Jonathan Lewis — May 2, 2008 @ 10:59 am BST May 2,2008

  13. Jonathan,

    sorry that I asked Tom at first and didn’t post my question here.
    I find your technique very useful; however, I think you are really committing a cardinal sin (btw: I like cardinal sins!), so I just wonder where you got to know about these undocumented hints.
    no_eliminate_oby is nowhere documented, is it?
    Thank you for sharing your knowledge anyway


    Comment by Sokrates — May 2, 2008 @ 6:32 pm BST May 2,2008

  14. @Jonathan

    I cannot see where the sin is!

    Was the elimination of the “order by” not one of the goals of your method?

    On a non-parallel system, the order should be assured by the hint

    “index(@rowids t1(log_id, transaction_date, transaction_id, id))”

    or not? (Well, the order of the rows will be the order of the index…)

    Comment by lascoltodelvenerdi — May 2, 2008 @ 8:10 pm BST May 2,2008

  15. Sokrates,

    The no_eliminate_oby hint is one of those ‘nearly documented’ hints that probably missed the documentation because of timing rather than deliberate exclusion. It’s listed in the v$sql_hints view in 11.1, and it appears somewhere in a trace file that I read.

    Tom’s comment is correct that you shouldn’t assume that any mechanism will stay unchanged and give you the same ordering effect – but I do have a counter-argument about this nested_loop with pre-ordered input that I will explain in a later post.

    There is a case that the final output need not be in order just because the inputs to the index probe are in the right order – so it is, in principle, possible that the final output could appear out of order; that’s the point that you generally make about ‘ordering without an order by’.

    Tom’s other worry, of course, is that the hints may not work in some situations as yet unspecified – for example on the next patch release; and this query is absolutely dependent on the hints to get the results in the right order.

    Comment by Jonathan Lewis — May 2, 2008 @ 9:20 pm BST May 2,2008

    Well, in my opinion this is a solution for a very specific problem and it also shows very tellingly how hints can change the (order of the) data. But it will work and solve exactly this problem.
    Jonathan never said that this is a general solution for everyone who has to sort a lot of data. Therefore I don’t think the warning was necessary – this example is not dangerous for people who know what they are doing. People who only copy examples off the web and use them in their production system will not read the warning either.
    But of course no one can say you did not try everything to prevent them from doing silly things ;-)

    Regards Wolfgang

    Comment by Wolfgang — May 3, 2008 @ 2:47 pm BST May 3,2008

  17. @Wolfgang

    Yes, you are right, but it is better to be clear, especially for those with the motto “In cut’n’paste we trust”! :D :D


    This method – working with rowids instead of the “real data” – can be applied to partitioned tables, am I right?

    Comment by lascoltodelvenerdi — May 5, 2008 @ 10:21 am BST May 5,2008

  18. Sokrates,

    I see that I haven’t answered your question about how I discovered the no_eliminate_oby() hint.

    A little of the history is described in this post on my website. A point that I didn’t make was that the trace file also listed the outline for the plan – which included the eliminate_oby() hint.

    You can also see the outlines, without generating the trace files, by using an undocumented parameter in a call to dbms_xplan.display(); or, if you want a smaller trace file, by simply setting event 10132 – from 10g, the 10132 trace also dumps the outline for a plan.
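    For the event-based route, the session setting is along these lines (I’ll leave the undocumented dbms_xplan parameter out of the sketch):

    ```sql
    -- From 10g, the 10132 trace dumps the outline (the full hint set)
    -- along with the execution plan for each optimised statement.
    alter session set events '10132 trace name context forever';
    ```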

    Comment by Jonathan Lewis — May 5, 2008 @ 11:01 am BST May 5,2008

  19. Jonathan,

    thank you for posting the links


    Comment by Sokrates — May 6, 2008 @ 4:46 pm BST May 6,2008

  20. Hi Jonathan

    Not sure if you saw my post on asktom … basically my suggestion was to create a stored outline for the query and then put
    a reference to user_outlines WHERE NAME = ‘OUTLINE_FOR_JL_SORT’ into the FROM clause.

    Tom points out that we need to look at v$parameter to make sure that the values are such that the outline will actually be used, as well.

    Sound any good to you ?

    Best Regards
    Greg Solomon

    Comment by Greg Solomon — May 7, 2008 @ 12:09 pm BST May 7,2008

  21. Greg,

    A quick check shows that stored outlines (and profiles) still work in 10g even when hints have been disabled.

    However, there are two problems with your suggestion. First, I don’t know how to find out the current setting of my session’s value for “use_stored_outlines”, and that needs to be set to the correct category to make the outline apply. I don’t think it’s even in the structure under v$ses_optimizer_env.

    Secondly I think Tom’s point is the more generic one that the current set of hints (even when converted to a stored outline) may not be sufficient to make the plan appear after a future upgrade. This argument, by the way, is one that I use about any hinting anyway, so doesn’t really worry me in terms of being a “new” risk introduced by this code sample.

    Comment by Jonathan Lewis — May 8, 2008 @ 7:50 am BST May 8,2008

  22. The current system or session setting of use_stored_outlines can be seen by oradebug (oradebug dumpvar sga sgauso or oradebug dumpvar uga ugauso)

    It is given in the metalink note 613682.992.
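    In SQL*Plus, connected with SYSDBA privileges, the sequence is roughly:

    ```sql
    -- oradebug needs a target process before it can dump anything; then
    -- dump the system-wide and session-level values respectively.
    oradebug setmypid
    oradebug dumpvar sga sgauso
    oradebug dumpvar uga ugauso
    ```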

    BTW, with stored outlines being deprecated in 11g, how does SQL Plan Management fit into all this?

    Comment by krish — May 9, 2008 @ 6:13 am BST May 9,2008

  23. Krish,

    Thanks for the information.

    The suggestion about stored outlines was Greg’s idea for trying to bypass the issue of hints being disabled by a parameter change – so isn’t really relevant to the original query.

    SPM, on the other hand, might have some interesting effects on hinted SQL (although in theory the effects should only be beneficial).

    Because of SPM, the optimizer may re-optimize a query whose performance seems to be bad because of variations in bound values. Most people who put hints into SQL don’t do it properly, get a bit lucky, and happen to get the plan they want to see. But if SPM tries to find different plans from time to time, then I guess there will be an increased chance of the optimizer using unexpected plans on hinted code – while still obeying all the hints that have been supplied.

    It’s an idea I’ll have to play around with at some point – but not just yet.

    Comment by Jonathan Lewis — May 9, 2008 @ 7:56 am BST May 9,2008

  24. […] Manual Optimisation – 2 Filed under: Performance, Tuning — Jonathan Lewis @ 1:13 pm UTC May 9,2008 Back to Manual Optimisation part 1 […]

    Pingback by Manual Optimisation - 2 « Oracle Scratchpad — May 9, 2008 @ 1:13 pm BST May 9,2008

  25. […] Jonathan Lewis @ 9:25 pm UTC Jul 13,2008 Towards the end of April, I published a note about manual optimisation,  and mentioned in one of the comments (#11) that as part of the discussion of the (slightly […]

    Pingback by Sorted Hash Clusters « Oracle Scratchpad — July 13, 2008 @ 9:25 pm BST Jul 13,2008

  26. I have a similar optimization issue involving many 100,000s of already time-ordered rows. In my situation, the rows are not stored in the database at all, but are returned from a pipelined table function reading from a socket. I’d like to be able to use analytics on the data, but without imposing an unnecessary window sort (and without storing the data in a table first). I am searching for something like a keyword or pragma that would declare the pipelined table function’s implicit ordering so the optimizer could make use of it. I would settle for something like the hints you use in the article here. Ideally, oracle would let me declare the order of the data returned from the table function, and then just enforce the ordering with an exception similar to doing a CREATE INDEX … NOSORT … where the sort step is replaced with a validation.

    Any ideas?

    Comment by kerry — September 2, 2008 @ 3:41 pm BST Sep 2,2008

  27. […] little series started from a note I wrote about manual optimisation where I took advantage of a sort operation in a non-mergeable view to produce sorted data from a […]

    Pingback by Manual Optimisation 3 « Oracle Scratchpad — December 23, 2008 @ 8:07 pm GMT Dec 23,2008

    Jonathan, any chance you can help me figure out how to refactor my (relatively simple) query to apply your methodologies? I posted on the Oracle forums here: It’s a pagination query that I think could be extremely fast (even in a very large table) if I could get this to work… I gave your hints a shot, but still get the same results, although that’s probably attributable to my lack of experience using these hints properly. Thanks!

    Comment by RyanH — February 9, 2010 @ 3:11 pm GMT Feb 9,2010

  29. […] method is just another example of the “visit a table twice to improve the efficiency” strategy that I wrote about a long time ago; and it’s this particular variant of the strategy that […]

    Pingback by In-memory limitation | Oracle Scratchpad — August 15, 2014 @ 8:51 pm BST Aug 15,2014

