Oracle Scratchpad

December 24, 2009

Holiday Quiz

Filed under: sorting — Jonathan Lewis @ 7:36 pm GMT Dec 24,2009

I have a table with one million rows, there are no indexes on the table. The table has a column called sortcode which has no nulls, and has been generated in a highly random way so that no value appears more than four times. Consider the following queries:

select 
        sortcode
from    t1
order by 
        sortcode
;

select  sortcode 
from
        (
        select  sortcode
        from    t1 
        order by 
                sortcode
        )
where
        rownum <= 10
;

How many rows are sorted in each of these two queries – and roughly how much memory would you expect Oracle to use?

Addendum: in the light of comment #2, assume sortcode is char(6).
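
Footnote: if you want to reproduce the setup, a minimal sketch of a script that matches the description (one million rows, a non-null char(6) sortcode, random enough that no value is likely to appear more than four times) might look like the following. The data generation here is purely illustrative, not the original script:

create table t1 (
        sortcode        char(6) not null
);

insert /*+ append */ into t1
select
        cast(dbms_random.string('U', 6) as char(6))
from    dual
connect by
        level <= 1000000
;

commit;

--
--      Sanity check: with 26^6 (roughly 309 million) possible values
--      and only 1M rows, the maximum repetition count should be well
--      under four.
--
select  max(ct)
from    (
        select  sortcode, count(*) ct
        from    t1
        group by
                sortcode
        );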

24 Comments »

  1. It must be a million rows sorted for each, but I cannot guess the memory :( Looking forward to your answer after Christmas. Enjoy your holiday.

    Comment by coskan — December 24, 2009 @ 10:21 pm GMT Dec 24,2009 | Reply

  2. A million rows in the first case and 10 rows in the second case (isn’t that what you said here https://jonathanlewis.wordpress.com/2008/09/08/pagination/#comment-32731 ?)
    Not sure how we can guess how much memory Oracle will use (to sort, I suppose) when the data type of sortcode is not known. Won’t the data type affect the memory consumption?

    Comment by Narendra — December 24, 2009 @ 11:27 pm GMT Dec 24,2009 | Reply

    • Jonathan,

      You are killing me. :)
      I just tried to run the test and the results left me “shattered”… Here are the details:
      First the spool file:
      ======================================================
      SQL> select * from v$version ;

      BANNER
      —————————————————————-
      Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Prod
      PL/SQL Release 10.2.0.1.0 – Production
      CORE 10.2.0.1.0 Production
      TNS for Linux: Version 10.2.0.1.0 – Production
      NLSRTL Version 10.2.0.1.0 – Production

      SQL> create table t (sortcode varchar2(10)) ;

      Table created.

      SQL> REM I am trying to populate the table with the specified criteria.
      SQL> insert /*+ APPEND */ into t select dbms_random.string('a',1)||ROUND(dbms_random.value(1,420000), 0) from dual connect by level <= 1000000 ;

      1000000 rows created.

      SQL> commit ;

      Commit complete.

      SQL> select max(cnt) from (select count(*) cnt from t group by sortcode) ;

      MAX(CNT)
      ———-
      4

      SQL> REM Now I will enable the trace as well as check V$MYSTAT for no. of rows sorted
      SQL> alter session set events '10046 trace name context forever, level 12' ;

      Session altered.

      SQL> select b.name, a.value from V$MYSTAT a, V$STATNAME b where a.statistic# = b.statistic# and b.name like 'sort%' ;

      NAME VALUE
      —————————————————————- ———-
      sorts (memory) 204
      sorts (disk) 0
      sorts (rows) 1159

      SQL> REM first query in the quiz

      SQL> select sortcode from t order by sortcode ;

      — Output removed

      SQL> select b.name, a.value from V$MYSTAT a, V$STATNAME b where a.statistic# = b.statistic# and b.name like 'sort%' ;

      NAME VALUE
      —————————————————————- ———-
      sorts (memory) 205
      sorts (disk) 0
      sorts (rows) 1001159

      SQL> REM now another query
      SQL> select sortcode from (select sortcode from t order by sortcode) where rownum <= 10
      ;
      — Output removed

      SQL> select b.name, a.value from V$MYSTAT a, V$STATNAME b where a.statistic# = b.statistic# and b.name like 'sort%' ;

      NAME VALUE
      —————————————————————- ———-
      sorts (memory) 206
      sorts (disk) 0
      sorts (rows) 2001159

      SQL> spool off
      =====================================================

      The V$ views show that both SQL statements sorted 1 million rows (but is that really so?)

      So then the trace details:
      =====================================================
      select sortcode
      from
      t order by sortcode

      call     count       cpu    elapsed       disk      query    current        rows
      ------- ------  -------- ---------- ---------- ---------- ---------- ----------
      Parse        1      0.00       0.00          0          1          0          0
      Execute      1      0.00       0.00          0          0          0          0
      Fetch    66668      5.70       5.09          0       1768          0    1000000
      ------- ------  -------- ---------- ---------- ---------- ---------- ----------
      total    66670      5.70       5.09          0       1769          0    1000000

      Misses in library cache during parse: 1
      Optimizer mode: ALL_ROWS
      Parsing user id: 52

      Rows     Row Source Operation
      -------  ---------------------------------------------------
      1000000  SORT ORDER BY (cr=1768 pr=0 pw=0 time=9759508 us)
      1000000   TABLE ACCESS FULL T (cr=1768 pr=0 pw=0 time=10000070 us)

      Elapsed times include waiting on following events:
      Event waited on Times Max. Wait Total Waited
      —————————————- Waited ———- ————
      SQL*Net message to client 66668 0.00 0.32
      SQL*Net message from client 66668 0.20 127.59
      ********************************************************************************

      select sortcode
      from
      (select sortcode from t order by sortcode) where rownum <= 10

      call     count       cpu    elapsed       disk      query    current        rows
      ------- ------  -------- ---------- ---------- ---------- ---------- ----------
      Parse        1      0.00       0.00          0          2          0          0
      Execute      1      0.00       0.00          0          0          0          0
      Fetch        2      0.34       0.34          0       1768          0         10
      ------- ------  -------- ---------- ---------- ---------- ---------- ----------
      total        4      0.34       0.34          0       1770          0         10

      Misses in library cache during parse: 1
      Optimizer mode: ALL_ROWS
      Parsing user id: 52

      Rows     Row Source Operation
      -------  ---------------------------------------------------
           10  COUNT STOPKEY (cr=1768 pr=0 pw=0 time=343142 us)
           10   VIEW (cr=1768 pr=0 pw=0 time=343058 us)
           10    SORT ORDER BY STOPKEY (cr=1768 pr=0 pw=0 time=343025 us)
      1000000     TABLE ACCESS FULL T (cr=1768 pr=0 pw=0 time=10000088 us)

      Elapsed times include waiting on following events:
      Event waited on Times Max. Wait Total Waited
      —————————————- Waited ———- ————
      SQL*Net message to client 2 0.00 0.00
      SQL*Net message from client 2 0.00 0.00
      =====================================================

      The "Row-source Operation" section for second query shows "Sort order by stopkey" step producing only 10 rows. Does that not mean sort took place only on 10 rows?
      But then which one is correct? :(

      p.s. Sometimes I think I should believe the “Don Burleson” and “Don Lewis” (from the Oracle forums) argument that Oracle is software and not a science…
      :)

      Comment by Narendra — December 27, 2009 @ 7:48 pm GMT Dec 27,2009 | Reply

  3. It would seem that both SQL statements would need to sort 1,000,000 rows. In the second case, the entire row source would need to be sorted so that the first 10 rows sorted in ascending order may be determined, and once sorted, all but the first 10 rows could be discarded before being passed beyond the inline view.

    The question of how much memory would be needed is a bit more difficult to determine. What if the default character set requires more than one byte per character? If each character requires a single byte, I would estimate for an optimal execution 6 bytes per ROWID plus 7 bytes per SORTCODE for each of the 1,000,000 rows – roughly 13MB for both methods. The actual memory used by the first SQL statement is a bit higher for an optimal execution. The actual memory used for the second SQL statement for an optimal execution is significantly less – is it possible that Oracle uses an optimization, such as maintaining something like a linked list with 10 sorted elements when the ROWNUM predicate is specified outside the inline view? Of course, if multi-pass executions are permitted, both methods could require 64KB (or less).

    Comment by Charles Hooper — December 25, 2009 @ 2:32 am GMT Dec 25,2009 | Reply

    • A couple hours after I posted my reply I realized that Oracle needs to allocate memory for an array pointer for each row to be sorted in memory – so that it is able to jump to the memory position holding each data item to be sorted. On a 32 bit platform this should be a 32 bit unsigned integer (4 bytes). On a 64 bit platform this should be a 64 bit unsigned integer (8 bytes). Thus, the amount of memory required should be roughly 6 bytes for the ROWID plus 7 bytes for the SORTCODE value (assuming the default character set requires a single byte) plus 4 bytes or 8 bytes – so 17 bytes or 21 bytes per row multiplied by the number of rows.

      For the first SQL statement, this calculation seems to be very close. On 32 bit Oracle DBMS_XPLAN reports 17MB used, and on 64 bit Oracle DBMS_XPLAN reports 21MB used when an optimal execution is permitted.

      DBMS_XPLAN showed that roughly 2MB was required for an optimal execution for the second SQL statement. But, does this memory requirement change for older Oracle database releases (maybe the optimization is a recent addition)?

      Comment by Charles Hooper — December 25, 2009 @ 12:11 pm GMT Dec 25,2009 | Reply
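
      For anyone who wants to reproduce figures like the 17MB / 21MB above, a minimal sketch of the dbms_xplan check (assuming statistics_level is set to all so that the Used-Mem column is populated, and that serveroutput is off so the last cursor really is the query) would be:

      alter session set statistics_level = all;

      select sortcode from t1 order by sortcode;

      select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

      --
      --      Rough per-row arithmetic for the optimal in-memory sort:
      --      6 bytes (rowid) + 7 bytes (char(6) plus length byte) plus a
      --      4-byte or 8-byte pointer, times 1,000,000 rows - roughly
      --      17M and 21M bytes respectively.
      --
      select
              1000000 * (6 + 7 + 4)   est_bytes_32bit,
              1000000 * (6 + 7 + 8)   est_bytes_64bit
      from    dual;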

  4. Hmmm, in the first case all 1,000,000 rows, and in the second just 10, due to predicate pushing! Not sure about the memory consumption, sir :-)

    Merry Christmas and happy holidays to you sir!

    regards
    Aman….

    Comment by Aman Sharma — December 25, 2009 @ 3:18 am GMT Dec 25,2009 | Reply

    • Predicate pushing is when Oracle moves the predicate into the inline view. This can’t be done in this case as it would change the semantics completely!

      This is an optimisation as described by Jonathan in the link given by the respondent of comment #2.

      Comment by Colin 't Hart — December 25, 2009 @ 8:53 am GMT Dec 25,2009 | Reply

  5. Not sure! I think both queries use much the same amount of memory, don’t they?

    Query1:
    Execution Plan
    ———————————————————-
    Plan hash value: 2148421099

    -----------------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |      |   922K|    11M|       |  4781   (2)| 00:00:58 |
    |   1 |  SORT ORDER BY     |      |   922K|    11M|    35M|  4781   (2)| 00:00:58 |
    |   2 |   TABLE ACCESS FULL| T1   |   922K|    11M|       |   368   (3)| 00:00:05 |
    -----------------------------------------------------------------------------------

    Query 2:
    Execution Plan
    ———————————————————-
    Plan hash value: 270731910

    ----------------------------------------------------------------------------------------
    | Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT         |      |    10 |   130 |       |  4781   (2)| 00:00:58 |
    |*  1 |  COUNT STOPKEY           |      |       |       |       |            |          |
    |   2 |   VIEW                   |      |   922K|    11M|       |  4781   (2)| 00:00:58 |
    |*  3 |    SORT ORDER BY STOPKEY |      |   922K|    11M|    35M|  4781   (2)| 00:00:58 |
    |   4 |     TABLE ACCESS FULL    | T1   |   922K|    11M|       |   368   (3)| 00:00:05 |
    ----------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    —————————————————

    1 – filter(ROWNUM<=10)
    3 – filter(ROWNUM<=10)

    By the way, Merry X'Mas and Happy New Year -)

    Comment by Surachart Opun — December 25, 2009 @ 3:53 am GMT Dec 25,2009 | Reply

  6. As to the memory used, I agree with Charles Hooper.

    The first case causes all 1M rows to be sorted.

    As to the second case, it depends on..

    .. the transformations available to the optimizer for this query (e.g. a pushed predicate)
    and
    .. (after any pushed predicate) the sorting algorithm. If the sort keeps the first ten rows (a 10-list) from the very beginning, and each remaining row of the table is then checked against that 10-list (replacing an entry if necessary), then the number of rows sorted is 10.

    Comment by Vladimir — December 25, 2009 @ 8:55 am GMT Dec 25,2009 | Reply
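
    If it helps to visualise the “10-list” idea, a minimal PL/SQL sketch of that kind of top-N bookkeeping (purely an illustration of the mechanism described in the comments, not Oracle’s internal sort code, and assuming a t1 test table as in the quiz) might be:

    set serveroutput on

    declare
            type t_top is table of t1.sortcode%type;
            g_top   t_top := t_top();
            k       constant pls_integer := 10;
            pos     pls_integer;
    begin
            for r in (select sortcode from t1) loop
                    --  find where this value would sit in the sorted 10-list
                    pos := g_top.count + 1;
                    for i in 1 .. g_top.count loop
                            if r.sortcode < g_top(i) then
                                    pos := i;
                                    exit;
                            end if;
                    end loop;

                    --  keep it only if it falls inside the top 10
                    if pos <= k then
                            if g_top.count < k then
                                    g_top.extend;
                            end if;
                            for i in reverse pos + 1 .. g_top.count loop
                                    g_top(i) := g_top(i - 1);
                            end loop;
                            g_top(pos) := r.sortcode;
                    end if;
            end loop;

            --  only the ten surviving values were ever kept in sorted order
            for i in 1 .. g_top.count loop
                    dbms_output.put_line(g_top(i));
            end loop;
    end;
    /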

  7. In the second case, only 10 units of sizeof(sortcode) memory are necessary,
    but in the first case, one million are necessary.

    Comment by 奶妈来了 — December 25, 2009 @ 10:26 am GMT Dec 25,2009 | Reply

    • If there were an index on the sortcode column, then the second case would only need 10 units of sizeof(sortcode); but as there is no index, the CBO cannot use a stopkey to optimize it.

      So I think both cases need similar memory and will sort a similar number of rows.

      Comment by jametong — December 25, 2009 @ 12:12 pm GMT Dec 25,2009 | Reply

  8. With the first query 1,000,000 rows are sorted. The first query consumed around 22M on 10gR2 running on a 64-bit platform with a “version 2 sort” for an optimal sort. The second query consumes just around 2,048 bytes of memory (I suppose that performance and memory consumption can vary with the row distribution for the second query).

    Comment by Vyacheslav Rasskazov — December 25, 2009 @ 12:20 pm GMT Dec 25,2009 | Reply

    • What I also see from my own test is 8 sorts (memory) for the first case and 1 sort (memory) for the second case, with similar memory consumption values.

      Comment by coskan — December 25, 2009 @ 3:48 pm GMT Dec 25,2009 | Reply
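
      A minimal sketch of one way to pick up workarea figures like these after running the two queries (the sql_text filter is just illustrative; any way of identifying the two cursors will do):

      select
              sw.sql_id, sw.operation_type, sw.policy,
              sw.last_memory_used, sw.last_execution
      from
              v$sql_workarea  sw,
              v$sql           sq
      where
              sq.sql_id       = sw.sql_id
      and     sq.child_number = sw.child_number
      and     upper(sq.sql_text) like 'SELECT%SORTCODE%ORDER BY%'
      ;

      last_memory_used is reported in bytes, and last_execution shows whether the workarea completed as an optimal, one-pass, or multi-pass operation.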

  9. I think….

    In the first case, all the 1 million rows will be sorted.

    In the second case, the sort area will contain 10 sorted rows. Each row in the table will be traversed, but only 10 rows will be kept sorted at a time, with each new row included or discarded depending on its sortcode value.

    Not sure about the exact calculation of the memory needed, but case 1 will take much more. In case 2, sorting 10 rows in RAM will need far less.

    Comment by Santosh Upadhyay — December 25, 2009 @ 5:18 pm GMT Dec 25,2009 | Reply

  10. Here are my two cents:

    1) How many rows are sorted in each of these two queries

    Query-1 will sort all 1M rows while query-2 will sort ONLY 10 rows. Both queries will read all 1M rows, but query-2 is able to limit the number of rows to sort by using the “SORT ORDER BY STOPKEY” step.

    Rows     Row Source Operation
    -------  ---------------------------------------------------
         10  COUNT STOPKEY (cr=1645 pr=701 pw=0 time=537221 us)
         10   VIEW (cr=1645 pr=701 pw=0 time=537190 us)
         10    SORT ORDER BY STOPKEY (cr=1645 pr=701 pw=0 time=537169 us)
    1000000     TABLE ACCESS FULL T1 (cr=1645 pr=701 pw=0 time=4000117 us)

    2) roughly how much memory would you expect Oracle to use ?

    Query-1 uses around 12MB of memory while query-2 uses less than 1 MB. (from v$sesstat view)

    Jonathan, this doesn’t seem fair. You are enjoying your holidays and making us work on this assignment. ;)

    Comment by Asif Momen — December 26, 2009 @ 8:11 am GMT Dec 26,2009 | Reply
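
    A minimal sketch of the kind of before-and-after snapshot that produces figures like these (run it immediately before and after each query and subtract; v$sesstat works the same way if you add a sid predicate):

    select
            sn.name, ms.value
    from
            v$mystat        ms,
            v$statname      sn
    where
            sn.statistic# = ms.statistic#
    and     sn.name in (
                    'session pga memory', 'session pga memory max',
                    'sorts (memory)', 'sorts (disk)', 'sorts (rows)'
            )
    ;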

  11. Since this is a quiz for the holidays and it is Jonathan who is raising the question, I could imagine there are more subtle things to consider.

    E.g. he says “I have a table with one million rows, there are no indexes on the table”, but what if the table were an index (IOT)?

    Since SORTCODE “has no nulls” it could be the leading column of a composite primary key – so an IOT is a possible construct to avoid sort operations.

    If you now use a suitable NLS_SORT = BINARY setting for the assumed CHAR(6) column, both statements could get by without any sort operation at all, and with no memory required for it.

    What about an indexed cluster? Again the index is not on the table, but on the cluster… It’s splitting hairs a bit, but I’m just throwing out some ideas.

    It is correct that with a heap table the second query can use a “SORT ORDER BY STOPKEY” operation that only needs to keep the top N rows as it goes along the unsorted row source and therefore requires a far smaller workarea size for the sort operation.

    Randolf

    PS: And no, my presentation above doesn’t (directly) cover this particular question

    Comment by Randolf Geist — December 26, 2009 @ 8:40 pm GMT Dec 26,2009 | Reply
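
    For illustration, the kind of IOT definition being hinted at might look like this (the extra id column is only there to make the composite primary key unique; it is an assumption, not part of the original quiz):

    create table t1 (
            sortcode        char(6)         not null,
            id              number          not null,
            constraint t1_pk primary key (sortcode, id)
    )
    organization index
    ;

    With a binary sort order both queries could then walk the index in key order (an index full scan, with a stopkey for the second query) and report no sort at all.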

    • I think that Randolf has a point here – there must be more variables to consider. What if table T1 is range partitioned on the SORTCODE column? I put together a test case that shows that with range partitioning, only 12 rows will be sorted for the second SQL statement (the number varies based on the granularity of the partitioning) – without range partitioning, the second SQL statement sorts 1,000,000 rows. In either case, DBMS_XPLAN on 11.1.0.7 reports that the second SQL statement uses 2MB of memory, even though the actual amount used will likely be less (possibly close to what my other responses suggest).

      Comment by Charles Hooper — December 28, 2009 @ 4:37 pm GMT Dec 28,2009 | Reply
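
      A minimal sketch of the kind of range-partitioned definition being described (the boundary values here are arbitrary; the actual test case is not shown):

      create table t1 (
              sortcode        char(6) not null
      )
      partition by range (sortcode) (
              partition p1   values less than ('G00000'),
              partition p2   values less than ('N00000'),
              partition p3   values less than ('U00000'),
              partition pmax values less than (maxvalue)
      )
      ;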

  12. We must assume that this is a regular heap table with no tricks, otherwise we could also assume that there is an MV that gives the response directly in 1 get.

    Comment by Bernard Polarski — December 28, 2009 @ 9:02 am GMT Dec 28,2009 | Reply

    • Hm, interesting idea. Have you tried that? It would be interesting to see what this MV definition is supposed to look like to give the response directly in 1 get – in particular for the first query, but also for the second one.

      Comment by Randolf Geist — December 28, 2009 @ 1:15 pm GMT Dec 28,2009 | Reply

  13. Well.

    The first query is quite clear: Oracle reads the whole table and sorts all the rows. Memory usage depends on the Oracle version, as different versions of Oracle use different sort algorithms. As the column is 6 bytes (I’m assuming 1 char = 8 bits), you have more or less 6 * 1,000,000 bytes, so about 6M of RAM, but if some kind of “hash sort” is used this can be lower.

    As stated by many here, Oracle (from 9i and up?) can do a “SORT ORDER BY STOPKEY”, so in the second case it will use an array of dimension 10 (the “rownum” we want in the example) and sort/replace only 10 rows (the full table scan still applies). Memory used: 10 * 6 = 60 bytes.

    Bye,
    Antonio

    Comment by lascoltodelvenerdi — December 28, 2009 @ 10:26 am GMT Dec 28,2009 | Reply

  14. […] sorting, trace files — Jonathan Lewis @ 6:29 pm UTC Dec 28,2009 Tags: sorting I posted a little holiday quiz – timed to appear just before midnight (GMT) on 24th December – that asked about the […]

    Pingback by Short Sorts « Oracle Scratchpad — December 28, 2009 @ 6:31 pm GMT Dec 28,2009 | Reply

  15. […] elimination in some  versions of Oracle.  [Update: I see Charles Hooper thought of, and tested, that idea while I was typing up this […]

    Pingback by Short Sorts | Oracle Scratchpad — July 15, 2018 @ 9:30 am BST Jul 15,2018 | Reply



Comments and related questions are welcome.
