Oracle Scratchpad

July 14, 2018

Quiz Night

Filed under: Execution plans,Oracle,Performance,sorting — Jonathan Lewis @ 7:07 pm GMT Jul 14,2018

Here’s a question prompted by a recent thread on the ODevCom database forum – how many rows will Oracle sort (assuming you have enough rows to start with in all_objects) for the final query, and how many sort operations will that take ?


drop table t1 purge;

create table t1 nologging as select * from all_objects where rownum < 50000;

select owner, count(distinct object_type), count(distinct object_name) from t1 group by owner;

Try to resist the temptation to cut-n-paste and run the code until after you’ve thought about the answer.

And the answer is:

It was nice to see a few ideas being volunteered in response to this question; I think that getting a diverse set of comments makes a nice point about how it’s always worth spending a little time to think along the lines of: “If I do X, how might Oracle handle it?”. Having the ideas before trying to check the effects can make it a lot easier to understand what’s happening and, sometimes, how to take advantage of what Oracle does to improve the way you design a query.

The first point to make, as Michael D O’Shea pointed out in comment #2, is that computer systems don’t usually “sort” data – they tend to create pointers to data and shuffle the pointers in some way. In Oracle’s case “sorting” used to mean inserting pointers into a balanced binary tree, and aggregating used to be a case of accumulating values at the leaf nodes of the insertion tree. Then in 10g Oracle introduced a new sorting algorithm that often works more efficiently than the binary insertion tree. I’m still going to refer to the binary tree method as “sorting”, though.
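
(If you want to see the older mechanism at work you can disable the newer one – as I do further down this note – by setting a hidden parameter; the usual warnings about playing with hidden parameters apply:)

alter session set "_newsort_enabled" = false;   -- fall back to the "version 1" binary-tree sort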

Looking at the query we can see that there is no “order by” clause, so it’s possible that Oracle will do whatever it does using hash aggregation throughout and no sorting. But that leaves open the question of how a hash table keyed on owner could also record a distinct count of both object_type and object_name: every single owner hash bucket would have to link to its own hash tables for object_type and object_name, doing a sort of “recursive hash aggregation”, which starts to sound a little complicated. Maybe the alternative suggested by Kaley in comment #1 is closer to the truth – maybe Oracle just “buckets” all the data by owner and then sorts within each owner twice to do the count distincts – but then we’re still hanging on to a lot of data and running a two-level, open-ended process.
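
(If you want to experiment with this line of thought there’s a pair of undocumented hints – use_hash_aggregation and no_use_hash_aggregation – that nudge the optimizer towards or away from hash aggregation. A sketch only, with no guarantee that the hint won’t simply be ignored for a query like this one:)

select  /*+ use_hash_aggregation */
        owner, count(distinct object_type), count(distinct object_name)
from    t1
group by
        owner
;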

Having waved our hands a little in the direction of possible solutions, we need to look for clues that tell us whether we ought to eliminate or refine some of our guesses. There are several bits of information we could look at, and running the query (although I asked you not to) is the next step we have to take. When we run the query we want to see the session statistics, pick up the actual execution plan with rowsource execution statistics, and enable the 10032 and 10033 (sort) traces. So let’s fold the query into a longer script, something like:


set linesize 255
set trimspool on
set pagesize 60

set serveroutput off
alter session set statistics_level = all;

execute snap_my_stats.start_snap

alter session set events '10032 trace name context forever';
alter session set events '10033 trace name context forever';

select owner ... etc.

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set events '10033 trace name context off';
alter session set events '10032 trace name context off';

execute snap_my_stats.end_snap
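
(snap_my_stats, by the way, is my own package for reporting the change in session statistics; if you don’t have it installed, a rough and ready alternative is to run something like the following before and after the test and work out the deltas by hand – a sketch only:)

select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name in (
                'sorts (memory)', 'sorts (rows)',
                'table scan rows gotten',
                'session pga memory max', 'session uga memory max'
        )
order by
        sn.name
;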

To avoid blurring around the edges we may have to isolate the three different tests – the call to dbms_xplan.display_cursor(), for example, is obviously going to have some impact on the session stats – and we may then want to run each test twice in succession so that warm-up and parsing activities don’t confuse the issue. It would also be a good idea to run the tests from a freshly created session in case there are distracting side effects from creating the data set. With these details addressed, here are a few results:

First the execution plan (I got these results from 12.2.0.1; all recent versions of Oracle behave similarly):


----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |     16 |00:00:00.34 |    1018 |       |       |          |
|   1 |  SORT GROUP BY     |      |      1 |     16 |     16 |00:00:00.34 |    1018 |  5014K|  1445K| 4456K (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |  50000 |  50000 |00:00:00.07 |    1018 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Oracle uses a SORT GROUP BY, not a HASH GROUP BY, and the indications are that we used about 4.4MB of memory to sort 50,000 rows. Then there are the session statistics:


Name                                                     Value
----                                                ----------

session uga memory max                               3,255,648
session pga memory max                               3,211,264

table scan rows gotten                                  50,000

sorts (memory)                                               1
sorts (rows)                                           150,000

This says we did just one sort operation and sorted 150,000 rows, using an excess of 3.2MB over the starting pga/uga (rather than the 4.4MB suggested by the plan). The variation in apparent memory usage could possibly be explained by the way Oracle allocates reasonably large chunks to grow the PGA when it grows a workarea, but since I’m taking deltas there’s also some scope for being misled by the change in maximum pga/uga memory.
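
(If you want to cross-check the workarea figure from a different direction, and you know the sql_id of the query, a quick look at v$sql_workarea is possible – a sketch:)

select
        operation_type, policy,
        last_memory_used, last_execution
from    v$sql_workarea
where   sql_id = '&m_sql_id'            -- substitute the sql_id of the test query
;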

Finally the 10032 trace file shows the following (we didn’t spill to disc, so the 10033 trace wasn’t triggered):


---- Sort Parameters ------------------------------
sort_area_size                    4562944
sort_area_retained_size           4562944
sort_multiblock_read_count        7
max intermediate merge width      38

---- Sort Statistics ------------------------------
Input records                             150000
Output records                            49907
Total number of comparisons performed     1945863
  Comparisons performed by in-memory sort 1945863
Total amount of memory used               4562944
Uses version 1 sort
---- End of Sort Statistics -----------------------

These figures are ones we have to trust – it seems we really did do one sort operation of 150,000 rows, and we did allocate 4.5MB of memory. There’s an obvious guess for the 150,000 input rows: Oracle has turned every row into three rows – the original row, a row for counting distinct object_types, and a row for counting distinct object_names – and with that in mind maybe we can make sense of getting an output of 49,907 rows using 4.5MB of memory. Let’s create a query that could produce the right numbers but with a differently arranged output:


select distinct owner, 0, null        from t1
union all
select distinct owner, 1, object_type from t1
union all
select distinct owner, 2, object_name from t1
order by 1, 2, 3
;

For my data set this query produced 49,907 rows of output (which is a number we wanted to see) and here are the first 9 rows of output – followed by the first row of output from the original query:


OWNER                    0 NULL
--------------- ---------- ---------------------
APPQOSSYS                0

APPQOSSYS                1 SYNONYM
APPQOSSYS                1 TABLE

APPQOSSYS                2 DBMS_WLM
APPQOSSYS                2 WLM_CLASSIFIER_PLAN
APPQOSSYS                2 WLM_FEATURE_USAGE
APPQOSSYS                2 WLM_METRICS_STREAM
APPQOSSYS                2 WLM_MPA_STREAM
APPQOSSYS                2 WLM_VIOLATION_STREAM


OWNER           COUNT(DISTINCTOBJECT_TYPE) COUNT(DISTINCTOBJECT_NAME)
--------------- -------------------------- --------------------------
APPQOSSYS                                2                          6

Spot the pattern ? All I have to do after this “union all with simple sort” is walk the result set in order accumulating distinct counts as I do so.
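
(Since each branch of the union all is already distinct, we can even emulate that walk with a simple aggregation over tagged rows – a sketch of the arithmetic only, not a claim about Oracle’s internal code:)

select
        owner,
        sum(case when tag = 1 then 1 else 0 end)        ct_object_type,
        sum(case when tag = 2 then 1 else 0 end)        ct_object_name
from    (
        select distinct owner, 1 tag, object_type vc from t1
        union all
        select distinct owner, 2 tag, object_name    from t1
        )
group by
        owner
order by
        owner
;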

But what about the memory requirements ? Check back to the original 10032 trace file: it reported 4562944 bytes as the total memory needed, but it also reported using a “version 1” sort – so before I ran my union all query I set “_newsort_enabled” = false, to get the following 10032 trace details:


---- Sort Statistics ------------------------------
Input records                             49907
Output records                            49907
Total number of comparisons performed     730667
  Comparisons performed by in-memory sort 730667
Total amount of memory used               4562944
Uses version 1 sort
---- End of Sort Statistics -----------------------

The memory used is exactly the number I wanted to see. (The version 2 sort got exactly the same result using 3.5MB of memory, so I don’t know why it isn’t used at this point – but maybe that’s because the implementation isn’t quite what I think it is.)

So, hypothesis (to date, until a reader shows me that my answer is incomplete – or wrong):

As the “SORT GROUP BY” operation accepts each row from the “TABLE ACCESS FULL” operation it converts each row into N rows (one base row and one for each column for which there is a count(distinct)). The base row holds just the set of “group by” columns and is tagged with a zero, each subsequent row holds the “group by” columns, a tag value to identify which column it carries, and one of the “count(distinct)” columns. Oracle then operates the normal “group by” aggregation mechanism but is actually aggregating on the “group by” columns plus the “tag” column. So each leaf node in the binary tree ends up holding {owner_value, tag_value, count}, and once all the data has been aggregated into the binary tree Oracle can walk the tree and perform a pivot to turn three rows for each owner value into a single row.

If you start thinking about nasty scenarios you will realise the upshot of this implementation (if my hypothesis is correct): if you have a query with a long “group by” list, and several columns with lots of distinct values for each combination of that list, then the volume of the B-tree could actually be much larger than the volume of the original table, and the amount of memory and CPU needed to build that tree (before collapsing it) could be huge.
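
(Purely as an illustration of the shape to be wary of – all the names below are invented:)

select
        col1, col2, col3, col4,                 -- long "group by" list
        count(distinct high_card_1),            -- each count(distinct)
        count(distinct high_card_2),            -- adds one row per input row
        count(distinct high_card_3)             -- to the sort
from    very_big_table
group by
        col1, col2, col3, col4
;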

Footnote:

There is one special case with this count(distinct …) query. If you have only ONE distinct operation in the query Oracle can use the “distinct aggregation” transformation with “view merging” to produce a completely different plan.
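
In outline – and this is my approximation of the transformed text, not a dump of what the optimizer actually generates – a query like the first statement below can be rewritten as the second, aggregating twice instead of expanding rows for a sort (the internal view typically shows up in the plan with a name like VW_DAG_0):

select  owner, count(distinct object_type)
from    t1
group by
        owner
;

select  owner, count(object_type)
from    (
        select  owner, object_type
        from    t1
        group by
                owner, object_type
        )
group by
        owner
;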

December 1, 2013

Rowids

Filed under: Infrastructure,Oracle,sorting — Jonathan Lewis @ 11:26 am GMT Dec 1,2013

I have, in the past, used the dbms_rowid package to create rowids from block addresses (typically faking the first and last rowids that could appear in an extent); but I’ve just been sent a piece of information by Valentin Nikotin that’s going to make me go back and check whether what I’ve done with the package will always give me the correct results. Here’s a little demonstration code that highlights the issue:

(more…)
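
(Valentin’s demonstration is behind the link; for context, the sort of faking I’ve mentioned above relies on dbms_rowid.rowid_create(), which takes a rowid type, a data object id, a relative file number, a block number and a row number. The sketch below uses substitution variables where you’d plug in real values:)

select
        dbms_rowid.rowid_create(
                1,                      -- extended rowid type
                &data_object_id,
                &relative_fno,
                &block_id,              -- e.g. first block of an extent
                0                       -- lowest possible row number
        )       faked_rowid
from    dual
;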

September 18, 2013

Distributed Sets

Filed under: distributed,Oracle,Performance,sorting — Jonathan Lewis @ 6:14 pm GMT Sep 18,2013

In an earlier post I’ve described how a distributed query can operate at a remote site if it’s a simple select, but has to operate at the local site if it’s a CTAS (create table as select) or insert as select. There’s (at least) one special case where this turns out to be untrue … provided you write the query in the correct fashion. I discovered this only as a result of doing a few experiments in response to a question on the OTN database forum.
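
(To recap the general rule with a skeleton – object names invented: both statements reference only remote objects, but by default only the first can be executed entirely at the remote site.)

select  r1.col1, r2.col2
from    t1@remote_db r1, t2@remote_db r2
where   r2.id = r1.id           -- whole query can run at the remote site
;

create table local_copy as
select  r1.col1, r2.col2
from    t1@remote_db r1, t2@remote_db r2
where   r2.id = r1.id           -- the join (normally) runs at the local site
;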

(more…)

December 28, 2009

Short Sorts

Filed under: Infrastructure,Performance,sorting,trace files,Tuning — Jonathan Lewis @ 7:29 pm GMT Dec 28,2009

I posted a little holiday quiz – timed to appear just before midnight (GMT) on 24th December – that asked about the number of rows sorted and the memory used for queries like:

select sortcode
from
        (
        select sortcode
        from   t1
        order by
                sortcode
        )
where
        rownum <= 10
;

The number and variety of the responses were gratifying. It’s always interesting to see how many important little details appear as people start to tackle even fairly straightforward questions like this.

(more…)

December 24, 2009

Holiday Quiz

Filed under: sorting — Jonathan Lewis @ 7:36 pm GMT Dec 24,2009

I have a table with one million rows, there are no indexes on the table. The table has a column called sortcode which has no nulls, and has been generated in a highly random way so that no value appears more than four times. Consider the following queries:

select 
        sortcode
from    t1
order by 
        sortcode
;

select  sortcode 
from
        (
        select  sortcode
        from    t1 
        order by 
                sortcode
        )
where
        rownum <= 10
;

How many rows are sorted in each of these two queries – and roughly how much memory would you expect Oracle to use ?

Addendum: in the light of comment #2, assume sortcode is char(6).

October 1, 2009

_smm_isort_cap

Filed under: Infrastructure,Performance,sorting — Jonathan Lewis @ 6:01 pm GMT Oct 1,2009

I mentioned the hidden parameter _smm_isort_cap recently in my solution to a problem with analytic functions applied to large volumes of data; but in that note I didn’t give you much detail about what it does.

The description given in x$ksppi is: “maximum work area for insertion sort(v1)”, which refers to the mechanism used for all sort operations in earlier releases of Oracle. In 10gR2, however, Oracle introduced a new sorting mechanism (commonly called the V2 sort) which uses less memory and less CPU, and can reduce the chances of a sort operation spilling to disc. But the new sort mechanism doesn’t get used for all sort operations – so it’s worth checking v$sql_workarea (and v$sql_workarea_active) – and even the 10032 trace file – from time to time to see which operations have used the V1 sort and which have used the V2 sort. Here, for example, is a simple query against a 10.2.0.3 system just after startup:
(more…)
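
(That sample query is behind the link. If you simply want to check the parameter description for yourself – remembering that the x$ structures are visible only to sys – a sketch like this will do it:)

select  ksppinm, ksppdesc
from    x$ksppi
where   ksppinm = '_smm_isort_cap'
;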

September 7, 2009

Analytic Agony

Filed under: Infrastructure,Performance,sorting,trace files,Troubleshooting — Jonathan Lewis @ 5:30 pm GMT Sep 7,2009

When I wrote Practical Oracle 8i, version 8.1.5 was the most recent version of Oracle, but version 8.1.6 came out just before I finished writing – and the only thing in 8.1.6 I thought important enough to add to the book was a section on Analytic Functions, because they were the best new coding feature in the product.

Since then I’ve always warned people to be a little careful about how they use analytic functions because of the amount of sorting they can introduce. My suggestion has always been to crunch “large” volumes of data down to “small” volumes of data before applying any analytic functions to add “intelligence” to the intermediate result.
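
(As a trivial sketch of the principle – names invented: aggregate first, then apply the analytic function to the much smaller intermediate result.)

select
        owner, ct,
        rank() over (order by ct desc)  rnk
from    (
        select  owner, count(*) ct
        from    t1
        group by
                owner
        )
;
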
(more…)

November 25, 2008

SAS Bug

Filed under: CBO,Performance,sorting,trace files — Jonathan Lewis @ 11:29 pm GMT Nov 25,2008

I see that Tom Kyte has found a nasty little bug waiting to trap a few unlucky people as they patch to 10.2.0.4, or upgrade to 11g.

(more…)

October 23, 2008

Manual Optimisation 3

Filed under: Execution plans,Hints,Oracle,Performance,sorting,Tuning — Jonathan Lewis @ 6:38 pm GMT Oct 23,2008

[Back to Manual Optimisation part 2]

This little series started from a note I wrote about manual optimisation where I took advantage of a sort operation in a non-mergeable view to produce sorted data from a final nested loop join without including an “order by” that would have produced a large sort operation.
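
(As a skeleton of the mechanism – all names invented, and only a sketch of the shape rather than the original example: the no_merge hint keeps the sorted inline view intact, and the nested loop join then returns rows in the order produced by that sort without a final “order by”.)

select
        /*+ leading(v) use_nl(t2) */
        v.id, v.sortcode, t2.padding
from    (
        select  /*+ no_merge */
                id, sortcode
        from    t1
        order by
                sortcode
        )       v,
        t2
where   t2.id = v.id
;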

In fact, as I showed in a follow-up post, this was taking a convenient pagination mechanism to an extreme – and you might decide (with good reason, as Tom Kyte did) that it was an extreme that should not be used.

(more…)

June 3, 2007

Sorting

Filed under: Infrastructure,sorting,Troubleshooting — Jonathan Lewis @ 8:45 pm GMT Jun 3,2007

This question came up on the Oracle newsgroup a few days ago:

I have a table (call it policy) with three columns a, b and c. The table has two rows, with column c having the value zero for both rows. I run the following query:

select * from policy order by c;

As both rows have a value of zero, the result should be sorted ascending by rowid, but I see the opposite; viz. the result set is sorted descending by rowid.

Is that an issue with the version of the 10g server I am using, or is it some setting of the Oracle server?

(more…)

December 17, 2006

Buffer Sorts

Filed under: CBO,Execution plans,Performance,sorting,trace files — Jonathan Lewis @ 9:48 pm GMT Dec 17,2006

In an earlier article I mentioned the buffer sort in a footnote; I thought I would expand a little more on what I think it does and why it appears as a buffer sort in an execution plan rather than the more traditional sort (join).

Consider the trivial script:
(more…)
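
(The script itself is behind the link, but as a generic illustration of where the operation tends to appear, a cartesian merge join – hypothetical tables t1 and t2 – will usually produce one:)

select  t1.v1, t2.v1
from    t1, t2          -- no join condition
;

-- typical plan shape:
--      MERGE JOIN CARTESIAN
--        TABLE ACCESS FULL   T1
--        BUFFER SORT
--          TABLE ACCESS FULL T2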
