A little while ago I published a note explaining how it was possible to find queries which ran faster if you manually de-coupled the index and table accesses. Here’s a further example that came up in discussion on a client site recently. The query looks something like this (at least in concept, although it was a little more complex, with some messy bits around the edges):
select
	ord.*
from
	products	prd,
	customers	cst,
	orders		ord
where
	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;
There are indexes on products(id), customers(id) and orders(id), as well as indexes on orders(id_prd) and orders(id_cst). Basically it’s a collection of primary key and “foreign key” indexes – except there are no defined constraints and none of the indexes is unique.
The production “orders” table is very large (hundreds of millions of rows). There are lots of customers – and the orders for each customer are scattered throughout the entire orders table; similarly there are lots of products, and the orders for each product are scattered throughout the orders table. What execution plans (with the right indexes, of course) might you get for this query? There are two obvious possibilities:
-----------------------------------------------------------------
| Id  | Operation            | Name      | Rows  | Bytes | Cost |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |    11 |  1584 |  492 |
|*  1 |  HASH JOIN           |           |    11 |  1584 |  492 |
|*  2 |   TABLE ACCESS FULL  | CUSTOMERS |   100 |   900 |   14 |
|*  3 |   HASH JOIN          |           |  3314 |  436K |  477 |
|*  4 |    TABLE ACCESS FULL | PRODUCTS  |   100 |   900 |   14 |
|   5 |    TABLE ACCESS FULL | ORDERS    | 1000K |  120M |  446 |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("ORD"."ID_CST"="CST"."ID")
   2 - filter("CST"."GRP"=50)
   3 - access("ORD"."ID_PRD"="PRD"."ID")
   4 - filter("PRD"."GRP"=50)

-----------------------------------------------------------------------------
| Id  | Operation                      | Name       | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |    11 |  1584 | 10223 |
|   1 |  NESTED LOOPS                  |            |       |       |       |
|   2 |   NESTED LOOPS                 |            |    11 |  1584 | 10223 |
|   3 |    NESTED LOOPS                |            |  3314 |  436K |  3614 |
|*  4 |     TABLE ACCESS FULL          | PRODUCTS   |   100 |   900 |    14 |
|   5 |     TABLE ACCESS BY INDEX ROWID| ORDERS     |    33 |  4158 |    36 |
|*  6 |      INDEX RANGE SCAN          | ORD_PRD_FK |    33 |       |     2 |
|*  7 |    INDEX RANGE SCAN            | CST_I1     |     1 |       |     1 |
|*  8 |   TABLE ACCESS BY INDEX ROWID  | CUSTOMERS  |     1 |     9 |     2 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("PRD"."GRP"=50)
   6 - access("ORD"."ID_PRD"="PRD"."ID")
   7 - access("ORD"."ID_CST"="CST"."ID")
   8 - filter("CST"."GRP"=50)
The first option is to hash the two dimension tables into memory then probe each in turn after a tablescan of the orders table – and this could be a very effective choice in some cases; the second option is to drive from one of the dimension tables into the orders table, picking up a relatively large number of rows, then discard a large fraction of those rows by indexing (or possibly doing a hash join) into the second dimension table. (Obviously the roles of products and customers could be reversed in the nested loop plan I’ve shown).
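Purely for reference – and as an untested sketch rather than anything taken from the original system – hint sets along the following lines should push the optimizer towards each of the two shapes:

-- Sketch 1: the double hash join with tablescans
select	/*+ leading(prd ord cst) use_hash(ord) use_hash(cst)
	    swap_join_inputs(cst) full(prd) full(cst) full(ord) */
	ord.*
from	products prd, customers cst, orders ord
where	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;

-- Sketch 2: the nested loop from products into orders,
-- followed by the indexed probe into customers
select	/*+ leading(prd ord cst) use_nl(ord) use_nl(cst)
	    full(prd) index(ord ord_prd_fk) index(cst cst_i1) */
	ord.*
from	products prd, customers cst, orders ord
where	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;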
Both plans are likely to be rather expensive – a full scan on orders could be a huge amount of I/O in the form of “db file scattered read” waits; the indexed access path from one dimension to the orders table could be a huge amount of I/O in the form of “db file sequential read” waits collecting rows we won’t eventually need. In the right circumstances (with the right data patterns), then, we might like to see a star transformation. But there are two possible problems:
- It is “common knowledge” that you need primary keys on the dimension tables to do a star transformation. (In fact I was sure of this until about 30 minutes ago but then I discovered I was wrong.)
- It is also “common knowledge” that you need bitmap indexes on the fact table to do a star transformation. Again this is wrong (but fortunately I’ve known about that for years).
So if we enable star transformations (alter session set star_transformation_enabled=true) we can get a plan that looks like this:
---------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                       | Rows  | Bytes | Cost |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                            |     1 |   136 |  102 |
|   1 |  TEMP TABLE TRANSFORMATION            |                            |       |       |      |
|   2 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6601_17047D9 |       |       |      |
|*  3 |    TABLE ACCESS FULL                  | CUSTOMERS                  |   100 |   900 |   14 |
|   4 |   LOAD AS SELECT                      | SYS_TEMP_0FD9D6601_17047D9 |       |       |      |
|*  5 |    TABLE ACCESS FULL                  | PRODUCTS                   |   100 |   900 |   14 |
|*  6 |   HASH JOIN                           |                            |     1 |   136 |   74 |
|*  7 |    HASH JOIN                          |                            |     1 |   131 |   71 |
|   8 |     TABLE ACCESS BY INDEX ROWID       | ORDERS                     |    11 |  1400 |   68 |
|   9 |      BITMAP CONVERSION TO ROWIDS      |                            |       |       |      |
|  10 |       BITMAP AND                      |                            |       |       |      |
|  11 |        BITMAP MERGE                   |                            |       |       |      |
|  12 |         BITMAP KEY ITERATION          |                            |       |       |      |
|  13 |          TABLE ACCESS FULL            | SYS_TEMP_0FD9D6600_17047D9 |     1 |    13 |    2 |
|  14 |          BITMAP CONVERSION FROM ROWIDS|                            |       |       |      |
|* 15 |           INDEX RANGE SCAN            | ORD_CST_FK                 |       |       |    3 |
|  16 |        BITMAP MERGE                   |                            |       |       |      |
|  17 |         BITMAP KEY ITERATION          |                            |       |       |      |
|  18 |          TABLE ACCESS FULL            | SYS_TEMP_0FD9D6601_17047D9 |     1 |    13 |    2 |
|  19 |          BITMAP CONVERSION FROM ROWIDS|                            |       |       |      |
|* 20 |           INDEX RANGE SCAN            | ORD_PRD_FK                 |       |       |    3 |
|  21 |    TABLE ACCESS FULL                  | SYS_TEMP_0FD9D6600_17047D9 |   100 |   500 |    2 |
|  22 |   TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6601_17047D9 |   100 |   500 |    2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("CST"."GRP"=50)
   5 - filter("PRD"."GRP"=50)
   6 - access("ORD"."ID_PRD"="C0")
   7 - access("ORD"."ID_CST"="C0")
  15 - access("ORD"."ID_CST"="C0")
  20 - access("ORD"."ID_PRD"="C0")
In this example you can see that Oracle has decided to use a hash join to do a “join-back” to the dimension tables, and has decided to extract the subset of rows (grp = 50) from each of those tables and save them in global temporary tables. This join-back is a little odd since the query isn’t selecting any data from those two tables so we shouldn’t need to do it – but I think that may be a side effect of not having the primary keys declared.
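If the missing primary keys really are the cause of the join-back, the obvious experiment – not one I’ve run here, so treat it as a suggestion – is to uncomment the constraint definitions from the script in the footnote and check the plan again:

alter table products  add constraint prd_pk primary key(id);
alter table customers add constraint cst_pk primary key(id);

-- re-run the query and look for the disappearance of the two
-- hash joins back to the SYS_TEMP_... tables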
I’ve broken the plan up a little to make the critical steps easier to see:
- Lines 11 – 15: we select the index ranges we want from the customer index on orders, convert to bitmaps, and merge.
- Lines 16 – 20: we do the same for the products index on orders.
- Lines 8 – 10: we do the “bitmap and” between the two constructed bit strings, convert to rowids, and pick up exactly the rows we want.
- Lines 6, 7, 21, 22: two hash joins to the temporary tables which represent the subsets of rows we have from customers and products.
- Lines 1 – 5: where we started, by extracting the two subsets into a form of internalised global temporary table.
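If you want to experiment without changing the session setting, the opt_param and star_transformation hints should give the same effect at the statement level – a sketch, not something taken from the original test:

select	/*+
		opt_param('star_transformation_enabled' 'true')
		star_transformation
	*/
	ord.*
from	products prd, customers cst, orders ord
where	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;

-- pull the plan of the last statement from memory
select * from table(dbms_xplan.display_cursor);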
If this doesn’t work well enough for you – and, allowing for a few little tweaks here and there, it really should – or if there is something about your code that makes it impossible for Oracle to do a star transformation, here’s an example of taking total control by restructuring the SQL to do a similar operation:
select
	ord.*
from
	(
	select	/*+ leading(prd ord) use_nl(ord) no_merge */
		ord.rowid	prid
	from
		products	prd,
		orders		ord
	where
		prd.grp = 50
	and	ord.id_prd = prd.id
	)	prid,
	(
	select	/*+ leading(cst ord) use_nl(ord) no_merge */
		ord.rowid	crid
	from
		customers	cst,
		orders		ord
	where
		cst.grp = 50
	and	ord.id_cst = cst.id
	)	crid,
	orders	ord
where
	prid.prid = crid.crid
and	ord.rowid = prid.prid
;
With the no_merge hints we produce two result sets: prid – the rowids from the orders table for all rows which match the products we are interested in; and crid – the rowids from the orders table for all rows which match the customers we are interested in.
We then do a join between crid and prid – which has to leave us with just those rowids for the rows which represent the customers we’re interested in who have placed orders for products we’re interested in. So we can then do a join, by rowid, to the orders table to collect the data for those orders. Here’s the plan:
--------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    11 |  1650 |   570 |
|   1 |  NESTED LOOPS               |            |    11 |  1650 |   570 |
|*  2 |   HASH JOIN                 |            |    11 |   264 |   559 |
|   3 |    VIEW                     |            |  3361 | 40332 |   277 |
|   4 |     NESTED LOOPS            |            |  3361 | 87386 |   277 |
|*  5 |      TABLE ACCESS FULL      | CUSTOMERS  |    98 |   882 |    81 |
|*  6 |      INDEX RANGE SCAN       | ORD_CST_FK |    34 |   578 |     2 |
|   7 |    VIEW                     |            |  3390 | 40680 |   281 |
|   8 |     NESTED LOOPS            |            |  3390 | 88140 |   281 |
|*  9 |      TABLE ACCESS FULL      | PRODUCTS   |   100 |   900 |    81 |
|* 10 |      INDEX RANGE SCAN       | ORD_PRD_FK |    34 |   578 |     2 |
|  11 |   TABLE ACCESS BY USER ROWID| ORDERS     |     1 |   126 |     1 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PRID"."PRID"="CRID"."CRID")
   5 - filter("CST"."GRP"=50)
   6 - access("ORD"."ID_CST"="CST"."ID")
   9 - filter("PRD"."GRP"=50)
  10 - access("ORD"."ID_PRD"="PRD"."ID")
The SQL has obviously become more complex – and complexity is generally something to be wary of as it’s an easy step from “complex” to “wrong”. So don’t try something like this unless you’re sure it’s necessary, and then document what you’re doing very carefully.
Footnote:
If you want to play around with this example, here’s the code to generate the tables. I was using my standard LMT, 1MB uniform extents, 8KB block size, freelist management model. The database was version 11.1.0.6 and CPU costing was disabled.
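For reference, if you want to run with the same costing basis, the statement below is one way to switch to the IO cost model (a hidden parameter, so test systems only – and note that Martin Preiss achieves the same thing at the statement level with an opt_param hint in the comments below):

alter session set "_optimizer_cost_model" = io;
-- or, per statement: /*+ opt_param('_optimizer_cost_model','io') */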
rem
rem	Script:		star_emulation.sql
rem	Author:		Jonathan Lewis
rem	Dated:		April 2011
rem

create table products
as
select
	rownum			id,
	mod(rownum,300)		grp,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	all_objects
where
	rownum <= 30000	-- > comment to avoid wordpress format issue
;

create index prd_i1 on products(id);
-- alter table products add constraint prd_pk primary key(id);

create table customers
as
select
	rownum			id,
	mod(rownum,300)		grp,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	all_objects
where
	rownum <= 30000	-- > comment to avoid wordpress format issue
;

create index cst_i1 on customers(id);
-- alter table customers add constraint cst_pk primary key(id);

create table orders
as
with generator as (
	select	--+ materialize
		rownum	id
	from	dual
	connect by
		rownum <= 1000	-- > comment to avoid wordpress format issue
)
select
	rownum					id,
	trunc(dbms_random.value(1,30000))	id_prd,
	trunc(dbms_random.value(1,30000))	id_cst,
	lpad(rownum,10,'0')			small_vc,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1000000	-- > comment to avoid wordpress format issue
;

create index ord_prd_fk on orders(id_prd);
create index ord_cst_fk on orders(id_cst);
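The script doesn’t include a stats-gathering step; if you’re repeating the test you’ll probably want to add something like the following (a sketch – no histograms, so the arithmetic stays simple):

begin
	dbms_stats.gather_table_stats(user, 'PRODUCTS',  method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'CUSTOMERS', method_opt => 'for all columns size 1');
	dbms_stats.gather_table_stats(user, 'ORDERS',    method_opt => 'for all columns size 1');
end;
/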
Hi Jonathan,
It’s a very interesting post and there are a lot of lessons to learn from it.
My question is: how many rows does the query actually return?

From my understanding, the CBO’s prediction/calculation went wrong when star transformation was enabled.

Looking at the first plan: for each product we pick up about 33 orders, and with 100 products that gives roughly 33 * 100 = 3,300 rows; combining that result with the customers then brings it down to 11 rows.

When you enabled the star transformation, the initial load/select picked up 100 records from customers and 100 from products, and the 11 orders were picked up perfectly, instead of the 33 rows per product of the nested loop plan.

Finally, because of the “BITMAP AND” and “BITMAP MERGE” against the temp tables, the CBO’s estimate seems to have gone wrong and the predicted rows dropped to 1.

What could be the reason?
Comment by Pavan Kumar N — April 22, 2011 @ 7:34 pm BST
Pavan Kumar,
Your observation is correct – the cardinality calculation for star transformations goes wrong. This is what I said in Chapter 9 of “Cost-Based Oracle – Fundamentals”:

“The second factor that comes into play arises from the fact that star transformations obey normal join arithmetic. After using the bitmap and mechanism to identify the starting set in the fact table the optimizer simply applied the standard join arithmetic to the task of joining back the three dimension tables. And that’s highly questionable, of course, because we’ve already done that once to derive the starting number of rows – every row we join back is doing the join to extend the row length, and there is no question of rows being eliminated or multiplied by the join. Star transformations get the wrong cardinality because they apply join selectivities twice.”

(The reference to “three” dimension tables is because that’s what I had in the example in the book.)
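To put numbers on that for this data set – my arithmetic, using the 30,000 distinct ids and the 100-row subsets:

orders join products:     1,000,000 * 100 / 30,000  ≈ 3,333    -- plan estimate 3314
  ... join customers:         3,333 * 100 / 30,000  ≈ 11       -- plan estimate 11
join back to customers:          11 * 100 / 30,000  ≈ 0.04  -> rounded up to 1
join back to products:            1 * 100 / 30,000  ≈ 0.003 -> rounded up to 1

The join-backs can’t really change the row count – each fact row matches exactly one row in each temp table – but the optimizer applies the 1/30,000 join selectivity a second time, which is how the star transformation plan ends up with a final estimate of 1 instead of 11.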
Comment by Jonathan Lewis — April 24, 2011 @ 10:00 am BST
I’ve just noticed that all the “rownum” predicates disappeared as I copied up the SQL to create my sample data set, so I’ve just edited the code to put them back. For the purposes of demonstrating the principle I created a fairly small data set – just 1,000,000 orders with 30,000 products and customers.
Comment by Jonathan Lewis — April 22, 2011 @ 10:51 pm BST
Great write-up Jonathan. I got one such complex plan in my Exadata db from an OBIA report – lots of bitmap conversions, index joins and GTTs with a MERGE JOIN CARTESIAN when star transformation is enabled. I just wanted to ask: how do we find whether a star transformation is useful in such a scenario or not? Is there any way to identify whether the bitmap conversions or index joins are getting costlier?
Comment by Bhavik Desai — April 23, 2011 @ 7:07 am BST
Bhavik Desai,
If the problem had been simple enough to answer in a comment to a blog it would be so simple that it would have been coded into the optimizer and you wouldn’t have needed to ask the question. If the optimizer can’t work it out you need to work it out for yourself, using your knowledge of the data to build a better model than the optimizer can. See, for example, this article I wrote for Simple Talk.
Comment by Jonathan Lewis — April 24, 2011 @ 10:10 am BST
That’s a very good re-write. It may appear to be complex but if you read it section-by-section it becomes plain.
However most of us (or I) very rarely think of using three hints together.
Hemant K Chitale
Comment by Hemant K Chitale — April 23, 2011 @ 2:26 pm BST
Hemant,
Thanks for the comment – I try to make a big thing about clarity in my presentations on writing efficient SQL, and I think that Oracle helps us make complex things appear simpler by allowing us to use inline views (or subquery factoring – although that sometimes causes undesirable side effects).
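For example, the rowid rewrite above could equally be expressed through subquery factoring – a sketch only; note that if the optimizer chooses to materialize the two factored subqueries the run-time behaviour could change, which is the type of side effect I had in mind:

with prid as (
	select	/*+ leading(prd ord) use_nl(ord) no_merge */
		ord.rowid prid
	from	products prd, orders ord
	where	prd.grp = 50
	and	ord.id_prd = prd.id
),
crid as (
	select	/*+ leading(cst ord) use_nl(ord) no_merge */
		ord.rowid crid
	from	customers cst, orders ord
	where	cst.grp = 50
	and	ord.id_cst = cst.id
)
select	ord.*
from	prid, crid, orders ord
where	prid.prid = crid.crid
and	ord.rowid = prid.prid
;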
However most of us (or I) very rarely think of using three hints together.
I probably ought to admit, here, that I’ve actually not used enough hints – to my mind, once you start putting in a “micro-management” hint like use_nl() you need to define the entire shape of the plan: so my three hints should have been five, because along with the no_merge hint I should have had:
- join order for the query block – leading()
- join method for every join – use_nl()
- access method for every table – full(prd), index(ord(id_prd))
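Putting those together, the products inline view would have looked something like this (a sketch, using the column and index names from the test script):

select	/*+ leading(prd ord) use_nl(ord) full(prd) index(ord(id_prd)) no_merge */
	ord.rowid	prid
from
	products	prd,
	orders		ord
where
	prd.grp = 50
and	ord.id_prd = prd.id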
I’ve written several notes about the difficulties of using hints well – in this case I should have been more strict with myself.
Comment by Jonathan Lewis — April 24, 2011 @ 10:37 am BST
Jonathan,
using your example in 11.2.0.1 (with 8K blocksize, LMT, ASSM, CPU costing enabled; and after gathering table statistics) I get a different result:
I guess the strange order of row sources in the nested loops has to do with the new NL implementation in 11.1. Apart from this the execution plan seems to be a mix of the two obvious possibilities you have shown.
Comment by Martin Preiss — April 23, 2011 @ 7:53 pm BST
Martin,
Your plan is covered by this comment: “discard a large fraction of those rows by indexing (or possibly doing a hash join) into the second dimension table”.
In passing, even though you’ve used my code to generate the data, there are still things like database parameters and system statistics (CPU costing) that can vary; not to mention the possibility of stats gathering samples being different. You’ll notice, as an easy comparison, that your line 9 shows a full tablescan of products at a cost of 209 with 2% CPU – my first example shows a cost of 14 without CPU (because I’d disabled CPU costing). In fact when you get to my manually rewritten code you can see that I must have triggered some change in my environment, because the cost of the products tablescan in that plan is 81.
Comment by Jonathan Lewis — April 24, 2011 @ 10:44 am BST
Jonathan,
thank you for the clarification – I should read more carefully …
Without cpu costing (/*+ opt_param('_optimizer_cost_model','io') */) the cost for the FTS on products drops to 84, and when I recreate the table in another non-ASSM tablespace I get your value of 81 (I guess because of the different HWM). Without cpu costing I also get the plan with two hash joins.
To get to the cost of 14 I had to set db_file_multiblock_read_count=128 – and that was a little surprise because the parameter already had this value (according to v$system_parameter); but then I remembered that there are hidden parameters that define different MBRC values for costing and for data access (_db_file_exec_read_count=128 and _db_file_optimizer_read_count=8). So I got my numbers sorted …
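(For completeness, the standard way to check such hidden parameters is the x$ksppi/x$ksppcv join – a sketch; it needs a sysdba connection and the x$ structures are version-dependent:)

select
	x.ksppinm	name,
	y.ksppstvl	value
from
	x$ksppi		x,
	x$ksppcv	y
where
	x.indx = y.indx
and	x.ksppinm in (
		'_db_file_exec_read_count',
		'_db_file_optimizer_read_count'
	)
;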
Comment by Martin Preiss — April 24, 2011 @ 3:06 pm BST
Martin,
That was a nice piece of reverse engineering, well done.
Comment by Jonathan Lewis — April 26, 2011 @ 7:47 am BST
[…] Jonathan Lewis gives another great example of how it was possible to find queries which ran faster if you manually de-coupled the index and table accesses. […]
Pingback by Log Buffer #217, A Carnival of the Vanities for DBAs | The Pythian Blog — April 25, 2011 @ 4:53 am BST
Hi Jonathan,
One extremely minor thing: in the section where you explain the star transformation, second bullet point, you say “Lines 15-20 we do the same….” – that should probably read “Lines 16-20….”, right?
And one question: Why do the two temp tables generated for the PRODUCTS and CUSTOMERS tables share the same system generated name ( SYS_TEMP_0FD9D6601_17047D9 ) at the top part of the plan? The temp tables clearly have different names at the bottom of the plan.
Regards
Comment by Andy — April 26, 2011 @ 5:29 am BST
Andy,
Thanks for the correction – now fixed.
I don’t know why the first appearance of the two temp tables shows the same name – but it’s probably just a little bug in the “explain plan” bit of the code.
It’s an interesting little oddity that when I run the query with star transformation in 11.1.0.6 and pull the plan from memory the “load as select” lines don’t hold a temporary table name at all.
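For reference, the call I use to pull the plan from memory is dbms_xplan.display_cursor() – something like this, run immediately after the query (the format options are a matter of taste):

select * from table(dbms_xplan.display_cursor(null, null, 'basic +predicate +cost'));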
Comment by Jonathan Lewis — April 26, 2011 @ 7:58 am BST
On enabling the star transformation the number of rows returned is higher. Why is that?
Without star transformation enabled – Number of rows returned 14
With star transformation enabled – Number of rows returned 11K
Thanks for the help..!!
Comment by Pravz — November 15, 2011 @ 2:55 pm GMT
Pravz,
If you mean the actual number of rows returned when you run the query (and this behaviour is definitely happening when there’s no chance that someone else has been changing the data) then it’s a priority 1 bug (wrong results) and you should raise an SR.
If you mean the cardinality predicted in the execution plan changes then it’s also a bug – though a less important one – possibly related to other features of the costing. You didn’t say which version you are using, and I have an example where the cardinality prediction changes between 11.1 and 11.2; the change doesn’t match your description, but it does show that a change is possible.
Comment by Jonathan Lewis — November 22, 2011 @ 8:12 am GMT
[…] publishing my note about emulating star transformations when there was something blocking the optimizer’s ability to do them automatically I ended up […]
Pingback by Star Transformation – 2 | Oracle Scratchpad — January 23, 2016 @ 12:21 pm GMT
[…] use that subquery as a “semijoin driver”. This is essentially the mechanism used by the Star Tranformation, where the optimizer rewrites a simple join so that each dimension table (typically) appears twice, […]
Pingback by Semijoin_driver | Oracle Scratchpad — January 24, 2016 @ 11:43 am GMT
[…] 4 percent of the data in the table. In a proper data warehouse we might have been looking at a bitmap star transformation solution for this query, but in a mixed system we might want to run warehouse queries against […]
Pingback by Serial Bloom | Oracle Scratchpad — September 30, 2020 @ 1:35 pm BST