Oracle Scratchpad

June 24, 2016

Never …

Filed under: Infrastructure,Oracle,Philosophy — Jonathan Lewis @ 1:15 pm BST Jun 24,2016

From time to time a question comes up on OTN that results in someone responding with the mantra: “Never do in PL/SQL that which can be done in plain SQL”. It’s a theme I’ve critiqued a couple of times before on this blog, most recently with regard to Bryn Llewellyn’s presentation on transforming one table into another and Stew Ashton’s use of analytic functions to solve a problem that I got stuck with.

Here’s a different question that challenges that mantra. What’s the obvious reason why someone might decide to produce something like the following pattern of code rather than writing a simple “insert into t1 select * from t2;”:


declare
        cursor c1 is
        select * from t2
        ;

        type c1_array is table of c1%rowtype index by binary_integer;
        m_tab c1_array;

begin
        open c1;
        loop
                fetch c1
                bulk collect into m_tab limit 100;

                begin
                        forall i in 1..m_tab.count
                                insert into t1 values m_tab(i);
                exception
                        when others
                                then begin
                                        --  proper exception handling should go here
                                        dbms_output.put_line(m_tab(1).id);
                                        raise;
                                end;
                end;

                exit when c1%notfound;

        end loop;
        close c1;
end;
/

There is a very good argument for this approach.

Follow-up (Saturday 25th)

As Andras Gabor pointed out in one of the comments, there are documented scenarios where the execution plan for a simple select statement is not legal for the select part of an “insert into .. select …” statement. In particular, if you have a distributed query the most efficient execution plan may require the remote site to be the driving site but the plan for a CTAS or insert/select is required to use the local site as the driving site. There are workarounds – if you’re allowed to use them – such as creating a view at the remote site and selecting from the view, or you could create a pipelined function locally and select from the pipelined function (but that’s going to be writing PL/SQL anyway, and you’d have to create one or two object types in the database to implement it).
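As a sketch of the pipelined function workaround (all names here are invented for illustration – a remote table t2 reached over a database link called remote_db – and, as noted above, it does require creating one or two object types locally):

```sql
-- Hypothetical sketch of the pipelined function workaround.
-- The object type and collection type are the "one or two object types"
-- mentioned above.
create or replace type t2_row as object (id number, n1 number, padding varchar2(100));
/
create or replace type t2_tab as table of t2_row;
/

create or replace function remote_t2 return t2_tab pipelined as
begin
        -- the simple select inside the function can still use the
        -- efficient remote-driven execution plan
        for r in (select id, n1, padding from t2@remote_db) loop
                pipe row (t2_row(r.id, r.n1, r.padding));
        end loop;
        return;
end;
/

insert into t1
select * from table(remote_t2());
```

The insert then drives off the locally executed pipelined function, so the “driving site” restriction on the distributed insert/select no longer applies.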

Another example of plan limitations, that I had not seen before (but have now found documented as “not a bug” in MoS note 20112932), showed up in a comment from Louis: a select statement may run efficiently because the plan uses a Bloom filter, but the filter disappears when the statement is used in insert/select.

These limitations, however, were not the point I had in mind. The “obvious” reason for taking the PL/SQL approach is error handling. What happens if one of the rows in your insert statement raises an Oracle exception? The entire statement has to roll back. If you adopt the PL/SQL array processing approach then you can trap each error as it occurs and decide what to do about it – and there’s an important detail behind that statement: a PL/SQL array insert can operate at virtually the same speed as the simple SQL statement once you’ve set the array size to a value which allows each insert to populate a couple of blocks.

Let me emphasise the critical point of the last sentence:  array inserts in PL/SQL operate at (virtually) the speed of the standard SQL insert / select.

As it stands I don’t think the exception handler in my code above could detect which row in the batch had caused the error – I’ve just printed the ID from the first row in the batch as a little debug detail that’s only useful to me because of my knowledge of the data. Realistically the PL/SQL block to handle the inserts might look more like the following:

-- In program declaration section

        dml_errors      exception;
        pragma exception_init(dml_errors, -24381);

        m_error_pos     number(6,0)     := 0;

-- ------------------------------
-- In the main processing loop

                begin
                        forall i in 1..m_tab.count save exceptions
                                insert into t1 values m_tab(i);
                exception
                        when dml_errors then begin

                                for i in 1..sql%bulk_exceptions.count loop
                                        dbms_output.put_line(
                                                'Array element: ' ||
                                                sql%bulk_exceptions(i).error_index || ' ' ||
                                                sqlerrm(-sql%bulk_exceptions(i).error_code)
                                        );
                                        m_error_pos := sql%bulk_exceptions(i).error_index;
                                        dbms_output.put_line(
                                                'Content: ' || m_tab(m_error_pos).id || ' ' || m_tab(m_error_pos).n1
                                        );
                                end loop;
                        end;
                        when others then raise;
                end;
You’ll notice that I’ve added the save exceptions clause to the forall statement. This allows Oracle to trap any errors that occur in the array processing step and record details of the guilty array element as it goes along, storing those details in an array called sql%bulk_exceptions. My exception handler then handles the array processing exception by walking through that array.

I’ve also introduced an m_error_pos variable (which I could have declared inside the specific exception handler) to remove a little of the clutter from the line that shows I can identify exactly which row in the source data caused the problem. With a minimum of wasted resources this code now inserts all the valid rows and reports the invalid rows (and, if necessary, could take appropriate action on each invalid row as it appears).

If you’ve got a data loading requirement where almost all the data is expected to be correct but errors occasionally happen, this type of coding strategy is likely to be the most efficient thing you could do to get your data into the database. It may be slightly slower when there are no errors, but that’s a cheap insurance premium compared with the crash and complete rollback that occurs if you take the simple (though slightly faster on good days) approach. And there are bound to be cases where a pre-emptive check of all the data (that would – we hope – make the pure SQL insert safe) would add far more overhead than the little bit of PL/SQL processing shown here.


It’s obviously a little difficult to produce any time-based rates that demonstrate the similarity in performance of the SQL and PL/SQL approaches – the major time component in a little demo I built was the I/O rather than the CPU (which, in itself, rather validates the claim anyway). But if you want to do some testing here’s my data model with some results in the following section:

rem     Script: plsql_loop_insert.sql
rem     Author: Jonathan Lewis

execute dbms_random.seed(0)

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        2 * trunc(dbms_random.value(1e10,1e12))         n1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create table t2
as
select
        /*+ no_parallel(t1) */
        id + 1e6        id,
        n1 - 1          n1,
        rpad('x',100,'x') padding
from t1
;

-- update t2 set n1 = n1 + 1 where id = 2e6;
-- update t2 set n1 = n1 + 1 where id = 2e6 - 10;
-- update t2 set n1 = n1 + 1 where id = 2e6 - 20;
-- update t2 set n1 = n1 + 1 where id = 1750200;
-- update t2 set n1 = n1 + 1 where id = 1500003;
-- update t2 set n1 = n1 + 1 where id = 1500001;

alter system checkpoint;
alter system switch logfile;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

create unique index t1_i1 on t1(n1) nologging;
create unique index t1_pk on t1(id) nologging;
alter table t1 add constraint t1_pk primary key(id);

I’ve generated 1 million rows with an id column and a random integer – picking the range of the random numbers to give me a very good chance (that worked) of getting a unique set of values. I’ve doubled the random values I use for t1 so that I can subtract 1 and still guarantee uniqueness when I generate the t2 values (I’ve also added 1 million to the id value for t2 for the same uniqueness reasons).

The optional update to add 1 to a scattering of rows in t2 ensures that those values go back to their original t1 values so that they can cause “duplicate key” errors. The SQL insert was a simple insert into t1 select * from t2 (ensuring that parallel query didn’t come into play), and the PL/SQL detail I used was as follows:
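For reference, a minimal sketch of that simple SQL version – assuming the same no_parallel hint that appears in the cursor definition below was the mechanism used to keep parallel query out of the picture:

```sql
insert into t1
select /*+ no_parallel(t2) */ * from t2;

commit;
```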


declare
        cursor c1 is
        select /*+ no_parallel(t2) */ * from t2
        ;

        type c1_array is table of c1%rowtype index by binary_integer;
        m_tab c1_array;

        dml_errors      exception;
        pragma exception_init(dml_errors, -24381);

        m_error_pos     number(6,0)     := 0;

begin
        open c1;
        loop
                fetch c1
                bulk collect
                into m_tab limit 100;

                begin
                        forall i in 1..m_tab.count save exceptions
                                insert into t1 values m_tab(i);

                exception
                        when dml_errors then begin

                                for i in 1..sql%bulk_exceptions.count loop
                                        dbms_output.put_line(
                                                'Array element: ' ||
                                                sql%bulk_exceptions(i).error_index || ' ' ||
                                                sqlerrm(-sql%bulk_exceptions(i).error_code)
                                        );
                                        m_error_pos := sql%bulk_exceptions(i).error_index;
                                        dbms_output.put_line(
                                                'Content: ' || m_tab(m_error_pos).id || ' ' || m_tab(m_error_pos).n1
                                        );
                                end loop;
                        end;
                        when others then raise;
                end;

                exit when c1%notfound;  -- when fetch < limit

        end loop;
        close c1;
end;
/

The PL/SQL output with one bad row (2e6 – 20) looked like this:

Array element: 80 ORA-00001: unique constraint (.) violated
Content: 1999980 562332925640

Here are some critical session statistics for different tests in 11g:

No bad data, insert select
Name                                                 Value
----                                                 -----
CPU used when call started                             944
CPU used by this session                               944
DB time                                              1,712
redo entries                                     1,160,421
redo size                                      476,759,324
undo change vector size                        135,184,996

No bad data, PL/SQL loop
Name                                                 Value
----                                                 -----
CPU used when call started                             990
CPU used by this session                               990
DB time                                              1,660
redo entries                                     1,168,022
redo size                                      478,337,320
undo change vector size                        135,709,056

Duplicate Key (2e6-20), insert select (with huge rollback)
Name                                                 Value
----                                                 -----
CPU used when call started                           1,441
CPU used by this session                             1,440
DB time                                              2,427
redo entries                                     2,227,412
redo size                                      638,505,684
undo change vector size                        134,958,012
rollback changes - undo records applied          1,049,559

Duplicate Key (2e6-20), PL/SQL loop - bad row reported
Name                                                 Value
----                                                 -----
CPU used when call started                             936
CPU used by this session                               936
DB time                                              1,570
redo entries                                     1,168,345
redo size                                      478,359,528
undo change vector size                        135,502,488
rollback changes - undo records applied                 74

Most of the difference between CPU time and DB time in all the tests was file I/O time (in my case largely checkpoint wait time, because I happened to have small log files), but in larger systems it’s quite common to see a lot of time spent on db file sequential reads as index blocks are read for update. You can see that there’s some “unexpected” variation in CPU time – I wasn’t expecting the PL/SQL loop that failed after nearly 1M inserts to use less CPU than anything else – but the CPU numbers fluctuated by a few hundredths of a second across tests, and this just happened to be particularly noticeable with the first one I did; so to some extent the results were probably affected by background activity relating to space management, job queue processing, and all the other virtual machines on the system.

Critically I think it’s fair to say that the differences in CPU timing are not hugely significant across a reasonably sized data set, and most importantly the redo and undo hardly vary at all between the successful SQL and both PL/SQL tests. The bulk processing PL/SQL approach doesn’t add a dramatic overhead – but it clearly does bypass the threat of a massive rollback cost.


You might want to argue the case for using basic SQL with the log errors clause. The method is simple and it gives you a table of the rows which caused exceptions as the insert executed – and that may be sufficient for your purposes; but there’s a problem until you upgrade to 12c.

Here’s how I had to modify my test case to demonstrate the method:


execute dbms_errlog.create_error_log('t1')

insert into t1 select * from t2
log errors
reject limit unlimited
;

The procedure call creates a table to hold the bad rows; by default its name will be err$_t1, and it will be a clone of the t1 table with changes to column types (which might be interesting if you’ve enabled 32K columns in 12c — to be tested) and a few extra columns:

SQL> desc err$_t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$                        NUMBER
 ORA_ERR_MESG$                          VARCHAR2(2000)
 ORA_ERR_ROWID$                         ROWID
 ORA_ERR_OPTYP$                         VARCHAR2(2)
 ORA_ERR_TAG$                           VARCHAR2(2000)
 ID                                     VARCHAR2(4000)
 N1                                     VARCHAR2(4000)
 PADDING                                VARCHAR2(4000)

SQL> execute print_table('select * from err$_t1')
ORA_ERR_NUMBER$               : 1
ORA_ERR_MESG$                 : ORA-00001: unique constraint (TEST_USER.T1_I1) violated

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
ID                            : 1999980
N1                            : 562332925640
PADDING                       : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

So what’s the problem with logging errors? Here are the sets of session stats corresponding to the ones that I reported above for the SQL and PL/SQL options. The first set comes from running this test on 11g, the second from 12c.

11g results
Name                                                 Value
----                                                 -----
CPU used when call started                           1,534
CPU used by this session                             1,534
DB time                                              2,816
redo entries                                     3,113,105
redo size                                      902,311,860
undo change vector size                        269,307,108

12c results
Name                                                 Value
----                                                 -----
CPU used when call started                             801
CPU used by this session                               801
DB time                                              3,061  -- very long checkpoint waits !!
redo entries                                     1,143,342
redo size                                      492,615,336
undo change vector size                        135,087,044

The 12c stats are very similar to the stats from the perfect SQL run and the two PL/SQL runs – but if you look at the 11g stats you’ll see that they’re completely different from all the other stats. The number of redo entries (if nothing else) tells you that Oracle has dropped back from array processing to single row processing in order to be able to handle the error logging (1 million rows; one entry each for the row, its PK index entry, and the unique key index entry).

Until 12c “error logging” is just row by row processing.


As far as I can tell, I first pointed out this “single row processing” aspect of the log errors option some time around December 2005.

Late Entry:

While looking for a posting about efficient updates I came across another of my postings that compares SQL with PL/SQL for updates – it’s worth a read.


January 1, 2016


Filed under: Oracle,Philosophy — Jonathan Lewis @ 1:02 pm BST Jan 1,2016

I was sent the following email a few years ago. It’s a question that comes up fairly frequently and there’s no good answer to it but, unusually, I made an attempt to produce a response; and I’ve decided that I’d start this year by presenting the question and quoting the answer I gave. So here, with no editing, is the question:

I’m disturbing you for some help about becoming an Oracle master expert. Probably you are getting this kind of emails a lot but I would be appreciate if you give a small answer to me at least.

First, shortly I want to introduce my self. I’m an *Oracle Trainer* in Turkey Oracle University for 2 years. Almost for 4 years, I worked as software engineer and meet with Oracle on these days. After a while I decided to develop myself in Oracle database technologies and become trainer as i said. I also give consultancy services about SQL / PLSQL development and especially* SQL / PLSQL tuning*. I really dedicate myself to these subjects. As a trainer I also give DBA workshop lectures but in fact I didnt actually did dba job in a production system. I have the concept and even read everything I found about it but always feel inadequate because didnt worked as a DBA on a production system. So many DBA’s has taken my class and they were really satisfied (they have got all answers for their questions) but I did not. I’m a good trainger (with more that 97 average points in oracle evaluations) but I want to be best.

Even in SQL / PLSQL tuning, I know that I am really good at it but I also aware that there are some levels and I can not pass through the next level. for ex: I can examine execution plan (index structures, access paths etc), find cpu and io consumption using hierarchical profiler and solve the problem but can’t understand yet how to understand how much IO consumed by query and understand slow segments. if you remember, for a few days ago, on OTN you answered a question that I involved about sequence caching and Log file sync event. There, I said that sequence can cause to log file sync event (and as you said that was true) but when someone else write a simple code and couldnt see this event, I couldnt answer to him, you did (you said that it was because optimizing).

that is the level what i want to be. I am really working on this and age on 29. but whatever I do I cant get higher. I need a guideness about that. I even worked free for a while (extra times after my job here). I need your guideness, as I said I can work with you if you want to test and I want to learn more advanced topics while working. In Turkey, I couldn’t find people who can answer my questions so I can not ask for guideness to them.

And my (impromptu, and unedited) reply:

Thank you for your email. You are correct, I do get a lot of email like this, and most of it gets a stock response; but yours was one of the most intelligently written so I’ve decided to spend a little time giving you a personal answer.

Even if you were to spend a few years as a DBA, you would probably not become the sort of expert you want to be. Most DBAs end up dealing with databases that, for want of a better word, we could call “boring”; for a database to be interesting and show you the sorts of problems where you have to be able to answer the types of question I regularly answer you probably need to be the DBA for a large banking or telecoms system – preferably one that hasn’t been designed very well – that has to handle a very large volume of data very quickly. On these extreme systems you might find that you keep running into boundary conditions in Oracle that force you to investigate problems in great detail and learn all sorts of strange things very quickly. On most other systems you might run into a strange problem very occasionally and spend several years on the job without once being forced to solve any difficult problems very quickly.

If you want to become an expert, you need to be a consultant so you get to see a lot of problems on lots of different systems in a very short time; but you can’t really become a consultant until you’re an expert. As a substitute, then, you need to take advantage of the problems that people report on the OTN database forum – but that doesn’t mean just answering questions on OTN. Look for the problems which people have described reasonably well that make you think “why would that happen”, then try to build a model of the problem that has been described and look very closely at all the statistics and wait events that change as you modify the model. Creating models, and experimenting with models, is how you learn more.

Take, for example, the business of the sequences and pl/sql – you might run the test as supplied with SQL_trace enabled to see what that showed you, you could look very carefully at the session stats for the test and note the number of redo entries, user commits, and transactions reported; you could look at the statistics of enqueue gets and enqueue releases, ultimately you might dump the redo log file to see what’s going into it. Many of the tiny little details I casually report come from one or two days of intense effort studying an unexpected phenomenon.  (The log file sync one was the result of such a study about 15 years ago.)

Happy new year to all my readers.

December 23, 2015


Filed under: Oracle,Philosophy — Jonathan Lewis @ 12:56 pm BST Dec 23,2015

This post is a 100% copy of a message that Tanel Poder sent to the Oracle-L mailing list in response to a thread about the performance of SSD. It’s not just a good answer to the question, it’s a wonderfully succinct insight into how to think about what you’re really testing and it displays the mind-set that should be adopted by everyone.

If you measure write performance on an idle Exadata machine without any other load going on, you are not comparing flash vs disk, you are comparing flash vs the battery-backed 512MB RAM cache in the “RAID” controllers within each storage cell!

This is how the “disk” that’s supposed to have a couple of milliseconds of average latency (it still rotates and needs to seek + calibrate to next track even in sequential writes) gives you sub-millisecond write latencies… it’s not the disk write, it’s the controller’s RAM write that gets acknowledged.

And now when you run a real workload on the machine (lots of random IOs on the disk and Smart Scans hammering them too), your disk writes won’t be always acknowledged by the controller RAM cache. When comparing *busy* flash disks to *busy* spinning disks vs. *idle* flash disks vs *idle* spinning disks (with non-dirty write cache) you will get different results.

So, I’m not arguing here that flash is somehow faster for sequential writes than a bunch of disks when talking about throughput. But if you care about latency (of your commits) you need to be aware of everything else that will be going on on these disks (and account for this in your benchmarks).

Without queueing time included, a busy flash device will “seek” where needed and perform the write in under a millisecond, a busy disk device in 6-10 milliseconds. So your commits will end up having to wait for longer (yes, your throughput will be ok due to the LGWR writing multiple transactions redo out in a single write, but this doesn’t change the fact that individual commit latency suffers).

This latency issue of course will be mitigated when you are using a decent storage array with enough (well-managed) write cache.

So I’d say there are the following things you can compare (and need to be aware of which hardware are you really benchmarking):

1) Flash storage
2) Disk storage without (write) cache
3) Disk storage with crappy (write) cache
4) Disk storage with lots of well-managed & isolated (write) cache

And the second thing to be aware of:

1) Are you the single user on an idle storage array
2) Are you just one of the many users in a heavily utilized (and randomly seeking) storage array

So, as usual, run a realistic workload and test it out yourself (if you have the hardware :)

July 3, 2014

Philosophy 22

Filed under: Philosophy — Jonathan Lewis @ 9:59 am BST Jul 3,2014

Make sure you agree on the meaning of the jargon.

If you had to vote, would you say that the expressions “more selective” and “higher selectivity” are different ways of expressing the same idea, or are they exact opposites of each other? I think I can safely say that I have seen people waste a ludicrous amount of time arguing past each other and confusing each other because they didn’t clarify their terms (and one, or both, parties actually misunderstood the terms anyway).

Selectivity is a value between 0 and 1 that represents the fraction of data that will be selected – the higher the selectivity the more data you select.

If a test is “more selective” then it is a harsher, more stringent, test and returns less data (e.g. Oxford University is more selective than Rutland College of Further Education): more selective means lower selectivity.
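To put some (entirely made-up) numbers on that, assuming a hypothetical table big_t:

```sql
-- selectivity = rows returned / total rows (illustrative figures only)
select count(*) from big_t;                         -- say 10,000 rows in total
select count(*) from big_t where status = 'ERROR';  -- suppose 100 rows match
-- selectivity of this predicate: 100 / 10000 = 0.01
select count(*) from big_t where status = 'FATAL';  -- suppose 10 rows match
-- selectivity of this predicate: 10 / 10000 = 0.001
-- the second test is "more selective" - and has the LOWER selectivity
```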

If there’s any doubt when you’re in the middle of a discussion – drop the jargon and explain the intention.


If I ask:  “When you say ‘more selective’ do you mean ….”

The one answer which is absolutely, definitely, unquestionably the wrong reply is: “No, I mean it’s more selective.”


February 4, 2014

Philosophy 21

Filed under: Philosophy — Jonathan Lewis @ 2:46 pm BST Feb 4,2014

I’ll be buying the tickets for my flight to Seattle and Kaleidoscope 14 some time tomorrow. The cut-off date on my credit card bill is today, so if I get the tickets tomorrow I won’t have to pay for them until the end of March.

When you know you have to pay it’s worth thinking about when you have to pay. It’s a principle that works in Oracle databases, too.

On the flip-side – sometimes you don’t realise that the clever thing you’ve done now is going to make someone else pay later.

May 28, 2013

French Philosophy

Filed under: Philosophy — Jonathan Lewis @ 5:45 pm BST May 28,2013

From Mohamed Houri, here’s a French translation of all my “Philosophy” notes to date.

January 7, 2013

Philosophy 20

Filed under: Philosophy — Jonathan Lewis @ 6:52 am BST Jan 7,2013

It’s important to revisit the questions you think you’ve answered from time to time. You may find that your previous answer was wrong or incomplete; you may find that looking at your past answers may give you ideas for new questions.

I had this thought while staring out of the window earlier on today. When I’m working at home I spend most of my time in a room that looks onto my back garden – and I have five different bird feeders in the garden and a pair of binoculars by my computer. Today I was watching some (Eurasian) Jays that tend to appear fairly promptly when I put out a handful of peanuts.

There’s clearly some sort of pecking order among these jays (and I think there are two different families), and one of the jays is clearly very aggressive and tends to frighten off the others, but a common behaviour pattern when two are down is that the less aggressive jay hops a few steps away from the more aggressive one and turns its back.

For years I’ve assumed that this is just a typical “underdog” behaviour – i.e. “I’m not a threat, I can’t attack, I’m not even looking at you” – but today it suddenly dawned on me that there was another possibility that simply hadn’t crossed my mind: if you’re a bird and thinking about running away you won’t want to take off towards your opponent, the best direction to point in is the direction that’s going to move you away from trouble as quickly as possible.

My point, of course, is that it’s easy to believe that you understand something simply because you’ve accepted a reasonable explanation – coming back to the issue some time later may allow you to come up with other ideas, whether or not those ideas arise by you deliberately questioning your belief, or by an accident of intuition.

Footnote: If this was an example of Oracle behaviour I’d be doing some serious research on it by now; but my birdwatching is only for casual pleasure, so I’m not going to start trawling the internet for theses on Jay behaviour.

October 18, 2012

Philosophy 19

Filed under: Philosophy — Jonathan Lewis @ 6:13 pm BST Oct 18,2012

We’ve reached that time of year (Autumn, or Fall if you prefer the American term) when I’m reminded that tending a garden is like tending an Oracle database.

This is a picture of the oak tree on my front lawn, taken about 4 hours ago. Looking at it now it shows hardly any sign of the coming winter and little of the colour that lets you know it’s preparing to drop its huge volume of leaves, but yesterday morning I spent the best part of an hour raking up leaves that had dropped over the course of the previous week.

Over the next six weeks, I’ll be out with my leaf rake every few days to clean up the mess – and I’ll look down at the mess that’s on the ground, then look up at the mess that’s waiting to join it, and then I’ll do just enough work to make the lawn look just good enough to keep my wife happy for a few more days until I get sent out to do it all over again.

You can spot the analogy, of course – it’s important to think about how much effort it’s worth spending to get to an end result which is good enough for long enough. There’s no point in spending a huge amount of effort getting a fantastic result that is going to be obliterated almost immediately by the next problem that gets dumped in your lap. When the tree is nearly bare, I’ll do a thorough job of clearing the leaves, until then, 95% is easy enough, and good enough.

Footnote: avid arboriculturalists might wonder why the tree is lop-sided – being a little light on the side towards the road – it’s the sort of thing that happens when a tree gets hit by a lorry.

September 24, 2012

Philosophy 18

Filed under: Philosophy — Jonathan Lewis @ 5:28 pm BST Sep 24,2012

A question I asked myself recently was this:

Which is the worst offence when publishing an article about some feature of Oracle:

  1. Saying something does work when it doesn’t
  2. Saying something doesn’t work when it does
  3. Saying something does work when in some cases it doesn’t.
  4. Saying something doesn’t work when in some cases it does.

I don’t think it’s an easy question to answer and, of course, it’s not made any easier when you start to consider the number of cases for which a feature does or doesn’t work (how many cases is “some cases”), and the frequency with which different cases are likely to appear.

June 26, 2012

Philosophy 17

Filed under: Oracle,Philosophy — Jonathan Lewis @ 5:17 pm BST Jun 26,2012

You need to understand the application and its data.

A recent request on OTN was for advice on making this piece of SQL run faster:

delete from toc_node_rel rel
where   not exists (
                select  *
                from    toc_rel_meta meta
                where   rel.rel_id = meta.rel_id
        )
;

April 4, 2012

Philosophy 16

Filed under: Philosophy — Jonathan Lewis @ 5:49 pm BST Apr 4,2012

I couldn’t help laughing when I saw this.

March 30, 2012


Filed under: Philosophy — Jonathan Lewis @ 5:56 pm BST Mar 30,2012

Here’s a wonderful lesson from Cary Millsap – be very careful if you ever want to sell him anything – that reminded me of a Powerpoint slide I had produced for a presentation a few years ago. It took me a little time to track it down but I finally found the slide, reproduced below, in a presentation called: “The Burden of Proof” that I had given for the Ann Arbor Oracle User Group in 2002. (The picture of the Earth is the Apollo 17 image from NASA):


August 30, 2011


Filed under: Philosophy — Jonathan Lewis @ 6:32 am BST Aug 30,2011

Here’s a quote that says it all:

Dr Joseph Lykken of Fermilab – in response to some (negative) results from the Large Hadron Collider that suggest the simplest form of SuperSymmetry is wrong:

“It [supersymmetry] is a beautiful idea. It explains dark matter, it explains the Higgs boson, it explains some aspects of cosmology; but that doesn’t mean it’s right.”

Mind you, Feynman got there years ago:

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

July 15, 2011

Philosophy 15

Filed under: Philosophy — Jonathan Lewis @ 5:19 pm BST Jul 15,2011

If you run a query that is supposed to return one row from a large table, and there’s a suitable index in place you would probably expect the optimizer to identify and use the index. If you change the query to return all the data (without sorting) from the table you would probably expect the optimizer to choose a full tablescan.

This leads to a very simple idea that is often overlooked:

Sometimes it takes just one extra row (that the optimizer knows about) to switch a plan from an indexed access to a full tablescan.

There has to be a point in our thought experiment where the optimizer changes from the “one row” indexed access to the “all the rows” tablescan.

If you’re lucky and the optimizer’s model is perfect there won’t be any significant difference in performance, of course. But we aren’t often that lucky, which is why people end up asking the question: “How come the plan suddenly went bad, nothing changed … except for a little bit of extra data?” All it takes is one row (that the optimizer knows about) to change from one plan to another – and sometimes the optimizer works out the wrong moment for making the change.
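As a hypothetical sketch of that tipping point (names invented; whether and where the flip actually happens depends on your statistics, clustering, and system parameters):

```sql
-- A query whose estimated cardinality sits near the index/tablescan break-even
explain plan for select * from big_t where n1 between 100 and 199;
select * from table(dbms_xplan.display);   -- might show: INDEX RANGE SCAN

-- Widen the range slightly (or load a few more rows and re-gather stats):
-- the estimated row count creeps up, and the plan may flip to a full
-- tablescan even though "nothing changed" from the user's point of view.
explain plan for select * from big_t where n1 between 100 and 249;
select * from table(dbms_xplan.display);   -- might show: TABLE ACCESS FULL
```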

March 25, 2011


Filed under: Oracle,Philosophy,Troubleshooting — Jonathan Lewis @ 6:51 pm BST Mar 25,2011

“There is no space problem.”

If you saw this comment in the middle of a thread about some vaguely described Oracle problem, which of the following would you think was the intended meaning:

    There is a problem – we have no space.
    We do not have a problem with space

Wouldn’t it make life so much easier to choose between:

    We are not seeing any Oracle errors.
    We are seeing Oracle error: “ORA-01653: unable to extend table X by N in tablespace Z”

(That’s just one of many possible space-related errors, of course.)
