Oracle Scratchpad

Public Appearances

23rd – 24th April Online Event: The great Tanel Poder Virtual Conference has, like Halley’s comet (but a little more quickly), returned with a stellar list of speakers: Cary Millsap, Kerry Osborne, Tanel Põder, Mauro Pagano and me.

I will be doing a session on Trouble-shooting with Execution Plans on Thursday 23rd from 6:00pm to 7:30pm (BST) – 13:00 – 14:30 EST – and joining in with the group sessions with the other speakers at other times. Here’s the abstract for my session:

There is a huge amount of information available in an execution plan that can help you understand why a query is doing more work and taking more time than you expect – especially if you can take advantage of the dbms_monitor package to pull the plan and its execution statistics from memory after running the real query.
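
As a minimal sketch of that approach (the table name t1 and the predicate are invented for illustration): enable rowsource execution statistics for the query, then use dbms_xplan.display_cursor() to pull the plan and its run-time statistics from memory:

    select /*+ gather_plan_statistics */ count(*)
    from   t1
    where  owner = 'SYS'
    ;

    -- null sql_id / child number => the last statement executed by this session
    select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));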

There are cases, though, where it is necessary to look closely at the execution figures to understand – or get the clues about – where the work is going and what is being done. Sometimes the numbers are reported in the wrong place, sometimes it’s hard to tell whether the reported times for some parts of the plan are the cause or the effect of the total time taken, sometimes the counters for reads or writes or buffer visits don’t seem to make sense and you need to crosscheck with the session’s wait events (v$session_event) or activity (v$sesstat).
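
By way of a hedged example of that crosscheck (:sid is a placeholder for the session id you’re investigating):

    -- Where has the session spent its wait time?
    select event, total_waits, time_waited_micro
    from   v$session_event
    where  sid = :sid
    order by time_waited_micro desc;

    -- What work has the session actually done?
    select sn.name, ss.value
    from   v$sesstat ss, v$statname sn
    where  ss.sid = :sid
    and    sn.statistic# = ss.statistic#
    and    ss.value != 0
    order by ss.value desc;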

In this presentation we start with a brief reminder of query blocks, transformations, the interpretation of basic execution plans and how the optimizer stitches multiple query blocks together; then we pick up a few special cases of patterns in plans, including updates, deletes, analytic functions and match_recognize(). Finally we go to a few “live” examples to highlight the importance of looking carefully at the predicate information and the execution statistics.

Postponed Warsaw, Poland: POUG Workshop 2020: I’ll be doing two sessions at this event – both of them in the “beginner” stream. Before lunch I’ll be talking about creating and understanding execution plans, and at the end of the day I’ll be doing a wrap-up presentation on using the AWR (or Statspack) report as a starting point for troubleshooting.

Both will be 90 minute lectures – with a little pause for thought and refreshment in the middle.

The Foundations of Understanding Execution Plans

One of the key components of good applications is efficient SQL, and if you need to understand why some piece of SQL is not executing efficiently then it’s important that you are able to create, share, and interpret truthful execution plans. This presentation will give you a solid understanding of how to meet all three of those targets.

Most presentations on execution plans start with simple instructions on how to create them and then how to read very simple plans. This presentation will start at the opposite end of the problem by looking at an SQL statement and asking what the optimizer might do with it, and only then look at what that means in terms of the possible execution plans. In this way we gain an introduction to query blocks, transformations, and the reason why we only ever need to look at simple execution plans in order to understand what’s happening in complex queries.
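
As a small illustration of query blocks (tables t1 and t2 are hypothetical): you can name the query blocks with the qb_name() hint and ask dbms_xplan to report them through the ‘alias’ format option:

    explain plan for
    select /*+ qb_name(main) */ *
    from   t1
    where  t1.id in (
             select /*+ qb_name(subq) */ t2.id
             from   t2
           );

    -- 'alias' adds the "Query Block Name / Object Alias" section to the output
    select * from table(dbms_xplan.display(null, null, 'alias'));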

From this point we look at plans for a single query block; then examine the ways that Oracle presents plans that involve multiple query blocks in several different circumstances. This will lead us to the problems that the optimizer has in choosing between doing a transformation that eliminates a query block, or isolating a query block and having to execute it multiple times at run-time – and at the same time we’ll discover that there are run-time optimisations (tricks) that Oracle uses that can make the optimizer’s calculations (or guesswork) produce totally unrealistic estimates.
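
For a concrete (if simplified) example of such a run-time trick: with a correlated filter subquery, Oracle may cache subquery results by correlation value, so the number of times the subquery actually starts can be far smaller than the optimizer’s per-row arithmetic would suggest. The tables and hints below are purely illustrative:

    select /*+ gather_plan_statistics no_unnest(@subq) */ count(*)
    from   t1
    where  exists (
             select /*+ qb_name(subq) */ null
             from   t2
             where  t2.status = t1.status
           );

    -- Compare E-Rows (estimate) with Starts and A-Rows on the subquery lines
    select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));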

From a purely technical viewpoint we will be covering the packages dbms_xplan and dbms_monitor, and the three most important parts of an execution plan – the operation (body) of the plan, the predicate information, and the statistics information (estimated and actual) – with some passing references to the outline information and the projection information.
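
If it helps as a reference point, here’s one (assumed) way to ask dbms_xplan.display_cursor() for all of those sections at once – the body and the predicates appear by default, the rest are requested through the format string:

    select * from table(
        dbms_xplan.display_cursor(
            sql_id          => '&m_sql_id',   -- placeholder: the statement of interest
            cursor_child_no => 0,
            format          => 'allstats last +outline +projection'
        )
    );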

Beginners’ Guide to using the AWR

In this session we will pull together the strands from the three previous sessions as we get to grips with the information that’s available in the Automatic Workload Repository (AWR) or (for those without the diagnostic and performance licences, or running Standard Edition) the free Statspack utility.

If a user can tell you exactly where they are having a problem – which report is taking too long to run, which screen is responding very slowly, which drop-down menu is oozing like honey instead of dropping like a stone – then there are high precision strategies you can use to identify the cause of the problem. The AWR is there to help on those occasions when the database appears to be generally “unhealthy”, or pushing the limits of the hardware, or showing random variations in performance.

The three previous sessions of the stream will have examined:

    • How the core RDBMS engine works
    • Wait Events and DB time
    • Execution Plans

In this session we learn how the AWR can help us build a picture of what the engine is doing and give us some clues about how we can reduce the core activity and cut back on basic resource consumption; we learn to recognise the relationships between the wait events and instance activity stats that tell us why the database is working harder than we expect; and we find easy ways to track down expensive activity and understand why the underlying execution plans may need to be addressed.
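
As a hedged illustration of tracking down expensive activity (the snapshot ids are placeholders, and dba_hist_sqlstat requires the diagnostic pack licence):

    select   sql_id,
             sum(executions_delta)               executions,
             round(sum(elapsed_time_delta)/1e6)  elapsed_secs,
             sum(buffer_gets_delta)              buffer_gets
    from     dba_hist_sqlstat
    where    snap_id between :begin_snap and :end_snap
    group by sql_id
    order by sum(elapsed_time_delta) desc
    fetch first 10 rows only;   -- 12c syntax; use rownum on earlier versions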

The bottom line on using the AWR is that it allows you to find the worst examples of excessive resource usage caused by one of two key problems:

    • You’re doing something the hard way
    • You’re doing something too often

The numbers will tell you how to reduce the workload – but you still have to remember that reducing it might not make any visible (response time) difference to the end user.
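
A rough sketch of how those two patterns show up in the cursor cache (v$sql): a very large “gets per execution” hints at doing something the hard way, a very large execution count hints at doing something too often:

    select   sql_id,
             executions,
             buffer_gets,
             round(buffer_gets / nullif(executions, 0)) gets_per_exec
    from     v$sql
    where    executions > 0
    order by buffer_gets desc
    fetch first 10 rows only;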

27th – 28th October Zurich, Switzerland: Trivadis Performance Days 2020: I’ll be presenting two sessions this year, “How to look at Performance Problems” on the Tuesday 15:20 – 16:40, and “Engineering Indexes” on the Wednesday 13:30 – 14:50.

How to look at Performance Problems:

There are a few things that might introduce “exotic” performance problems in an Oracle database that are hard to pin down because (for example) they depend very specifically on a particular implementation of an O/S mechanism that only gets invoked by a certain special activity in a background process; but most performance problems fall into a short list of common patterns.

In this presentation we’ll examine some examples of the problems that seem to appear most frequently and ask what to do next when such a problem presents itself. Very often someone will say something like: “I’ve seen one of these before – it’s problem X” – but if that’s the first response to the problem you need to ask: “How do I confirm or refute that suggestion before I take action that might not work and might have side effects?” Alternatively you may say something like: “It’s possible that this is happening because of Y; can I create a simple model of Y, see if I get the same symptoms, and check whether the other indicators that appear in my model appear here as well?”

In many cases of DBAs asking for help in the online forums I’ve seen examples where all the necessary evidence is immediately available, but no-one looks at it because they’ve already headed off to the wrong diagnosis. In other cases I’ve seen people claim to have a firm diagnosis when they should be asking “did you see Z?” or “how do these numbers compare with those numbers?”

Engineering Indexes

When trying to come up with indexing solutions to performance problems there are two conflicting targets you have to keep in mind. On one hand you want to improve the (local) efficiency of queries, on the other you want to minimise the (global) side-effects of index maintenance. The autonomous database with its automatic indexing has taken its first steps in balancing these targets by working out the minimum set of indexes that cover the maximum number of different query requirements, but the technology is in its infancy and humans can do much better than the machine – especially if they understand the data and are allowed to modify the code.

In this presentation we cover three main topics. The easy one is how to decide if an existing index has been constructed in the optimum fashion or whether there are some basic features we could use to make it better without changing the (leading) set of columns. Then we look at the ways in which we could make massive improvements to efficiency if we’re allowed to change some code to cater for “function-based” indexes; and finally we look at the ways in which we might need to have some special handling in place to deal with statistics so that the optimizer will always know how good our indexes are and use them when appropriate without our having to add hints to the code.
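
To make the “function-based” index point concrete (all object names are invented): an index on an expression allows a matching predicate to use the index, and the statistics handling mentioned above includes gathering stats on the hidden virtual column that Oracle creates behind the index:

    create index t1_upper_name on t1(upper(name));

    begin
        dbms_stats.gather_table_stats(
            ownname    => user,
            tabname    => 'T1',
            method_opt => 'for all hidden columns size 1'
        );
    end;
    /

    -- The predicate now matches the indexed expression
    select * from t1 where upper(name) = 'SMITH';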

 

1 Comment »

  1. Hi Jonathan,

    Trust all’s well at your end.

    It’s been a while since you announced an online event – eagerly waiting for this session, and hopefully for others this year.

    Regards,
    Vijay

    Comment by vijaysehgal — February 25, 2020 @ 4:27 pm GMT



Comments and related questions are welcome.
