Oracle Scratchpad

April 5, 2010

Failed Login

Filed under: audit,Infrastructure,Oracle,Performance,Troubleshooting — Jonathan Lewis @ 7:59 pm GMT Apr 5,2010

Here’s a piece of code I found recently running every half hour on a client site:

SQL_ID = 2trtpvb5jtr53

SELECT  TO_CHAR(current_timestamp AT TIME ZONE :"SYS_B_0", :"SYS_B_1") AS curr_timestamp,
        COUNT(username) AS failed_count
FROM    sys.dba_audit_session
WHERE   returncode != :"SYS_B_2"
AND     TO_CHAR(timestamp, :"SYS_B_3") >= TO_CHAR(current_timestamp - TO_DSINTERVAL(:"SYS_B_4"), :"SYS_B_5")

The names of the bind variables tell you that the client has set cursor_sharing to force or similar – but that’s not important at the moment. What is important is the execution plan for this query, which started with a full tablescan of sys.aud$… which is not very nice if your audit table is about 1.1 million blocks.

So we need to ask where this query is coming from and whether we can optimise it. Unfortunately the answer to the first question is: Oracle Enterprise Manager and the answer to the second is no (unless you implement the “Cary Millsap optimisation” and just don’t run it at all).

The code comes from a test for “failed logins” and runs (by default, possibly) every 30 minutes to count the number of login attempts that have failed in the last half hour so that OEM can raise an alert if the number of login failures suggests an attempted hack attack on the database – so it seems likely that it’s supposed to examine a small (time-based) fraction of the audit table and we might hope for an indexed access path to allow the query to operate efficiently.

Too bad there’s that to_char() function messing up the timestamp column – but if we can set cursor_sharing to exact for the OEM sessions we might be able to create a suitable function-based index on the underlying column. So let’s walk backwards through the view definitions to see where the timestamp column comes from.

The view dba_audit_session is defined as follows:

create or replace view DBA_AUDIT_SESSION as
select os_username, username, userhost, terminal, timestamp, action_name,
       logoff_time, logoff_lread, logoff_pread, logoff_lwrite, logoff_dlock,
       sessionid, returncode, client_id, session_cpu, extended_timestamp,
       proxy_sessionid, global_uid, instance_number, os_process
from   dba_audit_trail
where  action between 100 and 102

So the timestamp column comes directly from the dba_audit_trail view, which is defined in 9i as:

create or replace view DBA_AUDIT_TRAIL as
select ...
       timestamp#	/* TIMESTAMP */,
       ...
from   sys.aud$ aud,
       system_privilege_map spm,
       system_privilege_map spx,
       STMT_AUDIT_OPTION_MAP aom, audit_actions act
where   aud.action#     = act.action    (+)
  and - aud.logoff$dead = spm.privilege (+)
  and   aud.logoff$dead = aom.option#   (+)
  and - aud.priv$used   = spx.privilege (+)

So we’re after the timestamp# column in aud$ – which is defined as a date column and declared not null – which means we can declare a simple function-based index on it that matches the predicate supplied by OEM.
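On a 9i-style audit trail, where timestamp# is a populated, not null date column, the index might look like the sketch below. The index name and format mask are my own inventions – the mask merely stands in for whatever literal OEM would supply as :"SYS_B_3" once cursor_sharing is back to exact, and it would have to match the query’s mask character for character before the optimizer could use the index:

```sql
-- hypothetical sketch (9i only - timestamp# is obsolete from 10g):
-- a function-based index matching the to_char(timestamp, fmt) predicate
create index aud_fbi_ts on sys.aud$ (
        to_char(timestamp#, 'YYYY/MM/DD HH24:MI:SS')
);
```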

But my client is running 10.2.0.x, and in sql.bsq you find the following comment beside the timestamp# column in the code to create aud$:

/* OBSOLETE: 10iR1 and above: time of query */,

and the definition of dba_audit_trail starts with:

create or replace view DBA_AUDIT_TRAIL as
select ...
       cast ((from_tz(ntimestamp#,'00:00') at local) as date),
       ...

Where ntimestamp# is defined and commented as follows:

ntimestamp#   timestamp,                /* new timestamp (in UTC) of query */

So if you’re running 10g or 11g, you need to create a function-based index around that from_tz() expression – and here’s what happened with my first simple test:

create index aud_jpl on aud$(from_tz(ntimestamp#,'00:00') at time zone '00:00')
ERROR at line 1:
ORA-01743: only pure functions can be indexed

You can’t create an index to make any “query by timestamp” against the audit views efficient – from_tz() doesn’t meet the restrictions that Oracle imposes on function-based indexes. So if you’re still waiting to upgrade, some of your existing code that accesses the audit trail (through the views) may simply stop working efficiently when you do get to the new version. (Worse still: if you used to query aud$ directly, the timestamp# column is no longer populated, so your queries will run but return the wrong – always empty – results.)

Possible workarounds:

The change is part of the mechanism for handling multiple timezones. The audit record is stored with a “normalised” timestamp (i.e. UTC – or GMT as I prefer to call it) wherever it comes from, so that the report can collect results converted to local time. If you’ve arranged your system so that everything operates in a single time zone then this is an enhancement you don’t need, so you could simply change the definition of dba_audit_trail to omit the from_tz() call, and then create a function-based index (there are a couple of little complications with this approach thanks to Daylight Saving Time).
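As a sketch of that single-time-zone workaround (unsupported, since it means editing a SYS-owned view, and ignoring the DST complications just mentioned), the key change and the matching index might look like this – following the pattern of my from_tz() attempt above:

```sql
-- hedged sketch, single time zone only: in the view's select list,
-- replace the from_tz()/at local conversion with a bare cast ...
--        cast(ntimestamp# as date)
-- ... then index the same expression, which is pure and deterministic:
create index aud_jpl on sys.aud$ (
        cast(ntimestamp# as date)
);
```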

The only other idea I’ve had is to make sure you run an archival process on a very regular basis that copies and deletes the data from the aud$ table so that the OEM query only has to scan a small table. Copying and deleting once per week, or even once per day, might be appropriate. If you do this, of course, you can massage the data as you make the copy so that it is structured efficiently for any other reports you might want to run against the audit trail.
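A minimal sketch of such a housekeeping job – the archive table name and the seven-day retention are invented for illustration, and a production version would capture a fixed boundary timestamp first so the insert and delete see exactly the same rows:

```sql
-- hedged sketch: copy-then-delete, keeping the last 7 days in aud$.
-- audit_history is a made-up archive table with the same shape as aud$;
-- sys_extract_utc() keeps the comparison in UTC, matching ntimestamp#.
insert into audit_history
select  *
from    sys.aud$
where   ntimestamp# < sys_extract_utc(systimestamp) - interval '7' day;

delete from sys.aud$
where   ntimestamp# < sys_extract_utc(systimestamp) - interval '7' day;

commit;
```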

Bottom line for my client, though – 30% to 50% of the physical I/O on his system is down to OEM checking for failed logins, and when that check happens at the wrong moment it’s bad news for some time-critical processing. So we disabled the check from OEM … and found that OEM kept running the query, even while reporting the fact that the check had been disabled.


If you were thinking of trying to do something clever with triggers to copy the ntimestamp# column to the timestamp# column, don’t bother; you’re not allowed to create triggers on objects owned by SYS (Oracle error ORA-04089).
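For completeness, the sort of trigger you might be tempted to write – it fails at create time, before it ever runs:

```sql
-- for illustration only: Oracle rejects this with
-- ORA-04089: cannot create triggers on objects owned by SYS
create or replace trigger aud_timestamp_fix
before insert on sys.aud$
for each row
begin
        :new.timestamp# := cast(:new.ntimestamp# as date);
end;
/
```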


  1. /me sarcastic curious – does OEM report itself as a “bad guy”? does it recommend running Auto Tuning Adviser for that statement? What are the recommendations?

    >to make sure you run an archival process on a very regular basis that copies and deletes the data from the aud$ table so that the OEM query only has to scan a small table
    There’s a package for purging old audit records properly (it can be installed on 10gR2 too).

    Comment by Timur Akhmadeev — April 5, 2010 @ 8:48 pm GMT Apr 5,2010 | Reply

  2. We’ve been pretty much forced to turn off the audit and truncate the table ..

    Comment by daryl — April 5, 2010 @ 8:49 pm GMT Apr 5,2010 | Reply

I’d agree that the best long-term solution is to put in place some sensible house-keeping of the sys.aud$ table. Before V11 (which introduces some better functionality for managing the audit trail and also for moving the sys.aud$ table itself to a different tablespace in a manner supported by Oracle), this is probably going to be the creation of a new table (partitioned if you have that option) and periodic copy-and-delete of data from sys.aud$ to your new table. You then need to decide if you move everything that is recent to your new table or leave, for example, a week’s data in sys.aud$ so that you can use the standard views to interrogate the data.

    sys.aud$ has only one index on it, on columns sessionid and ses$tid. Given that all audited activity potentially inserts or updates to this table, it is a point of contention. Adding more indexes will slow down activity on this point of contention.

The column sessionid increments over time {though I have not verified that it does not eventually wrap around}.

    If you really need the result of the OEM query, or can’t turn it off, as you found, I wonder if one solution (and I admit it is complex) is to:
    (a) create a scheduled job that selects timestamp and maximum sessionid from sys.aud$ into a control table, let us call it aud_sessionid_timestamp. It should be very fast, using the only index, doing as many I/Os as the index is deep.
    (b) use query rewrite to alter the standard query to include another filter clause:
    where sys.aud$.sessionid> (select min (ast.sessionid)
    from aud_sessionid_timestamp ast
    where ast.timestamp > systimestamp-1)

    That would limit the query to a day’s worth of data?

    Would that be possible for your client?

    Comment by mwidlake — April 5, 2010 @ 9:53 pm GMT Apr 5,2010 | Reply

    • Martin,

      Cunning; that might work, but I think it might require some experimentation to get it to behave all the time. I think the client would just prefer to turn the feature off.

      Comment by Jonathan Lewis — April 12, 2010 @ 1:46 pm GMT Apr 12,2010 | Reply

  4. Bug 9556885 filed for this issue (may not be publicly visible).

    Comment by Greg Rahn — April 6, 2010 @ 10:54 pm GMT Apr 6,2010 | Reply

    • Greg,

      Thanks for that – I couldn’t find anything when I first checked metalink.

      Update Oct 2011: Now visible as doc ID: 9556885.8

      Comment by Jonathan Lewis — April 12, 2010 @ 1:43 pm GMT Apr 12,2010 | Reply

  5. Today’s sys.aud$ chuckle at the end of this security post.

    Comment by joel garry — April 6, 2010 @ 10:56 pm GMT Apr 6,2010 | Reply

  6. In 11g this seems to be officially supported.

    The maximum size of the database audit trail tables (AUD$ and FGA_LOG$) is determined by the default storage parameters of the SYSTEM tablespace, in which it is stored by default. If you are concerned that a too-large database audit trail will affect the SYSTEM table performance, then consider moving the database audit trail tables to a different tablespace.

    Comment by Venkatesh Naicker — May 23, 2010 @ 2:56 pm GMT May 23,2010 | Reply

  7. Venkatesh,

    Thanks for that reference.

    There have been contradictory notes on Metalink (MOS) for many years now about moving aud$ – I don’t have the references to hand but one suggests that you do it, another says you would be unsupported if you did because recovery won’t work. Having a reference in the manuals for 11g is a good thing.

    Comment by Jonathan Lewis — May 23, 2010 @ 3:36 pm GMT May 23,2010 | Reply

  8. […] 3-How does failed login check on dba_audit_session can cause performance problem ? Jonathan Lewis-Failed Login […]

    Pingback by Blogroll Report 02/04 /2010 – 09/04/2010 « Coskan’s Approach to Oracle — July 19, 2010 @ 1:41 pm GMT Jul 19,2010 | Reply

  9. […] If you tried executing the above query, you might have noticed that the query execution required a bit of time.  Enterprise Manager also, apparently, executes a somewhat similar SQL statement every 30 minutes.  You can learn more about the potential performance impact of Enterprise Manager’s use of SYS.DBA_AUDIT_SESSION here. […]

    Pingback by Failed Logon Attempts « Charles Hooper's Oracle Notes — April 4, 2012 @ 5:01 am GMT Apr 4,2012 | Reply

  10. […] Indexing time (though the link in the comments below to Tony Hasler’s blog probably gives you all the answers you need), and an Oracle design error that I’ve visited before. […]

    Pingback by systimestamp | Oracle Scratchpad — April 17, 2013 @ 11:25 am GMT Apr 17,2013 | Reply
