Here’s a piece of code I found recently running every half hour on a client site:
SQL_ID = 2trtpvb5jtr53

SELECT
        TO_CHAR(current_timestamp AT TIME ZONE :"SYS_B_0", :"SYS_B_1") AS curr_timestamp,
        COUNT(username) AS failed_count
FROM
        sys.dba_audit_session
WHERE
        returncode != :"SYS_B_2"
AND     TO_CHAR(timestamp, :"SYS_B_3") >= TO_CHAR(current_timestamp - TO_DSINTERVAL(:"SYS_B_4"), :"SYS_B_5")
The names of the bind variables tell you that the client has set cursor_sharing to force or similar – but that’s not important at the moment. What is important is the execution plan for this query, which started with a full tablescan of sys.aud$… which is not very nice if your audit table is about 1.1 million blocks.
So we need to ask where this query is coming from, and whether we can optimise it. Unfortunately the answer to the first question is: Oracle Enterprise Manager; and the answer to the second is no (unless you implement the “Cary Millsap optimisation” and just don’t run it at all).
The code comes from a test for “failed logins” and runs (by default, possibly) every 30 minutes to count the number of login attempts that have failed in the last half hour so that OEM can raise an alert if the number of login failures suggests an attempted hack attack on the database – so it seems likely that it’s supposed to examine a small (time-based) fraction of the audit table, and we might hope for an indexed access path to allow the query to operate efficiently.
Too bad there’s that to_char() function messing up the timestamp column – but if we can set cursor_sharing to exact for the OEM sessions we might be able to create a suitable function-based index on the underlying column. So let’s walk backwards through the view definitions to see where the timestamp column comes from.
The view dba_audit_session is defined as follows:
create or replace view DBA_AUDIT_SESSION as
select
        os_username, username, userhost, terminal, timestamp,
        action_name, logoff_time, logoff_lread, logoff_pread,
        logoff_lwrite, logoff_dlock, sessionid, returncode,
        client_id, session_cpu, extended_timestamp, proxy_sessionid,
        global_uid, instance_number, os_process
from
        dba_audit_trail
where
        action between 100 and 102
/
So the timestamp column comes directly from the dba_audit_trail view, which is defined in 9i as:
create or replace view DBA_AUDIT_TRAIL
(
        OS_USERNAME, USERNAME, USERHOST, TERMINAL, TIMESTAMP,
        ...
)
as
select
        spare1, userid, userhost, terminal,
        timestamp# /* TIMESTAMP */,
        ...
from
        sys.aud$                 aud,
        system_privilege_map     spm,
        system_privilege_map     spx,
        STMT_AUDIT_OPTION_MAP    aom,
        audit_actions            act
where
        aud.action#      = act.action    (+)
and   - aud.logoff$dead  = spm.privilege (+)
and     aud.logoff$dead  = aom.option#   (+)
and   - aud.priv$used    = spx.privilege (+)
;
So we’re after the timestamp# column in aud$ – which is defined as a date column and declared not null – which means we can declare a simple function-based index on it that matches the predicate supplied by OEM.
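In 9i, then, an index like the following sketch would be the target. (The index name and the format string are my assumptions – the format would have to match whatever OEM supplies for the :"SYS_B_3" bind, you’d need cursor_sharing = exact so the literal survives to match the index expression, and indexing a SYS-owned table like aud$ is not something Oracle Support encourages.)

create index aud_ts_jpl on sys.aud$(
        to_char(timestamp#, 'yyyy-mm-dd hh24:mi:ss')        -- format string assumed
)
/

In 9i you would also need query_rewrite_enabled set appropriately (and statistics on the index) before the optimizer would use a function-based index.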
But my client is running 10.2.0.x, and in sql.bsq you find the following comment beside the timestamp# column in the code to create aud$:
/* OBSOLETE: 10iR1 and above: time of query */,
and the definition of dba_audit_trail starts with:
create or replace view DBA_AUDIT_TRAIL
(
        OS_USERNAME, USERNAME, USERHOST, TERMINAL, TIMESTAMP,
        ...
)
as
select
        spare1, userid, userhost, terminal,
        cast ((from_tz(ntimestamp#,'00:00') at local) as date),
Where ntimestamp# is defined and commented as follows:
ntimestamp# timestamp, /* new timestamp (in UTC) of query */
So if you’re running 10g or 11g, you need to create a function-based index around that from_tz() function – and here’s what happened with my first simple test:
create index aud_jpl on aud$(from_tz(ntimestamp#,'00:00') at time zone '00:00')
                             *
ERROR at line 1:
ORA-01743: only pure functions can be indexed
You can’t create an index to make any “query by timestamp” against the audit views efficient – from_tz() doesn’t meet the required restrictions on function-based indexes. So some of your existing code to access the audit trail (through the views) may simply stop working efficiently if you’re still waiting to upgrade. (Worse still: if you used to query aud$ directly, the timestamp# column is no longer populated, so your queries will work and return the wrong – always empty – results).
The change is part of the mechanism for handling multiple timezones. The audit record is stored with a “normalised” timestamp (i.e. UTC – or GMT as I prefer to call it) wherever it comes from so that the report can collect results converted to local time. If you’ve arranged your system so that everything operates in a single time zone then this is an enhancement you don’t need, so you could simply change the definition of dba_audit_trail to omit the from_tz() call and then create a function-based index (there are a couple of little complications with this approach thanks to Daylight Saving Time).
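A sketch of the idea – bearing in mind that rewriting a SYS-owned view is unsupported, this assumes a single-timezone system, and the index name and format string are my invention:

rem
rem     In the dba_audit_trail select list, instead of:
rem             cast((from_tz(ntimestamp#,'00:00') at local) as date)
rem     use the deterministic expression:
rem             cast(ntimestamp# as date)
rem

create index aud_nts_jpl on sys.aud$(
        to_char(cast(ntimestamp# as date), 'yyyy-mm-dd hh24:mi:ss')   -- format string assumed
)
/

Because cast() is deterministic (where “at local” is not), this expression can legally appear in a function-based index and can match the rewritten view’s predicate.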
The only other idea I’ve had is to make sure you run an archival process on a very regular basis that copies and deletes the data from the aud$ table so that the OEM query only has to scan a small table. Copying and deleting once per week, or even once per day, might be appropriate. If you do this, of course, you can massage the data as you make the copy so that it is structured efficiently for any other reports you might want to run against the audit trail.
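A minimal sketch of the archiving idea – the table name and the seven-day retention are purely illustrative, and on a busy system you’d capture the boundary timestamp once into a variable and use it in both statements (and probably work in batches) rather than repeating the expression:

create table audit_archive as
select  *
from    sys.aud$
where   1 = 0                           -- empty copy, structure only
/

insert into audit_archive
select  *
from    sys.aud$
where   ntimestamp# < systimestamp - interval '7' day
/

delete from sys.aud$
where   ntimestamp# < systimestamp - interval '7' day
/

commit;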
Bottom line for my client, though – 30% to 50% of the physical I/O on his system is down to OEM checking for failed logins, and when that check happens at the wrong moment it’s bad news for some time-critical processing. So we disabled the check from OEM … and found that OEM kept running the query, even while reporting the fact that the check had been disabled.
Footnote: if you were thinking of trying to do something clever with triggers to copy the ntimestamp# column to the timestamp# column, don’t bother; you’re not allowed to create triggers on objects owned by SYS (Oracle error ORA-04089).
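Any attempt along the following lines fails immediately (the trigger name and body are illustrative only):

create or replace trigger aud_ts_copy
before insert on sys.aud$
for each row
begin
        -- repopulate the obsolete column from the new one
        :new.timestamp# := cast(:new.ntimestamp# as date);
end;
/

ORA-04089: cannot create triggers on objects owned by SYS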