I’ve said in the past that, in my view, one of the best new features in 11g was the appearance of proper virtual columns; and I’ve also been very keen on the new “approximate NDV” that makes it viable to collect stats with the “auto_sample_size”.
Who’d have guessed that if you put them both together, then ran a parallel stats collection it would break :(
The bug number Karen quotes (10013177.8) doesn’t (appear to) mention extended stats – but since virtual columns, function-based indexes, and extended stats share a number of implementation details I’d guess that they might be affected as well.
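For anyone who wants to see whether they’re exposed, a minimal sketch of the combination involved (all object names are my own invention) is a table with a virtual column and an extended-stats column group, gathered in parallel with auto_sample_size:

```sql
create table t1 (
        n1      number,
        n2      number,
        v1      generated always as (n1 + n2) virtual  -- a "proper" 11g virtual column
);

-- extended stats share implementation details with virtual columns
select dbms_stats.create_extended_stats(user, 'T1', '(N1,N2)') from dual;

-- parallel stats collection using the 11g approximate NDV
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                estimate_percent => dbms_stats.auto_sample_size,
                degree           => 4
        );
end;
/
```

If the bug bites you should see it on the parallel gather; a serial gather (degree => 1) of the same table is the obvious cross-check.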
So you’ve decided you want to audit a particular table in your database, and you think that Oracle’s built-in audit command will do what you want. You discover two options that seem to be relevant:
audit all on t1 by access;
audit all on t1 by session;
To check the audit state of anything in your schema you can then run a simple query – with a few SQL*Plus formatting commands – to see something like the following:
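A sketch of the kind of query I mean (the formatting commands and the trimmed column list are my choice) runs against user_obj_audit_opts, where each audited action is reported as a success/failure pair such as “A/A” (by access) or “S/S” (by session):

```sql
column object_name format a20
column object_type format a12

-- one column per auditable action; list trimmed for width
select
        object_name, object_type,
        alt, aud, del, ins, sel, upd
from
        user_obj_audit_opts
order by
        object_name
;
```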
I wonder if I ought to create an “Argh” category on my blog for articles about some of the nastier, and sometimes unbelievable, bugs in the Oracle code. Here’s one (that’s just about forgivable in terms of failing to test) published by Sokrates a couple of months ago.
If you’re using the merge command without including insert/delete options then Oracle will ignore check constraints. (What’s the most important requirement of a database – many people would say that it’s the correctness of the data.)
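I haven’t reproduced Sokrates’ original test case here, but a minimal sketch of the kind of experiment involved (all names invented) is an update-only merge that tries to set a column to a value forbidden by its check constraint:

```sql
create table t1 (
        id      number primary key,
        n1      number check (n1 > 0)
);

insert into t1 values (1, 10);
commit;

-- update-only merge: no "when not matched then insert", no delete clause
merge into t1
using   (select 1 id, -5 n1 from dual) s
on      (t1.id = s.id)
when matched then update set t1.n1 = s.n1;

-- if the bug is present this shows a row that violates its own check constraint
select * from t1 where n1 <= 0;
```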
A few days ago I was rehearsing a presentation about how to investigate how Oracle works, and came across something surprising. Here’s a simple bit of code:
Here’s one of those funny little details that can cause confusion:
SQL> select * from user_audit_object;
no rows selected
SQL> audit select on indjoin by session whenever successful;
SQL> select count(*)
  2  from   indjoin ij
  3  where  id between 100 and 200
  4  and    val between 50 and 150;

1 row selected.
SQL> select * from user_audit_object where obj_name = 'INDJOIN';
no rows selected
Yesterday I introduced a little framework I use to avoid the traps inherent in writing PL/SQL loops when modelling a session that does lots of simple calls to the database. I decided to publish the framework because I had recently come across an example where a series of SQL statements gives a very different result from a single PL/SQL block.
Imagine I have a simple SQL statement with a “where clause” that looks like this:
t2.id1(+) = t1.id1
and t2.id2(+) = t1.id2
Would you expect it to run several times faster (25 minutes instead of a few hours) when the only change you made was to swap the order of the join predicates to:
t2.id2(+) = t1.id2
and t2.id1(+) = t1.id1
That’s “Virtual Private Database”, “Fine Grained Access Control”, or “Row-level Security” – three different names for the same feature. (Four if you include the separately licensed Oracle Label Security (OLS), which is a product built on top of RLS.)
I’ve just seen a nice presentation from John Batchelor of Sopra hitting the key points of intent, implementation and threat from this (free) product and wondered how many sites used it. So I’ve put up a little poll about usage, with a follow-up for those who do use it about the impact on their system.
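For anyone who hasn’t used the feature, a minimal sketch of enabling it (the schema, table, function, and policy names here are all invented) goes through dbms_rls.add_policy, which attaches a predicate-generating function to a table:

```sql
-- a policy function must accept (schema, object) and return a predicate string
create or replace function hr_policy_fn(
        p_schema in varchar2,
        p_object in varchar2
) return varchar2
is
begin
        return 'department_id = sys_context(''userenv'',''client_identifier'')';
end;
/

begin
        dbms_rls.add_policy(
                object_schema   => 'HR',
                object_name     => 'EMPLOYEES',
                policy_name     => 'EMP_RLS',
                function_schema => 'HR',
                policy_function => 'HR_POLICY_FN',
                statement_types => 'select, insert, update, delete'
        );
end;
/
```

From that point on every query against HR.EMPLOYEES by a non-exempt user has the generated predicate silently appended.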
Here’s an extract from an execution plan I found on a client site recently. I’ve collapsed lines 5 to 42 into a single line representing the rowsource produced by a fairly messy execution plan, leaving just the last three stages of execution on view. Each of those three operations joins the same collection variable (using the table() operator) to the rowsource – once through a hash join, then twice more (on two other columns) through nested loop outer joins:
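For context, the general shape of a query that produces this pattern (all object, type, and column names here are my own invention, not the client’s) is roughly:

```sql
create or replace type number_list as table of number;
/

select
        b.*
from
        big_rowsource                    b,      -- collapsed plan lines 5 - 42
        table(cast(:ids as number_list)) t0,     -- hash join
        table(cast(:ids as number_list)) t1,     -- nested loop outer join
        table(cast(:ids as number_list)) t2      -- nested loop outer join
where
        t0.column_value     = b.id0
and     t1.column_value (+) = b.id1
and     t2.column_value (+) = b.id2
;
```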
The resulting estimates of row counts and costs are quite entertaining and, fortunately, not very accurate:
Apart from the fact that the “Rows” figure for the FILTER operation at line 6 is blank, what’s the obvious error in this extract from an execution plan:
Here’s an unexpected error that appeared recently while I was doing a test. (The fact that I got an error didn’t surprise me, it was Oracle’s choice of error for the mistake I’d made.)
Cut-n-Paste from a SQL*Plus session:
SQL> alter system set db_cache_size = 256m scope = memory;
alter system set db_cache_size = 256m scope = memory
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00001: unique constraint (.) violated
If you’re wondering, I was trying to reduce the db_cache_size from 384MB and the value had been set by the startup parameter file – so the ORA-02097 is probably trying to tell me that I can’t reduce the dynamic value below the initial minimum. (I didn’t check this, I just bounced the database with a change of parameter file, but it’s probably a reasonable guess.)
That’s “Block Change Tracking” if you’re not familiar with the acronym – a feature that appeared in 10g with its own background process (CTWR – change tracking writer) to help rman do faster incremental backups on very large datafiles.
It’s very useful if you’re using large tablespaces (i.e. the “one file per tablespace” option) – but, as with many things Oracle produces, when you push the limits or move into combinations of features odd errors start to appear.
Saibabu Devabhaktuni has written a blog note about using BCT with physical standby databases – a relatively new enhancement to BCT, but possibly one you need to be cautious about for reasons he explains.
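As a quick reminder (the file name below is just an example), enabling and then checking BCT looks like this:

```sql
alter database enable block change tracking
        using file '/u01/app/oracle/oradata/orcl/bct_01.f';

-- confirm the change tracking writer is active and see the file size
select  status, filename, bytes
from    v$block_change_tracking;
```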
There are times as I browse through Metalink when I see descriptions of bugs that make me wonder how on earth anyone managed to find them. There are bugs which are so bizarre in their combination of prerequisites that you might think they’d never, ever, show up. Here’s one that got a mention on OTN some time back.
Problem: an update on a simple, unindexed, table takes 90 minutes if the table is in a tablespace using a 16KB block size; but closer to 90 seconds if the table is in a tablespace using a 4KB block size. The effect is totally reproducible.
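If you want to reproduce the setup (names and sizes below are my own), remember that a non-default block size needs its own buffer cache before you can create the tablespace:

```sql
-- a buffer cache for the non-default block size must exist first
alter system set db_4k_cache_size = 64m;

create tablespace ts_4k
        datafile '/u01/app/oracle/oradata/orcl/ts_4k_01.dbf' size 512m
        blocksize 4k;

-- move the test table between block sizes and re-run the update
alter table t1 move tablespace ts_4k;
```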