Oracle Scratchpad

December 23, 2014

Just in case

Filed under: Infrastructure,Oracle,Performance,redo — Jonathan Lewis @ 10:37 am GMT Dec 23,2014

For those who don’t read Oracle-L and haven’t found Nikolay Savvinov’s blog, here’s a little note pulling together a recent question on Oracle-L and a relevant (and probably unexpected) observation from that blog. The question (paraphrased) was:

The developers/data modelers are creating all the tables with varchar2(4000) columns as a standard default – “just in case we need it”. What do you think of this idea?

The general answer was that it’s a bad idea (and unnecessary anyway); one specific threat that got mentioned was the problem of creating indexes and running into Oracle error ORA-01450 (“maximum key length exceeded”). Coincidentally, though, Nikolay Savvinov had written about a performance-related “bug” in August this year which turned out, only last week, to be expected behaviour. You can read his articles for the details, but since he used a 4KB block size to demonstrate it I thought I’d repeat the exercise using an 8KB block size.
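Before getting to the redo demonstration, here’s the index threat in isolation. With an 8KB block size the maximum index key length is a little under 6,400 bytes, so an index covering just two “just in case” columns fails at creation time even on an empty table. A minimal sketch – the table and index names here are purely illustrative:

drop table t2 purge;
create table t2 (v1 varchar2(4000), v2 varchar2(4000));

-- With an 8KB block size this fails immediately, even though the table holds no data:
-- ORA-01450: maximum key length (6398) exceeded
create index t2_i1 on t2(v1, v2);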

-- Test 1: columns declared varchar2(40), a size that matches the data

drop table t1 purge;
create table t1 (id number(6), v1 varchar2(40), v2 varchar2(40), v3 varchar2(40));
create unique index t1_i1 on t1(id);

execute snap_redo.start_snap

insert into t1 
select	object_id, object_name, object_name, object_name 
from	all_objects
where	rownum <= 10000
;

execute snap_redo.end_snap


-- Test 2: identical data, but every column declared as varchar2(4000)

drop table t1 purge;
create table t1 (id number(6), v1 varchar2(4000), v2 varchar2(4000), v3 varchar2(4000));
create unique index t1_i1 on t1(id);

execute snap_redo.start_snap

insert into t1 
select	object_id, object_name, object_name, object_name 
from	all_objects
where	rownum <= 10000
;

execute snap_redo.end_snap
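The snap_redo package is my own (more on it below); if you want to reproduce the test without it, a rough substitute is to capture the relevant v$sysstat figures before and after each insert and report the differences. A minimal sketch – the object names here are mine, not part of any published package:

-- one-off setup: a scratch table to hold the "before" values
create global temporary table stats_snap (name varchar2(64), value number) on commit preserve rows;

-- immediately before the test insert
truncate table stats_snap;
insert into stats_snap select name, value from v$sysstat;

-- ... run the test insert ...

-- immediately after the test insert: list the statistics that changed
select	v.name, v.value - s.value	delta
from	v$sysstat v, stats_snap s
where	s.name = v.name
and	v.value - s.value != 0
order by
	v.name
;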

I’ve dropped and created the same table twice, once with varchar2(40) columns and once with varchar2(4000) columns.

I’ve created an index on a (non-character) column. The specific results vary depending on several details – whether the index is unique or non-unique, whether the index exists at all, whether the table already holds data, the effective clustering of the index columns, and so on – but the key difference between the two sets of results doesn’t go away.

I’ve inserted 10,000 rows, loading object_name values (maximum length used: 32 bytes) into the varchar2() columns.

The snap_redo package is one of my simple pre/post packages that report the change in values from a dynamic performance view – in this case v$sysstat (system statistics), picking out the statistics relating to redo generation – which means you need to run this test on an otherwise idle instance. Here are the two sets of results from an instance of 11.2.0.4, the first from the varchar2(40) test, the second from the varchar2(4000) test:


Name                                                                     Value
----                                                                     -----
messages sent                                                               11
messages received                                                           11
calls to kcmgcs                                                            313
calls to kcmgas                                                             37
calls to get snapshot scn: kcmgss                                           74
redo entries                                                               769
redo size                                                            1,317,008
redo wastage                                                             3,888
redo writes                                                                 11
redo blocks written                                                      2,664
redo write time                                                             10
redo blocks checksummed by FG (exclusive)                                2,242
redo ordering marks                                                          1
redo subscn max counts                                                       1
redo synch time                                                              7
redo synch time (usec)                                                  88,875
redo synch time overhead (usec)                                          1,810
redo synch time overhead count (<2 msec)                                    11
redo synch writes                                                           11
redo write info find                                                        11
undo change vector size                                                261,136
rollback changes - undo records applied                                      2
IMU undo allocation size                                                17,184


Name                                                                     Value
----                                                                     -----
messages sent                                                                8
messages received                                                            8
calls to kcmgcs                                                            222
calls to kcmgas                                                             56
calls to get snapshot scn: kcmgss                                           52
redo entries                                                            20,409
redo size                                                            5,606,872
redo buffer allocation retries                                               1
redo wastage                                                             1,248
redo writes                                                                  6
redo blocks written                                                     11,324
redo write time                                                             26
redo blocks checksummed by FG (exclusive)                                  571
redo ordering marks                                                         32
redo subscn max counts                                                       1
redo synch time                                                              6
redo synch time (usec)                                                  60,230
redo synch time overhead (usec)                                            159
redo synch time overhead count (<2 msec)                                     1
redo synch writes                                                            1
redo write info find                                                         1
undo change vector size                                              1,590,520
IMU undo allocation size                                                   144

Notice, particularly, the great change in the number of redo entries and the total redo size when the character columns are defined as varchar2(4000). Note, too, that the number of redo entries is roughly “2 * number of rows inserted” – for each row that’s one entry for the table row and one for the index entry. You can check the redo log content by dumping the log file, of course (and Nikolay did), or you can take my word for it that Oracle is doing the equivalent of single-row processing in the varchar2(4000) case and array processing in the varchar2(40) case.
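If you want to check for yourself, the dump is easy enough – identify the current online log from v$logfile and dump it as a suitably privileged user (the file name below is just a placeholder for whatever your v$logfile reports):

-- identify the current online redo log file(s)
select	f.member
from	v$log l, v$logfile f
where	f.group# = l.group#
and	l.status = 'CURRENT'
;

-- dump its contents to the session trace file (placeholder path)
alter system dump logfile '/u01/oradata/orcl/redo01.log';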

When Oracle calculates that the maximum possible row length is larger than the block size it falls back to single-row processing – in this example the three varchar2(4000) columns alone allow for 12,000 bytes per row, comfortably more than the 8KB block size, even though the longest value actually inserted was only 32 bytes. This can increase your redo generation significantly, and since the rate at which you can pump out redo is the ultimate limit on the rate at which you can load data this could have a significant impact on your data loading times. Declaring character columns as varchar2(4000) “just in case” is a bad idea.


June 25, 2013

12c

Filed under: 12c,Infrastructure,Oracle,Partitioning,redo — Jonathan Lewis @ 11:43 pm GMT Jun 25,2013

The news is out that 12c is now available for download (Code, Docs and Tutorials). There are plenty of nice little bits in it, and several important additions or enhancements to the optimizer, but there’s one feature that might prove to be very popular:

SQL> alter table p1 move partition solo ONLINE;

Table altered.

(more…)

October 5, 2012

SSD

Filed under: Exadata,Infrastructure,Oracle,Performance,redo — Jonathan Lewis @ 1:04 pm GMT Oct 5,2012

There’s never enough time to read everything that’s worth reading, so even though Guy Harrison’s blog is one of the ones worth reading I find that it’s often months since I last read it. Visiting it late last night, I found an interesting batch of articles spread over the last year about the performance of SSD – the conclusions may not be what you expect, but make sure you read all the articles or you might end up with a completely misleading impression:

Don’t forget to read the comments as well. For other notes Guy has written about SSD, here’s a URL for his SSD tag.

September 17, 2012

Private Redo

Filed under: Infrastructure,Oracle,redo — Jonathan Lewis @ 8:28 pm GMT Sep 17,2012

Following a question on the Oracle Core Addenda pages, here’s a little script to tell you about the sizes of the public and private redo threads currently active in the instance. It’s a minor variation of a script I published in Appendix D (Dumping and Debugging), page 237, to show the addresses of current activity in the various log buffers:
(more…)
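The script itself is behind the link, but as a very rough sketch of the underlying idea – assuming access, as SYS, to x$kcrfstrand and its strand_size_kcrfa column, which I believe reports the size of each redo strand in bytes:

select	indx, strand_size_kcrfa
from	x$kcrfstrand
order by
	indx
;

The published script does rather more work than this, separating the public strands from the private ones and picking out the ones currently in use.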

August 19, 2011

Redo

Filed under: Infrastructure,Oracle,redo — Jonathan Lewis @ 2:58 am GMT Aug 19,2011

In the comments to a recent blog posting about log file syncs, Tony Hasler has produced a stunning observation about Oracle and ACID, in particular the D (durability) bit of transactions.

The observation can best be described with a demonstration (which I have run on versions from 8.1 to 11.2) involving three sessions, one of which has to be connected with sysdba privileges.

(more…)

May 27, 2011

Audit Ouch!

Filed under: audit,Bugs,Infrastructure,Oracle,redo — Jonathan Lewis @ 5:37 pm GMT May 27,2011

A few days ago I was rehearsing a presentation about how to investigate how Oracle works, and came across something surprising. Here’s a simple bit of code:
(more…)

January 3, 2011

Redo

Filed under: Infrastructure,redo — Jonathan Lewis @ 7:40 pm GMT Jan 3,2011

A couple of days ago I published a link to some comments I had made on OTN about differences in redo generation between 10g and earlier versions of Oracle. This raised a few questions that suggested a follow-up (or perhaps “prequel”) note might be a little helpful. So I’ve created a simple SQL script to help demonstrate the differences and show how some of the pieces hang together.
(more…)

December 23, 2010

Private Redo

Filed under: redo,undo — Jonathan Lewis @ 10:17 am GMT Dec 23,2010

About this time last year I wrote a short answer on OTN about Private Redo Threads and In-Memory Undo. Thanks to a follow-up question a year later I’ve been prompted to supply a link to my original answer because it was actually pretty good: OTN Thread “In Memory Undo”.

Update: If you’re looking at this note and haven’t expanded the view to see the comments, make sure that you do look at them since they include a couple of technical details I described in response to follow-up questions.

July 2, 2010

Unrecoverable

Filed under: Infrastructure,redo — Jonathan Lewis @ 6:38 pm GMT Jul 2,2010

A recent question on the OTN database forum asked: “What’s the difference between index rebuild unrecoverable and nologging?”

The most important difference, of course, is that unrecoverable is a deprecated option so you shouldn’t be using it even though it still works.
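For reference, the two forms being compared look like this (the index name is purely illustrative):

alter index t1_idx rebuild unrecoverable;	-- deprecated syntax, still accepted
alter index t1_idx rebuild nologging;		-- the current syntax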
(more…)

June 27, 2010

Coalesce

Filed under: Index Rebuilds,Indexing,Infrastructure,Oracle,Performance,redo — Jonathan Lewis @ 6:36 pm GMT Jun 27,2010

The following question came up in an email conversation a little while ago:

Are you aware of any problems a large oltp site might have with running index coalesce during production hours, as opposed to doing index rebuilds in a maintenance window?
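For anyone who hasn’t met the two options, the statements being compared are (index name purely illustrative):

alter index big_oltp_idx coalesce;	-- tidy up adjacent, sparsely-filled leaf blocks in place
alter index big_oltp_idx rebuild;	-- recreate the index segment from scratch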

(more…)

February 9, 2010

Why Undo ?

Filed under: Infrastructure,redo,undo — Jonathan Lewis @ 5:32 pm GMT Feb 9,2010

A recent thread on the OTN database forum asks the question:

“… since redo also has past and current information why can’t redo logs be used to retrieve that information? Why is undo required when redo already has all that information.”

The thread has generated some interesting replies – but to a large extent they describe how undo and redo work rather than explaining why the designers at Oracle Corp. chose to implement undo and redo the way they did. Since I’m sitting in an airport (Zurich – where the coffee bar in the business lounge has a very large bowl of Lindt chocolate squares available) waiting for a plane I thought I’d use the time to share my thoughts on the why.

(more…)

October 30, 2009

logging

Filed under: Performance,redo — Jonathan Lewis @ 1:02 pm GMT Oct 30,2009

I’ve just jotted down a few notes about “log file sync” waits, “log file parallel write” waits, and the nologging option in response to a question on OTN about redo activity when creating a large index. The ensuing conversation also picks up various topics relating to backup, recovery and Data Guard.

October 14, 2009

Nutshell – 1

Filed under: Infrastructure,redo,undo — Jonathan Lewis @ 8:22 pm GMT Oct 14,2009

Undo, redo, and recovery in a nutshell. (The “1” in the title may turn out to be optimistic; I offer no guarantee about further nutshells.)

In the most general case of updating a block – when you change one row in a table, say, or mark an index entry as deleted – Oracle does the following:
(more…)

November 28, 2008

Log file write

Filed under: Infrastructure,Performance,redo,Troubleshooting — Jonathan Lewis @ 9:04 am GMT Nov 28,2008

A few years ago, I wrote a note about a bug in 9i that made it difficult to decide whether or not you had a problem with the log writer (lgwr). The problem was that the time spent in wait event “log file parallel write” was not captured properly – so you were reduced to guessing whether the time spent in “log file sync” might actually be an indication of a log file write problem.

Time changes everything – including the available tools and the levels of skill of the people doing the troubleshooting – and I see that Riyaj Shamsudeen recently published a note about the same problem.

Riyaj approaches the problem from a different direction, using exactly the right tool for the task, to demonstrate the error directly (rather than relying on modelling, inference and a follow-up check with Metalink as I did in 2005).

[Update 30th Nov 2008:] Here’s another interesting article on a closely related topic from Christian Bilien – in this case from a 10g system.

April 12, 2007

Log File Switch

Filed under: Infrastructure,redo,Tuning — Jonathan Lewis @ 9:55 pm GMT Apr 12,2007

A few days ago, I published a simple query you could run against v$log_history to show the time between log file switches. So if you use this script and decide that your log file switches are happening at inappropriate intervals, what can you do?
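I won’t reproduce the original query here, but the gist is an analytic lag() over first_time in v$log_history – something like this sketch:

select
	thread#,
	first_time,
	round(
		(first_time - lag(first_time) over (partition by thread# order by first_time)) * 24 * 60
	)	minutes_since_previous_switch
from
	v$log_history
order by
	thread#, first_time
;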

(more…)
