Oracle Scratchpad

July 14, 2007

Analysing Statspack (6)

Filed under: Statspack,Troubleshooting — Jonathan Lewis @ 1:31 pm BST Jul 14,2007

[Further Reading on Statspack]

Here’s an extract of an AWR (automatic workload repository) snapshot published some time ago on the Internet, along with the text describing why it’s worth seeing. The extract comes from an article by Don Burleson:

Here is an example of an Oracle 10g database with an undersized log buffer, in this example 512K:

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                   % Total
Event                            Waits    Time (s)   DB Time Wait Class
log file parallel write          9,670         291     55.67 System I/O
log file sync                    9,293         278     53.12 Commit
CPU time                                       225     43.12
db file parallel write           4,922         201     38.53 System I/O
control file parallel write      1,282          65     12.42 System I/O


I have a problem with this data set, though: there’s nothing in the data to point to the explanation. Is it intended to give us a clue about what to look for when the log buffer is too small? And if that’s not the intention, why bother printing it? (There’s nothing in any of the surrounding text, either, to explain how you could discover that the problem was the log buffer.)

The other problem with this data set is that there is no scope information – how many CPUs are there on the box, and how long was the snapshot interval?

Of course, the fact that the “log buffer space” wait doesn’t appear in the top 5 might make you suspect that it’s not really the most important problem with this database; and the fact that the author didn’t print the “log buffer space” wait line from the next section of the AWR report is rather irritating.

So what could you glean from this extract ? Can you find anything in it that tells you that the log buffer might be too small ?

We are told that the buffer is 512KB – if we assume that the DBA hasn’t configured this deliberately and that this is the default log buffer size, then we have one to four CPUs. That doesn’t give us much extra information though.
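If you had access to the instance rather than just the report extract, the guesswork would be unnecessary; a quick sketch, using nothing but SQL*Plus and the documented parameter views:

show parameter log_buffer

select value
from   v$parameter
where  name = 'cpu_count';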

We can see that, apart from the CPU time, most of the reported time is for background processes; only the log file sync time is a foreground wait, and conveniently it happens to be similar to the log file parallel write time – which tends to suggest that this is a low-concurrency system (when we do a log file write there seems to be, on average, just one process waiting for that write to complete, and it waits about the same amount of time as the write takes to complete). This still doesn’t give us much to go on.

We note that the average times for writes are:

  • log file parallel write – 30.1 ms
  • db file parallel write – 40.8 ms
  • control file parallel write – 50.7 ms

These figures are not particularly good – so if there are complaints about performance we have an indication that slow disk devices are our most obvious root cause.
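If you’re on the live system rather than reading a report, the same averages drop out of v$system_event – though remember those figures are totals since instance startup, not the deltas of a snapshot interval. A sketch:

select event,
       total_waits,
       round(time_waited * 10 / total_waits, 1) avg_ms   -- time_waited is in centiseconds
from   v$system_event
where  event in ('log file parallel write',
                 'db file parallel write',
                 'control file parallel write')
order by event;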

Of course, log buffer space waits usually need systems with some degree of concurrent activity – if I’ve issued a commit and made the log writer write I’ll be waiting on a log file sync, so someone else has to fill the log buffer to get a log buffer space wait.

There is one exception to this – if I keep generating fairly large redo records then the log writer could be triggered into writing by one of the “non-commit” events … such as seeing the log buffer one-third full.  That could explain why I have more log file parallel write waits than log file sync waits – and why my sync waits total 13 seconds less than my write waits. It would have been so nice to see the stats about the log buffer space waits following a claim that we were looking at a database with an undersized log buffer.

No matter – maybe there are 13 seconds of log buffer space waits. What’s the problem with this machine? We spent 278 seconds waiting on log file sync waits, which were dependent on slow log file parallel writes. If we increased the log buffer size we would probably spend an extra 13 seconds in log file sync waits, waiting for those slow log file parallel writes. This machine’s problem is probably slow disks.

Of course I wouldn’t normally diagnose a performance problem from just the “Top 5″ events – but if that’s all that’s on offer and it doesn’t seem to corroborate the offered diagnosis then you have to ask for more information. I’d want to check the load profile (how much work was actually happening – such as user calls, executes, transactions), then see what other time was lost just below the “Top 5″ – where, for example, are the disk read times ?
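The load profile is derived from v$sysstat (again, cumulative since startup – the report shows the deltas between two snapshots); for example:

select name, value
from   v$sysstat
where  name in ('user calls', 'execute count',
                'user commits', 'user rollbacks');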

Provisionally: this looks like a small single-user run on a single-CPU box with a slow disk drive; maybe it’s just a testbed Windows laptop.

Practically: if you thought the statistics were supposed to tell you the symptoms that would help you spot an undersized log buffer, they don’t.

Example 2:

Here’s another sample from the same author where, again, he seems to suggest that he is showing an extract from an AWR report that is symptomatic of an undersized log buffer:


Here is an AWR report showing a database with an undersized log_buffer, in this case where the DBA did not set the log_buffer parameter in their init.ora file:

                                                                Avg
                                                  Total Wait   wait    Waits
Event                            Waits   Timeouts   Time (s)   (ms)     /txn 
log file sequential read         4,275          0        229     54      0.0
log buffer space                    12          0          3    235      0.0

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                           % Total
Event                                     Waits    Time (s) Ela Time 
CPU time                                            163,182    88.23
db file sequential read               1,541,854       8,551     4.62
log file sync                         1,824,469       8,402     4.54
log file parallel write               1,810,628       2,413     1.30
SQL*Net more data to client          15,421,202         687      .37

Update 1st Sept 2010: I was prompted to re-read this article quite recently and discovered that I hadn’t discussed my reasons for disagreeing with Mr. Burleson’s interpretation of the second set of statistics, so I’ve added the following notes to address that omission.

The first thing to note, of course, is that there actually are some waits for log buffer space – 12 of them, totalling 3 seconds wait time. So there is at least a little justification for thinking that an increase in the log buffer size might make a difference. But before bouncing the database to increase the log buffer you need to examine the context a little more carefully.

  • First of all – it’s possible to get a few log buffer space waits as a log file switch takes place, especially in a fairly busy system, almost regardless of how large you make the log buffer – so when the figures for log buffer space waits are very small compared to the figures for log file sync waits it’s probably sensible to ignore them.
  • Secondly – when you increase the size of the log buffer you make it possible for a process to write more data into the log buffer before a log file write takes place, which means the writes are bigger and take longer, and this can increase the time spent in log file sync waits (this trade-off between log buffer space waits and log file sync waits was a classic choice in earlier versions of Oracle).
  • Thirdly – in this particular case – the time spent on CPU was more than 163,000 seconds, which makes the 3 seconds of log buffer space waits look rather insignificant; moreover, extreme CPU pressure can have side-effects such as slowing down the rate at which the log buffer can be cleared – hence causing log buffer space waits.

So let’s ignore the log buffer space for the moment – it’s probably not worth any effort, it’s quite likely to be a side effect of a more important issue, and attempting to address it may cause more problems than it fixes.

The most significant figure in the Top 5 is, by a long way, CPU – so the obvious thing to do is look for SQL that’s consuming a lot of CPU: a good start would be checking the sections of the AWR report labelled: “SQL ordered by CPU”, “SQL ordered by Executions” and possibly “Segments by Logical Reads”.
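Those report sections are just formatted queries against data the database already holds; on a live 10g system a first approximation comes straight from v$sql (a sketch – cpu_time is reported in microseconds):

select *
from   (
        select   sql_id, executions,
                 round(cpu_time / 1000000) cpu_seconds
        from     v$sql
        order by cpu_time desc
       )
where  rownum <= 10;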

A quick look at the other events in the Top 5 is instructive, though.

Notice that the number of log file syncs is large and very close to the number of log file writes – this suggests that there may be a lot of small transactions going on (if you have a small number of larger transactions you tend to see more log file writes than log file syncs).

When you look at the timing you notice that the log file writes are much quicker (1.3 ms) than the log file syncs (4.6 ms) – the difference is typically the time it takes the “redo synch write” message to go to the log writer and the acknowledgement to come back from the log writer. The difference is a classic indicator of CPU starvation – and that also brings us back to the log buffer space waits:  if CPU starvation is making it hard for the log writer to get the message to flush the buffer to disc it becomes increasingly likely that the buffer will fill up unexpectedly and cause log buffer space waits.
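For the record, the averages quoted are simple arithmetic on the figures in the extract – roughly 1.33 ms and 4.61 ms respectively:

select round(2413 * 1000 / 1810628, 2) write_avg_ms,   -- log file parallel write
       round(8402 * 1000 / 1824469, 2) sync_avg_ms     -- log file sync
from   dual;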

You’ll also notice that the number of waits for “SQL*Net more data to client” seems rather high. Something is firing queries against this database and fetching lots of data back from it – it would be interesting to see the number of SQL*Net round-trips to/from client and the number of bytes passed, to see if we can make any assumption about whether this is lots of smallish result sets or a small number of very large result sets. Round-trip activity is an effective way of overloading your database and should probably be investigated. It’s possible, of course, that the incoming SQL will show itself up under one of the “SQL ordered by …” sections of the report.
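If you can get at the live system, the round-trip and volume figures are in v$sysstat (in a Statspack/AWR report they appear in the “Instance Activity” section):

select name, value
from   v$sysstat
where  name in ('SQL*Net roundtrips to/from client',
                'bytes sent via SQL*Net to client',
                'bytes received via SQL*Net from client');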

The other figure in the Top 5 is the “db file sequential read” – which is operating at a respectable average time of 5.5 ms. If this were a Statspack report I’d look at the Event Histogram section, though, to check that this wasn’t a large number of very fast (i.e. locally cached, CPU-intensive) reads combined with a relatively small number of very slow reads. We need to account for – and address – that CPU usage somewhere.
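From 10g onwards the same breakdown is available online in v$event_histogram – each bucket counts waits of up to wait_time_milli milliseconds:

select wait_time_milli, wait_count
from   v$event_histogram
where  event = 'db file sequential read'
order by wait_time_milli;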

The final number, that I haven’t mentioned yet, is the “log file sequential read” – that’s a wait event which can do with a little note of its own so I shall try to find some time to write about it over the next few weeks. In this case the number could simply be the archive processes recording their activity as they read the online redo log files and copy them.

Bottom line: these stats don’t give you any cause to fiddle with the log buffer size. The possible saving is tiny, and there are far more important targets to aim at.

[Further Reading on Statspack]

Footnote: Any advice about reading AWR reports is almost always relevant when reading Statspack reports (and vice versa).

29 Comments »

  1. Hi,
    I do not think I’m an expert, but, even without taking a close look at the statspack data, there is already something that does not match the explanation.

    In 10g, on all the O.S. versions I’ve seen (not all of them are available to me, but quite a lot), the log buffer is “automatically” set by Oracle (based on the parameter) but always much bigger than the setting (around a few MB, not KB).
    This behaviour was documented by Oracle in the notes:

    351857.1 The Log_buffer Cannot be Changed In 10g R2
    373018.1 LOG_BUFFER Differs from the Value Set in the spfile or pfile

    Even when I set a very low log_buffer I was never able to see a log buffer of 512K.

    Notes:
    - Perhaps he is talking about 10.1, but I have not used that version.
    - Perhaps he is using an earlier (9i) example and presenting it as a newer version.

    [Edited 5th Aug JPL. Minor deletion]

    Comment by kokoliso — July 14, 2007 @ 6:33 pm BST Jul 14,2007 | Reply

  2. LFS includes time that the log writer and the LFS waiters (remember piggy-back commits) spend in runnable states too. Systems at high processor utilization can exhibit both long-duration log file writes and long log file syncs even if the I/O subsystem is properly greased.

    I see nothing here that would make me crank up the log buffer either…

    Nice post, Jonathan.

    Comment by kevinclosson — July 14, 2007 @ 9:30 pm BST Jul 14,2007 | Reply

  3. Kokoliso, example 1 is a 10g report – you can tell by the “Wait Class” column on the “Top 5″. However, 10.2 introduced a nice little enhancement to this section of the report by adding the ‘average wait time’ column to it. So this is a 10.1 report. [Added 5th Aug] Example 2 looks like a 9i Statspack report, rather than a 10g AWR report – but it’s very easy to use the wrong name when talking about these reports nowadays (in fact my article originally referred to Example 1 as a Statspack at one point, and AWR at another).

    Kevin, I think you may confuse some people with your comment about reporting runnable time as wait time. Correct me if I have mis-interpreted your comments, but I assume you are thinking of a scenario like:
    - I send a ‘redo synch write’ message and go into wait state ‘log file sync’, putting myself off the run queue.
    - the log writer clears the buffer and puts me back on the run queue
    - there are lots of busy tasks (CPU consumers) in the run queue ahead of me, so I cannot yet discover that my “real” wait has finished.
    - I get scheduled to run, check the system clock, but can’t tell that the total elapsed time was partly a “real” wait and partly a delay in getting to the top of the run queue.

    Comment by Jonathan Lewis — July 15, 2007 @ 9:06 am BST Jul 15,2007 | Reply

  4. How can you figure out the average times for writes (m/s) ?

    Comment by Bunditj — July 15, 2007 @ 4:01 pm BST Jul 15,2007 | Reply

  5. Bunditj, the report shows total time waited (in seconds) and the number of waits. Multiply the time by 1,000 (to get it in milliseconds) and divide by the number of waits to get the average wait in milliseconds.

    For example (log file parallel writes): 291 * 1000 / 9670 = 30.093

    Comment by Jonathan Lewis — July 15, 2007 @ 7:49 pm BST Jul 15,2007 | Reply

  6. Kokoliso,

    About the size of the redo log buffer in 10gR2.
    In the Metalink notes you reference it is stated that the fixed SGA area and the redo buffer are combined, and that the combined size is rounded up to the nearest granule. The extra space created by the rounding up is then assigned to the log buffer.
    In a lot of cases this will indeed give you a log buffer a couple of MB in size, but it does not mean that the log_buffer can’t be set any more – just that the setting is more of a lower bound.
    To see the parameter take effect, you would have to set it to a large value (in MB, not KB).
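    (You can see the rounding in action by comparing the parameter with the actual allocation in v$sgastat:)

    show parameter log_buffer

    select name, bytes
    from   v$sgastat
    where  name = 'log_buffer';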

    I suspect that Oracle has changed the behaviour because of the flashback database feature, for which they advise a log_buffer several MB in size (much bigger than the prior defaults).
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/rpfbdb003.htm#sthref522
    http://download.oracle.com/docs/cd/B19306_01/server.102/b25159/configbp.htm#sthref187

    regards

    Freek

    Comment by Freek — July 15, 2007 @ 11:25 pm BST Jul 15,2007 | Reply

  7. Freek, thanks for the follow-up. It got marked as ‘suspect spam’ which means I received an email about it – which I only read this morning.

    The pointers to the documentation on flashback database are interesting, though, because they show a good example of how to be suspicious of the manuals.

    When I read it, the first thought that crossed my mind was: “Why, when you enable flashback logging, do you need to increase the size of the ‘traditional’ log buffer ?”.

    The second thought was – “I know I get two public and 18 private redo threads on my laptop. What’s going to happen if I set the log_buffer to 16MB?”.

    I haven’t tried analysing a log buffer dump to see if the flashback writer also sends a small packet to the log buffer (as the database writer does from 9i onwards) – but when I set the log_buffer to 12MB, I found that I had a public redo thread size of about 7.3MB.

    I don’t intend to worry about the details – until I see a site with symptoms that point at the log buffer – but it seems as if the sizes of the shared redo threads are dependent on:
    - Number of CPUs
    - CPU width (32/64 bit)
    - transactions (the parameter)
    - SGA granule size
    - log_buffer
    - whether or not in-memory undo is enabled

    In fact, given the concept of shared and private redo threads, the question of what the log_buffer is trying to define becomes a little blurred.

    One day I will check what the flashback writer does to the log buffer – because anything that has an impact on the volume of redo generated is important. But it’s yet another test case that has to wait until it’s a real problem for someone.
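    (If anyone wants to poke at this themselves: connected as SYS, the strand sizes are visible in the undocumented fixed table x$kcrfstrand – the column name below is from memory and liable to change between versions, so treat this purely as a sketch.)

    select   indx, strand_size_kcrfa
    from     x$kcrfstrand
    order by indx;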

    Comment by Jonathan Lewis — July 16, 2007 @ 8:15 am BST Jul 16,2007 | Reply

    • Hi Jonathan,

      I note that log_buffer, even if set, is going to be different {more} post 10.2.x – however, if it (Oracle’s internal algorithm) sets a very high value, can we see side effects of overly long “log file sync” waits? In one of our PRD instances we have an SGA granule size of 64MB, and when we set LOG_BUFFER to 16MB I see redo buffers of more than 78MB. Since we have a mixed workload (short transactions + batch jobs), the short transactions are seeing some waits on log file sync. Earlier, log_buffer was set to 150MB and Oracle took nearly 210MB; the waits seen on the short UI housekeeping transactions were quite long, and after reducing the setting we are now within a “thin acceptable limit”. Is there any other way to reduce the redo buffers? Anything less than 32MB would not only be enough, it would bring my log file sync down even further (which is very much needed for the UI housekeeping transactions). Even if I unset it, I think it’s still going to allocate 64MB? BTW, we are on 11.2.0.3.

      Rgds
      Abhay

      Comment by Abhay — March 11, 2013 @ 5:16 pm BST Mar 11,2013 | Reply

      • Abhay,

        The difference between your setting for the log buffer and the usage you report is a little surprising. Based on the numbers I would guess that you have a machine with quite a lot of CPUs and an instance with a large value for parameter “transactions”.

        Rather than guess, though, I would prefer to check the cpu_count and value of the parameter, and I would like to see the number of log buffer strands (public and private) that you have – and there is a query on my blog that SYS can run to report the last one:
        http://jonathanlewis.wordpress.com/2012/09/17/private-redo-2/

        I see that you’ve also taken this to OTN, and it’s best to continue it there. However, it’s worth pointing out that essentially your comments about big buffers and the mix of long and short transactions resulting in “unreasonable” log file sync waits for the short transactions are correct. When a session issues a commit it has to wait for ALL the public log buffers to be written to disc up to their high water marks, and if you have several buffers which have been filled very rapidly by multiple concurrent batch processes then a session that does one little update and commits may have to wait for several megabytes of redo to be written.

        I think Tony Hasler ( http://tonyhasler.wordpress.com/ ) has written a blog note about this; and because LGWR was writing the separate log buffers one after the other (rather than try to write them all asynchronously) his workaround was to disable the multiple log buffer feature.

        Comment by Jonathan Lewis — March 12, 2013 @ 12:46 pm BST Mar 12,2013 | Reply

        • Hi Jonathan,

          Thanks for your reply. Indeed the transactions parameter is high: it’s derived from sessions, which is set to 5000, and cpu_count is 64. On the public and private strands: the public strands total 16MB (which I set for log_buffer) spread across 4 threads, and there are quite a few (about 500+) private strands of nearly 130KB each. COMMIT doesn’t need to wait for private strands to be flushed? At least the ones in use?

          I will also actively update on OTN, we can discuss there if you prefer.

          Rgds
          Abhay

          Comment by Abhay — March 13, 2013 @ 8:13 pm BST Mar 13,2013

  8. Excellent!

    Comment by Mirjana — July 16, 2007 @ 10:09 am BST Jul 16,2007 | Reply

  9. “Kevin, I think you may confuse some people with your comment about reporting runnable time as wait time. Correct me if I have mis-interpreted your comments, but I assume you are thinking of a scenario like:
    - I send a ‘redo synch write’ message and go into wait state ‘log file sync’, putting myself off the run queue.
    - the log writer clears the buffer and puts me back on the run queue
    - there are lots of busy tasks (CPU consumers) in the run queue ahead of me, so I cannot yet discover that my “real” wait has finished.
    - I get scheduled to run, check the system clock, but can’t tell that the total elapsed time was partly a “real” wait and partly a delay in getting to the top of the run queue.”

    I think I need to make a blog entry about this. For now I’ll add that the time for a log file sync (LFS) wait starts as soon as the foreground process is done posting LGWR (it then waits on LFS in the post/wait interface which is an IPC semaphore wait on most platforms). Time is ticking (T0) for the process I’ll call “LFS_X”.

    LGWR’s state at the time of posting is important. If he is already in the middle of servicing a flush he could be in the middle of one or more I/Os. Remember, LGWR will limit large flush operations to multiple 128KB writes (async). So LFS_X is likely now paying for the prior group of processes being serviced. Once LGWR is done servicing the prior group’s I/Os, it goes into a loop of postings of those LFS waiters – and yes, LFS_X is still waiting, and yes, the work at hand for LGWR has nothing directly to do with getting LFS_X out of his wait. The act of posting the previous set of LFS waiters makes them runnable (they were in a post/wait just like LFS_X). Once LGWR posts all the waiters from his current batch he checks to see if there is more work to do before going to sleep. In our case there is (LFS_X needs servicing). LGWR flushes the buffer that has LFS_X’s redo pieces and LGWR then posts LFS_X (a semaphore operation on most platforms). LFS_X is now runnable. The first thing LFS_X does when it commences running is check the microsecond-resolution time of day (T1).

    The delta from T0 above to T1 is the amount of time LFS_X waited in LFS. Any processor shortage suffered by LGWR between T0 and T1 affects the duration of poor LFS_X’s LFS wait time. That includes any time LGWR goes in and out of the kernel for posting. That includes any natural exhaustion of LGWR’s scheduler time quantum (10ms). That includes any time LGWR’s processor is interrupted to handle a hardware interrupt. And, finally, that includes time that LFS_X himself has to wait, after being posted, for the CPU (upon which he has cache affinity) to pick him up – a decision based upon his state (runnable), mode (user mode), priority (user mode 100+, depending on age) and nice value.

    Lots of people at lots of systems houses (HP, IBM, Sun, SGI, Sequent, Pyramid, DEC, DG, etc.) have spent significant time optimizing the average time between T0 and T1. This work produced the slick post/wait drivers that some platforms still support.

    Comment by kevinclosson — July 16, 2007 @ 10:47 pm BST Jul 16,2007 | Reply

  10. Kevin, very informative. Thanks for taking the trouble.

    Comment by Jonathan Lewis — July 17, 2007 @ 5:52 am BST Jul 17,2007 | Reply

  11. Hi Jonathan,
    Probably has nothing to do with this particular thread,
    but could you please explain me up to which point the PROCESSES parameter can influence log file sync wait.
    According to one of ours DBAs this should be the first and only parameter to tune when the log file sync appears in Top 5 Timed Events (v 9.2.0.5).
    Not quite sure about that, especially after reading following thread in asktom:
    “Processes Parameter – What does this do and how do you know if it is to high”
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:45237883974004

    Thanks!

    Regards,

    Mirjana

    Comment by Mirjana — July 17, 2007 @ 9:23 am BST Jul 17,2007 | Reply

  12. Mirjana, after a log file write the log writer (lgwr) has to post the processes that were waiting for that write to complete (see Kevin’s notes above). Any problems with the processes parameter revolve around how lgwr identifies which processes were waiting.

    The algorithm has changed through the versions, but essentially there is an array for the processes – exposed as v$process, and another for sessions – exposed as v$session, and once upon a time the lgwr would walk through the length of one of these arrays (v$process I think) to determine which processes needed to be posted.

    So if you have set processes to 3,000 (say) when the most processes you ever needed was more like 300 then log file sync waits could waste a lot of time as lgwr walked through 2,700 array entries that were always going to be unused.

    The evolution of the algorithm through 9i and 10g aimed to avoid looking at “obviously” redundant entries, to minimise the time taken to find the processes that needed posting.

    However – ‘tuning’ the log file sync by changing the processes parameter is really a case of correcting a basic configuration error. If you have a value that is “about right” then there shouldn’t be much benefit in tweaking it a tiny bit – but if it’s massively wrong then (particularly if you are running 8i still) you might see a noticeable change if you reduce it significantly.

    It’s a bit awkward to create a good demonstration of the impact once you’re past 8i as you need to create lots of live processes to see the effects most clearly.
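    Checking whether the parameter is massively oversized is a one-minute job, by the way – v$resource_limit records the high-water mark since startup:

    select resource_name, current_utilization,
           max_utilization, limit_value
    from   v$resource_limit
    where  resource_name = 'processes';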

    Comment by Jonathan Lewis — July 17, 2007 @ 10:11 am BST Jul 17,2007 | Reply

  13. Thank you very much for your response Jonathan!

    Best regards,
    Mirjana

    Comment by Mirjana — July 17, 2007 @ 10:54 am BST Jul 17,2007 | Reply

  14. Please fill me in if this is a proper conclusion about the whole Log Writer Slave thing:
    1. Using Asynch I/O for writing redo is (typically) desirable. It doesn’t parallelize log writes, but does help throughput to some extent.
    2. Often, an OS will just support that outright.
    3. In the event there is not a better alternative to achieve asynch I/O, Log Writer Slaves can be used.
    4. As a result, their use is platform specific. It may be possible to use them anyway, but it doesn’t solve anything if you already have asynch I/O.
    5. By default, Oracle “knows” whether it needs to do this and starts the slaves it needs. Maybe it makes a mistake from time to time – so it can be configured – but as that algorithm gets better and as more platforms support asynch I/O, the need to do so becomes increasingly rare.
    6. Is there a specific context in which Oracle (10g) typically “gets it wrong” and the DBA actually has to do this – or is it just something that surfaces from time to time (or almost never)? Lots of small commits resulting in frequent small writes? Larger writes? Inconsistent disk response time? (perhaps if there are other busy files with the redos or lots of switches – which of course probably should be remedied in other ways…)

    [Edited JPL 5th Aug: minor deletion]

    Comment by Joe — July 19, 2007 @ 3:18 pm BST Jul 19,2007 | Reply

  15. My understanding is that LGWR Asynch I/O (maybe emulated by I/O slaves) simply parallelizes the writing of the same current write batch to the redo log members in the current redo log group (if redo log files are “multiplexed”, something that “always” happen in production). Hence the name of the wait event “log file parallel write”.

    So the parallelization gain is capped by the number of redo log members (usually two or three).

    Is that correct ?

    [Edited JPL 5th Aug: minor deletion]

    Comment by Alberto Dell'Era — July 19, 2007 @ 4:37 pm BST Jul 19,2007 | Reply

  16. This is what Steve says on his site:

    http://www.ixora.com.au/notes/redo_write_multiplexing.htm

    Alessandro

    [Edited JPL 5th Aug: minor deletion]

    Comment by Alessandro Deledda — July 20, 2007 @ 9:33 am BST Jul 20,2007 | Reply

  17. I would like to pose two more questions on the technical component of the thread. Thanks for the replies earlier – I now understand the slave process thing much better.

    Here is some context for my question (and for my ignorance of the process above). I often help tune a packaged application (usually Siebel) running on an instance that has already been set up by a DBA shop with a set of instance standards. As a result I usually don’t have to do anything with the actual “infrastructure” stuff. As you’d expect, the vast majority of the issues are SQL statements whose plans drive too many LIOs.

    The other very common wait events are log file sync and log file parallel write. There’s lots of good information for diagnosing these in the Oracle Press book on the OWI, on Jonathan’s site, and in many other places. It basically boils down to: “Siebel. Commits. A. Lot.” Although this is annoying to those focused on database waits, if you think about the types of “business transactions” handled in a call center they are pretty small, so it is not exactly a ridiculous design.

    First question is:
    How can I tell if the absence of asynchronous log writer I/O is a problem?

    My guess is that I would see significant log file sync waits in Statspack, but zero log file parallel write waits. I do not see any notes indicating whether you get “parallel writes” even if there is only one write going on at a time, nor do I have the test instance facilities to check this directly myself. Perhaps someone will just know?

    Second question is:
    Is asynch I/O something that should generally be used? It seems to me from the explanation that it is. Now – I know I’m posing a general question which certainly begs a follow-on of “it depends on the specifics”, but think about it this way: you have to make a decision one way or the other when setting up an instance. Should the position be:
    1. “Use asynch I/O unless there is a good reason not to”
    or rather to
    2. “Not use asynch I/O unless there is a good reason to do so”
    ?

    [Edited 5th Aug = JPL: minor deletion]

    Comment by Joe — July 22, 2007 @ 1:50 pm BST Jul 22,2007 | Reply

  18. [...] a heavy workload. This post was inspired for Kevin Closson by another post, by Jonathan Lewis, which triggered a stimulating (for me) [...]

    Pingback by LGWR e ASYNC I/O « Oracle and other — July 24, 2007 @ 2:32 pm BST Jul 24,2007 | Reply

  19. 5th August 2007

    I have deleted several comments relating to a non-technical posting by Don Burleson that has since been removed from his website. I have also edited several posts to make them consistent with the other deletions.

    If you are the author of a post that I have modified and object to the change I have made, please let me know, and I will either delete the entire post (or apply any alteration that you think fit).

    Comment by Jonathan Lewis — August 5, 2007 @ 8:40 pm BST Aug 5,2007 | Reply

  20. Kevin Closson has followed up his notes in comment #9 with a most informative blog entry about LGWR processing.

    Comment by Jonathan Lewis — August 5, 2007 @ 8:42 pm BST Aug 5,2007 | Reply

  21. Kevin Closson has answered my question above in his blog entry – I was wrong, it is possible to have multiple asynch I/O operations in-flight on the same redo log file, not only on different redo log members belonging to the same redo log group. Thanks Kevin!

    Comment by Alberto Dell'Era — August 8, 2007 @ 10:18 am BST Aug 8,2007 | Reply

  22. [...] an earlier article on Statspack I quoted some figures taken from an article by Don Burleson: Here is an example of an Oracle 10g [...]

    Pingback by Analysing Statspack(8) « Oracle Scratchpad — November 10, 2007 @ 2:38 am BST Nov 10,2007 | Reply

  23. [...] with 10.2.  Page 158 has been addressed by other contributors on the Oracle OTN forums (reference reference2) [...]

    Pingback by Book Review: Oracle Tuning: The Definitive Reference Second Edition « Charles Hooper's Oracle Notes — November 10, 2010 @ 2:41 am BST Nov 10,2010 | Reply

  24. [...] with that of the book author?  For an example of what I am trying to uncover, take a look at this blog article.  The last four “Top 5″ sections are from the AskTom website – any opinions on [...]

    Pingback by Waiting for a Long Time – What is Going On? « Charles Hooper's Oracle Notes — December 7, 2010 @ 3:27 pm BST Dec 7,2010 | Reply

  25. [...] Original: http://jonathanlewis.wordpress.com/2007/07/14/analysing-statspack-6/ [...]

    Pingback by xpchild » Analysing Statspack 6 — July 10, 2011 @ 6:21 am BST Jul 10,2011 | Reply

  26. […] ! Needless to say how much I found myself nodding along while reading this post by Jonathan Lewis and the associated comment. The minimum default value of log_buffer is 2M in 10g. In fact it depends, among other things, on […]

    Pingback by Règlement de comptes sur fond de 10g ! | ArKZoYd — August 7, 2013 @ 9:08 am BST Aug 7,2013 | Reply

