Oracle Scratchpad

June 9, 2013

SQL*Net Compression – 2

Filed under: Infrastructure,Oracle — Jonathan Lewis @ 2:53 pm BST Jun 9,2013

I wrote a note a few years ago about SQL*Net compression (this will open in a new window so that you can read the posts concurrently), showing how the order of the data returned by a query could affect the amount of network traffic. An example in the note demonstrated, using autotrace statistics, that the number of bytes transferred could change dramatically if you sorted your return data set. At the time I asked, and postponed answering, the question: “but how come the number of SQL*Net round trips has not changed?”

A couple of weeks ago someone asked me if I had ever got around to answering this question – and I hadn’t. So I started writing an answer as a comment, then decided it was getting a little long and transferred it to a separate blog note – and here it is.

We can start with the two sets of stats:

Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        143  consistent gets
          0  physical reads
          0  redo size
     425479  bytes sent via SQL*Net to client
        494  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10000  rows processed

Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        133  consistent gets
          0  physical reads
          0  redo size
      79287  bytes sent via SQL*Net to client
        494  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed

Basically, the answer comes from interpreting the names of the statistics correctly.

The “SQL*Net roundtrips to/from client” figure is the number of matched pairs of waits for “SQL*Net message from client”/“SQL*Net message to client” – which is controlled (in this example) very strongly by the array fetch size and the total number of rows. (Note: we have arraysize = 1000 and rows returned = 10,000, so the number of round trips is very close to 10,000 / 1,000.) This is exactly how the statistic is described in the 11.2 SQL*Plus User’s Guide and Reference (page down a bit from the targeted heading), which says: “Total number of Oracle Net messages sent to and received from the client” – unfortunately this description is easy to misunderstand.
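
For anyone wanting to re-create the test, here’s a minimal sketch of the SQL*Plus settings involved – the table name t1 is hypothetical, standing in for the 10,000 row test table from the original note:

set arraysize 1000
set autotrace traceonly statistics

select * from t1;                 -- data in (nearly) random order
select * from t1 order by 1;      -- same data, sorted for maximum de-duplication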

If we enable extended SQL tracing (event 10046 / dbms_monitor) we can see that there is more activity going on than is reported by set autotrace traceonly statistics.
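
Here’s a quick sketch of one way to switch the trace on and off for your own session – level 8 of event 10046 includes the wait information, and a call to dbms_monitor.session_trace_enable(waits => true) should have much the same effect:

alter session set events '10046 trace name context forever, level 8';

rem     run the test query here, then:

alter session set events '10046 trace name context off';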

Here’s a sample of waits from the first query (retested on 11.2.0.2) with one blank line inserted to improve clarity:

WAIT #140199974133928: nam='SQL*Net message from client' ela= 319 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699605359
WAIT #140199974133928: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699605402
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 34 driver id=1650815232 #bytes=8155 p3=0 obj#=-40016366 tim=1370786699605478
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 4 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699605528
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699605565
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699605603
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699605640
FETCH #140199974133928:c=0,e=262,p=0,cr=7,cu=0,mis=0,r=1000,dep=0,og=1,plh=3617692013,tim=1370786699605654

WAIT #140199974133928: nam='SQL*Net message from client' ela= 319 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699605993
WAIT #140199974133928: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699606024
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 19 driver id=1650815232 #bytes=8155 p3=0 obj#=-40016366 tim=1370786699606082
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699606127
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699606163
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699606199
WAIT #140199974133928: nam='SQL*Net more data to client' ela= 3 driver id=1650815232 #bytes=8148 p3=0 obj#=-40016366 tim=1370786699606234
FETCH #140199974133928:c=0,e=230,p=0,cr=7,cu=0,mis=0,r=1000,dep=0,og=1,plh=3617692013,tim=1370786699606247

And here’s the equivalent for the second query:

WAIT #140199974133928: nam='SQL*Net message from client' ela= 1187 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699689809
WAIT #140199974133928: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699689893
FETCH #140199974133928:c=0,e=540,p=0,cr=0,cu=0,mis=0,r=1000,dep=0,og=1,plh=2148421099,tim=1370786699690421

WAIT #140199974133928: nam='SQL*Net message from client' ela= 1096 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699691585
WAIT #140199974133928: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=-40016366 tim=1370786699691666
FETCH #140199974133928:c=0,e=506,p=0,cr=0,cu=0,mis=0,r=999,dep=0,og=1,plh=2148421099,tim=1370786699692161

My session data unit (SDU) has defaulted to roughly 8KB and if you convert the driver id (1650815232) to hexadecimal you get 0x62657100, which you can recognise as BEQ, the local (bequeath) connection.
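
If you want to check that conversion, a couple of quick queries will do it (the format masks are standard Oracle; the three high-order bytes 0x62, 0x65, 0x71 are the ASCII codes for ‘b’, ‘e’, ‘q’):

select to_char(1650815232,'XXXXXXXX') from dual;        -- 62657100

select chr(to_number('62','XX')) ||
       chr(to_number('65','XX')) ||
       chr(to_number('71','XX'))   from dual;           -- beq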

When there is little de-duplication available in the data order (first example) the 1,000 rows I fetch on each call total about 48KB, and I end up doing one “SQL*Net message to client” with five follow-up waits on “SQL*Net more data to client”. (The bytes parameter for the “SQL*Net message to client” always shows 1 – Cary Millsap has pointed out that this is a long-standing error, so I’m assuming the call would be passing 8KB like all the other calls.)

When I get maximum de-duplication in the data the entire 1,000 rows manages to compress down to less than the 8KB that’s available and all I see is the single “SQL*Net message to client” with no calls for more data.

So “SQL*Net roundtrips” isn’t measuring what you might think. Based on an informal reading of the description you and I might expect the first test to report far more round trips than the second, but that’s not the way that Oracle reports it: it is literally counting the number of matched pairs of waits for “SQL*Net message to/from client” – the “more data” trips don’t get counted. As a result the same number of round trips can report a hugely (and arbitrarily) different number of bytes.

Footnote

It is unfortunate that I titled this piece and its predecessor with the word “compression”, because SQL*Net does have a compression option (I don’t know which version it appeared in) that can be configured in the sqlnet.ora file through four parameters:

sqlnet.compression = on  |  off
sqlnet.compression_levels = (low | high)    # the parentheses are required
sqlnet.compression_threshold = N            # the number of bytes below which a packet should not be compressed
sqlnet.compression_acceleration = on | off  # if oracle and your hardware support it
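
So a minimal sqlnet.ora fragment to switch compression on might look like the following – the values are purely illustrative, and (as ever) you should test the effect before relying on it:

sqlnet.compression = on
sqlnet.compression_levels = (high)
sqlnet.compression_threshold = 1024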

From some simple tests it looks as if Oracle will deduplicate the data before compressing it.

5 Comments

  1. The difference between “message to client” and “more data to client” is related to what the underlying network protocol is doing.

    TCP/IP sends one “message to client” and keeps on sending more data without waiting (as long as the “send limit” is not reached). Everything is designed to allow sending “more data to client” without delays from latency.

    On the other hand, a new “message to client” cannot start until the old “message” is entirely received and acknowledged. Every new “message to client” (after the first) is delayed by the time it takes for the last packet to get from sender to receiver, and the time for the last acknowledgement to get back to the sender.

    This is what Oracle calls a “round trip”; it corresponds to at least double the latency between sender and receiver.

    Reducing “round trips” is key to reducing network-level delays. This is why larger arraysizes are even more important in distributed processing, and why Oracle finally increased the default SDU size (in 11g, I believe) from 2K.

    The SDU should probably just be the same size as the TCP send buffer (which may also need tuning).

    Comment by Stew Ashton — June 10, 2013 @ 7:26 am BST Jun 10,2013

    • Stew
      I think it may be more complicated than that.

      Starting from the bottom up, this is my understanding:

        TCP has a “maximum transmission unit” (MTU), typically something like 1,500 bytes, though “Jumbo Frames” allows up to 9,000 bytes.

        The TCP send and receive buffers allow the TCP stack to send many packets (of size MTU) without asking for an ACK from the far end until the buffer-size has been exhausted. This removes a very old bottleneck from TCP which used to wait for an ACK for each packet.

        Oracle has a “session data unit” (SDU), which is the maximum size of the data packet that it will send to the remote end of the link (i.e. put onto the TCP stack as a message); if it needs to send more data than this in response to a user call from the remote end it will wait on “SQL*Net more data to client” until the TCP stack has cleared and the TCP protocol has informed it that it can proceed.

      Imagine a user call with an arraysize that demands 60KB of data, where the SDU has been set to the maximum of 32KB, the TCP send and receive buffers have been set to 16KB, and the MTU is 1,500 bytes (all the following arithmetic will be approximate):

        The Oracle server process puts 32KB into the TCP buffer, and goes into “more data to client” (with 28KB left to send).
        TCP sends 11 packets of 1,500 bytes then waits for an ACK.
        TCP sends another 11 packets of 1,500 bytes then waits for an ACK.
        TCP cues the Oracle server process that the send is complete.
        The Oracle server process puts the remaining 28KB into the TCP buffer, and goes into “SQL*Net message from client”.
        TCP sends 11 packets of 1,500 bytes then waits for an ACK.
        TCP sends the last 9 packets of 1,500 bytes then waits for an ACK.

      In “modern” times, some of the layers become invisible because of the way that monitors (such as netstat) count TCP round trips, and because the default sizes for the TCP send and receive buffers seem to be larger than the maximum value for the SDU (and the SDU rarely seems to be set to anything other than the default 8KB).
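
      For reference, a sketch of where the SDU can be set – the values, the tnsnames alias, and the elided connect details are purely illustrative:

      sqlnet.ora:
          default_sdu_size = 32767

      tnsnames.ora:
          orcl = (DESCRIPTION = (SDU = 32767)(ADDRESS = ...)(CONNECT_DATA = ...))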

      Comment by Jonathan Lewis — June 17, 2013 @ 7:38 am BST Jun 17,2013

  2. Jonathan,
    this behaviour is expected to happen when dblinks are used, isn’t it? As in an INSERT … SELECT over a dblink: the data transferred from the remote site, if ordered, could be compressed at the SQL*Net layer.
    Thanks.

    Comment by Johnny — August 14, 2013 @ 10:24 pm BST Aug 14,2013

  3. […] stats although the impact of SQL*Net compression is always worth a quick reminder (here and here) on the other hand the wait events (specifically the wait counts) are most […]

    Pingback by DB links | Oracle Scratchpad — July 3, 2019 @ 1:16 pm BST Jul 3,2019

  4. […] SQL*Net Compression pt. 2 (June 2013): How SQL*Net compression reduces “more data” waits, but not “roundtrip” counts. […]

    Pingback by Infrastructure Catalogue | Oracle Scratchpad — July 22, 2023 @ 10:53 pm BST Jul 22,2023

