
SQL array size, Oracle SDU, network card MTU -> different components for data transfer – a complete reference for the new Oracle DBA

You are a new Oracle DBA and you would like to understand the data transfer that takes place between the Oracle client and the Oracle server.
Data moves between the Oracle client and the Oracle server in the form of network packets, and each network packet has a size. The Oracle client first puts the data into a buffer, and this buffer also has a size (the SDU). Tools such as SQL*Plus, the JDBC driver, and other Java GUI tools can also specify an array size, which tells Oracle how many rows to send at a stretch from the server to the client, or vice versa.
So when transferring data you have several components to consider: array size, SDU, MTU, and so on.

The set of articles below will give you an in-depth understanding of how you can manipulate these values to make sure that the data transfer between the client and the server is optimized. In my case I never had to change the default sizes, as the applications worked correctly with the default values for all the relevant components mentioned above.

Now go ahead and enjoy the articles.

In a SQL*Plus session, SET ARRAYSIZE to some value tells SQL*Plus how many rows to fetch at a time.

I tried it on arz02 but the number of round trips did not change.

Array Size Demo: ARRAYSIZE nnn
The default value of nnn is 15, which is too small for large data transfers. Try larger and larger values of nnn until response improvements become marginal.


Setting ARRAYSIZE to 1 means that you will fetch results one at a time from the server when issuing SELECT statements, i.e. one client-server round trip for each row fetched. This is very bad in terms of performance. Increasing it will be much more efficient but will use more memory. A good value is between 50 and 500; more than 500 uses more memory for very little additional performance gain.
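The arithmetic behind this advice can be sketched in a few lines of Python (a minimal model, not Oracle code): the number of round trips is simply the row count divided by the array size, rounded up.

```python
import math

def fetch_round_trips(total_rows, arraysize):
    """Approximate client-server round trips needed to fetch a result set:
    one trip per batch of `arraysize` rows."""
    return math.ceil(total_rows / arraysize)

# With the old default of 15, fetching 10,000 rows costs ~667 trips;
# raising ARRAYSIZE to 500 cuts that to 20.
for a in (1, 15, 100, 500):
    print(a, fetch_round_trips(10_000, a))
```

Past a few hundred rows per trip, the per-trip saving shrinks toward zero, which is why the sources above recommend stopping around 500.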

AFAIK the default is 100 in 10g while it was 15 in 9i. Someone correct me if I’m wrong.

Another parameter, ARRAYSIZE, lets you specify how many rows SQL*Plus will request from the database server at each fetch. With small rows, you can set a larger ARRAYSIZE. (Note: do not go overboard with ARRAYSIZE. If you set it to too large a value and you are retrieving "slightly large" rows, the result set (arraysize ‘n’ rows x row size ‘m’ bytes) may exceed the SQL*Plus buffer size and you’d get an error in the midst of fetching your data.)

Depends on whether you are using a host-language interface like Pro*C or plain OCI. In Pro*C you will have host arrays set up, and you can play around with the size of those arrays.
Please be aware that, to be efficient, the total size of the array (number of elements times number of bytes per element) should be smaller than the SDU.
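That sizing rule is easy to express as a helper (an illustrative sketch, not part of any Oracle API): given an SDU and an element size, how many host-array elements fit in one Net8 buffer?

```python
def max_array_elements(sdu_bytes, element_bytes):
    """Largest host-array element count whose total size stays under the SDU,
    so one fetch fits in a single Net8 buffer (per the advice above).
    Always returns at least 1, since you must fetch something."""
    return max(1, (sdu_bytes - 1) // element_bytes)

# e.g. the default SDU of 2048 bytes and 100-byte rows -> at most 20 elements
print(max_array_elements(2048, 100))
```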

Where this can be of real benefit is in a Java situation. To set the array size, use the setDefaultRowPrefetch method of the OracleConnection class. However, the size should be determined empirically by setting it and running tests, because this is a case of diminishing returns as the array size grows. The size of network buffers and packets also needs to be factored in when sizing the buffers in SQL.

28.2.2 Standard Fetch Size and Oracle Row Prefetching

Oracle JDBC connection and statement objects allow you to specify the number of rows to prefetch into the client with each trip to the database while a result set is being populated during a query. You can set a value in a connection object that affects each statement produced through that connection, and you can override that value in any particular statement object. The default value in a connection object is 10. Prefetching data into the client reduces the number of round trips to the server.

Similarly, and with more flexibility, JDBC 2.0 allows you to specify the number of rows to fetch with each trip, both for statement objects (affecting subsequent queries) and for result set objects (affecting row refetches). By default, a result set uses the value for the statement object that produced it. If you do not set the JDBC 2.0 fetch size, then the Oracle connection row-prefetch value is used by default.
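The precedence described in the two paragraphs above can be modeled in a few lines (a sketch of the rules, not JDBC itself): a statement-level JDBC 2.0 fetch size wins; otherwise the connection's row-prefetch value applies, which itself defaults to 10.

```python
def effective_fetch_size(stmt_fetch_size=None, conn_prefetch=10):
    """Model of the Oracle JDBC precedence: an explicit statement-level
    fetch size overrides the connection-level row-prefetch default."""
    return stmt_fetch_size if stmt_fetch_size is not None else conn_prefetch

print(effective_fetch_size())        # nothing set -> connection default of 10
print(effective_fetch_size(50))      # statement fetch size wins
```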

For more information, see "Oracle Row Prefetching" and "Fetch Size".

You are looking to optimize the data transfer by adjusting the SDU size,
and want to know when to modify the SDU and when not to.
Tuning your application to reduce the number of round trips across the network  
is the best way to improve your network performance. If this is done, it is also
possible to optimize data transfer by adjusting the size of the session data  
unit (SDU).  
The SDU is a buffer that Net8 uses to place data before transmitting across the network.
Net8 sends the data in the buffer either when requested or when it is full.  
Modify session data unit size when:  
1) The data coming back from the server is fragmented into separate packets.
2) You are on a wide area network (WAN) that has long delays.
3) Your packet size is consistently the same.
4) Large amounts of data are returned.
Do not modify session data unit size when:  
1) Your application can be tuned to account for the delays.
2) You have a higher speed network where the effect of the data transmission  
   is negligible.
3) Your requests return small amounts of data from the server.                                                        

The values for SDU range from 512 to 32767 bytes, with a default of 2048 bytes if you do not specify a value.

You have to set the same value in both listener.ora and tnsnames.ora. If the SDU values in the two files differ, the lower of the two is used.
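In practice, this means an SDU clause on both sides of the connection. A sketch of the two files follows; the service name, host, and paths are placeholders, not taken from any real system:

```
# tnsnames.ora (client side)
SALES =
  (DESCRIPTION =
    (SDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales.example.com))
  )

# listener.ora (server side)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 8192)
      (SID_NAME = sales)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )
```

If one side still carries the default of 2048, that lower value is what the connection will use, regardless of the other side.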

To minimize packet header overhead and message fragmentation, set the SDU size as a multiple of the MSS (Maximum Segment Size). When Oracle Advanced Security encryption is not used, increase the SDU size by one (1). For example, the TCP/IP version 4 MSS on Ethernet is 1460 bytes. Use a multiple of 1460 for the SDU size if encryption is used. If encryption is not used, increase the SDU size to 1461.
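The sizing guideline and the negotiation rule together look like this in a small sketch (the function names are illustrative, not an Oracle API):

```python
def suggested_sdu(mss=1460, multiple=1, encrypted=True):
    """Per the guideline above: make the SDU a multiple of the TCP MSS,
    and add one byte when Oracle Advanced Security encryption is NOT used.
    The SDU is capped at 32767 bytes."""
    sdu = mss * multiple
    if not encrypted:
        sdu += 1
    return min(sdu, 32767)

def negotiated_sdu(client_sdu, server_sdu):
    """Both endpoints propose a size; the connection uses the lower one."""
    return min(client_sdu, server_sdu)

print(suggested_sdu(encrypted=False))   # Ethernet MSS, no encryption -> 1461
print(negotiated_sdu(8192, 2048))       # one default side drags it to 2048
```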

  1. Our server has a gig NIC in it and was connected to a gig switch; call it Switch A. Switch A was connected to a wall jack, which led to a patch panel. At the patch panel, Switch A was routed to an input port on Switch B, which was a 100 Mb switch. We finally traced this path and noticed a fault light blinking on Switch B. We took our production system down for 30 minutes and routed the new server directly to the wall jack and to Switch B. A file copy between client and server immediately caused a fault light on Switch B. So we went into the server and set the gig NIC to 100 Mb full duplex. File copy was blindingly fast, with no fault light. We brought the database back up and allowed users to begin work. System performance is exactly what I was hoping for on the new server: my 1 min 16 sec report took less than 2 secs.

The reason why our developer machine was so fast was that she was connected to Switch B. If we moved her to anywhere but Switch B, she got the same poor performance as everyone else.

  1. Thanks for the response. In fact, I have never faced this kind of problem before. I’m trying to understand why the export operation was so fast on the local server, and why the export from a remote machine took so long. 2 or 3 minutes would be acceptable, but 1 hour? For 5 years we have been working with Oracle 10g installed on another server, and this kind of problem never happened. The development team has been taking exports (exp) and imports (imp) with the default buffer size since then, without problems:

exp user/password@service file=file.dmp …

The company’s network analyst ran some tests using the Linux “scp” command to copy big files from the new Oracle 11g server to another server and vice versa, without problems. The copy speed was as expected (10 MB/s) on a 100 Mbps network.

The SDU/TDU SQL*Net configuration uses the default values. The MTU of the LAN is fine too (1500 bytes).

So I’m not sure how to find out what could be causing this delay or contention. Maybe by looking at Linux-kernel network parameters? Maybe by running a test, like a direct crossover-cable connection between the server and a client machine?

Tnsping DARWIN 999

OK (0 msec)

OK (10 msec)

  1. I only got acceptable export speed when I changed the default buffer size to a high value:

exp user/password buffer=100000000

So, do you think there is some OS kernel parameter that I should review?

The best approach is to instrument your client application and measure it yourself. Server-side tracing and wait events don’t show you the complete picture (for example, "SQL*Net message to client" doesn’t measure the time it took to actually deliver the message to the client; it is rather just the time it took to put the message into the TCP send buffer).

There are tools which can capture and digest SQL*Net traffic for analysis, but none of them are free for now (this will likely change in the coming months).

As I wrote in my previous post, "SQL*Net message to client" does NOT measure network latency! It merely measures how long it took to put the response message into the TCP send buffer!

Once the response packet is put into the TCP send buffer, the Oracle server process starts waiting for "SQL*Net message FROM client" again. It is up to the TCP stack to deliver the packet from that point on, and the Oracle server process has no way of directly measuring when the packet arrived (it would have to intercept TCP ACK packets at kernel level for that).

This behaviour also explains why the "SQL*Net message TO client" waits are usually unbelievably short, like 1 microsecond. Light in a vacuum travels only 300 meters in one microsecond, and a signal in a cable travels even less; yet when sending a single packet from London to Hong Kong we see few-microsecond "response times"…


solaris02$ fgrep "SQL*Net message to client" sol10g_ora_837.trc | head
WAIT #8: nam='SQL*Net message to client' ela= 2 driver #bytes=1 p3=0 obj#=-1 tim=16418611294
WAIT #7: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16418644515
WAIT #5: nam='SQL*Net message to client' ela= 2 driver #bytes=1 p3=0 obj#=-1 tim=16428803213
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16428880945
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16428927443
WAIT #5: nam='SQL*Net message to client' ela= 0 driver #bytes=1 p3=0 obj#=-1 tim=16428973661
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16429019250
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16429066742
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16429114761
WAIT #5: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16429162471
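Trace lines of this shape are easy to digest mechanically. A small sketch (the parsing is mine, only the line format comes from the trace above) that sums the ela= fields, in microseconds, for one wait event:

```python
import re

# Two 10046-trace lines in the format shown above
trace = [
    "WAIT #8: nam='SQL*Net message to client' ela= 2 driver #bytes=1 p3=0 obj#=-1 tim=16418611294",
    "WAIT #7: nam='SQL*Net message to client' ela= 1 driver #bytes=1 p3=0 obj#=-1 tim=16418644515",
]

def total_ela(lines, event="SQL*Net message to client"):
    """Sum the ela= (elapsed microseconds) fields for one wait event --
    remembering that for this event it is time-to-TCP-buffer, not latency."""
    pat = re.compile(r"nam='([^']+)' ela=\s*(\d+)")
    return sum(int(m.group(2)) for m in map(pat.search, lines)
               if m and m.group(1) == event)

print(total_ela(trace))  # 3
```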

Note that you can see longer waits for "SQL*Net message to client" when sending large amounts of data. This happens when your TCP send buffer gets full, so the TCP stack cannot accept further packets. Sending is blocked until the remote side sends back ACK packets stating up to which byte in the TCP transmission stream it has received the data.

So, if you’re sending loads of data over a slow link or TCP connection, the SQL*Net message to client wait time can be used as a low-confidence indicator of your SQL*Net throughput, but never a measure of network latency!

As per your other question about benchmarking – you do not need to restart your instance to reset the wait times to zero. You just need to store snapshots of your V$SESSION_EVENT & V$SESSTAT views and calculate deltas appropriately. If you want tools for it, check for Snapper and Sesspack on my blog.
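The snapshot-and-delta idea is simple enough to sketch without any tooling (an illustration of the technique, not Snapper or Sesspack themselves): take two copies of the cumulative counters and subtract.

```python
def event_deltas(before, after):
    """Delta two snapshots of V$SESSION_EVENT-style {event: time_waited}
    dicts instead of bouncing the instance to reset the counters."""
    return {ev: after[ev] - before.get(ev, 0) for ev in after}

# Hypothetical snapshots taken before and after the workload of interest
snap1 = {"SQL*Net message from client": 1_000, "SQL*Net message to client": 40}
snap2 = {"SQL*Net message from client": 1_250, "SQL*Net message to client": 55}
print(event_deltas(snap1, snap2))
# {'SQL*Net message from client': 250, 'SQL*Net message to client': 15}
```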


There is no single optimal TCP setting which is best in all situations.

However, most operating systems’ default TCP send and receive buffer settings are too low for high throughput on WAN networks (which have long packet round-trip times, RTT).

So if you want high WAN throughput then you need to increase those buffers, usually by changing kernel TCP parameters (names dependent on your OS).

However, it’s not as simple as that; there are more parameters to consider, like TCP congestion window size, packet loss and retransmission, congestion collapse, etc. If you just set the buffers too high, you might end up with even lower throughput, or lots of thrashing due to packet retransmits. Also, if you have thousands of open TCP connections, you don’t want each of them using too much memory (the buffers use kernel memory). Solaris, for example, allows you to set the TCP buffer size based on the TCP connection endpoints, so all your LAN connections can use small buffers while the WAN ones use large ones.
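The classic rule for sizing those buffers is the bandwidth-delay product, which can be sketched directly (standard networking arithmetic, not specific to Oracle):

```python
def tcp_buffer_for_throughput(bandwidth_bits_per_s, rtt_seconds):
    """Bandwidth-delay product: the minimum TCP send/receive buffer (bytes)
    needed to keep a link of the given bandwidth full at the given RTT."""
    return int(bandwidth_bits_per_s * rtt_seconds / 8)

# A 100 Mbit/s WAN with 200 ms RTT needs ~2.5 MB buffers -- far above
# typical OS defaults, which is why WAN throughput suffers out of the box.
print(tcp_buffer_for_throughput(100_000_000, 0.200))  # 2500000
```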

Before changing anything, I strongly recommend getting to know how TCP protocol works (it turned out much more powerful and complex than I had thought when I started studying it). Google for "TCP performance tuning" for example or see those links:

There are two ways for doing this, one is using tcp_host_param (which I used):

Another option is to use route commands for setting per-route TCP buffer sizes:

One row of the table is about 1 KB in size. The measurements were:

Rows    Bytes     Time (ms)   Round trips
   1    ~1024         2             4
   2     1936         1             6
  11     6829         2            24
 100    73267        14           202
 200   145077        29           402
 300   222137        45           602
 587   441795        81          1176
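One possible reading of these numbers: with an array size of 1, each fetched row costs two round trips, plus two trips of per-query overhead. A short check (the model is my interpretation; the data points are from the measurements above):

```python
def observed_trips(rows):
    """Model: two round trips per row (fetch + response) plus two trips
    of per-query overhead, matching the measurements above."""
    return 2 * rows + 2

for rows, trips in [(1, 4), (2, 6), (11, 24), (100, 202),
                    (200, 402), (300, 602), (587, 1176)]:
    assert observed_trips(rows) == trips
print("model matches all seven measurements")
```

This is exactly the row-at-a-time behaviour the ARRAYSIZE discussion earlier warns about.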

SDU – WAN optimisations

October 15, 2008

We had a meeting yesterday about the performance of an application across the WAN to the more remote regions of the world.

The upshot of the meeting was that the latency and bandwidth of the network were fine but that the client-server network communications could be optimised.

My initial reaction was that the main influence was arraysize/fetchsize, but there are two main influences on Oracle client – Oracle server communications, and as the fetchsize had already been increased explicitly, we might get some benefit from optimising the SDU settings – something that I wasn’t aware of until I started investigating/reading further today:

  • arraysize/fetchsize – the number of rows that the client will fetch from the database at one time.
  • SDU – Session Data Unit: a buffer that Oracle Net uses to place data before transmitting it across the network. Oracle Net sends the data in the buffer either when requested or when it is full.
    (TDU – Transport Data Unit – cannot be configured in Oracle v8 onwards)

From the Oracle documentation: “The SDU value for network transmission is negotiated between the sender and receiver sides of the connection, and the minimum SDU value of the two endpoints is used for any individual connection.” In an Oracle client – Oracle server stack, “the receiving sqlnet.ora can include the DEFAULT_SDU_SIZE parameter. The receiving side listener.ora must indicate the SDU change for the SID. The sending side tnsnames.ora file must include the SDU modification for the particular service.”

In other words, for a connection to have an SDU size greater than 2048, both the client and the server must specify the larger value for the SDU. The connection will then use the lower of the two values.

In an Oracle client negotiation via sqlnet.ora, you can raise the trace levels and watch the negotiation between client and server to see what SDU will be used.

Configuring Session Data Unit:


The MTU for Solaris is 1500 bytes, but ifconfig can help you check the value for your interface.

What happens is this: your session assembles a SQL*Net packet. This is simply a sequence of bytes in your array (SQL*Net doesn’t care whether your array size is 1 or 10 or 100 or whatever).

As soon as the size exceeds the SDU (default 2048 bytes), SQL*Net will fragment it when sending it to your communication driver or network card. This is why you see ‘SQL*Net more data to client’: SQL*Net cannot send everything in one packet. You can see how big these packets are by enabling SQL*Net trace at level 16.

When the (fragmented) packet is bigger than the MTU of the network card, the network card will fragment it again. As this happens outside Oracle, it will not show up in any Oracle statistic. It may show up in netstat, but I’m a database guy, so I don’t know.
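The two fragmentation layers described above can be sketched as simple arithmetic (an illustration of the mechanism, using the default sizes quoted in this article):

```python
import math

def sqlnet_fragments(payload_bytes, sdu=2048):
    """SQL*Net splits a reply larger than the SDU into SDU-sized buffers
    (the 'SQL*Net more data to client' waits)."""
    return math.ceil(payload_bytes / sdu)

def ip_fragments(sdu=2048, mtu=1500):
    """Each SDU buffer larger than the MTU is fragmented again below Oracle,
    invisibly to Oracle statistics."""
    return math.ceil(sdu / mtu)

# A 10 KB reply with the default SDU of 2048 and an Ethernet MTU of 1500:
print(sqlnet_fragments(10_240))  # 5 SQL*Net packets
print(ip_fragments())            # each one split into 2 network frames
```

This also motivates the recommendation below to make the SDU a multiple of the MTU: it avoids sending a final, nearly empty network frame for every SQL*Net buffer.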

Two general recommendations (independent of your current problem):
1) Do not bump the array size in your ODBC driver (or whatever configuration item you have) into the sky.
2) Set the SDU to a multiple of the MTU. As the maximum SDU is 32K and the MTU is usually 1500 or 1508, this comes to 30000 bytes.

I have tested this in the past. In my case the ideal array size shouldn’t exceed 20.

Sybrand Bakker

