
•  For additional information regarding your Host Ethernet Adapter, see your specification manual and the Performance Management page for future white papers regarding iSeries and HEA.

•  1 Gigabit jumbo frame Ethernet enables 12% greater throughput compared to normal frame 1 Gigabit Ethernet, although this may vary significantly based on your system, network, and workload attributes. Measured 1 Gigabit jumbo frame Ethernet throughput approached 1 Gigabit/sec.

•  The jumbo frame option requires 8992-byte MTU support by all of the network components, including switches, routers, and bridges. For the System Adapter configuration, LINESPEED(*AUTO) and DUPLEX(*FULL) or DUPLEX(*AUTO) must also be specified. To confirm that jumbo frames have been successfully configured throughout the network, use NETSTAT option 3 to "Display Details" for the active jumbo frame network connection.
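As a minimal sketch, the line description might be created as follows. The line and resource names (JUMBOETH, CMN05) are hypothetical, and MAXFRAME(8996) is assumed to be the frame size that corresponds to the 8992-byte MTU:

    /* Create a 1 Gigabit Ethernet line with jumbo frames enabled */
    CRTLINETH LIND(JUMBOETH) RSRCNAME(CMN05) +
              LINESPEED(*AUTO) DUPLEX(*FULL) +
              MAXFRAME(8996)

After the interface is started, NETSTAT option 3 ("Display Details") shows the MTU actually in effect for the connection.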

•  Using *ETHV2 for the "Ethernet Standard" attribute of CRTLINETH may yield a slight performance increase in streaming workloads for 1 Gigabit lines.
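For example (assuming ETHSTD is the parameter behind the "Ethernet Standard" prompt, and reusing the hypothetical names from the sketch above):

    /* Same line as above, using the Ethernet Version 2 standard */
    CRTLINETH LIND(JUMBOETH) RSRCNAME(CMN05) +
              LINESPEED(*AUTO) DUPLEX(*FULL) +
              MAXFRAME(8996) ETHSTD(*ETHV2)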

•  Always ensure that the entire communications network is configured optimally. The maximum frame size parameter (MAXFRAME on the LIND) should be maximized. The maximum transmission unit (MTU) size parameter (CFGTCP command) for both the interface and the route affects the actual size of the line flows and should be configured to *LIND and *IFC respectively. Having configured a large frame size does not negatively impact performance for small transfers. Note that both the System i and the other link station must be configured for large frames; otherwise, the smaller of the two maximum frame size values is used in transferring data. Bridges may also limit the maximum frame size.
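A minimal sketch of the corresponding TCP/IP configuration (the IP addresses are hypothetical; CFGTCP options 1 and 2 prompt the same ADDTCPIFC and ADDTCPRTE commands):

    /* Interface MTU follows the line description's frame size */
    ADDTCPIFC INTNETADR('10.1.1.10') LIND(JUMBOETH) +
              SUBNETMASK('255.255.255.0') MTU(*LIND)

    /* Route MTU follows the interface it is bound to */
    ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('10.1.1.1') MTU(*IFC)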

•  When transferring large amounts of data, maximize the size of the application's send and receive requests. This is the amount of data that the application transfers with a single sockets API call. Because the sockets layer does not block up multiple application sends into larger requests, it is important to block the data in the application if possible.

•  With the CHGTCPA command, the TCPRCVBUF and TCPSNDBUF parameters let you alter the TCP receive and send buffers. When transferring large amounts of data, you may experience higher throughput by increasing these buffer sizes up to 8 MB. The exact buffer size that provides the best throughput will depend on several network environment factors, including the types of switches and systems, ACK timing, error rate, and network topology. In our test environment we used 1 MB buffers. Read the help for this command for more information.
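For instance, the 1 MB buffers used in the test environment above would be set as follows (values are in bytes; tune them for your own network):

    /* Raise the TCP receive and send buffers to 1 MB */
    CHGTCPA TCPRCVBUF(1048576) TCPSNDBUF(1048576)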

•  Application time for transfer environments, including accessing a database file, decreases the maximum potential data rate. Because the CPU has additional work to process, a smaller percentage of the CPU is available to handle the transfer of data. Also, serialization from the application's use of both database and communications will reduce the transfer rates.

•  TCP/IP Attributes (CHGTCPA) now includes a parameter to set the TCP closed connection wait time-out value (TCPCLOTIMO). This value indicates the amount of time, in seconds, for which a socket pair (client IP address and port, server IP address and port) cannot be reused after a connection is closed. Normally it is set to at least twice the maximum segment lifetime. For typical applications the default value of 120 seconds, limiting the system to approximately 500 new socket pairs per second, is fine. Some applications such as primitive communications benchmarks work best if this setting reflects a value closer to twice the true maximum segment lifetime. In these cases a setting of
IBM i 6.1 Performance Capabilities Reference - January/April/October 2008
© Copyright IBM Corp. 2008
Chapter 5 - Communications Performance
73
