Target rate limiting is enforced on all targets that are operating at a speed lower than that of
the target with the highest speed. If the driver is unable to determine a remote port’s speed, 1
Gbps is assumed. You can change the default speed using BCU commands. Target rate limiting
protects only FCP write traffic.
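
As an illustration only, the assumed default speed for remote ports of unknown speed can be
adjusted through the BCU rate-limit facility. The command form below is an assumption based on
the general BCU syntax used in this manual; verify the exact options against the BCU command
help or the Brocade Adapters Administrator's Guide for your release before use.

   # Illustrative only -- confirm the exact BCU syntax for your driver release.
   # Set the assumed default speed for adapter port 1/0 to 2 Gbps.
   bcu ratelim --defspeed 1/0 2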

vHBA
Virtual HBAs (vHBAs) are virtual port partitions that appear as virtual or logical HBAs to the
host operating system. Multiple vHBAs are not supported, so you cannot create or delete them
on an adapter. For more information, refer to “I/O virtualization” on page 16.

Data Center Bridging and Ethernet features

Brocade CNAs and Fabric Adapter ports configured in CNA or NIC mode support the following Data
Center Bridging (DCB) and Ethernet networking features:

10 Gbps full-duplex throughput per port

1500 or 9600 byte (jumbo) frames
Jumbo frames allow data to be transferred with less overhead, reduce CPU utilization, and
increase throughput. Mini-jumbo frames are required to encapsulate FCoE frames on DCB.
Network administrators can change the jumbo packet size from the default setting using host
operating system commands, as described in Appendix A, “Adapter Configuration” (see the
example following the note below). Note that the MTU size applies to the network configuration
only; internally, the hardware is always configured to support the mini-jumbo frame size that
FCoE frames require.

NOTE

The jumbo frame size set for the network driver must not exceed the jumbo frame setting on the
attached FCoE switch; otherwise, the switch cannot accept jumbo frames.
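
For example, on a Linux host the jumbo packet size can typically be changed with the standard
iproute2 tool; the interface name eth2 is a placeholder, and Appendix A, “Adapter Configuration,”
remains the authoritative per-operating-system procedure.

   # Set a 9000-byte jumbo MTU on the adapter Ethernet interface (eth2 is a placeholder;
   # per the list above, the adapter supports frames up to 9600 bytes).
   ip link set dev eth2 mtu 9000
   # Verify the new setting.
   ip link show dev eth2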

Simple Network Management Protocol (SNMP)
SNMP is an industry-standard method of monitoring and managing network devices. Brocade
CNAs and Fabric Adapter ports configured in CNA or NIC mode provide agent and MIB support
for SNMP. For more information, refer to “Simple Network Management Protocol” on page 34.
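
As a sketch, once SNMP agent support for the adapter is installed and the host SNMP service is
running, standard Net-SNMP tools can poll the adapter's interface statistics; the community
string and target host below are placeholders for whatever your SNMP configuration uses.

   # Query the standard interface table on the local host (community string is a placeholder).
   snmpwalk -v 2c -c public localhost IF-MIB::ifTable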

Checksum/CRC offloads for FCoE packets, IPv4/IPv6 TCP and UDP packets, and the IPv4 header
Checksum offload enables the CNA, rather than the host CPU, to compute checksums for TCP
and UDP packets and the IPv4 header, which saves host CPU cycles. The CPU utilization savings
for TCP checksum offload can range from a few percent with an MTU of 1500 up to 10-15 percent
with an MTU of 9000; the greatest savings are provided for larger packets.
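
On a Linux host, for instance, the driver's checksum offload settings can usually be inspected
and toggled with ethtool; eth2 is a placeholder interface name, and the exact feature names
reported vary by driver and kernel version.

   # Show the current offload settings (look for rx-checksumming and tx-checksumming).
   ethtool -k eth2
   # Enable receive and transmit checksum offload.
   ethtool -K eth2 rx on tx on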

Data Center Bridging Capability Exchange Protocol (DCBCXP) (802.1)
Used between a CNA, or a Fabric Adapter port configured in CNA mode, and the FCoE switch to
exchange configuration with directly connected peers. DCBCXP uses LLDP to exchange
parameters between the two link peers.
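
As an illustration, on a Linux host running the open-lldp (lldpad) agent, the LLDP/DCBX TLVs
advertised by the directly connected FCoE switch can be displayed with lldptool. Whether DCBX
is handled by a host agent or by the adapter firmware depends on the driver and configuration,
so treat this strictly as a sketch; eth2 is a placeholder.

   # Display the LLDP TLVs received from the directly connected peer (the FCoE switch).
   lldptool get-tlv -n -i eth2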

Enhanced transmission selection (802.1Qaz)
Provides guidelines for creating priority groups to enable guaranteed bandwidth per group.
More important storage data traffic can be assigned higher priority and guaranteed bandwidth
so it is not stalled by less-important traffic.
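
As a purely illustrative calculation (the actual priority groups and percentages are set by your
DCB configuration, typically on the FCoE switch): if the priority group carrying storage traffic
is guaranteed 60 percent of a 10 Gbps link, then under congestion

   Guaranteed storage bandwidth     = 10 Gbps x 60% = 6 Gbps
   Bandwidth shared by other groups = 10 Gbps x 40% = 4 Gbps

and when the storage group is idle, the other groups may use its unused share.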
