
Back to Contents Page

Data Center Bridging (DCB): Broadcom NetXtreme II® Network Adapter User Guide

Overview

DCB Capabilities

Configuring DCB

DCB Conditions

Data Center Bridging in Windows Server 2012

Overview

Data Center Bridging (DCB) is a collection of IEEE-specified standard extensions to Ethernet that provide lossless data delivery,
low latency, and standards-based bandwidth sharing of data center physical links. DCB allows storage, management,
computing, and communications fabrics to converge onto a single physical fabric that is simpler to deploy, upgrade, and maintain
than standard Ethernet networks. With standards-based bandwidth sharing at its core, DCB allows multiple fabrics to coexist on
the same physical fabric. The capabilities of DCB allow LAN traffic (a large number of flows that are not latency-sensitive),
SAN traffic (large packet sizes that require lossless delivery), and IPC (latency-sensitive messages) to share bandwidth on the
same converged physical connection while each achieves its desired traffic performance.

DCB includes the following capabilities:

Enhanced Transmission Selection (ETS)
Priority-based Flow Control (PFC)
Data Center Bridging Capability eXchange Protocol (DCBX)

DCB Capabilities

Enhanced Transmission Selection (ETS)

Enhanced Transmission Selection (ETS) provides a common management framework for assigning bandwidth to traffic
classes. Each traffic class or priority can be grouped into a Priority Group (PG), which can be considered a virtual link or
virtual interface queue. The transmission scheduler in the peer is responsible for maintaining the allocated bandwidth for each
PG. For example, a user can configure FCoE traffic to be in PG 0 and iSCSI traffic in PG 1, and then allocate a certain
bandwidth to each group, such as 60% to FCoE and 40% to iSCSI. In the event of congestion, the transmission scheduler in the
peer will ensure that the FCoE traffic can use at least 60% of the link bandwidth and the iSCSI traffic at least 40%.
See additional references at http://www.ieee802.org/1/pages/802.1az.html.
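
To make the bandwidth numbers above concrete, the following Python sketch models how an ETS scheduler could honor the
60%/40% guarantees under congestion. It is a conceptual illustration only, not the adapter's actual hardware scheduler; the
link speed, group names, and single-pass redistribution logic are assumptions made for the example.

    # Conceptual model of ETS minimum-bandwidth guarantees (illustration only;
    # the real scheduling is performed in hardware by the adapter and its link peer).
    def ets_allocate(link_bw_gbps, groups):
        """groups maps a Priority Group name to its ETS share (percent) and its
        offered load (Gbps). Returns the bandwidth each PG receives: every PG is
        guaranteed its share under congestion, and bandwidth a PG does not use is
        redistributed (single pass) to PGs that still have demand."""
        alloc = {}
        spare = 0.0
        for name, g in groups.items():
            guaranteed = link_bw_gbps * g["share"] / 100.0
            alloc[name] = min(g["offered"], guaranteed)   # give each PG up to its guaranteed share
            spare += guaranteed - alloc[name]             # bandwidth this PG left unused
        demand = {n: groups[n]["offered"] - alloc[n]
                  for n in groups if groups[n]["offered"] > alloc[n]}
        total_share = sum(groups[n]["share"] for n in demand) or 1
        for name, want in demand.items():
            alloc[name] += min(want, spare * groups[name]["share"] / total_share)
        return alloc

    # The example from the text on a 10 Gbps link: FCoE in PG 0 with 60%,
    # iSCSI in PG 1 with 40%, both offering more traffic than the link can carry.
    print(ets_allocate(10, {
        "FCoE":  {"share": 60, "offered": 9.0},
        "iSCSI": {"share": 40, "offered": 6.0},
    }))   # -> {'FCoE': 6.0, 'iSCSI': 4.0}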

Priority Flow Control (PFC)

Priority Flow Control (PFC) provides a link-level flow control mechanism that can be controlled independently for each traffic
type. The goal of this mechanism is to ensure zero loss due to congestion in DCB networks. Traditional IEEE 802.3 Ethernet
does not guarantee that a packet transmitted on the network will reach its intended destination; upper-level protocols are
responsible for maintaining reliability by way of acknowledgment and retransmission. In a network with multiple traffic
classes, it becomes very difficult to maintain traffic reliability in the absence of feedback, so this is traditionally handled
with link-level flow control.

When PFC is used in a network with multiple traffic types, each traffic type can be encoded with a different priority value, and
a pause frame can refer to this priority value when instructing the transmitter to stop and restart the traffic. The priority
field ranges from 0 to 7, allowing eight distinct types of traffic that can be individually stopped and started.
See additional references at http://www.ieee802.org/1/pages/802.1bb.html.
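
As an illustration of how a single pause frame addresses the eight priority values independently, the following Python sketch
lays out the PFC frame format defined by IEEE 802.1Qbb: a MAC Control frame (EtherType 0x8808) with opcode 0x0101, a
priority-enable vector, and eight per-priority pause timers. In practice the adapter generates and processes these frames in
hardware; the source MAC address and pause values used here are placeholder assumptions for the example.

    # Sketch of the IEEE 802.1Qbb PFC (per-priority PAUSE) frame layout.
    import struct

    PFC_DA = bytes.fromhex("0180c2000001")   # reserved multicast address for MAC Control frames
    MAC_CONTROL_ETHERTYPE = 0x8808
    PFC_OPCODE = 0x0101

    def build_pfc_frame(src_mac, pause_quanta):
        """pause_quanta maps a priority (0-7) to a pause time in 512-bit-time
        quanta; a value of 0 restarts (un-pauses) that priority. Priorities not
        listed are left untouched because their enable bit stays clear."""
        enable_vector = 0
        times = [0] * 8
        for prio, quanta in pause_quanta.items():
            enable_vector |= 1 << prio        # bit set = the timer for this priority is valid
            times[prio] = quanta
        payload = struct.pack("!HHH8H", MAC_CONTROL_ETHERTYPE, PFC_OPCODE,
                              enable_vector, *times)
        # Destination + source MAC + payload, padded to the 60-byte minimum (FCS excluded).
        return (PFC_DA + src_mac + payload).ljust(60, b"\x00")

    # Pause priority 3 (for example, a lossless FCoE/iSCSI class) for the maximum
    # 0xFFFF quanta; the other seven priorities keep flowing. The source MAC is a
    # made-up locally administered address.
    frame = build_pfc_frame(bytes.fromhex("020000000001"), {3: 0xFFFF})
    print(frame.hex())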

Data Center Bridging eXchange (DCBX)
