White paper – QLogic 8200 Series The Value of Full Hardware Offload in a Converged Ethernet Environment User Manual


HSG-WP10004

SN0330923-00 rev. B 09/11


Most SANs today are built using Fibre Channel technology, which offers
a highly reliable, robust, and mature storage protocol. The protocol
meets the data integrity and performance requirements of enterprise
data center customers running critical applications and enterprise
storage solutions. However, there is a fast-emerging new standard,
Fibre Channel over Ethernet (FCoE), which promises to bring the
data center trend of consolidation to the network. FCoE provides a
direct mapping of Fibre Channel onto Ethernet, enabling Fibre Channel
traffic to be natively transported over ubiquitous Ethernet networks.
Migrating to FCoE brings additional benefits, such as I/O consolidation
and lower management costs, while preserving investments: existing
Fibre Channel knowledge and management tools apply directly to FCoE.
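The "direct mapping" above simply means a complete Fibre Channel frame is carried as the payload of an Ethernet frame with its own EtherType (0x8906). The following is a simplified, illustrative sketch of that encapsulation; the 13-byte version/reserved field and the SOF/EOF delimiter codes follow the FC-BB-5 layout, but the values chosen here are examples, not a production encoder:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE


def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame.

    Simplified layout (Ethernet FCS and any VLAN tag omitted):
      Ethernet header | version + reserved (13 bytes) | SOF |
      FC frame | EOF | reserved (3 bytes)
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])  # version nibble 0 + reserved, then SOF
    trailer = bytes([eof]) + bytes(3)       # EOF delimiter + reserved padding
    return eth_header + fcoe_header + fc_frame + trailer
```

Because the FC frame travels intact inside the Ethernet payload, existing Fibre Channel zoning, naming, and management semantics survive the transport change.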

When deciding on an FCoE solution, it is important to know that you
have two choices: software initiators or offload engines. While
software initiators are a low-cost way for an organization to explore
the benefits of FCoE SANs using existing 10GbE NICs in servers,
offload engines are specialized adapters (Converged Network Adapters)
designed for concurrent I/O support, conserving precious CPU cycles
for applications, services, and virtual server environments. In
addition, offload engines can be used to address emerging performance
requirements.

Besides cost, three other factors need to be balanced, along with
other considerations, when determining the proper interconnect for an
FCoE-enabled data center: performance, reliability, and scalability.
The questions many administrators will be asking include the following:

- To achieve I/O consolidation and cost reduction, should I use
  low-cost FCoE software initiators or the more expensive Converged
  Network Adapters with built-in processors that offload work from the CPU?

- What are the tradeoffs of saving money with a NIC versus a Converged
  Network Adapter?

This white paper discusses the advantages of each approach and
provides guidance for making an informed decision.

Executive Summary

Introduction – The Situational Impacts of Open FCoE

Open FCoE, a Linux community open source project, was started by
Intel® with the goal of encouraging the development of native FCoE
code. This code base provides Fibre Channel protocol processing over
an Ethernet-based transport and acts as a low-level device driver to
send and receive data packets. Open FCoE is now being released within
Red Hat® Enterprise Linux® (RHEL®) distributions, which will help
propel storage over converged Ethernet solutions. However, there are
several caveats; not just any NIC can be used. Ethernet NICs must
support new Ethernet standards, such as Priority-based Flow Control
(802.1Qbb) and Enhanced Transmission Selection (802.1Qaz). In
addition, an FCoE switch with a Fibre Channel Forwarder (FCF) is
required to log in to a Fibre Channel fabric.
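To make the software-initiator path concrete, the sketch below shows the general shape of bringing up an Open FCoE interface with the fcoe-utils and lldpad tooling shipped in RHEL. The interface name eth2 is a placeholder, and exact file paths and service names vary by distribution release, so treat this as an illustration of the workflow rather than a verified procedure:

```shell
# /etc/fcoe/cfg-eth2 -- per-interface Open FCoE configuration (fcoe-utils)
FCOE_ENABLE="yes"      # start an FCoE instance on this interface
DCB_REQUIRED="yes"     # require PFC/ETS (DCB) negotiation before fabric login

# Enable the DCB features the standards above call for, then start the initiator:
dcbtool sc eth2 dcb on          # turn on DCB for the port
dcbtool sc eth2 app:fcoe e:1    # advertise the FCoE application priority
dcbtool sc eth2 pfc e:1         # enable Priority-based Flow Control (802.1Qbb)
service fcoe start              # or: systemctl start fcoe.service
fcoeadm -i                      # verify the FCoE interface and fabric login
```

Note that every step here runs on the host CPU; with a Converged Network Adapter, the equivalent DCB negotiation and FCoE processing happen on the adapter itself.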

While Open FCoE is a good indication of how FCoE is being accepted as
part of the Linux infrastructure, there are many factors to consider
when preparing for a migration to FCoE. The need for I/O
consolidation, cost reduction, scalability, application performance,
and data integrity are all key factors in the decision-making process.
Although Open FCoE solutions leveraging a software initiator and an
inexpensive NIC can be great for many applications, when it comes to
enterprise-class applications and storage, an Open FCoE driver
solution not only fails to meet basic requirements but can also be
detrimental to data center virtualization objectives.

Scalability Within Server Virtualization Environments

Within the evolution of data centers, server virtualization is the key
solution driving server hardware consolidation, as well as I/O
consolidation. FCoE provides a platform for I/O consolidation by
reducing the number of adapters required to transport multiple
standard I/O protocols. This follows the trend of consolidation in
data centers, providing increased flexibility and cost savings. Given
this, it stands to reason that FCoE would be used first in data center
environments employing VMware®, Hyper-V®, or other hypervisors that
are being used for hardware consolidation. Indeed, the very first FCoE
implementations are taking place within these virtualized environments.

In a virtualized server environment, two important considerations
should be thought through before implementing FCoE: the increased
density of applications per physical server and the addition of a
virtualization layer. Both require increased I/O performance.
Application density across an I/O adapter rises in these environments
because virtualized servers run multiple CPU cores with up to 12 or
more virtual machines (VMs) per core. Each of these virtual servers
and its applications often runs over a single physical adapter (in a
fault-tolerant configuration, more than one adapter is used to provide
redundancy). This creates a very dense application environment and
places increasing I/O performance demands on adapters.
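A back-of-envelope calculation shows how quickly this density compounds. All figures below are illustrative assumptions for a hypothetical host, not measurements:

```python
def adapter_iops_demand(cores: int, vms_per_core: int, iops_per_vm: int) -> int:
    """Aggregate I/O demand funneled through one physical adapter
    when every VM on the host shares that adapter."""
    return cores * vms_per_core * iops_per_vm


# Illustrative figures: a 16-core host at the 12-VMs-per-core density
# cited above, each VM averaging a modest 200 IOPS.
demand = adapter_iops_demand(cores=16, vms_per_core=12, iops_per_vm=200)
print(demand)  # 38400 -- concentrated on a single adapter
```

Even with conservative per-VM workloads, tens of thousands of IOPS converge on one port, which is the load a CNA's offload engine absorbs instead of the host CPU.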
