White paper – QLogic 8200 Series is the Adapter of Choice for Converged Data Centers


HSG-WP11001

SN0330925-00 rev. B 02/13


Test Configurations and Procedures

The testing discussed in this paper analyzes and compares QLogic and Emulex Converged Network Adapters across a representative selection of Ethernet and FCoE simulations of real-world workloads.

Ethernet Configuration

The industry-leading IxChariot® test tool was used to simulate real-world applications and to evaluate each adapter’s performance under realistic load conditions. IxChariot was chosen for its ability to accurately assess the performance characteristics of an adapter running within a data center network.

IxChariot measurements were taken to determine the total throughput (Mbps) across increasing workloads (threads) and the percentage of CPU required. Throughput was then averaged across each thread count to estimate CPU efficiency (Mbps/CPU). CPU efficiency provides an excellent estimate of the overhead imposed on the processor by handling heavy networking traffic and servicing network interrupts.

During these tests, the workloads were increased by adding threads up to a maximum of 20. As a result, these tests provide significant insight into each adapter’s potential networking performance and its remaining headroom to scale efficiently.
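The CPU-efficiency metric described above can be sketched in a few lines of Python. The thread counts, throughput figures, and CPU percentages below are illustrative placeholders, not the paper's measured results:

```python
# Hypothetical sample data: thread count -> (throughput in Mbps, CPU %).
# Values are illustrative only; see the paper's figures for real results.
measurements = {
    4:  (6200.0, 18.0),
    8:  (8900.0, 27.0),
    12: (9400.0, 31.0),
    16: (9600.0, 34.0),
    20: (9700.0, 36.0),
}

def cpu_efficiency(samples):
    """Average Mbps delivered per percent of CPU, across all thread counts."""
    ratios = [mbps / cpu for mbps, cpu in samples.values()]
    return sum(ratios) / len(ratios)

print(round(cpu_efficiency(measurements), 1))
```

A higher Mbps/CPU value means the adapter delivers the same throughput at a lower processor cost, leaving more headroom for application work.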

Additional information regarding the test configuration (such as servers and driver versions) can be found in Appendix A at the end of this document.

The testing demonstrated the QLogic adapter’s performance advantages: significantly greater CPU efficiency and an enhanced ability to scale in a virtualized environment compared to the Emulex adapter.

Figure 1. Networking Test Configuration

Ethernet Test Procedure

1. A QLogic 10Gb Ethernet dual-port PCIe® 2.0 Converged Network Adapter (QLE8242) was installed in the test server using the latest released driver.

2. IxChariot was set up with two End Point (EP) agents installed as remote agents, with an Ethernet switch installed between them. Each remote agent was set up to create and measure network traffic.

3. An IxChariot console was then attached to each EP to indicate what to transmit, when to transmit, and what data to collect. One End Point was designated as a client and the other as a server.

4. a) Dual-port NIC performance testing – each port on the server was configured and connected to two separate client servers.
   b) Teaming/failover testing – the dual-port QLogic adapter in the server was then configured for teaming, with port0 as the active primary port and port1 as the secondary failover port.

5. Next, the type of data transmission to model was selected. The type chosen was the default high-performance throughput script.
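The active/standby teaming policy in step 4b can be sketched as a simple port-selection rule: traffic stays on the primary port while it has link, and moves to the secondary only on failure. The port names and link states below are illustrative, not taken from the test configuration:

```python
# Hedged sketch of active-backup (failover) teaming, as in step 4b:
# port0 is the active primary; port1 carries traffic only if port0
# loses link. Names and states are illustrative assumptions.

def select_active_port(link_up):
    """Return the port that should carry traffic under active-backup teaming."""
    primary, secondary = "port0", "port1"
    if link_up.get(primary, False):
        return primary           # normal operation: primary has link
    if link_up.get(secondary, False):
        return secondary         # failover: primary down, secondary up
    return None                  # no link available on either port

print(select_active_port({"port0": True,  "port1": True}))   # normal operation
print(select_active_port({"port0": False, "port1": True}))   # failover case
```

In an active-backup team only one port carries traffic at a time, so failover preserves connectivity rather than adding bandwidth; the dual-port throughput case in step 4a is what exercises both ports concurrently.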

Executive Summary

Leading the data center charge in I/O consolidation is Fibre Channel over Ethernet (FCoE). FCoE promises to unify data and storage traffic onto a single wire. As FCoE helps to enable data center consolidation, the CPU efficiency of I/O is emerging as a key factor in maximizing network consolidation. This is true for a couple of reasons. First, consolidation is creating denser server environments, which in turn drive higher throughput requirements for business-critical applications. Second, by lowering the CPU cost of processing I/O, virtualization ratios can be maximized to consolidate applications onto fewer servers, thereby achieving a lower total cost of ownership (TCO).

High throughput with low CPU utilization is the key to scaling within next-generation data centers. A Converged Network Adapter can easily accomplish this by offloading I/O processing for IP, iSCSI, and FCoE from the host CPU. At the time this paper was published, the QLogic® 8200 Series Converged Network Adapter was the only Converged Network Adapter to offload Fibre Channel over Ethernet (FCoE), iSCSI, and IP (Ethernet) traffic concurrently.