Higher Efficiency, FlexSuite – QLogic 2600 Series vSphere 5 Virtual Server Engine User Manual


Higher Efficiency

FlexSuite Adapters feature QLogic I/OFlex technology, a field-configurable upgrade that lets the same hardware provide either Gen 5 Fibre Channel or 10GbE server connectivity.

FlexSuite in Transaction-Intensive and Bandwidth-Intensive Environments

For virtualized environments, the most critical measure of performance is the ability to scale as the number of VMs and application workloads increases. In testing conducted by QLogic, the QLE2672 FlexSuite Gen 5 Fibre Channel Adapter delivered:

3X the transactions and 2X the bandwidth of 8Gb Fibre Channel Adapters.

The QLE2672 also demonstrated a 50 percent advantage over competitive products in read-only performance and a 25 percent advantage in mixed read-write performance.

This superior performance of QLogic Gen 5 Fibre Channel Adapters translates to support for both higher VM
density and more demanding Tier-1 applications.

QLogic achieves this superior performance by leveraging the advanced Gen 5 Fibre Channel and PCIe Gen3 specifications while maintaining backward compatibility with existing Fibre Channel networks. The unique port-isolation architecture of the QLogic FlexSuite Adapters ensures data integrity, security, and deterministic, scalable performance, driving storage traffic at line rate across all ports.
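For context, the bandwidth advantage follows directly from the nominal line rates of the two Fibre Channel generations. The short Python sketch below works through those published, spec-derived figures, along with the PCIe Gen3 x8 host-bus headroom available to a dual-port adapter such as the QLE2672; the numbers are approximate reference values, not QLogic test results.

```python
# Nominal line-rate arithmetic for 8Gb vs. Gen 5 (16Gb) Fibre Channel and PCIe Gen 3.
# Illustrative figures taken from the public FC/PCIe specifications, not QLogic test data.

# 8GFC: 8.5 Gbaud with 8b/10b encoding -> ~850 MB/s raw per direction, per port
# (~800 MB/s is the commonly quoted usable figure).
fc8_mbps = 8.5e9 * (8 / 10) / 8 / 1e6

# Gen 5 (16GFC): 14.025 Gbaud with 64b/66b encoding -> ~1,700 MB/s raw per direction,
# per port (~1,600 MB/s commonly quoted).
fc16_mbps = 14.025e9 * (64 / 66) / 8 / 1e6

# PCIe Gen 3 x8: 8 GT/s per lane with 128b/130b encoding -> ~7,880 MB/s per direction.
pcie_gen3_x8_mbps = 8e9 * (128 / 130) / 8 * 8 / 1e6

print(f"8GFC per port:   ~{fc8_mbps:,.0f} MB/s")
print(f"16GFC per port:  ~{fc16_mbps:,.0f} MB/s  ({fc16_mbps / fc8_mbps:.1f}x 8GFC)")

# A dual-port Gen 5 adapter needs roughly 2 x 1,600 MB/s per direction; a PCIe Gen 3 x8
# slot leaves ample headroom to run both ports at line rate.
print(f"Dual-port 16GFC: ~{2 * fc16_mbps:,.0f} MB/s vs PCIe Gen3 x8 ~{pcie_gen3_x8_mbps:,.0f} MB/s")
```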

Furthermore, quality of service (QoS) enables IT administrators to control and prioritize traffic.

10GbE Intelligent Networking Eliminates I/O Bottlenecks

QLogic’s intelligent 10GbE Ethernet architecture, combined with new virtualization software features such as VMware NetQueue, supports multiple, flexible receive queues and significantly reduces the delays inherent in current virtualization implementations by:

• Eliminating some of the hypervisor overhead. This frees processor resources to support heavier-weight applications on the VMs or to run more VMs per server.

• Eliminating the queuing bottleneck in today’s software-based approach. The current approach funnels all incoming packets from the Ethernet adapter through a single first-in, first-out queue in the hypervisor on their way to the various VMs. Because neither the hypervisor nor the Ethernet adapter knows in advance which packet belongs to which interface, the hypervisor must inspect every packet to determine where it goes, a processor-intensive task that consumes a great deal of time and CPU cycles. A conceptual sketch of the two dispatch models follows this list.
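To make the contrast concrete, here is a minimal, purely conceptual Python sketch of the two models: a single software FIFO that the hypervisor must sort packet by packet, versus NetQueue-style per-VM receive queues where packets are classified by destination MAC address before the hypervisor gets involved. The class and function names are illustrative; this models the dispatch logic only, not an actual driver or ESXi internals.

```python
# Conceptual sketch only: contrasts a single software FIFO (hypervisor sorts every
# packet on the CPU) with NetQueue-style per-VM receive queues (the adapter steers
# packets by destination MAC). Names and structures are illustrative, not driver code.
from collections import deque, defaultdict

class Packet:
    def __init__(self, dst_mac, payload):
        self.dst_mac = dst_mac
        self.payload = payload

def dispatch_single_fifo(packets, vm_by_mac):
    """Software-only model: one FIFO for all VMs; the hypervisor inspects every
    packet to decide which VM's virtual NIC it belongs to."""
    fifo = deque(packets)
    delivered = defaultdict(list)
    while fifo:
        pkt = fifo.popleft()
        vm = vm_by_mac.get(pkt.dst_mac)   # per-packet lookup consumes CPU cycles
        if vm is not None:
            delivered[vm].append(pkt)
    return dict(delivered)

def dispatch_per_vm_queues(packets, vm_by_mac):
    """NetQueue-style model: classification by destination MAC happens before the
    hypervisor is involved, so each VM drains its own dedicated queue."""
    queues = defaultdict(deque)
    for pkt in packets:                   # performed by the adapter in the real design
        vm = vm_by_mac.get(pkt.dst_mac)
        if vm is not None:
            queues[vm].append(pkt)
    return {vm: list(q) for vm, q in queues.items()}

if __name__ == "__main__":
    vm_by_mac = {"aa:01": "vm1", "aa:02": "vm2"}
    traffic = [Packet("aa:01", b"x"), Packet("aa:02", b"y"), Packet("aa:01", b"z")]
    print(dispatch_single_fifo(traffic, vm_by_mac))
    print(dispatch_per_vm_queues(traffic, vm_by_mac))
```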
