White Paper – QLogic 2500 Series 8Gb Fibre Channel Adapter of Choice in Microsoft Exchange Environments


to severely constrain the capabilities of the infrastructure in the face of
increasing virtualization, consolidation, and user loads.

A new class of server-side storage acceleration is the latest innovation in the market addressing this performance disparity. The idea is simple: fast, reliable, solid-state flash memory connected to the server brings faster data access to the Exchange server's CPU. Flash memory is widely available in the market and performs much faster than any rotational disk under typical, small, highly random enterprise I/O workloads.

The 10000 Series adapter from QLogic provides I/O acceleration to storage traffic for Exchange servers. The 10000 Series adapter is a PCIe-based I/O device that provides the integrated storage network (Fibre Channel) connectivity, I/O caching technology, integrated flash memory, and embedded processing required to make management and caching tasks transparent to the host server. The QLogic solution delivers the application performance acceleration benefits of a transparent, server-based cache without the limitations of solutions that require separate, server-based storage management software and operating system (OS) filter drivers. As a shared caching resource, QLogic 10000 Series adapters extend server-based caching to virtualized and clustered environments, which until now have been excluded by the single-server "captive" nature of existing server-based caching solutions.
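This paper does not describe the adapter's internal cache algorithm. As a conceptual illustration only, the following Python sketch shows the read-caching principle the adapter exploits: repeat reads are served from fast local flash rather than crossing the SAN. The ReadCache class, its LRU policy, and the backing_read callback are hypothetical constructs for this sketch, not QLogic interfaces.

    from collections import OrderedDict

    class ReadCache:
        """Minimal LRU read cache (illustrative only, not QLogic's design):
        repeat block reads are served from fast local flash instead of
        making the Fibre Channel round trip to the SAN array."""

        def __init__(self, capacity_blocks, backing_read):
            self.capacity = capacity_blocks
            self.backing_read = backing_read      # slow path: read from the SAN
            self.blocks = OrderedDict()           # LBA -> data, kept in LRU order

        def read(self, lba):
            if lba in self.blocks:                # hit: serve from local cache
                self.blocks.move_to_end(lba)
                return self.blocks[lba]
            data = self.backing_read(lba)         # miss: SAN round trip
            self.blocks[lba] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)   # evict least recently used
            return data

    # Example: a 4-block cache over a stand-in "SAN" read function.
    cache = ReadCache(4, backing_read=lambda lba: f"block-{lba}")
    assert cache.read(7) == cache.read(7)         # second read is a cache hit

In the adapter, the equivalent logic runs on embedded processing against the integrated flash, which is what keeps management and caching transparent to the host OS.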

A peak-performing Microsoft Exchange system is characterized by a maximized transactional I/O rate and minimized I/O latency. By optimizing these performance values, the platform can absorb additional users, thereby reducing server, management, and overall infrastructure costs. The key benefits of the QLogic 10000 Series adapter identified in this paper are improved performance and reduced overall costs.

Test Setup

The test setup was configured with an Intel® Xeon® server connected to HP® Enterprise Virtual Array (EVA) storage through the QLogic 10000 Series adapter and the QLogic 5800V/5802V Fibre Channel Switch. Jetstress runs were executed with cache levels varying in size from zero to 100 percent of the actual Exchange database size. See Appendix B for actual configuration information.
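Neither Jetstress invocation nor the adapter's cache-management interface is documented in this paper, so the sweep below is only a hypothetical harness sketched in Python: set_lun_cache() and run_jetstress() are placeholder stubs, and the 1 TB database size is assumed for illustration.

    # Hypothetical test sweep. set_lun_cache() and run_jetstress() stand in
    # for the adapter's management utility and Microsoft Jetstress; neither
    # real interface is shown in this paper.
    DB_SIZE_GB = 1000                                   # assumed database size
    CACHE_LEVELS = [0.00, 0.25, 0.30, 0.50, 0.75, 1.00]

    def set_lun_cache(size_gb: int) -> None:
        """Placeholder: size the adapter's flash cache for the database LUN."""

    def run_jetstress() -> tuple[int, float]:
        """Placeholder: run a Jetstress pass, return (IOPS, read latency ms)."""
        return 0, 0.0

    for level in CACHE_LEVELS:
        set_lun_cache(int(DB_SIZE_GB * level))          # cache as % of DB size
        iops, latency_ms = run_jetstress()
        print(f"{level:.0%} cache: {iops} IOPS, {latency_ms} ms read latency")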

Figure 1 shows the basic setup for the Exchange test.

Figure 1. Exchange Test Setup

Exchange Performance Results Using Jetstress

The tests demonstrate that, at various levels of LUN caching in the 10000 Series adapter, caching delivers as much as a 400 percent increase in IOPS and an 80 percent reduction in I/O latency. These performance improvements are directly related to caching data closer to the Exchange server processor, which eliminates Fibre Channel transit time over the SAN. Table 1 shows these results.

Table 1. Exchange Performance Test Results

Criteria                         No Cache   25%      30%      50%      75%      100%
Exchange I/O Read Latency (ms)   4.724      2.648    2.484    2.021    1.558    0.922
Exchange Transactions (IOPS)     1056       1819     2017     2277     3154     5270
Latency Reduction                0%         (44%)    (47%)    (57%)    (67%)    (80%)
IOPS Increase                    0%         72%      91%      116%     199%     399%
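The two derived rows of Table 1 follow directly from the measured rows above them; the parenthesized latency entries denote reductions relative to the no-cache baseline. A quick arithmetic check of the 100 percent cache column, using only values from the table:

    # Derive the Table 1 percentage rows from the measured values
    # (100 percent cache column versus the no-cache baseline).
    no_cache_latency, cached_latency = 4.724, 0.922   # ms
    no_cache_iops,    cached_iops    = 1056, 5270

    latency_reduction = 1 - cached_latency / no_cache_latency
    iops_increase     = cached_iops / no_cache_iops - 1

    print(f"Latency reduction: {latency_reduction:.0%}")  # -> 80%
    print(f"IOPS increase:     {iops_increase:.0%}")      # -> 399%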

Figure 2 shows the total Exchange transactional IOPS as measured by the Jetstress test. With no cache enabled in the 10000 Series adapter, Jetstress measured 1,056 transactional IOPS; in this case, all I/O traffic makes the round trip from the server to the SAN storage array. At 100 percent cache, where the cache size equals the database size in this test, the measured 5,270 IOPS represents an increase of nearly 400 percent.
