White Paper: QLogic 2500 Series 8Gb Fibre Channel Adapter of Choice in Microsoft Exchange Environments
Figure 2. Exchange Transactional IOPS Measured by Jetstress at Varying Cache Levels
Figure 3 shows the Exchange I/O read latency as measured by Jetstress.
With no cache enabled in the 10000, Jetstress measured the latency at
4.724ms. As with transactional IOPS, all I/O traffic makes the round trip
from the server to the SAN storage array. At 100 percent cache, the latency
fell to 0.922ms, an 80 percent reduction in this test.
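The quoted reduction follows directly from the two measured latencies:

\[
\frac{4.724\ \text{ms} - 0.922\ \text{ms}}{4.724\ \text{ms}} \approx 0.805 \approx 80\%
\]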
Figure 3. Exchange I/O Read Latency Measured by Jetstress at Varying Cache Levels
QLogic and Emulex 8Gb Standard Fibre Channel Adapter Comparison: Exchange Server Benchmark Using IOmeter Test
The IOmeter tool was used to benchmark the QLogic 2500 Series 8Gb Fibre
Channel Adapter versus the Emulex 8Gb LightPulse adapter in an Exchange
environment with minimum subsystem latency.
The IOmeter test setup (Figure 4) consisted of the latest 8Gb adapters from
QLogic and Emulex, running on current, commercially available drivers,
installed in a 2.93GHz Intel Nehalem Quad Core (dual socket) server
running the Windows Server® 2008 R2 OS. The Intel Nehalem server was
connected to a Texas Memory Systems RamSan®-325 (with 32GB total
capacity) through a QLogic 5000 Series Stackable Fibre Channel Switch.
Using a solid-state disk removes the latency introduced by slower spinning
disk drives and provides a performance benchmark expected from next-
generation storage arrays.
One initiator port and four RamSan target ports were configured in a zone
on the QLogic 5000 Series Stackable Fibre Channel Switch. Eight NTFS-formatted
LUNs were created on the RamSan: four for the database and four for
log files.
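For illustration only (the paper does not give the exact layout; the port
names, LUN names, and the even two-LUNs-per-port split below are assumptions),
the fabric configuration can be summarized as a simple mapping:

# Hypothetical summary of the zoning and LUN layout described above.
# Real configurations use WWPNs and LUN IDs from the RamSan and switch.
zone = {
    "initiator": "qle2560_port0",              # single initiator port
    "targets": ["ramsan_t0", "ramsan_t1",
                "ramsan_t2", "ramsan_t3"],     # four RamSan target ports
}

# Eight NTFS-formatted LUNs: four database, four log. The even spread
# of two LUNs per target port is assumed for illustration.
luns = {
    "ramsan_t0": ["db1", "log1"],
    "ramsan_t1": ["db2", "log2"],
    "ramsan_t2": ["db3", "log3"],
    "ramsan_t3": ["db4", "log4"],
}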
Figure 4. IOmeter Test Setup
When the disk latency is reduced (the disk I/O is striped across an
increased number of spindles), a greater I/O load (the number of users is
increased) can be driven by Microsoft Exchange. Using the RamSan-325
and the IOmeter load generation tool allowed the emulation of both of these
conditions, as illustrated in Figure 4.
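As a back-of-the-envelope check (our illustration using the Jetstress numbers
above, not an analysis from the paper), Little's Law relates the quantities
involved:

\[
\text{outstanding I/Os} = \text{IOPS} \times \text{latency}
\]

At both extremes of the Jetstress test the concurrency is roughly constant
(1056 IOPS × 4.724 ms ≈ 5.0 outstanding I/Os; 5270 IOPS × 0.922 ms ≈ 4.9),
which shows why cutting latency lets the same configuration drive a
proportionally larger I/O load.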
The tests were performed using the latest commercially available hardware,
software, and drivers. All measurements were made with the default settings
of both companies' adapters.
Test Procedure
Engineers in the QLogic Solutions Lab performed tests on the configuration
described in Figure 4 as follows:
1. Installed a QLogic 8Gb single-channel PCI Express® to Fibre
   Channel adapter (QLE2560) on the test server using the appropriate
   miniport driver.
2. Created workers separately for 8K random read/write to simulate
   database access and 4K random reads for log file operations (four
   workers for 8K files at 35 percent reads and 65 percent writes, and
   one worker for 4K log file reads to maintain IOPS at a 4:1 database-to-log
   ratio similar to Exchange 2007; a sketch of this workload follows
   the list). Created eight LUNs and mapped them to four target ports.
3. Ran tests for one minute, repeated them several times for integrity, and
   then averaged the results.
4. Repeated these steps for the Emulex 8Gb single-channel PCI Express to
   Fibre Channel adapter (LPE12000).
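The workload mix in step 2 can be sketched in a few lines of Python
(illustrative only; IOmeter drives raw LUNs with its own engine, and the
LUN size below is a placeholder):

import random

DB_BLOCK, LOG_BLOCK = 8192, 4096   # 8K database I/O, 4K log I/O
LUN_SIZE = 4 * 2**30               # placeholder: 32GB RamSan / 8 LUNs

def db_request():
    """One 8K random request at a 35 percent read / 65 percent write mix."""
    op = "read" if random.random() < 0.35 else "write"
    return (op, random.randrange(0, LUN_SIZE, DB_BLOCK), DB_BLOCK)

def log_request():
    """One 4K random read against a log LUN."""
    return ("read", random.randrange(0, LUN_SIZE, LOG_BLOCK), LOG_BLOCK)

# Four database workers and one log worker approximate the
# 4:1 database-to-log IOPS ratio cited for Exchange 2007.
workers = [db_request] * 4 + [log_request]

for _ in range(10):                # print a small sample of the mix
    print(random.choice(workers)())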
[Figure 2 data, Exchange Transactional IOPS by cache size (as percentage of LUN cached): No Cache, 1056; 25%, 1819; 30%, 2017; 50%, 2277; 75%, 3154; 100%, 5270]
[Figure 3 data, Exchange I/O Read Latency in msec by cache size (as percentage of LUN cached): No Cache, 4.724; 25%, 2.648; 30%, 2.484; 50%, 2.021; 75%, 1.558; 100%, 0.922]