White Paper – QLogic 10000 Series Mt. Rainier Technology Accelerates the Enterprise


An equally dangerous situation arises if a write-back strategy is implemented
in which writes to the shared LUN are cached.

Figure 5 shows the cache for two servers configured for write-back
caching. In this example, as soon as Server 1 performs a write operation,
cache coherence is lost because Server 2 has no indication that any data
has changed. Even if Server 2 has not yet cached the region that Server 1
has modified, if it does so before Server 1 flushes its write cache, the data
it reads from the shared LUN is stale, and logically corrupt data is
again processed.

Figure 5. Cached Server Writes Destroy Cache Coherency
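
The failure mode in Figure 5 can be made concrete with a short, self-contained sketch (illustrative Python, not QLogic code): two servers each run an independent write-back cache in front of the same shared LUN, so a write absorbed by Server 1's cache is invisible to Server 2 until it is flushed, and Server 2 keeps serving its stale copy even after the flush.

    class SharedLUN:
        """Backing store shared by both servers."""
        def __init__(self):
            self.blocks = {}

        def read(self, lba):
            return self.blocks.get(lba, b"\x00")

        def write(self, lba, data):
            self.blocks[lba] = data


    class WriteBackCache:
        """Per-server cache; no coherence protocol exists between peers."""
        def __init__(self, lun):
            self.lun = lun
            self.lines = {}      # lba -> cached data
            self.dirty = set()   # lbas written but not yet flushed to the LUN

        def read(self, lba):
            if lba not in self.lines:            # miss: fill the line from the LUN
                self.lines[lba] = self.lun.read(lba)
            return self.lines[lba]

        def write(self, lba, data):
            self.lines[lba] = data               # write is absorbed by the cache only
            self.dirty.add(lba)

        def flush(self):
            for lba in self.dirty:               # write-back happens some time later
                self.lun.write(lba, self.lines[lba])
            self.dirty.clear()


    lun = SharedLUN()
    lun.write(7, b"OLD")

    server1, server2 = WriteBackCache(lun), WriteBackCache(lun)

    server1.write(7, b"NEW")    # Server 1 updates block 7 in its cache only
    print(server2.read(7))      # b'OLD': Server 2 now caches the stale block
    server1.flush()             # the flush cannot repair Server 2's cached copy
    print(server2.read(7))      # still b'OLD': logically corrupt data is processed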

The Challenge

In the current state of SSD caching technology, strong arguments and
indicators exist both for and against placing these caches in storage
arrays, appliances, and servers. This situation compels system architects
to weigh complex and potentially painful trade-offs, and greatly limits
their ability to broadly apply the benefits of SSD caching.

An approach that combines the scalable and sustainable performance
advantages of server-based SSD caching with the cache coherence and
efficient resource allocation of array- and appliance-based caching is
essential. Delivering the performance benefits of SSD caching requires a
flexible, affordable, and scalable way to standardize configurations.

The QLogic Approach

To address this challenge, QLogic has leveraged the company’s core
expertise in network storage and multiprotocol SAN adapters, combined
with over five years of developing and delivering high-performance
enterprise data mobility solutions. Based on solutions that reliably move
mission-critical applications and data over multiple storage protocols,
QLogic developed the Mt. Rainier technology with QCOP. Mt. Rainier is
a lightweight, elegant, and effective solution to the complex problem of
critical I/O performance imbalance. With Mt. Rainier and QCOP, QLogic
delivers a solution that eliminates the threat of data corruption due to
loss of cache coherence while enabling efficient and cost-effective pooling
of SSD cache resources among servers. Mt. Rainier combines the
performance and scalability of server-based SSD caching with the economic
efficiency, central management, and transparent support for operating
systems and enterprise applications characteristic of appliance- and
array-based SSD caching. QLogic enables IT organizations to specify fast,
reliable, infrastructure-compatible, and cost-effective SSD caching as a
standard for I/O-intensive servers.

Theory

The QLogic Mt. Rainier technology is a new class of host-based, intelligent
I/O optimization engines that provide integrated storage network
connectivity, an SSD interface, and the embedded processing required to
make all SSD management and caching tasks entirely transparent to the
host. The only host-resident software required for Mt. Rainier operation
is a host operating system-specific driver. All “heavy lifting” is performed
transparently onboard Mt. Rainier by the embedded multicore processor.
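
The division of labor described above can be sketched in a few lines of illustrative Python (hypothetical names, not QLogic’s driver interface): the host-resident driver does nothing but forward block I/O, while cache lookup, cache fill, and SSD management happen entirely inside the adapter object standing in for Mt. Rainier’s embedded multicore processor.

    class RemoteLUN:
        """Stub for a SAN LUN reachable over the storage network."""
        def read(self, lba):
            return f"block-{lba} contents"


    class MtRainierAdapter:
        """Stands in for the adapter's embedded caching engine (the 'heavy lifting')."""
        def __init__(self, lun):
            self.lun = lun
            self.ssd_cache = {}             # lba -> data held on the onboard SSD

        def submit_read(self, lba):
            if lba in self.ssd_cache:       # hit: served from flash, no SAN round trip
                return self.ssd_cache[lba]
            data = self.lun.read(lba)       # miss: fetch the block from the SAN LUN
            self.ssd_cache[lba] = data      # populate the cache for future reads
            return data


    class HostDriver:
        """The only host-resident piece: forwards I/O and knows nothing about caching."""
        def __init__(self, adapter):
            self.adapter = adapter

        def read(self, lba):
            return self.adapter.submit_read(lba)


    driver = HostDriver(MtRainierAdapter(RemoteLUN()))
    print(driver.read(42))   # first read misses and fills the SSD cache
    print(driver.read(42))   # second read is served transparently from the cache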

The most important benefits of SSD caching with QLogic Mt. Rainier
adapters are Mt. Rainier clustering and LUN cache ownership.

Mt. Rainier Clustering

Mt. Rainier clustering creates a logical group that provides a single point
of management and maintains cache coherence with high availability
and optimal allocation of cache resources. When Mt. Rainier clusters are
formed, one cluster member is assigned as the cluster control primary and
another as the cluster control secondary. The cluster control primary is
responsible for processing management and configuration requests, while
the cluster control secondary provides high availability as a passive backup
for the cluster control primary.

Figure 6 shows a four-node cache adapter cluster defined with cluster member #1
as the cluster control primary and cluster member #2 as the cluster control
secondary (cluster members #3 and #4 have no cluster control functions).

Figure 6. Cluster Control Primary and Secondary Members
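
As a rough illustration of the role assignment in Figure 6, the sketch below (hypothetical Python, not the adapter firmware) forms a four-member cluster, designates the first member as the cluster control primary and the second as the passive control secondary, and promotes the secondary if the primary is lost.

    from dataclasses import dataclass
    from enum import Enum


    class ControlRole(Enum):
        PRIMARY = "cluster control primary"      # processes management/configuration requests
        SECONDARY = "cluster control secondary"  # passive backup for the primary
        MEMBER = "member"                        # no cluster control functions


    @dataclass
    class ClusterMember:
        member_id: int
        role: ControlRole = ControlRole.MEMBER


    def form_cluster(member_ids):
        """Assign control roles when the cluster is formed."""
        members = [ClusterMember(mid) for mid in member_ids]
        if members:
            members[0].role = ControlRole.PRIMARY
        if len(members) > 1:
            members[1].role = ControlRole.SECONDARY
        return members


    def fail_over(members):
        """Promote the secondary if the control primary drops out of the cluster."""
        survivors = [m for m in members if m.role != ControlRole.PRIMARY]
        for m in survivors:
            if m.role == ControlRole.SECONDARY:
                m.role = ControlRole.PRIMARY
                break
        return survivors


    # The four-node cluster of Figure 6: member #1 is primary, member #2 secondary.
    cluster = form_cluster([1, 2, 3, 4])
    for member in cluster:
        print(member.member_id, member.role.value)

    survivors = fail_over(cluster)           # member #2 takes over if #1 is lost
    print([(m.member_id, m.role.value) for m in survivors])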
