White paper – QLogic 10000 Series Mt. Rainier Technology Accelerates the Enterprise User Manual

SSG-WP12004C

SN0430914-00 rev. C 11/12


improved system performance, but also introduces significant costs and
risks. For example, installing new arrays requires migrating existing data
to those new arrays, and this migration generally requires at least one or
two outages per attached server. Furthermore, as the sheer volume of the
data involved in these migrations grows, migration jobs take longer, and
cost, complexity, and risk all increase. With performance demands expected
to keep growing geometrically, the improvements delivered by these "big
bang" infrastructure upgrades are, by their nature, temporary. The dynamic
growth of application workloads at the edge of comparatively static storage
networks and arrays eventually outstrips any feasible configuration at the
core of those networks. This inherent guarantee of obsolescence results in
excessive spending to optimize storage performance at the expense of
efficient capacity, and it drives infrastructure refresh cycles that
typically occur every three to five years.

A New Option: Deploy Flash Memory

In the last few years, Flash memory has emerged as a valuable tool for
increasing storage performance. Flash memory outperforms rotating
magnetic media by orders of magnitude when processing random I/O
workloads. Because Flash is a new and rapidly expanding semiconductor
technology, QLogic expects it, unlike mechanical disk drives, to track
a Moore's Law-style curve for performance and capacity advances.

To accelerate early adoption, Flash memory has been packaged primarily
as solid-state disk (SSD) drives. Although originally packaged to be
plug-compatible with traditional, rotating, magnetic media disk drives,
SSDs are now available in additional form factors, most notably
server-based PCI Express® boards.

SSD Caching Versus tier 0 Data Storage

The defining characteristic of SSDs is that, independent of physical
form factor, they are accessed as if they were traditional disk drives.
This compatible behavior enabled their rapid adoption as an alternative
to 10K to 15K RPM disk drives. In high-end, mission-critical applications,
the much higher performance of SSDs, coupled with much lower power
and cooling requirements, largely offset their initially high prices. As SSD
prices have decreased and capacities have increased, SSDs deployed as
primary storage have seen accelerated adoption for a relatively small set of
performance-critical applications.
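The distinction the section draws, SSDs used as a cache in front of slower primary storage versus SSDs used directly as Tier 0 primary storage, can be sketched with a toy model. The sketch below is illustrative only and is not from the white paper: real SSD caching operates on blocks inside array or appliance firmware, and the class name, block sizes, and workload are all hypothetical.

```python
from collections import OrderedDict

class SSDReadCache:
    """Toy LRU read cache: a small, fast tier (the SSD) in front of a
    large, slow backing store (the disk array). Illustrative only."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict: block -> data (slow tier)
        self.capacity = capacity_blocks       # how many blocks fit on the SSD
        self.cache = OrderedDict()            # block -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)     # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]            # slow path: read from the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block
        return data

# A skewed workload (hot blocks re-read repeatedly) benefits most from caching.
store = {b: f"data-{b}" for b in range(100)}
cache = SSDReadCache(store, capacity_blocks=10)
for b in [1, 2, 3, 1, 2, 3, 1, 2, 3, 42]:
    cache.read(b)
print(cache.hits, cache.misses)   # 6 hits, 4 misses
```

By contrast, a Tier 0 deployment would place the working set on the SSD permanently rather than populating it on demand, which is why caching suits broad workloads while Tier 0 suits a small set of performance-critical applications.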

Array-Based SSD Caching

Initial deployments of SSD caching were delivered by installing SSDs,
along with the required software and firmware functionality, within shared
storage arrays. Due to the plug-compatibility of early SSDs, these initial
implementations did not require extensive modifications to existing array
hardware or software and, in many cases, were available as upgrades to
existing equipment.

Advantages

Applying SSD caching inside storage arrays offers several advantages
that closely parallel the fundamental advantages of centralized
network-attached storage arrays: efficient sharing of valuable resources,
maintenance of existing data protection regimes, and a single point of
change that preserves the existing network topology.

Drawbacks

Adding SSD caching to storage arrays requires upgrading and, in some
cases, replacing existing arrays (including data migration effort and
risk). Even if all of the disk drives are upgraded to SSDs, the expected
performance benefit is not fully realizable due to contention-induced
latency at over-subscribed network and array ports (see Figure 2). The
performance benefits of SSD caching in storage arrays may be short-lived,
and performance may not scale smoothly. The initial per-server
performance improvements will decrease over time as overall demands
on the arrays and storage networks increase with growing workloads and
with server and virtual server attach rates.
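The over-subscription effect described above can be illustrated with a toy fan-in model. This sketch is not from the white paper: the function name, port bandwidth, and per-server demand figures are hypothetical, and real contention also adds queueing latency rather than simply dividing bandwidth.

```python
def effective_per_server_mbps(port_mbps, servers, demand_mbps):
    """Toy model of an over-subscribed array port: once aggregate server
    demand exceeds the port's bandwidth, each server receives only a
    fair share. Illustrative only; ignores queueing latency."""
    total_demand = servers * demand_mbps
    if total_demand <= port_mbps:
        return demand_mbps              # port not saturated: full demand met
    return port_mbps / servers          # saturated: port bandwidth is shared

# A hypothetical 800 MB/s array port: fine at 4 servers of 150 MB/s each,
# but per-server throughput collapses as the attach rate grows to 16.
print(effective_per_server_mbps(800, 4, 150))    # 150 (uncontended)
print(effective_per_server_mbps(800, 16, 150))   # 50.0 (shared port)
```

This is why per-server gains from array-side caching shrink as server and virtual server attach rates grow: the cache may be fast, but every server still funnels through the same array ports.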

Caching Appliances

Caching appliances, relatively new additions to storage networking, are
network-attached devices that are inserted into the data path between
servers and primary storage arrays.

Advantages

Like array-based caching, caching appliances efficiently share relatively
expensive and limited resources, but do not require upgrades to existing
arrays. Because these devices are independent of the primary storage
arrays, they can be distributed to multiple locations within a storage
network to optimize performance for specific servers or classes of servers.

Drawbacks

Also in common with arrays, caching appliances are vulnerable to
congestion in the storage network and at busy appliance ports. The
appliance approach offers better scalability than array-based caching
because it is conceptually simple to add incremental appliances to a
storage network. However, each additional appliance represents a large
capital outlay, network topology changes, and outages.

In contrast to array-based caching, caching appliances are new elements in
the enterprise IT environment and require changes to policies, procedures,
run books, and staff training.

Lastly, bus and memory bandwidth limitations of the industry-standard
components at the heart of caching appliances restrict their ability to
smoothly scale performance. Because these appliances sit in-band on
