White Paper – QLogic 10000 Series: Mt. Rainier Technology Accelerates the Enterprise


shared network storage links, infrastructure architects and managers
should be concerned about the real-world scalability and stability of these
devices.

Server-Based Caching

The final option for SSD caching placement is at the server edge of the
storage network, directly attached to I/O-intensive servers.

Advantages

Adding large caches to high-I/O servers places the cache in a position
where it is insensitive to congestion in the storage infrastructure. The
cache is also in the best position to integrate application understanding
to optimize application performance. Server-based caching requires no
upgrades to storage arrays and no additional appliances on the data path
of critical networks, and storage I/O performance can scale smoothly with
increasing application demands. As a side benefit, by servicing a large
percentage of the I/O demand of critical servers at the network edge, SSD
caching in servers effectively reduces the demand on storage networks
and arrays. This demand reduction improves storage performance for
other attached servers and can extend the useful life of existing storage
infrastructure.

Drawbacks

While current implementations of server-based SSD caching are very
effective at improving the performance of individual servers (that is,
improving point performance), providing storage acceleration across a
broad range of applications in a storage network is beyond their reach.
However, the ever-increasing pressure on IT organizations to do more
with less dictates that, to be supportable, new configurations must be as
generally applicable as possible.

Several serious drawbacks exist with server-based SSD caching as
currently deployed, including:

• Does not work for today’s most important clustered applications and
environments.

• Creating silos of captive SSD makes reaching a specific performance
level with SSD caching much more expensive.

• Complex layers of driver software increase interoperability risks and
consume server processor and memory resources.

To understand the clustering problem, examine the conditions required
for successful application of current server-based caching solutions,
starting with the idealized read-caching scenario illustrated in Figure 3.

Figure 3. Server Caching Reads from a Shared LUN

Figure 3 shows server-based Flash caching deployed on two servers that
are reading from overlapping regions on a shared LUN. On the initial reads,
the read data is returned to the requestor and also saved in the
server-based cache for that LUN. All subsequent reads of the cached regions
are serviced from the server-based caches, providing a faster response and
reducing the workload on the storage network and arrays.
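
To make the mechanics concrete, the following Python sketch models this
read-caching behavior. The Lun class and its read_block() method are
hypothetical stand-ins for the shared LUN, not part of any QLogic
interface.

    # Minimal sketch of the read caching shown in Figure 3. Lun and
    # read_block() are hypothetical stand-ins, not a QLogic API.

    class Lun:
        """Toy block device standing in for the shared LUN."""
        def __init__(self, blocks=None):
            self.blocks = dict(blocks or {})

        def read_block(self, n):
            return self.blocks.get(n, b"\x00")

    class ServerReadCache:
        """Per-server SSD read cache keyed by block number."""
        def __init__(self, lun):
            self.lun = lun        # backing shared LUN
            self.cache = {}       # block number -> cached data

        def read(self, n):
            if n in self.cache:
                # Hit: serviced locally, no storage-network traffic.
                return self.cache[n]
            # Miss: fetch from the shared LUN and keep a copy so later
            # reads of this region are serviced from the local cache.
            data = self.lun.read_block(n)
            self.cache[n] = data
            return data

    lun = Lun({0: b"rec-A"})
    server1, server2 = ServerReadCache(lun), ServerReadCache(lun)
    server1.read(0)   # initial read: miss, populates Server 1's cache
    server1.read(0)   # subsequent read: hit, served from local SSD
    server2.read(0)   # each server keeps its own independent copy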

This scenario works very well, provided that read-only access can be
guaranteed. However, if either server in Figure 3 executes write I/O to the
shared regions of the LUN, cache coherence is lost and the result is nearly
certain data corruption, as shown in Figure 4.

Figure 4. Server Write to a Shared LUN Destroys Cache Coherency

In this case, one of the servers (Server 2) has written back data to the shared
LUN. However, without a mechanism to support coordination between the
server-based caches, Server 1 continues to read and process now-invalid
data from its own local cache. Furthermore, if Server 1 proceeds to write
processed data back to the shared LUN, data previously written by Server 2
is overwritten with logically corrupt data from Server 1. This corruption
occurs even if both servers are members of a host or application cluster
because, by design, server-based caching is transparent to servers and
applications.
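
The failure mode can be reproduced by extending the earlier sketch with a
write path. This remains an illustration under the same hypothetical Lun
abstraction; the point is that a write-through update reaches the shared
LUN and the writer's own cache, while nothing invalidates the other
server's copy.

    # Sketch of the Figure 4 failure mode: write-through updates the
    # shared LUN and the writer's own cache, but no cross-server
    # invalidation occurs. Lun and ServerReadCache are the same
    # hypothetical stand-ins used in the previous sketch.

    class Lun:
        def __init__(self, blocks=None):
            self.blocks = dict(blocks or {})

        def read_block(self, n):
            return self.blocks.get(n, b"\x00")

        def write_block(self, n, data):
            self.blocks[n] = data

    class ServerReadCache:
        def __init__(self, lun):
            self.lun, self.cache = lun, {}

        def read(self, n):
            if n not in self.cache:
                self.cache[n] = self.lun.read_block(n)
            return self.cache[n]

        def write(self, n, data):
            # Write-through: synchronous write to the shared LUN plus
            # a local cache update. Other servers are never notified.
            self.cache[n] = data
            self.lun.write_block(n, data)

    lun = Lun({0: b"v1"})
    server1, server2 = ServerReadCache(lun), ServerReadCache(lun)

    server1.read(0)                 # both servers cache block 0 (b"v1")
    server2.read(0)
    server2.write(0, b"v2")         # Server 2 writes back new data
    stale = server1.read(0)         # Server 1 still reads b"v1" locally
    server1.write(0, stale + b"*")  # ...and overwrites Server 2's b"v2"
    assert lun.read_block(0) == b"v1*"   # Server 2's update is lost

Because the caches sit below the layer that a host or application cluster
coordinates, cluster membership alone cannot prevent the stale read or
the lost update.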

The scenario illustrated in Figure 4 assumes a write-through cache strategy
where data is synchronously written to the shared storage subsystem. an