White Paper – QLogic 10000 Series: Accelerating Microsoft SQL Server Beyond Large Server Memory


DRAM CACHING

Like most database applications, SQL Server was designed at a time when
server main memory (DRAM) was very expensive. DRAM pricing has since
dropped, especially over the last few years. With tremendous advances in
multi-core server computing and an ever-widening gap between CPU and
storage I/O performance, a case can be made that the time is right to
cache databases in server DRAM. But while the low latency of DRAM caching
can deliver significant database performance gains, the approach has its
limitations. The drawbacks of DRAM caching include the following:

A DRAM cache is “captive” to the individual server that houses it, so it
cannot be shared with clustered servers. While a DRAM cache can be very
effective at improving the performance of an individual server, it cannot
provide storage acceleration across clustered server environments or
virtualized infrastructures that span multiple physical servers. This
limits the performance benefits of DRAM caching to a relatively small set
of single-server SQL Server deployments.

A DRAM cache becomes cost-prohibitive for larger databases, where
specialized servers or ultra-high-density DRAM modules must be employed.
Tier-1 server vendors typically offer servers with a base configuration of
12 DRAM DIMM sockets and charge a premium for 24-socket servers, and DRAM
modules carry an increasing price per gigabyte (GB) as bit density
increases.

Best-practice recommendations from Microsoft call for local swap and
crash-recovery dump files equal to 2.5 × the physical DRAM capacity (a
sizing sketch follows this list). At large DRAM capacities, the swap and
dump files become so large that they create an enormous amount of disk
fragmentation, and repairing that damage can take additional free space
and many hours of defragmentation while maintenance windows are already
tight. More DRAM leads to larger swap and dump files, which lead to longer
maintenance cycles. The swap and dump files also create additional virus
scanning overhead, incurring more contention and latency.

Using DRAM for cache is an all-or-nothing proposition. A DRAM cache is
owned and managed by the application, and every transaction competes for
cache resources under the processing domain of the server CPU. If the CPU
spends significant cycles running caching algorithms (the cache-management
sketch after this list makes this concrete), it can degrade other CPU
tasks. With today’s multi-core processors the impact may be minimal, but
on older CPUs it may affect the performance of other server applications
or services, or limit the server’s ability to scale virtual machines (VMs).

Most enterprise SQL Server hosts also need access to networked storage to
service requests for data not held in the DRAM cache, known as a “cache
miss.” To handle cache misses, an additional Fibre Channel adapter or
similar storage-network host bus adapter is required in the server, and it
must be managed separately (the effective-latency sketch after this list
shows how even a small miss rate dominates average I/O time).
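
To put the swap-and-dump guideline above in concrete terms, the short
Python sketch below computes the local disk space implied by the 2.5 × DRAM
figure cited in this paper. The DRAM capacities in the loop are assumed
values chosen only for illustration, not measurements from any particular
server.

    # Sizing sketch for the guideline cited above: local swap and crash-dump
    # space of roughly 2.5 x physical DRAM. DRAM capacities are assumed.

    SWAP_AND_DUMP_MULTIPLIER = 2.5  # from the best-practice guideline in the text

    def local_overhead_gb(dram_gb: float) -> float:
        """Local swap + crash-dump space (GB) implied by the guideline."""
        return SWAP_AND_DUMP_MULTIPLIER * dram_gb

    for dram_gb in (128, 384, 768):  # hypothetical server DRAM sizes
        print(f"{dram_gb:>4} GB DRAM -> ~{local_overhead_gb(dram_gb):,.0f} GB "
              f"of local swap/dump files to provision, scan, and defragment")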
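
To make the point about CPU cycles spent on caching algorithms concrete,
here is a minimal application-managed cache using a least-recently-used
(LRU) eviction policy. This is a generic illustration, not SQL Server’s
buffer pool or any vendor’s implementation; the point is simply that every
lookup, insertion, and eviction below runs on the host CPU, in competition
with the application’s own work.

    from collections import OrderedDict

    class LRUCache:
        """Minimal application-managed LRU cache (illustration only).

        Every get/put consumes host CPU cycles for bookkeeping and eviction,
        which is the overhead described in the text."""

        def __init__(self, capacity: int) -> None:
            self.capacity = capacity
            self._pages = OrderedDict()  # key -> cached page/value

        def get(self, key):
            if key not in self._pages:
                return None               # cache miss: caller must go to storage
            self._pages.move_to_end(key)  # bookkeeping work on the host CPU
            return self._pages[key]

        def put(self, key, value) -> None:
            self._pages[key] = value
            self._pages.move_to_end(key)
            if len(self._pages) > self.capacity:
                self._pages.popitem(last=False)  # evict least recently used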
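
The cost of a cache miss can be seen with the standard weighted-average
latency formula: effective latency = hit rate × cache latency + miss rate ×
backing-store latency. The sketch below applies that formula with assumed,
order-of-magnitude latency values; they are placeholders for illustration,
not measurements of any specific DRAM, flash, or SAN device.

    # Effective I/O latency of a cached workload (weighted average):
    #   effective = hit_rate * cache_latency + (1 - hit_rate) * backing_latency
    # Latency values are assumed placeholders for illustration only.

    CACHE_US = 1.0    # assumed DRAM-cache access time, microseconds
    SAN_US = 5000.0   # assumed disk-backed SAN read time, microseconds

    def effective_latency_us(hit_rate: float) -> float:
        return hit_rate * CACHE_US + (1.0 - hit_rate) * SAN_US

    for hit_rate in (0.90, 0.99):
        print(f"hit rate {hit_rate:.0%}: "
              f"~{effective_latency_us(hit_rate):,.1f} us average read")
    # Even at 99% hits, the 1% of misses dominates the average, which is why
    # every miss still needs a fast, separately managed path to the SAN.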

CACHING BENEFITS WITHOUT LIMITATIONS

The QLogic FabricCache 10000 Series Adapter is a new approach to
server-based caching, designed to address the drawbacks of DRAM-based and
other types of server-based caching. Rather than creating a discrete,
captive cache for each server, the QLogic FabricCache 10000 Series Adapter
integrates Host Bus Adapter functionality with flash-based caching. Its
caching implementation uses the existing SAN infrastructure to create a
shared cache resource distributed over multiple physical servers, as shown
in the graphic below, “Shared Cache within Application Clusters.” This
capability eliminates the limitations of single-server caching and enables
the performance benefits of flash-based acceleration for Microsoft SQL
Server environments. (A conceptual sketch contrasting captive and shared
caches follows the figure.)

[Figure: Shared Cache within Application Clusters – four application
cluster nodes, each owning its cached LUNs, share the SAN LUNs Data01
through Data04 over the FC SAN.]
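
As a purely conceptual illustration of the difference the figure depicts
(not a description of the FabricCache adapter’s internal design), the
sketch below contrasts captive per-node caches with a single cache pool
shared by all cluster nodes, keyed by LUN and block address. All names in
it are hypothetical.

    # Conceptual contrast only: captive per-node caches vs. a shared cache pool.
    # Keys are (lun, block) pairs; values stand in for cached data blocks.

    class CaptiveCaches:
        """One private cache per node: a block cached on node 1 is invisible
        to node 2, so each node warms (and pays for) its own copy."""
        def __init__(self, nodes):
            self.caches = {node: {} for node in nodes}

        def read(self, node, lun, block):
            return self.caches[node].get((lun, block))  # hit only if THIS node cached it

        def fill(self, node, lun, block, data):
            self.caches[node][(lun, block)] = data

    class SharedCache:
        """One cache pool visible to every node: a block cached by any node
        can be served to all of them."""
        def __init__(self):
            self.pool = {}

        def read(self, node, lun, block):
            # node is kept for interface symmetry; every node sees the same pool
            return self.pool.get((lun, block))          # hit if ANY node cached it

        def fill(self, node, lun, block, data):
            self.pool[(lun, block)] = data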
