White Paper – QLogic 10000 Series: Realize Significant Performance Gains in Oracle RAC with QLogic FabricCache


The benefit of this shared cache becomes apparent when full table scans occur (Oracle does not cache these reads in the block buffer). Any repeated full table scan on any of the nodes may benefit from the information cached in the 10000 Series adapters in the cluster.
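
The shared-cache effect can be pictured with a toy model. The sketch below is purely illustrative, and its block counts, latencies, and server/LUN names are hypothetical rather than the adapter's actual behavior: a full table scan on one node warms a cache that every node can see, so a repeated scan on another node is served from the cache instead of the array.

```python
# Illustrative model only: a toy shared read cache keyed by (lun, block),
# standing in for the cache presented by the clustered 10000 Series adapters.
# Latencies and names below are hypothetical.

ARRAY_READ_MS = 8.0    # assumed latency for a read served by the storage array
CACHE_READ_MS = 0.5    # assumed latency for a read served by the shared cache

shared_cache = {}      # (lun, block_number) -> data, visible to all nodes

def read_block(node, lun, block_number):
    """Return (data, latency_ms); repeated reads hit the shared cache."""
    key = (lun, block_number)
    if key in shared_cache:
        return shared_cache[key], CACHE_READ_MS
    data = f"data:{lun}:{block_number}"   # stand-in for a read from the array
    shared_cache[key] = data              # warms the cache for every node
    return data, ARRAY_READ_MS

def full_table_scan(node, lun, blocks):
    """Total read latency for a scan of the given number of blocks."""
    return sum(read_block(node, lun, b)[1] for b in range(blocks))

# The first scan on OraServer1 warms the cache; the repeat on OraServer2 is
# served from cache even though Oracle's block buffer never held these blocks.
print(full_table_scan("OraServer1", "Data01", 1000))  # 8000.0 ms from the array
print(full_table_scan("OraServer2", "Data01", 1000))  # 500.0 ms from the cache
```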

NOTE:

Because the I/O behavior is in-band, it is not visible to the database or even to the driver. Also, if a node is “evicted” (that is, ejected from the cluster by Oracle’s I/O fencing, causing a reboot) and its 10000 Series adapter is not available, the other nodes immediately and directly access the LUN by means of the storage array. Although the performance gain from the cache is not realized, the application continues to perform without interruption. In Figure 4, if OraServer3 is evicted from the cluster, the other nodes directly access Data03 until OraServer3 has rebooted and its 10000 Series adapter is active on the SAN. The 10000 Series adapter then resumes managing the I/O for Data03 and rewarming the cache.
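
The failover path in this note can be summarized in a small, purely illustrative model; the class, names, and messages below are hypothetical and are not the adapter's firmware logic. While the owning adapter is online, reads take the cache-managed path; while that node is evicted, the remaining nodes read the LUN directly from the array without interruption.

```python
# Hedged sketch of the eviction scenario described in the note above.
# "Cache-managed" vs. "direct" is modeled as a simple flag, nothing more.

class LunPath:
    def __init__(self, lun, owning_adapter):
        self.lun = lun
        self.owner = owning_adapter   # e.g., the adapter in OraServer3
        self.owner_online = True

    def read(self, block):
        if self.owner_online:
            return f"{self.lun}:{block} via {self.owner} (cache-managed)"
        # Owner evicted: fall back to a direct, uncached path to the array.
        return f"{self.lun}:{block} direct from storage array (no cache gain)"

path = LunPath("Data03", "adapter-OraServer3")
print(path.read(42))            # cache-managed path
path.owner_online = False       # OraServer3 evicted and rebooting
print(path.read(42))            # other nodes keep running, just without caching
path.owner_online = True        # adapter back on the SAN; the cache rewarms
print(path.read(42))
```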

Testing the QLogic 10000 Series Fibre Channel Adapter

QLogic set up a test environment based on a four-node RAC cluster, with four 10000 Series adapters clustered together. The load generator, SwingBench, was configured on four nodes to provide a significant load on the database. The SwingBench nodes were coordinated as one load-generation tool, and best practices were followed to obtain the best performance. The results of the cached versus non-cached runs reflect an improvement of approximately 40 to 70 percent.

Performance Testing: SwingBench Results

QLogic used SwingBench, a database load generator from Oracle that is designed to provide load to an Oracle RAC database, to demonstrate the significant performance gains. The largest gains occur when running large, complex queries:

The sales history set, online analytical processing (OLAP), achieved approximately 3.25× more transactions in a one-hour run when 10000 Series adapter caching was enabled with a maximum block size of one megabyte, versus when the cache was not enabled. The average response times were about 75 percent faster.

The order entry set, online transaction processing (OLTP), completed approximately 95 percent more transactions in the one-hour run with 10000 Series caching enabled at the default maximum size compared to cache not enabled. The average response time was also about 45 percent faster. (A worked example of these ratios follows.)
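
To make the quoted ratios concrete, the worked example below shows how figures such as “3.25× more transactions” or “45 percent faster response time” are derived. The transaction counts and response times used here are hypothetical placeholders, not QLogic's measured data.

```python
# Worked example with hypothetical inputs illustrating the arithmetic behind
# the throughput multipliers and response-time improvements quoted above.

def throughput_gain(cached_txns, uncached_txns):
    """Ratio of transactions completed in the same one-hour window."""
    return cached_txns / uncached_txns

def response_time_improvement(cached_ms, uncached_ms):
    """Fractional reduction in average response time."""
    return 1.0 - (cached_ms / uncached_ms)

# OLAP-style run: ~3.25x the transactions, ~75 percent faster responses
print(throughput_gain(32_500, 10_000))            # 3.25
print(response_time_improvement(250.0, 1000.0))   # 0.75

# OLTP-style run: ~95 percent more transactions, ~45 percent faster responses
print(throughput_gain(195_000, 100_000) - 1.0)    # 0.95
print(response_time_improvement(55.0, 100.0))     # 0.45
```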

Your results will vary depending on machine load, query composition, and Oracle RAC configuration (number of nodes). SwingBench is designed to stress the entire database environment, and this stress testing is not focused specifically on storage access. The largest gains are observed where storage access is heaviest, as in sales history reporting.

Figure 5. Performance Gains with Cache Enabled over Cache Disabled

Figure 6. Response Time Improvement with Cache Enabled over Cache Disabled

Test Scenarios

QLogic set up two test scenarios with the following workload types:

Decision support system (DSS) and OLAP workload. This workload depends on reading and analyzing large amounts of data from the database. Analysis shows a marked benefit from the caching that is shared between all the nodes in the Oracle RAC cluster. The OLAP I/O pattern reads large blocks of data for the queries, and the LUN cache was enabled to accept up to a 1MB block.

OLTP workload. This workload involves smaller amounts of data, resulting in more targeted reads and showing less caching benefit. The OLTP I/O pattern reads and writes 8KB blocks of data, and the LUN cache was enabled at the default size of 128KB. (The block-size settings are sketched below.)
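
A minimal sketch of those block-size settings follows, assuming a simple rule in which the cache admits an I/O only if it is no larger than the LUN's configured maximum. The constants mirror the test setup described above, but the admit logic itself is illustrative rather than the adapter's actual caching policy.

```python
# Hedged sketch: per-LUN maximum cacheable I/O size, as used in the two
# test scenarios. The LUN names and the cacheable() rule are hypothetical.

KIB = 1024
MIB = 1024 * KIB

LUN_CACHE_MAX = {
    "DSS/OLAP LUN": 1 * MIB,     # raised to 1MB for large scan reads
    "OLTP LUN": 128 * KIB,       # left at the 128KB default
}

def cacheable(lun, io_size_bytes):
    """Assume an I/O is cached only if it fits under the LUN's maximum."""
    return io_size_bytes <= LUN_CACHE_MAX[lun]

print(cacheable("DSS/OLAP LUN", 1 * MIB))   # True: 1MB scan read is cached
print(cacheable("OLTP LUN", 8 * KIB))       # True: 8KB OLTP read is cached
print(cacheable("OLTP LUN", 1 * MIB))       # False: oversized I/O not cached
```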

Each test scenario was run with the following cache patterns:

Cache not enabled on the 10000 Series adapter

Cache enabled for all LUNs and distributed to different 10000 Series adapters

The cache patterns support the best-practice definition of matching the LUNs in a disk group across the nodes in the RAC cluster; that is, the number of LUNs supporting an ASM disk group is a multiple of the number of nodes in the cluster.
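
As a rough illustration of that best practice, the sketch below checks that the LUN count is a multiple of the node count and spreads the LUNs evenly across the nodes. The round-robin assignment and the server/LUN names are hypothetical; the point is only the "multiple of the nodes" relationship.

```python
# Illustrative check of the LUN-to-node best practice described above.

def distribute_luns(luns, nodes):
    """Assign LUNs to nodes round-robin, requiring an even multiple."""
    if len(luns) % len(nodes) != 0:
        raise ValueError("LUN count should be a multiple of the node count")
    return {lun: nodes[i % len(nodes)] for i, lun in enumerate(luns)}

nodes = ["OraServer1", "OraServer2", "OraServer3", "OraServer4"]
luns = [f"Data{n:02d}" for n in range(1, 9)]    # 8 LUNs across 4 nodes
for lun, node in distribute_luns(luns, nodes).items():
    print(lun, "->", node)
```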
