White Paper – QLogic 10000 Series: Realize Significant Performance Gains in Oracle RAC with QLogic FabricCache


SN0451405-00 rev. B 07/13


caching Fibre Channel Host Bus Adapters. In an Oracle RAC environment,
Host Bus Adapters can be replaced using a process that is non-destructive
to application operation. The replacement process includes the following
general steps, which must be performed on each server (node) in the Oracle
RAC cluster:

1. Shut down the node.

2. Place the new 10000 Series Adapter in the node, replacing the existing
Host Bus Adapter.

3. Add the 10000 Series Adapter to the fabric zone, enabling it to see
the storage.

4. Add the 10000 Series Adapter to a fabric zone used for clustering the
10000 Series Adapters.

5. Bring the node back online.

After all the 10000 Series Adapters are in place in the nodes, configure the
adapters to enable LUN caching.
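The rolling sequence above can be sketched as a small simulation. Node names, state fields, and zone names below are illustrative only, not QLogic or Oracle tooling; the point is that at most one node is ever offline, so the database keeps running throughout the replacement.

```python
# Hypothetical sketch of the rolling, non-destructive HBA replacement:
# each step below corresponds to one numbered step in the procedure.

def rolling_replace(nodes):
    state = {n: {"online": True, "adapter": "legacy-hba", "zones": set()}
             for n in nodes}
    max_offline = 0
    for n in nodes:
        state[n]["online"] = False                   # 1. Shut down the node.
        max_offline = max(max_offline,
                          sum(not s["online"] for s in state.values()))
        state[n]["adapter"] = "10000-series"         # 2. Install the new adapter.
        state[n]["zones"].add("storage-zone")        # 3. Zone it to the storage.
        state[n]["zones"].add("cache-cluster-zone")  # 4. Zone it for adapter clustering.
        state[n]["online"] = True                    # 5. Bring the node back online.
    return state, max_offline

state, max_offline = rolling_replace(["node1", "node2", "node3", "node4"])
assert max_offline == 1   # the other three nodes keep serving the database
assert all(s["adapter"] == "10000-series" for s in state.values())
```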

System Architecture and Requirements

At a high level, the Oracle RAC database comprises four nodes that are
connected to the SAN with QLogic 10000 Series Fibre Channel Adapters.
Both the database public network and the private interconnect are Gigabit
Ethernet (GbE).

The load-generation application simulates a business workload and is LAN-
connected to the RAC database.

Hardware requirements include the following:

Four servers: Intel 64-bit, 24 CPUs, 16GB of memory (one QLogic 10000
Series Fibre Channel Adapter in each server in the cluster)

SAN:

Storage array

Fabric switch with support for 8Gb Fibre Channel

Ethernet switch

Oracle RAC Best Practices

QLogic suggests following Oracle’s best practice recommendations for
RAC, which include:

Creating four LUNs for each disk group.

Using the Oracle ASM feature to balance the I/O load for a disk group
over the LUNs in that disk group.

Using the Oracle Flash Recovery Area (FRA) to hold the backups.
However, because the QLogic examples used in this document do not
use or create an FRA, the FRA does not require caching.

QLogic FabricCache 10000 Series Adapter Best Practices

In the Oracle RAC database, multiple nodes (servers) are actively reading
and changing data on shared LUNs. The database engine manages this
with a global lock manager.

The 10000 Series Adapter communicates “in-band” through the Fibre
Channel fabric to coordinate the cache activity between the 10000 Series
Adapters. For these adapters to function effectively, each card must have
visibility to the other cards in the cluster. Visibility is accomplished by
creating a fabric zone that includes all of the 10000 Series Adapters.
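The visibility requirement can be modeled directly: every adapter must share at least one zone with every other adapter in the caching cluster. The WWPNs and zone names below are invented for illustration; real zoning is done on the fabric switch.

```python
# Sketch of the zoning rule: per-node storage zones alone are not enough;
# one zone containing all the 10000 Series Adapters provides mutual visibility.

def mutually_visible(zones, adapters):
    """True if every pair of adapters appears together in at least one zone."""
    return all(any(a in z and b in z for z in zones.values())
               for a in adapters for b in adapters if a != b)

adapters = ["wwpn-a", "wwpn-b", "wwpn-c", "wwpn-d"]

# Storage zones give each adapter a path to the array, but not to each other:
zones = {"storage1": {"wwpn-a", "array-port"},
         "storage2": {"wwpn-b", "array-port"}}
assert not mutually_visible(zones, adapters)

# Adding one zone that contains all four adapters satisfies the requirement:
zones["cache-cluster"] = set(adapters)
assert mutually_visible(zones, adapters)
```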

Because the 10000 Series Adapter-enabled cluster has multiple nodes
sharing the same LUNs, this shared-cache environment can improve
the throughput of the application by distributing the cache across all of
the nodes. The distribution works well in a RAC environment because
the Oracle RAC database groups multiple LUNs into a single storage
pool called a disk group (or LUN set). Oracle ASM distributes the I/O
across all of the disks (LUNs) in the disk group, of which there can be an
unlimited quantity.
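The balancing effect of ASM-style distribution can be sketched with a simple round-robin placement model. This is a conceptual illustration only, not Oracle's actual allocation algorithm: with four LUNs per disk group, each LUN ends up carrying a near-equal share of the I/O.

```python
# Conceptual sketch: allocation units placed round-robin across the LUNs
# of a disk group spread the I/O load evenly over the group.

from collections import Counter

def stripe(extents, luns):
    """Assign each extent to a LUN round-robin."""
    return {e: luns[i % len(luns)] for i, e in enumerate(extents)}

luns = ["lun1", "lun2", "lun3", "lun4"]   # four LUNs per disk group (best practice)
placement = stripe(range(1000), luns)
load = Counter(placement.values())
assert all(load[l] == 250 for l in luns)  # I/O balanced across the disk group
```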

The example used in this document includes four configured disk groups.
One of the disk groups (with four LUNs) is designated for cluster management;
this disk group is not cached because of its low I/O demand. The other
three disk groups have caching enabled in the best-practice environment.
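The example configuration can be expressed as a simple cache policy: cache every disk group except the low-I/O cluster-management group. The disk group names below are hypothetical placeholders, not names from the QLogic example.

```python
# Illustrative cache policy for the four-disk-group example: the
# cluster-management group (low I/O) stays uncached, the rest are cached.

disk_groups = {
    "GRID": {"luns": 4, "io_demand": "low"},   # cluster management (not cached)
    "DATA": {"luns": 4, "io_demand": "high"},
    "REDO": {"luns": 4, "io_demand": "high"},
    "ARCH": {"luns": 4, "io_demand": "high"},
}

cache_enabled = {name: dg["io_demand"] != "low"
                 for name, dg in disk_groups.items()}
assert cache_enabled == {"GRID": False, "DATA": True,
                         "REDO": True, "ARCH": True}
```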

Figure 1 shows the mapping required by each of the four servers for the LUNs
to support the database. Note that each node in the cluster is presented
with the same set of LUNs.

Figure 1. Mapping Required to Support the Database

Online REDO Logs

The online REDO log files are in their own LUN set, which allows these
high-write-rate files to be placed on the highest performing storage.
Caching these LUNs permits the archive activity to read from the
cache rather than access the SAN storage.
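The benefit described above can be sketched with a simplified write-through cache model (an assumption for illustration, not the adapter's actual cache design): once the redo writes have populated the cache, the archiver's subsequent reads are served from cache and never touch the SAN.

```python
# Simplified write-through cache model: redo writes land in both cache and
# SAN; archive reads of recently written blocks hit the cache every time.

class CachedLun:
    def __init__(self):
        self.cache, self.san = {}, {}
        self.san_reads = 0

    def write(self, block, data):
        self.cache[block] = data     # write-through: populate the cache...
        self.san[block] = data       # ...and the backing SAN storage

    def read(self, block):
        if block in self.cache:      # cache hit: no SAN access needed
            return self.cache[block]
        self.san_reads += 1          # cache miss: read from SAN storage
        return self.san[block]

redo = CachedLun()
for blk in range(100):               # log writer fills the online redo blocks
    redo.write(blk, b"redo")
for blk in range(100):               # archiver re-reads those same blocks
    redo.read(blk)
assert redo.san_reads == 0           # all archive reads came from the cache
```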
