Data and Indexes

The tablespaces that support the data and indexes are the largest
tablespaces (they use the most storage) and provide the largest gain from
10000 Series adapter read caching. Distributing the LUN cache
ownership across all of the nodes in the RAC cluster increases the available
FabricCache for caching. When queries perform full table scans (likely
in OLAP or data warehouse applications), Oracle RAC does not cache this
information unless it finds a block containing a needed row. If many users
are performing similar analysis, the 10000 Series adapter cache eventually
contains the needed rows and the analysis becomes much faster.
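
As an illustration of this warm-up effect, the following minimal Python
sketch (purely hypothetical; the cache class, LUN name, and block counts are
invented for illustration and are not QLogic software) shows how the first
full table scan populates a shared read cache so that later scans by other
users become cache hits:

# Minimal sketch (hypothetical, illustrative only): repeated full table
# scans warm a shared block cache, so later scans become cache hits.
class BlockCache:
    """Simplified read cache keyed by (lun, block); stands in for the
    adapter's flash-based read cache. Eviction is omitted for brevity."""
    def __init__(self):
        self.blocks = {}
        self.hits = 0
        self.misses = 0

    def read(self, lun, block):
        key = (lun, block)
        if key in self.blocks:
            self.hits += 1            # served from the cache
        else:
            self.misses += 1          # fetched from the SAN, then cached
            self.blocks[key] = f"data@{key}"
        return self.blocks[key]

cache = BlockCache()

# First user's full table scan: every block misses and is pulled from the SAN.
for block in range(1000):
    cache.read("Data01", block)

# A later user scanning the same table: the blocks are already cached.
for block in range(1000):
    cache.read("Data01", block)

print(f"hits={cache.hits} misses={cache.misses}")   # hits=1000 misses=1000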

Indexes are read by a database query and tend to stay in the block buffer
for the database, where the work is spread over many nodes. Index blocks
are also held in the 10000 Series adapter cache, so subsequent access to the
index from other nodes is very fast.

UNDO and TEMP Tablespaces

Like the online REDO log files, the UNDO and TEMP tablespaces are on a
different LUN set. This LUN set allows the tablespaces to be placed on the
highest performing storage, which enables the consistent read activity and
temporary sorts to perform as fast as possible.

QLogic FabricCache 10000 Series Adapter Deployment

The illustrations in this section show the LUN cache ownership assigned
to each 10000 Series adapter. In the cluster, a
LUN is assigned to one and only one 10000 Series adapter for cache
ownership, while each of the 10000 Series adapters is the cache
owner for one of the LUNs in each of the defined LUN groups: REDO, Data,
and UNDO.

Figure 2 shows the LUN ownership for each 10000 Series adapter.

Figure 2. LUN Cache Ownership for the Four-Server RAC Environment
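
As a rough sketch of this assignment, the following Python snippet builds
such an ownership map for the four-server cluster. The server and LUN names
are assumed from the figures; the code is illustrative only, not a QLogic
tool:

# Illustrative only: build the cache-ownership map for a four-server
# RAC cluster. Server and LUN names are assumed from the figures.
servers = ["OraServer1", "OraServer2", "OraServer3", "OraServer4"]
lun_groups = {
    "REDO": ["Redo01", "Redo02", "Redo03", "Redo04"],
    "Data": ["Data01", "Data02", "Data03", "Data04"],
    "UNDO": ["Undo01", "Undo02", "Undo03", "Undo04"],
}

# Each LUN gets one and only one cache owner, and each adapter ends up
# owning exactly one LUN from every group.
cache_owner = {
    lun: servers[i]
    for luns in lun_groups.values()
    for i, lun in enumerate(luns)
}

print(cache_owner["Data03"])   # OraServer3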

I/O Behavior with FabricCache

While each LUN is owned by one (and only one) 10000 Series adapter, the
LUN is shared with the other 10000 Series adapters in the cluster as a SAN
LUN. LUN sharing ensures that each server has visibility to all the LUNs
being cached. Figure 3 shows the mapping for the data LUN group, where
each data LUN:

Is owned by a single 10000 Series adapter (represented by the larger
disk in the foreground).

Is shared with the other 10000 Series adapters as a SAN LUN
(represented by the smaller disks of the same color in the background
on the other servers).

Figure 3. SAN LUN Cache Mapping Example for Data LUNs
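
The same assumptions can express Figure 3's per-server view: each server
owns the cache for one data LUN and sees the rest as shared SAN LUNs. The
names below are again hypothetical:

# Illustrative sketch (names assumed from Figure 3): every server sees all
# four data LUNs, but each LUN has exactly one cache owner.
servers = ["OraServer1", "OraServer2", "OraServer3", "OraServer4"]
data_luns = ["Data01", "Data02", "Data03", "Data04"]
cache_owner = {lun: srv for lun, srv in zip(data_luns, servers)}

def view_from(server):
    """How one server sees the data LUN group: cache owner of one LUN
    (the larger disk in the foreground), shared SAN LUN for the rest."""
    return {lun: "cache owner" if cache_owner[lun] == server else "shared SAN LUN"
            for lun in data_luns}

print(view_from("OraServer1"))
# {'Data01': 'cache owner', 'Data02': 'shared SAN LUN', ...}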

Figure 4 shows the I/O behavior for one set of data LUNs. The
behavior is the same for all LUNs that have cache enabled.

Figure 4. LUN Cache Operation Example for Server 1 Data LUNs

Figure 4 depicts the following caching operation:

OraServer1 requests data from the SAN. If the data is on the Data01
LUN, the local cache is accessed. A cache miss results in access to
the SAN.

Data on Data02 (or Data03 or Data04) is accessed in-band, with
OraServer1’s 10000 Series adapter requesting the data from the remote
10000 Series adapter that is the cache owner for that LUN. A cache miss
means that the remote 10000 Series adapter with cache ownership for
that LUN accesses the SAN for the data before returning the requested
data to OraServer1.

In this clustered, shared-cache environment, all data access is managed
by the 10000 Series adapter that has cache ownership for that LUN.
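
The routing rule behind this flow fits in a few lines. The sketch below is
a simplified model under the same assumed names as the earlier snippets (it
is not QLogic's implementation): every read is forwarded to the adapter
that owns the cache for the target LUN, and only that owner touches the SAN
on a miss:

# Simplified model of the read path (hypothetical names; illustrative only).
servers = ["OraServer1", "OraServer2", "OraServer3", "OraServer4"]
data_luns = ["Data01", "Data02", "Data03", "Data04"]
cache_owner = {lun: srv for lun, srv in zip(data_luns, servers)}
caches = {srv: {} for srv in servers}      # one read cache per adapter

def read_block(server, lun, block):
    """Route a read to the LUN's cache owner; only the owner touches the SAN."""
    owner = cache_owner[lun]               # one and only one owner per LUN
    cache = caches[owner]                  # local if owner == server, otherwise
                                           # reached in-band via the remote
                                           # 10000 Series adapter
    key = (lun, block)
    if key not in cache:                   # cache miss: the owning adapter
        cache[key] = f"SAN:{key}"          # fetches from the SAN and caches it
    return cache[key]                      # cache hit: no SAN access needed

read_block("OraServer1", "Data01", 7)      # local cache owner (OraServer1)
read_block("OraServer1", "Data02", 7)      # remote owner (OraServer2), in-band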
