IBM z/OS User Manual


of Sysplex Timers in an Expanded Availability configuration remains at 40 km (25 miles). Therefore, to achieve the extended distance of up to 100 km between sites, one of the options to be considered is locating one of the Sysplex Timers in an intermediary site that is less than 40 km from one of the two sites. Other potential options can be evaluated when the RPQ request is submitted to IBM for review.
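
As a rough, hypothetical illustration of the distance arithmetic above (not an IBM configuration tool), the following Python sketch checks a three-site layout against the two limits just mentioned: the two Sysplex Timers no more than 40 km apart, and the two production sites no more than 100 km apart. The function name and the example distances are invented for the illustration.

```python
# Hypothetical sketch of the Sysplex Timer distance constraints described
# above; not an IBM-supplied tool. The limits are taken from the text.

TIMER_TO_TIMER_LIMIT_KM = 40   # Expanded Availability Sysplex Timer limit
SITE_TO_SITE_LIMIT_KM = 100    # extended site-to-site distance (via RPQ)

def layout_is_feasible(site_to_site_km: float, timer_to_timer_km: float) -> bool:
    """Return True if both distance limits are respected."""
    return (site_to_site_km <= SITE_TO_SITE_LIMIT_KM
            and timer_to_timer_km <= TIMER_TO_TIMER_LIMIT_KM)

# Example: one Sysplex Timer moved to an intermediary site 35 km from
# site A, while sites A and B are 95 km apart.
print(layout_is_feasible(site_to_site_km=95, timer_to_timer_km=35))   # True
print(layout_is_feasible(site_to_site_km=95, timer_to_timer_km=60))   # False
```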

Coordinated near continuous availability and disaster recovery for Linux guests: z/VM 5.1 provides a new HyperSwap function so that the virtual device associated with one real disk can be swapped transparently to another. HyperSwap can be used to switch to secondary disk storage subsystems mirrored by Peer-to-Peer Remote Copy (PPRC). HyperSwap can also be helpful in data migration scenarios, allowing applications to use new disk volumes.
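
Conceptually, a HyperSwap redirects a guest's virtual device from the primary real volume to its PPRC-mirrored secondary while the guest keeps using the same virtual device number. The sketch below models only that remapping idea; the class, method, and device names are invented for illustration and do not represent z/VM's CP interfaces or the actual HyperSwap implementation.

```python
# Conceptual model of a HyperSwap-style remapping; names are invented
# and do not reflect z/VM internals.
from dataclasses import dataclass

@dataclass
class MirroredVolume:
    primary: str     # real device currently backing the virtual device
    secondary: str   # PPRC-mirrored copy of that device

class VirtualDeviceMap:
    def __init__(self) -> None:
        # virtual device number -> mirrored real volume pair
        self.mapping: dict[str, MirroredVolume] = {}

    def attach(self, vdev: str, volume: MirroredVolume) -> None:
        self.mapping[vdev] = volume

    def current_target(self, vdev: str) -> str:
        return self.mapping[vdev].primary

    def hyperswap(self) -> None:
        """Swap every virtual device to its mirrored secondary volume.
        The virtual device numbers are unchanged, so the switch is
        transparent from the guest's point of view."""
        for volume in self.mapping.values():
            volume.primary, volume.secondary = volume.secondary, volume.primary

# Example: virtual device 0200 backed by real device 6000, mirrored to 7000.
devmap = VirtualDeviceMap()
devmap.attach("0200", MirroredVolume(primary="6000", secondary="7000"))
devmap.hyperswap()
print(devmap.current_target("0200"))   # "7000"
```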

GDPS plans to exploit the new z/VM HyperSwap function to provide a coordinated near continuous availability and disaster recovery solution for z/OS and Linux guests running under z/VM. This innovative disaster recovery solution requires GDPS, IBM Tivoli System Automation for Linux, Linux on zSeries, and z/VM V5.1, and is designed to help anticipate and rapidly respond to business objectives and technical requirements while maintaining unsurpassed system availability. This solution may be especially valuable for customers who share data and storage subsystems between z/OS and Linux on zSeries.

To support planned and unplanned outages, GDPS is designed to provide the following recovery actions:

• Re-IPL in place of failing operating system images
• Site takeover/failover of a complete production site
• Coordinated planned and unplanned HyperSwap of storage subsystems, transparent to the operating system images and applications using the storage

Performance enhancements for GDPS/PPRC and GDPS/XRC configurations

• Concurrent activation of Capacity BackUp (CBU) can now be performed in parallel across multiple servers, which may result in an improved recovery time objective (RTO). This improvement may apply to both the GDPS/PPRC and GDPS/XRC configurations.

• In a GDPS/XRC configuration, it is often necessary to have multiple System Data Movers (SDMs). The number of SDMs is based on many factors, such as the number of volumes being copied and the I/O rate. Functions can now be executed in parallel across multiple SDMs, helping to provide improved scalability for a coupled SDM configuration.

• Analysis has shown that PPRC commands issued by GDPS can generate a large number of Write To Operator (WTO) messages, which may cause WTO buffer shortages and temporarily degrade system performance. The Message Flooding Automation function is expected to substantially reduce WTO message traffic and improve system performance by suppressing redundant WTOs; a generic sketch of such suppression follows this list.
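
The suppression idea itself is generic: drop a WTO whose text merely repeats one that was just issued, so WTO buffers are not flooded. The sketch below shows one simple way such redundant-message suppression could work; it is an invented illustration (including the example message text and the time window), not the actual GDPS Message Flooding Automation logic.

```python
# Generic illustration of redundant-message suppression; not the GDPS
# Message Flooding Automation implementation.
import time

class MessageSuppressor:
    def __init__(self, window_seconds: float = 5.0) -> None:
        self.window = window_seconds
        self.last_issued: dict[str, float] = {}   # message text -> last issue time

    def should_issue(self, text: str, now: float | None = None) -> bool:
        """Return True if the message should be written to the operator,
        False if it repeats a message issued within the suppression window."""
        now = time.monotonic() if now is None else now
        last = self.last_issued.get(text)
        if last is not None and (now - last) <= self.window:
            return False
        self.last_issued[text] = now
        return True

# Example (message text invented for illustration):
suppressor = MessageSuppressor(window_seconds=5.0)
print(suppressor.should_issue("PPRC VOLUME PAIR SUSPENDED", now=0.0))    # True
print(suppressor.should_issue("PPRC VOLUME PAIR SUSPENDED", now=1.0))    # False, suppressed
print(suppressor.should_issue("PPRC VOLUME PAIR SUSPENDED", now=10.0))   # True, window elapsed
```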

Performance enhancements for GDPS/PPRC and GDPS/XRC became generally available in March 2003. These GDPS enhancements are applicable to the z800, z900, z890, and z990. For a complete list of other supported hardware platforms and software prerequisites, refer to the GDPS executive summary white paper, available at: ibm.com/server/eserver/zseries/pso
