
IBM System Storage DS6000 Series: Copy Services with IBM System z

Summary

In summary, the following characteristics are typical of a synchronous data replication technique:

- Application write I/O response time is affected; this can be modeled and predicted.
- Local and remote copies of data are committed to both storage disk subsystems before the host write I/O is complete.
- Data consistency is always maintained at the remote site as long as no failures occur. If a rolling disaster occurs, freeze/run is needed to maintain consistency.
- Bandwidth between both sites has to scale with the peak write I/O rate.
- Data at the remote site is always current.
- No extra means, such as additional journal volumes or a tertiary copy, are required.
- A tier 7 solution is achieved with automation software such as GDPS.
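The synchronous behavior summarized above can be sketched in a few lines. This is a minimal Python model with hypothetical in-memory "volumes" (the `Volume` class and `synchronous_write` function are illustrative names, not the DS6000 implementation); it shows why the host write is not complete until both copies are committed, and why the remote copy is always current:

```python
# Minimal sketch of synchronous replication (hypothetical in-memory volumes).
# The host write does not complete until BOTH copies are committed, so the
# application response time includes the round trip to the remote site.

class Volume:
    def __init__(self):
        self.blocks = {}

    def commit(self, block, data):
        self.blocks[block] = data

def synchronous_write(primary, secondary, block, data):
    primary.commit(block, data)      # 1. commit to the local storage subsystem
    secondary.commit(block, data)    # 2. replicate and wait for the remote commit
    return "write complete"          # 3. only now is the host I/O complete

local, remote = Volume(), Volume()
synchronous_write(local, remote, block=0, data=b"log record")
assert local.blocks == remote.blocks   # remote copy is always current
```

Because step 2 happens before the host sees "write complete", link latency adds directly to every application write, which is the response-time effect noted in the first bullet.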

An asynchronous data replication approach behaves differently. We still require data consistency at the distant site so that, when the local site becomes unavailable, operations can simply be restarted at the distant site.

22.1.2 Asynchronous data replication and dependent writes

In normal operations, for asynchronous data replication, data consistency for dependent writes is preserved depending on the technique used to replicate the data. Dependent writes and data consistency are explained in detail in Section 13.3, “Consistency Group function” on page 136. For example, Global Copy, which is explained in Part 5, “Global Copy” on page 199, and Global Mirror, which we explain now, use different techniques.
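The dependent-writes problem can be illustrated with a small sketch. This is a simplified queue-based consistency-group model (the names `Secondary`, `host_write`, and `form_and_send_consistency_group` are hypothetical, and this is not the actual Global Mirror mechanism): writes are collected in order and the secondary applies a whole group atomically, so it never shows a later write without the earlier write it depends on:

```python
# Sketch of preserving dependent-write order with consistency groups
# (hypothetical structures, not the DS6000 implementation).
from collections import deque

class Secondary:
    def __init__(self):
        self.applied = []

    def apply_group(self, group):
        # apply the group all-or-nothing, in the original write order
        self.applied.extend(group)

pending = deque()

def host_write(record):
    pending.append(record)           # order of dependent writes is kept

def form_and_send_consistency_group(secondary):
    group = list(pending)            # freeze a consistent set of writes
    pending.clear()
    secondary.apply_group(group)     # secondary advances atomically

sec = Secondary()
host_write("log: update X")          # dependent write 1
host_write("data: X = 42")           # dependent write 2 (depends on the log)
form_and_send_consistency_group(sec)
assert sec.applied.index("log: update X") < sec.applied.index("data: X = 42")
```

A database is the classic case: the log record must reach the remote copy no later than the data update it describes, or the remote copy cannot be used for a clean restart.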

An asynchronous remote copy approach is usually required when the distance between the
primary site and the secondary site is beyond an efficient distance for a synchronous remote
copy solution. Metro Mirror provides an efficient synchronous approach for up to 300 km,
when utilizing Fibre Channel links.

Figure 22-4 Asynchronous data replication

In an asynchronous data replication environment, an application write I/O goes through the following steps; see Figure 22-4:

1. Write application data to the primary storage disk subsystem cache.
2. Signal write I/O complete back to the server host; the application write is complete at this point.
3. Replicate the data to the secondary storage disk subsystem.
4. The secondary storage disk subsystem acknowledges the replicated data to the primary.

[Figure 22-4 shows the server host writing to Storage Disk Subsystem 1 (primary volumes A1, B1), which replicates asynchronously to the secondary volumes on Storage Disk Subsystem 2.]
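The asynchronous write path can be sketched as follows. This is a minimal queue-based model (the names `host_write` and `drain_replication_queue` are hypothetical, and a real subsystem replicates continuously rather than on demand); it shows that the host acknowledgment does not wait for the remote site, so the remote copy can lag:

```python
# Sketch of the asynchronous write path (hypothetical queue-based model):
# the host gets "I/O complete" as soon as the primary cache holds the data;
# replication to the secondary happens later, so the remote copy can lag.
from collections import deque

primary_cache = {}
replication_queue = deque()
secondary = {}

def host_write(block, data):
    primary_cache[block] = data              # step 1: write to primary cache
    replication_queue.append((block, data))  # remember the write for later
    return "I/O complete"                    # step 2: host ack, no remote wait

def drain_replication_queue():
    while replication_queue:                 # steps 3-4: replicate and ack
        block, data = replication_queue.popleft()
        secondary[block] = data

host_write(0, b"update")
assert 0 in primary_cache and 0 not in secondary   # remote copy lags the primary
drain_replication_queue()
assert secondary[0] == primary_cache[0]            # caught up after replication
```

This is the trade-off against the synchronous summary above: the application response time no longer depends on the distance to the secondary site, at the cost of the remote copy not always being current.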
