
CAUTION: Sometimes, a drive that has previously been failed by the controller may seem to be operational after the system is power-cycled or (for a hot-pluggable drive) after the drive has been removed and reinserted. However, continued use of such marginal drives may eventually result in data loss. Replace the marginal drive as soon as possible.

The following items indicate a hard drive failure:

•   The following POST messages appear when the system is restarted, as long as the controller detects at least one functional drive:

    o   1784: Drive Array Failure
    o   1786: Drive Array Recovery Needed

•   ACU represents failed drives with a distinctive icon.

•   HP SIM can detect failed drives remotely across a network. For more information about HP SIM, see the documentation on the HP website (http://www.hp.com/go/support).

•   The HP SMH indicates that a drive has failed.

•   iLO/AMS indicates that a drive has failed.

•   In Windows operating systems, the Event Notification Service posts an event to the Microsoft Windows system event log and the IML.

•   In Linux operating systems, system events are logged to /var/log/messages (see the sketch after this list).

•   ACU lists all failed drives (on systems supported by ACU v8.28.13.0 or later).
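As an illustration of the Linux indicator above, the following sketch scans /var/log/messages for lines that commonly accompany a drive problem. It is not part of the controller software, and the keyword list is an assumption; the exact message text depends on the driver and distribution in use, so adjust the patterns to match your system.

    # Illustrative only: scan /var/log/messages for likely drive-failure entries.
    # The keyword list is an assumption; the exact text logged depends on the
    # driver and distribution in use.
    import re

    KEYWORDS = re.compile(r"I/O error|medium error|hard resetting link|offline|fail",
                          re.IGNORECASE)

    def scan_messages(path="/var/log/messages"):
        """Print log lines that look like disk or controller errors."""
        with open(path, errors="replace") as log:
            for line in log:
                if KEYWORDS.search(line):
                    print(line.rstrip())

    if __name__ == "__main__":
        scan_messages()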

For more information about diagnosing hard drive problems, see the HP ProLiant Gen8 Troubleshooting Guide, Volume I: Troubleshooting, on the HP website (http://www.hp.com/go/bizsupport).

Effects of a hard drive failure

When a hard drive fails, all logical drives that are in the same array are affected. Each logical drive in an array might be using a different fault-tolerance method. Therefore, each logical drive can be affected differently.

•   RAID 0 configurations cannot tolerate drive failure. If any physical drive in the array fails, all RAID 0 logical drives in the same array also fail.

•   RAID 1 configurations can tolerate one drive failure. If one physical drive in a RAID 1 configuration fails, the RAID volume is still intact as a degraded RAID 1.

•   RAID 1+0 configurations can tolerate up to two drive failures, as long as no failed drives are mirrored to one another. A RAID 1+0 configuration of four drives consists of two RAID 1 volumes of two drives each. One drive from each RAID 1 volume can fail, for a total of two failed drives. If both drives in one RAID 1 volume fail, the entire RAID 1+0 volume fails. (A sketch of this mirror-pair rule follows the IMPORTANT note below.)

•   RAID 5 configurations can tolerate one drive failure. Data protection is provided by parity data. This parity data is calculated stripe by stripe from the user data that is written to all other blocks within that stripe. The blocks of parity data are distributed evenly over every physical drive within the logical drive. (A worked parity example follows the IMPORTANT note below.)

IMPORTANT: RAID 5 is available only when the optional 512 MB FBWC module is installed. For more information, see "Upgrading to 512 MB FBWC" (on page 7).
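The RAID 1+0 failure rule described above can be expressed as a small sketch. This is illustrative only, not controller behavior or an HP tool; the drive names and pairing below are assumed for the example. The array survives as long as no mirror pair has lost both of its drives.

    # Illustrative sketch of the RAID 1+0 rule: the array survives as long as
    # every mirror pair still contains at least one working drive.

    def raid10_survives(mirror_pairs, failed_drives):
        """Return True if no mirror pair has lost both of its drives."""
        failed = set(failed_drives)
        return all(not failed.issuperset(pair) for pair in mirror_pairs)

    # A four-drive RAID 1+0: two RAID 1 mirror pairs of two drives each.
    pairs = [("drive1", "drive2"), ("drive3", "drive4")]

    print(raid10_survives(pairs, ["drive1", "drive3"]))  # True: one failure per pair
    print(raid10_survives(pairs, ["drive1", "drive2"]))  # False: a whole mirror pair failed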
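RAID 5 parity is conventionally computed as the XOR of the data blocks in each stripe, so any single missing block can be rebuilt by XOR-ing the parity block with the surviving data blocks. The following sketch illustrates that arithmetic only (block sizes and contents are made up for the example); it is not how the controller firmware is implemented.

    # Illustrative RAID 5 parity sketch: parity is the XOR of the data blocks in
    # a stripe, so one lost block can be rebuilt from the parity and the survivors.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    # One stripe of three data blocks (contents chosen only for the example).
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Simulate losing the second block and rebuilding it from parity + survivors.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("rebuilt block:", rebuilt)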
