
Software RAID with Red Hat Enterprise Linux 4


Restoring the RAID Configuration After a Drive Failure

When a hard drive in a software RAID 1 array fails, you can restore the RAID array onto a new drive
by following a three-step process:

1 Replacing the failed drive
2 Partitioning the replacement drive
3 Adding the RAID partitions back into the md devices

If you suspect a drive failure, you can check the status of each RAID device by using the
following command:

cat /proc/mdstat

For example, a system missing a partition from the md0 device would show the following:

md0 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]

This output indicates that md0 is active as a RAID 1 device and that partition sda1 is currently
active in that RAID device. The output [2/1] denotes that two partitions should be available to
the device (the first value), but only one is currently available (the second value). The output [U_]
shows that the first partition is available (denoted by the letter "U") and the second partition is
offline (denoted by the underscore).
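
For comparison, a healthy RAID 1 device with both members present would normally list both partitions and report a [2/2] [UU] status, for example:

md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]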

Replacing a Failed Disk Drive

When a hard disk drive fails, replace it as soon as possible to restore the data redundancy that
RAID 1 provides.

See your system documentation for instructions on replacing your failed hard drive with a
new hard drive.
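
Depending on your configuration, the failed partitions may also need to be marked faulty and removed from the md devices before the drive is physically swapped. The exact procedure is not shown here; a minimal sketch using mdadm, assuming the failed drive is sdb and its partition sdb1 belongs to md0, might look like:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

Repeat for each md device that contains a partition from the failed drive.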

Partitioning the Replacement Drive

Once the failed disk drive has been replaced, restore the partitions that were saved earlier in
the /raidinfo directory.
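
These dump files are assumed to have been created earlier with sfdisk -d, for example:

sfdisk -d /dev/sda > /raidinfo/partitions.sda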

If drive sda was replaced, restore the original partition scheme for sda to the new hard drive
by typing:

sfdisk /dev/sda < /raidinfo/partitions.sda

If drive sdb was replaced, restore the original partition scheme for sdb to the new hard drive
by typing:

sfdisk /dev/sdb < /raidinfo/partitions.sdb
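
To verify that the new drive has the expected layout, you can list its partition table, for example for a replaced sdb:

sfdisk -l /dev/sdb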
