Posts Tagged ‘mdadm’

Raid 1… or maybe not

Friday, July 19th, 2013

We received a client machine a while back. At some point they needed more disk space and sent us a pair of drives with instructions for mirroring data from the existing machine to the new drives.

When I looked at the drives, I noticed something odd:

md4 : active raid1 sdb8[1](S) sda8[0]
      1942145856 blocks super 1.2 [1/1] [U]

While this is nominally a Raid 1 set, the (S) flag shows that sdb8 is only a spare, and [1/1] [U] shows that the array is configured for a single device and considers itself healthy. This usually happens when someone doing an in-place migration to larger drives creates a Raid 1 array on a single device, forces it online, and intends to add the second drive later. Since a spare in a 1/1 array is never synced, it held no copy of the data; had the primary drive failed, the client would have lost everything on it.
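For reference, a sequence along these lines produces exactly that state. The device names simply mirror the ones above, and the commands are a sketch of the likely history rather than anything we ran:

# Create a Raid 1 array that expects only one device (mdadm requires --force for this):
mdadm --create /dev/md4 --level=1 --raid-devices=1 --force /dev/sda8
# Adding a second partition later, without also growing raid-devices,
# leaves it parked as a spare instead of an active mirror:
mdadm --add /dev/md4 /dev/sdb8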

To fix it:

mdadm --remove /dev/md4 /dev/sdb8                # drop sdb8 from its spare slot
mdadm --grow /dev/md4 --raid-devices=2 --force   # tell the array it should have two active devices
mdadm --add /dev/md4 /dev/sdb8                   # re-add sdb8 as the second mirror so it resyncs

and the md status now shows an array that expects two devices, degraded until the resync completes:

md4 : active raid1 sda8[0]
      1942145856 blocks super 1.2 [2/1] [U_]

Once the reconstruction finishes, we have a properly configured Raid 1 set.
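To monitor the resync and confirm the final layout, the usual checks apply; these are shown as a general sketch rather than output captured from this particular box:

cat /proc/mdstat          # shows a recovery progress bar while sdb8 resyncs
mdadm --detail /dev/md4   # should list two active devices and a clean state once the rebuild is done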

Abort mdadm consistency check

Tuesday, June 8th, 2010

One of our client systems has a Raid 1 setup using two 1 TB drives. Last night Debian's scheduled consistency check launched, but the system was doing heavy disk I/O from some scripts it was processing, and the check was estimated to take close to 1,000 hours to complete.

md3 : active raid1 sdb8[1] sda8[0]
      962108608 blocks [2/2] [UU]
      [===>.................]  check = 15.1% (145325952/962108608) finish=60402.6min speed=224K/sec

To abandon the check, we issued:

echo idle > /sys/block/md3/md/sync_action

This skips the rest of the check. While I don't like cancelling the checks, we'll reschedule this one for after the client's jobs have finished.
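On Debian the check is driven by the checkarray helper invoked from /etc/cron.d/mdadm, so rescheduling it later can be as simple as the following; the paths assume the stock Debian mdadm package:

# Kick the check off again through Debian's helper script...
/usr/share/mdadm/checkarray /dev/md3
# ...or by writing to the kernel interface directly:
echo check > /sys/block/md3/md/sync_action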
