
Fixing a DegradedArray Event

Identify the failed device, remove it safely, replace it and monitor the rebuild until /proc/mdstat reports [UU].

RAID recovery console

What triggers a DegradedArray event?

The mdadm monitor daemon (mdadm --monitor, typically run as the mdmonitor service) notifies you as soon as the kernel marks an array member missing or failed. In the sample output below, the array /dev/md1 went degraded because /dev/sdb3 dropped out and was marked with (F) in /proc/mdstat.

A DegradedArray event had been detected on md device /dev/md/1.
md1 : active raid1 sdb3[2](F) sda3[0]
      1948488512 blocks super 1.0 [2/1] [U_]
unused devices: <none>
                            
root@server:~# mdadm --detail /dev/md1
  Raid Level : raid1
  Array Size : 1.8 TiB
  Active Devices : 1
  Failed Devices : 1
Number   Major   Minor   RaidDevice State
   2       8       19        -      faulty spare   /dev/sdb3
                            

Diagnose

Use mdadm and /proc/mdstat to confirm the degraded state and the failed device.

  • mdadm --detail /dev/md1
  • Look for [U_] / (F) markers
  • Identify the affected disk (e.g. /dev/sdb3)
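The checks above can be scripted. The following is a minimal sketch: check_mdstat is a hypothetical helper (not an mdadm command) that reads mdstat-formatted text on stdin and prints each array whose bracketed status field contains "_", i.e. a missing or failed member.

```shell
# Sketch: scan /proc/mdstat-style text for degraded arrays.
# check_mdstat is a hypothetical helper, not part of mdadm; it prints
# the name of every array whose [UU]-style status contains "_".
check_mdstat() {
  awk '/^md[0-9]+ :/ { name = $1 }
       /\[[U_]+\][ \t]*$/ && /_/ { print name }'
}

# Example: check_mdstat < /proc/mdstat
```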

Replace

Remove the failed device from the array, then swap in a healthy disk (or re-enable the existing one) and add it back.

  • mdadm --remove /dev/md1 /dev/sdb3
  • Swap or re-enable the disk
  • mdadm --add /dev/md1 /dev/sdb3
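The bullets above can be wrapped in a small function. This is a sketch, not part of mdadm: replace_member is a hypothetical name, and each step is prefixed with "$RUN", so running it with RUN=echo only prints the commands (a dry run you can review before executing them for real).

```shell
# Sketch of the replacement workflow. replace_member is a hypothetical
# wrapper; with RUN=echo it only prints the mdadm commands (dry run).
replace_member() {
  md=$1 dev=$2
  $RUN mdadm --manage "$md" --fail   "$dev"  # no-op if already marked faulty
  $RUN mdadm --manage "$md" --remove "$dev"
  # Physically swap the disk here. A brand-new disk needs the same
  # partition layout as the surviving member, e.g. for MBR disks:
  #   sfdisk -d /dev/sda | sfdisk /dev/sdb
  $RUN mdadm --manage "$md" --add    "$dev"
}

# Dry run: print the commands without touching the array.
RUN=echo replace_member /dev/md1 /dev/sdb3
```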

Monitor

Watch the rebuild until the array reports [UU] and the state goes from degraded to clean.

  • watch -n 5 cat /proc/mdstat
  • Wait for the recovery progress to reach 100%
  • Ensure no faulty devices remain
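Instead of eyeballing the watch output, you can poll until the status field reads [UU]. A sketch follows; mdstat_status is a hypothetical helper that extracts the bracketed status for one array from mdstat text on stdin.

```shell
# Sketch: print the [UU]-style status field for the named array.
# mdstat_status is a hypothetical helper, not an mdadm command.
mdstat_status() {
  awk -v md="$1" '
    $1 == md { grab = 1; next }
    grab     { if (match($0, /\[[U_]+\]/)) print substr($0, RSTART, RLENGTH)
               grab = 0 }'
}

# Poll every 5 seconds until the mirror is fully back:
# until [ "$(mdstat_status md1 < /proc/mdstat)" = "[UU]" ]; do sleep 5; done
```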

Typical command workflow

root@server:~# mdadm --remove /dev/md1 /dev/sdb3
root@server:~# mdadm --add /dev/md1 /dev/sdb3
root@server:~# cat /proc/mdstat
# recovery [>....................] 0.1% ...
                    

Once synchronization finishes, verify that every RAID member reports a healthy state and that the array is no longer degraded.
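That validation can be scripted against mdadm --detail output. A minimal sketch, assuming the usual "Failed Devices :" and "State :" lines; detail_healthy is a hypothetical name, not an mdadm command.

```shell
# Sketch: exit 0 only if `mdadm --detail` output (on stdin) shows zero
# failed devices and no degraded/recovering state. Hypothetical helper.
detail_healthy() {
  awk '/Failed Devices/ { if ($NF != "0") bad = 1 }
       /State :/        { if ($0 ~ /degraded|recovering|resyncing/) bad = 1 }
       END { exit bad }'
}

# Usage: mdadm --detail /dev/md1 | detail_healthy && echo "md1 healthy"
```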

Need live help?

Our operations team assists with RAID health checks, disk replacements and proactive monitoring.
