Bug#601198: mdadm: Says it sets faulty even when it actually doesn't

Mike Hommey mh+reportbug at glandium.org
Sun Oct 24 10:35:08 UTC 2010


Package: mdadm
Version: 3.1.4-1+8efb9d1
Severity: normal

Just had a scary experience this morning where I thought I had done
something really wrong: I set faulty the only remaining active device
of a RAID 1 array during recovery, when I meant to set the other one
faulty:

# mdadm --detail /dev/md0
(snip)
    Update Time : Sun Oct 24 12:13:30 2010
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
(snip)
    Number   Major   Minor   RaidDevice State
       3       8        2        0      spare rebuilding   /dev/sda2
       2       8       18        1      active sync   /dev/sdb2
# mdadm --set-faulty /dev/md0 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md0

Here I thought I was screwed, but it turns out the device was never
actually marked faulty.
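
For anyone who hits the same thing, re-reading the array state right
after the command shows whether the kernel actually honoured it; in my
case /dev/sdb2 was still listed as "active sync". A member that was
really failed shows "faulty" in the --detail state column and an (F)
flag next to its name in /proc/mdstat:

# mdadm --detail /dev/md0 | grep sdb2
# cat /proc/mdstat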

For the sake of people's blood pressure, it would be better if mdadm
reported that it failed to perform the requested operation.
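
Presumably the kernel silently refuses to fail the last working device
and still returns success from the ioctl, so mdadm would have to re-read
the device state after the call to notice. As a user-level workaround
in the meantime, something along these lines could catch it (a
hypothetical script, not part of mdadm; MD and DEV are placeholders):

MD=/dev/md0
DEV=/dev/sdb2
mdadm --set-faulty "$MD" "$DEV"
# Re-check: the state column only says "faulty" if the kernel did it.
if mdadm --detail "$MD" | grep "$DEV" | grep -q faulty; then
    echo "$DEV is now faulty in $MD"
else
    echo "warning: $DEV was NOT marked faulty in $MD" >&2
fi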

Mike

-- System Information:
Debian Release: squeeze/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 2.6.32-5-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash




