Bug#423724:

sknauert at wesleyan.edu
Mon May 14 05:06:53 UTC 2007


Package: mdadm
Version: 2.5.6-9
Severity: critical

After initiating a RAID5 rebuild, the system restarts roughly 15 to 30
minutes into the rebuild and apparently logs no errors.

mdadm --assemble /dev/md1 /dev/sd{a1,b1,c1,d1,e1,f1,g1,h1}
more /proc/mdstat
md1 : active raid5 sda1[0] sdh1[8] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      1367508352 blocks level 5, 128k chunk, algorithm 2 [8/7] [UUUUUUU_]
      [>....................]  recovery =  3.4% (6812728/195358336) finish=119.6min speed=26264K/sec

md0 : active raid1 hda2[0] hdc2[1]
      77168128 blocks [2/2] [UU]

unused devices: <none>
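
While the rebuild runs I have been watching its progress and the array
state, and after the restart I have looked for anything relevant in the
logs, with roughly the following (exact commands and intervals are
approximate, not a complete transcript):

watch -n 5 cat /proc/mdstat
mdadm --detail /dev/md1
grep -iE 'md1|raid' /var/log/syslog /var/log/kern.log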

After checking the usual suspects (power, drives, etc.) with no change, I
noticed that the problem does not occur with the oldstable version
1.9.0-4sarge1, with all other system variables held the same.
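
For that comparison I downgraded only the mdadm package to the sarge build
and re-ran the same assemble; roughly as follows (the .deb was fetched from
a sarge mirror, and the i386 filename below is an assumption):

dpkg -i mdadm_1.9.0-4sarge1_i386.deb
mdadm --assemble /dev/md1 /dev/sd{a1,b1,c1,d1,e1,f1,g1,h1}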

I have 8 250GB Maxtor Diamond Max 9 Plus drives on a 3ware Escalade 8500
RAID controller, configured as an 8-disk JBOD for use with software RAID5,
on a stock Debian etch install with kernel 2.6.18-4-486. I have checked
for power, disk, and other hardware issues. Internet searches indicate
that other Debian and Debian-based users have experienced similar problems
with a variety of supported controllers (3ware, Highpoint, LSI, etc.),
all of which suggests that this is an mdadm bug.
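
The disk checks included per-drive SMART health queries through smartctl's
3ware support, something along these lines (assuming the controller's
character device is /dev/twe0, as created by the 3w-xxxx driver):

for i in 0 1 2 3 4 5 6 7; do smartctl -H -d 3ware,$i /dev/twe0; done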
