Bug#607565: initramfs-tools: initramfs fails to assemble Intel RAID array

maximilian attems max at stro.at
Sun Dec 19 20:08:21 UTC 2010


reassign 607565 mdadm
stop

On Sun, Dec 19, 2010 at 08:47:28PM +0100, Rainmaker wrote:
> Package: initramfs-tools
> Version: 0.98.6
> Severity: grave
> Tags: patch
> Justification: renders package unusable

No, failing to boot on one specific setup is bad, but it doesn't render the
package unusable on the large number of boxes where it does work.
Please reread the severity explanations.
 
> I just installed Debian (Sid) on my PC and was in for a nasty surprise: the
> system wouldn't boot after the install.
> 
> I have 2 disks set up in an ICH9R RAID set, which divides these 2 disks into 2
> arrays: one RAID0 array and one RAID1 array. This is a completely valid
> configuration for an Intel RAID set.
> 
> I installed Debian on an LVM setup, using a partition in the RAID0 set as a PV.
> The /boot partition is also an LV.
> 
> As I found out, the Intel RAID system works with a "container", which contains
> the arrays. The container was assembled during boot, but the arrays failed to
> assemble with a "device or resource busy" error. The problem is almost exactly
> the same as the one described here:
> http://www.linux-archive.org/debian-user/454103-how-recreate-dmraid-raid-array-mdadm.html
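> 
> For illustration, manual assembly of such a set is a two-step process: the
> container is assembled first, and the member arrays are then started from it
> (the device names below are examples, not necessarily what your system uses):
> 
>   # assemble the IMSM container from its two member disks
>   mdadm -A /dev/md/imsm0 /dev/sda /dev/sdb
>   # start the RAID arrays described inside the container (incremental mode)
>   mdadm -I /dev/md/imsm0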
> 
> I did, however, find a different workaround. Removing /etc/mdadm/mdadm.conf
> and running mdadm -Ss && mdadm -As also assembles the arrays correctly.
> 
> For people stumbling into the same problem, here are the steps to work around
> it:
> - Wait for the ramdisk to drop you into the (initramfs) shell.
> - mdadm -Ss
> - rm /etc/mdadm/mdadm.conf
> - mdadm -As
> - lvm vgscan
> - lvm vgchange -ay
> 
> Now, mount the boot partition to a directory, e.g. /root
> - mount /dev/vgMain/lvBoot /root
> - mkdir /root/temp
> - cd /root/temp
> - gunzip -c ../<initrd image> | cpio -i
> - rm etc/mdadm/mdadm.conf
> - find . | cpio -H newc -o > ../initrd.new
> - cd ..
> - gzip -9 initrd.new
> - cd
> - umount /root
> - reboot
> 
> Then use the "edit" option in GRUB to select /initrd.new.gz as your new initrd
> image (see the note below on making the fix permanent).
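> 
> Once the machine boots again, a hand-edited initrd shouldn't stay the
> permanent fix. Assuming the stock Debian mdadm package, its mkconf helper can
> regenerate the config, and update-initramfs rebuilds the image from it:
> 
>   # regenerate /etc/mdadm/mdadm.conf from the arrays mdadm can see
>   /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
>   # rebuild the initramfs so it ships the corrected config
>   update-initramfs -u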
> 
> Some information on my particular configuration:
> root@Medusa:~# mdadm -E -s
> ARRAY metadata=imsm UUID=5a17be47:4c36e982:9fd7aa92:6b23c688
> ARRAY /dev/md/BootEnBackup container=5a17be47:4c36e982:9fd7aa92:6b23c688 member=0 UUID=9e351111:67d59d42:043dbdde:fe757582
> ARRAY /dev/md/Data container=5a17be47:4c36e982:9fd7aa92:6b23c688 member=1 UUID=8a981b80:aa2c2f06:e4ec50a5:9045f323
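> 
> (This scan output is exactly the ARRAY syntax mdadm.conf expects; a plausible
> way to rebuild the config once the arrays are up is to append a fresh scan to
> a config stripped of its stale ARRAY lines:)
> 
>   mdadm -E -s >> /etc/mdadm/mdadm.conf
> 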
> root@Medusa:~# mdadm -E /dev/sda /dev/sdb
> /dev/sda:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.2.00
>     Orig Family : b68e12bc
>          Family : b68e12bc
>      Generation : 00008053
>            UUID : 5a17be47:4c36e982:9fd7aa92:6b23c688
>        Checksum : 7f6d2eb4 correct
>     MPB Sectors : 2
>           Disks : 2
>    RAID Devices : 2
> 
>   Disk00 Serial : S13PJDWS255231
>           State : active
>              Id : 00000000
>     Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
> 
> [BootEnBackup]:
>            UUID : 9e351111:67d59d42:043dbdde:fe757582
>      RAID Level : 1
>         Members : 2
>           Slots : [UU]
>       This Slot : 0
>      Array Size : 66056192 (31.50 GiB 33.82 GB)
>    Per Dev Size : 66056456 (31.50 GiB 33.82 GB)
>   Sector Offset : 0
>     Num Stripes : 258032
>      Chunk Size : 64 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
> 
> [Data]:
>            UUID : 8a981b80:aa2c2f06:e4ec50a5:9045f323
>      RAID Level : 0
>         Members : 2
>           Slots : [UU]
>       This Slot : 0
>      Array Size : 3774918656 (1800.02 GiB 1932.76 GB)
>    Per Dev Size : 1887459592 (900.01 GiB 966.38 GB)
>   Sector Offset : 66060552
>     Num Stripes : 7372888
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
> 
>   Disk01 Serial : S13PJDWS255348
>           State : active
>              Id : 00010000
>     Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
> /dev/sdb:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.2.00
>     Orig Family : b68e12bc
>          Family : b68e12bc
>      Generation : 00008053
>            UUID : 5a17be47:4c36e982:9fd7aa92:6b23c688
>        Checksum : 7f6d2eb4 correct
>     MPB Sectors : 2
>           Disks : 2
>    RAID Devices : 2
> 
>   Disk01 Serial : S13PJDWS255348
>           State : active
>              Id : 00010000
>     Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
> 
> [BootEnBackup]:
>            UUID : 9e351111:67d59d42:043dbdde:fe757582
>      RAID Level : 1
>         Members : 2
>           Slots : [UU]
>       This Slot : 1
>      Array Size : 66056192 (31.50 GiB 33.82 GB)
>    Per Dev Size : 66056456 (31.50 GiB 33.82 GB)
>   Sector Offset : 0
>     Num Stripes : 258032
>      Chunk Size : 64 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
> 
> [Data]:
>            UUID : 8a981b80:aa2c2f06:e4ec50a5:9045f323
>      RAID Level : 0
>         Members : 2
>           Slots : [UU]
>       This Slot : 1
>      Array Size : 3774918656 (1800.02 GiB 1932.76 GB)
>    Per Dev Size : 1887459592 (900.01 GiB 966.38 GB)
>   Sector Offset : 66060552
>     Num Stripes : 7372888
>      Chunk Size : 128 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : normal
>     Dirty State : clean
> 
>   Disk00 Serial : S13PJDWS255231
>           State : active
>              Id : 00000000
>     Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
> 

reassigning to mdadm, as mdadm is responsible for its boot hooks.
already removed the patch tag, as no patch touching those boot hooks was given.
please talk to the mdadm people about what needs to be done to improve the
situation.

thanks for the feedback.

-- 
maks


