Bug#830770: mdadm: initramfs broken
Dimitri John Ledkov
xnox at debian.org
Mon Jul 11 14:51:06 UTC 2016
Hello,
On 11 July 2016 at 10:35, Jamie Heilman <jamie at audible.transient.net> wrote:
> Package: mdadm
> Version: 3.4-2
> Severity: important
>
> With 3.4-2, my LVM-on-partitioned-md system has stopped initializing
> correctly. For starters, when upgrading from 3.4-1 to 3.4-2 I get
> this:
> W: mdadm: the array /dev/md/d0 with UUID 4c4bcb71:267c92fc:fe08e7ef:27569e0b
> W: mdadm: is currently active, but it is not listed in mdadm.conf. if
> W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
> W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
> W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
>
> This is due to the change from:
> if ! grep -qi "uuid=$uuid" $DESTMDADMCONF; then
> to:
> if ! grep -q "UUID=$uuid" $DESTMDADMCONF; then
>
That's indeed buggy; I shall fix it.
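A minimal sketch of the corrected check, reusing the $uuid and
$DESTMDADMCONF variables from the hook snippet quoted below
(illustration only, not the final patch):

  # Match the array UUID case-insensitively, so that both "uuid=" and
  # "UUID=" lines in an existing mdadm.conf are accepted.
  if ! grep -qi "uuid=$uuid" "$DESTMDADMCONF"; then
      echo "W: mdadm: array with UUID $uuid is not listed in mdadm.conf" >&2
  fi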
> The manpage uses uuid=, the examples use both uuid= and UUID=, and the
> UUID itself had better be treated as case-insensitive... so if you ask
> me, it's cleaner and more correct to just do:
>
> uuid=`echo "$params" | grep -oi ' uuid=[0-9a-f:]\+'`
> if ! grep -qi "$uuid" $DESTMDADMCONF; then
> ...
>
> That said, this isn't why my system stopped booting after updating to
> 3.4-2; it's just an irritating distraction.
>
Previously, a local-top script was used to wait a little and then
assemble all arrays.
Now, udev rules are used to assemble arrays incrementally.
In the meantime, the local-block "main loop" scripts are used to poll
for and activate LVM volumes.
For incomplete arrays, the local-block/mdadm script is used to
force-start them after two-thirds of the iterations, roughly as
sketched below.
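A sketch of that local-block logic; the counter file, the loop budget
and the exact mdadm invocation are my own placeholders rather than the
real script verbatim:

  # scripts/local-block/mdadm (sketch) -- called repeatedly by the
  # initramfs-tools main loop while it waits for the root device.
  COUNT_FILE=/run/count.mdadm.initrd            # assumed state file
  count=$(($(cat "$COUNT_FILE" 2>/dev/null || echo 0) + 1))
  echo "$count" > "$COUNT_FILE"

  MAX_TRIES=30                                  # assumed loop budget
  if [ "$count" -ge $((MAX_TRIES * 2 / 3)) ]; then
      # stop waiting for missing members and start degraded arrays
      mdadm -q --run /dev/md?* 2>/dev/null || true
  fi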
> AFAICT, the new initramfs-tools scripts simply never assemble
> any arrays. The old local-top/mdadm script ran either
> $MDADM --assemble --scan --run --auto=yes${extra_args:+ $extra_args}
> or $MDADM --assemble --scan --run --auto=yes $dev
> depending on $MD_DEVS.
>
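(For context, the old local-top behaviour described above boils down to
something along these lines; $MDADM, $MD_DEVS and $extra_args are the
hook's own variables, but the branching is my reconstruction, not the
script verbatim.)

  case "$MD_DEVS" in
    all|"")
      # assemble everything listed in mdadm.conf
      $MDADM --assemble --scan --run --auto=yes${extra_args:+ $extra_args}
      ;;
    none)
      # assembly disabled
      ;;
    *)
      # assemble only the named arrays
      for dev in $MD_DEVS; do
        $MDADM --assemble --scan --run --auto=yes $dev
      done
      ;;
  esac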
> If I run mdadm --assemble --scan manually after the initramfs has
> dumped me to a shell, and then keep kicking various lvm scripts until
> I get my root device to show up, I can proceed, but this new package
> strikes me as very, very broken.
>
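(For reference, the manual recovery described above amounts to roughly
the following from the initramfs shell; the lvm calls are my guess at
"kicking various lvm scripts", not a quote from the report.)

  mdadm --assemble --scan --run    # assemble all arrays from mdadm.conf
  lvm vgscan                       # rescan for volume groups
  lvm vgchange -ay                 # activate the logical volumes
  # after this /dev/mapper/S-root should appear and boot can continue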
> Here's some of the blather from /usr/share/bug/mdadm/script, but let me
> preface it by saying I've pruned out detail that wasn't germane; my
> topology is LVM on top of md RAID1 on a traditional sysvinit system.
>
Is udev available in your initramfs?
I shall try to recreate your setup and debug this further.
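One way to check is to look inside the generated image, for example
(the image name is inferred from the vmlinuz-4.6.1 on your kernel
command line, so adjust as needed):

  # list the initramfs contents and look for udev and the md rules
  lsinitramfs /boot/initrd.img-4.6.1 | grep -E 'udev|md-raid'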
> --- mdadm.conf
> DEVICE /dev/sd[ab]
> HOMEHOST <system>
> MAILADDR root
> ARRAY /dev/md/d0 metadata=1.0 UUID=4c4bcb71:267c92fc:fe08e7ef:27569e0b name=cucamonga:d0
>
> --- /etc/default/mdadm
> AUTOCHECK=true
> START_DAEMON=true
> DAEMON_OPTIONS="--syslog"
> VERBOSE=false
>
> --- /proc/mdstat:
> Personalities : [raid1]
> md_d0 : active raid1 sda[0] sdb[1]
> 976762448 blocks super 1.0 [2/2] [UU]
>
> unused devices: <none>
>
> --- /proc/partitions:
> major minor #blocks name
>
> 8 0 976762584 sda
> 8 16 976762584 sdb
> 253 0 976762448 md_d0
> 253 1 968373248 md_d0p1
> 253 2 8388176 md_d0p2
>
> --- LVM physical volumes:
> File descriptor 3 (/dev/pts/5) leaked on pvs invocation. Parent PID 16632: /bin/bash
> PV VG Fmt Attr PSize PFree
> /dev/md_d0p1 S lvm2 a-- 923.51g 545.51g
>
> --- /proc/modules:
> dm_crypt 24576 1 - Live 0xffffffffa0222000
> raid1 32768 1 - Live 0xffffffffa0016000
> md_mod 98304 4 raid1, Live 0xffffffffa0021000
> dm_mod 86016 26 dm_crypt, Live 0xffffffffa0000000
>
> --- volume detail:
> /dev/sda:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 4c4bcb71:267c92fc:fe08e7ef:27569e0b
> Name : cucamonga:d0 (local to host cucamonga)
> Creation Time : Fri Feb 11 07:29:10 2011
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
> Array Size : 976762448 (931.51 GiB 1000.20 GB)
> Super Offset : 1953525152 sectors
> Unused Space : before=0 sectors, after=256 sectors
> State : clean
> Device UUID : 03d87172:57951a8c:b9b40189:051ed01b
>
> Update Time : Mon Jul 11 09:27:06 2016
> Checksum : b10e71e7 - correct
> Events : 2479
>
>
> Device Role : Active device 0
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
> --
> /dev/sdb:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 4c4bcb71:267c92fc:fe08e7ef:27569e0b
> Name : cucamonga:d0 (local to host cucamonga)
> Creation Time : Fri Feb 11 07:29:10 2011
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
> Array Size : 976762448 (931.51 GiB 1000.20 GB)
> Super Offset : 1953525152 sectors
> Unused Space : before=0 sectors, after=256 sectors
> State : clean
> Device UUID : dcee8016:570d79ee:40fd87f0:26c35131
>
> Update Time : Mon Jul 11 09:27:06 2016
> Checksum : 3483ee6a - correct
> Events : 2479
>
>
> Device Role : Active device 1
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> --- /proc/cmdline
> BOOT_IMAGE=/boot/vmlinuz-4.6.1 root=/dev/mapper/S-root ro rng-core.default_quality=512
>
> --- udev:
> ii udev 230-7 amd64 /dev/ and hotplug management daem
> aa83f41de49462d05e446cfc5e14e74b /lib/udev/rules.d/63-md-raid-arrays.rules
> 5de7d0b70cd948d00bb38ca75ad5f288 /lib/udev/rules.d/64-md-raid-assembly.rules
>
> --- /dev:
> brw-rw---- 1 root disk 253, 0 Jul 11 05:47 /dev/md_d0
> brw-rw---- 1 root disk 253, 1 Jul 11 05:47 /dev/md_d0p1
> brw-rw---- 1 root disk 253, 2 Jul 11 05:47 /dev/md_d0p2
>
>
> --
> Jamie Heilman http://audible.transient.net/~jamie/
>
> _______________________________________________
> pkg-mdadm-devel mailing list
> pkg-mdadm-devel at lists.alioth.debian.org
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-mdadm-devel
--
Regards,
Dimitri.