md+lvm problems

Norbert Veber nveber at certicom.com
Thu Aug 18 21:14:41 UTC 2005


Hi,

I have a system that's configured with two disks mirrored in a raid1
array.  On top of the mirror I have several lvm2 logical volumes.  This
worked OK until I upgraded to woody and started using an initrd-based
kernel.  Previously the kernel had MD support built in, and started all
the md devices by itself.

The problem I have is that /etc/rcS.d/S25lvm is run before
/etc/rcS.d/S25mdadm-raid.  Since the disks are mirrored, lvm finds its
PV signature on both disks:
Found duplicate PV Tzt5Eb3lhL3MbFn2R4ky3wXvaelCqojW: using /dev/sdb4 not /dev/sda4
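
For now I can work around that part by tightening the device filter in
/etc/lvm/lvm.conf so lvm only scans assembled md devices and never the
raw partitions (this is a sketch against a stock lvm.conf; adjust the
patterns for your device names):

    devices {
        # accept assembled md devices, reject everything else
        filter = [ "a|^/dev/md.*|", "r|.*|" ]
    }

With that in place the duplicate PV message goes away, since lvm never
looks at sda4 or sdb4 directly.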

It grabs the device /dev/sdb4 and starts the volume group.  After that
S25mdadm-raid runs and tries to start the md device.  However, since
sdb4 is already claimed by lvm, it assembles the array with just sda4
(in degraded mode).  As a result the system is left in a mess.

Instead of the expected configuration of vg00 running on top of the md
device (which mirrors sda4 and sdb4), you end up with vg00 running
directly on sdb4, and the md device running in degraded mode (and
unused) on sda4.
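
In case it's useful, the sequence that gets me back to a sane state
looks roughly like this (assuming the degraded array is /dev/md0; note
that the resync overwrites sdb4, so this presumes nothing important was
written to vg00 while it sat on the bare disk):

    vgchange -an vg00               # release sdb4 from lvm
    mdadm /dev/md0 --add /dev/sdb4  # hot-add sdb4; resyncs from sda4
    vgchange -ay vg00               # vg00 comes back on top of md0

Combined with the lvm.conf filter above, lvm then finds the PV on md0
instead of on the raw disk.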

Is this intended?  It seems like a bug to me.  Shouldn't the mdadm-raid
script start before the lvm script?
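
As a stopgap I've also tried renaming the symlink so that mdadm-raid
sorts before lvm (the sequence number below is just an example of
anything that sorts earlier):

    mv /etc/rcS.d/S25mdadm-raid /etc/rcS.d/S24mdadm-raid

That works, but it's fragile across upgrades, which is why I'd rather
see the packages ship in the right order.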

Thanks,

Norbert


