Bug#512601: multipath-tools: kpartx does not handle multi-Tb filesystems on i386
Guido Günther
agx at sigxcpu.org
Wed Jan 28 08:20:37 UTC 2009
Hi Vincent,
On Wed, Jan 28, 2009 at 03:02:33PM +1100, Vincent.McIntyre at csiro.au wrote:
> We next tried applying your patch to 0.4.8-13.
> It applies cleanly and the packages build ok.
Fine!
> We copied the new kpartx binary to /sbin/kpartx.new and tested as before:
> # /etc/init.d/multipath-tools stop
> # multipath -F
> # ls /dev/mapper/mp*
> /bin/ls: /dev/mapper/mp*: No such file or directory
> # multipath -v2 -l
> # ls /dev/mapper/mp*
> /dev/mapper/mpath0
> # /sbin/kpartx.new -a
> # ls /dev/mapper/mp*
> /dev/mapper/mpath0 /dev/mapper/mpath0p1
> # mount /dev/mapper/mpath0p1 /mnt
> # df /mnt
> /dev/mapper/mpath0p1 10253771740 544 10253771196 1% /mnt
> # multipath -l
> mpath0 (222a60001559596f8) dm-7 Promise,VTrak E610f
> [size=9.5T][features=1 queue_if_no_path][hwhandler=0]
> \_ round-robin 0 [prio=0][active]
> \_ 2:0:2:0 sdd 8:48 [active][undef]
> \_ 2:0:3:0 sde 8:64 [active][undef]
> \_ 2:0:4:0 sdf 8:80 [active][undef]
> \_ 2:0:5:0 sdg 8:96 [active][undef]
Great. So this issue is fixed!
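(The patch itself isn't quoted in this thread, but the multipath output above shows why 32-bit sector arithmetic cannot cover this device. A rough illustration only, not the actual kpartx code path: the 9.5 TiB size is taken from the `multipath -l` output, and the truncated value just shows what a 32-bit variable would keep.)

```shell
#!/bin/sh
# Rough illustration, not the actual patch: a 9.5 TiB map has more
# 512-byte sectors than a 32-bit variable can hold, so i386 builds
# need 64-bit (long long / uint64_t) sector arithmetic.
SECTORS=$((19*1024*1024*1024*1024/2/512))   # 9.5 TiB in 512-byte sectors
LIMIT=4294967296                            # 2^32
TRUNCATED=$((SECTORS % LIMIT))              # what a 32-bit type would keep
echo "sectors:   $SECTORS"
echo "truncated: $TRUNCATED"
```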
> To get everything to work across reboots, we had to make a couple
> of modifications to the system.
>
> 1. move multipath-tools down the order in rcS.d
> ls /etc/rcS.d/*multipath-tools-boot*
> /etc/rcS.d/S28multipath-tools-boot
>
> See #445825. We moved it even later than suggested there (S04).
> I am not entirely sure why it needs to be so late.
>
> For the record, rcS.d looks like this now -
> #ls /etc/rcS.d/
> README S18ifupdown-clean S40networking
> S01glibc.sh S20module-init-tools S43portmap
> S02hostname.sh S22scsitools.sh S45mountnfs.sh
> S02mountkernfs.sh S25libdevmapper1.02 S46mountnfs-bootclean.sh
> S03udev S26lvm S47lm-sensors
> S04dmraid S28multipath-tools-boot S48console-screen.sh
> S04mountdevsubfs.sh S29multipath-kpartx S55bootmisc.sh
> S05bootlogd S30checkfs.sh S55urandom
> S05keymap.sh S30procps.sh S70screen-cleanup
> S09scsitools-pre.sh S35mountall.sh S70x11-common
> S10checkroot.sh S36mountall-bootclean.sh S75sudo
> S11hwclock.sh S36udev-mtab S99stop-bootlogd-single
> S12mtab.sh S39ifupdown
>
>
> 2. modify /etc/udev/rules.d/multipath.rules
>
> - ACTION=="add", SUBSYSTEM=="block", KERNEL=="dm-*", \
> + ACTION=="change", SUBSYSTEM=="block", KERNEL=="dm-*", \
> PROGRAM="/sbin/dmsetup -j %M -m %m --noopencount --noheadings -c -o name info", \
> - RUN+="/sbin/kpartx -a /dev/mapper/%c"
> + RUN+="/sbin/kpartx.new -a /dev/mapper/%c"
>
>
> This step may not be necessary.
> The new kpartx does not seem to help with this step,
> i.e. no mpath0p1 device gets created.
> We think this is because mpath0 has not been set up yet.
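(For reference, once the map does exist the rule's two steps can be replayed by hand. A dry-run sketch that only prints the commands; the 254:7 major:minor pair is a placeholder for the values udev passes as %M and %m, and the map name is whatever the dmsetup query returns.)

```shell
#!/bin/sh
# Dry-run sketch of what the udev rule does for one dm device.
# MAJOR:MINOR (254:7 here) is a placeholder; substitute the values
# udev would pass as %M and %m for your map.
MAJOR=254
MINOR=7
CMD1="/sbin/dmsetup -j $MAJOR -m $MINOR --noopencount --noheadings -c -o name info"
CMD2="/sbin/kpartx -a /dev/mapper/<name printed by dmsetup>"
echo "$CMD1"
echo "$CMD2"
```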
>
> In the boot log we get
> device-mapper: multipath round-robin: version 1.0.0 loaded
> Starting multipatherror calling out /sbin/scsi_id -g -u -s /block/sdg
You can run this command by hand to see what causes the problem. This
might be related to the kernel or udev you're running, in case you're
using a different version than the one in etch.
> (sdg is the last of the multipathed devices)
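(A quick way to follow that suggestion for all four paths at once. A sketch that only prints the invocations, so they can be inspected and then re-run by hand as root; the `/block/sdX` form matches the etch-era scsi_id call shown in the boot log, and sdd..sdg come from the `multipath -l` output earlier in this mail.)

```shell
#!/bin/sh
# Print (rather than run) the scsi_id call for each path in the map,
# so the failing one can be re-run by hand as root.
# sdd..sdg are the paths from the multipath -l output above.
for dev in sdd sde sdf sdg; do
    echo "/sbin/scsi_id -g -u -s /block/$dev"
done
```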
>
> 3. add another boot-time script
> /etc/rcS.d/S29multipath-kpartx
>
> This contains -
> #!/bin/sh
> # NB: assumes you have 'user_friendly_names yes' in multipath.conf
> #
> . /lib/lsb/init-functions
> set -x
> set -v
> for m in /dev/mapper/mpath?
> do
> log_action_begin_msg "Fixing multipath device $m"
> echo "$0 fixing multipath device $m"
> /sbin/kpartx.new -a "$m"
> log_action_end_msg $?
> done
>
> This does create /dev/mapper/mpath0p1 correctly.
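(One defensive tweak worth making to the loop above: in plain sh, when no mpath device exists the glob `/dev/mapper/mpath?` stays unexpanded and kpartx would be handed the literal string. A sketch of the guarded loop, demonstrated on a scratch directory standing in for /dev/mapper so it can be run without root:)

```shell
#!/bin/sh
# Sketch of the boot script's loop with one defensive fix: test each
# glob match before using it, since an unmatched glob stays literal.
# A scratch directory stands in for /dev/mapper here.
DIR=$(mktemp -d)
touch "$DIR/mpath0" "$DIR/mpath1"
FOUND=0
for m in "$DIR"/mpath?; do
    [ -e "$m" ] || continue        # glob matched nothing
    echo "would run: kpartx.new -a $m"
    FOUND=$((FOUND+1))
done
rm -rf "$DIR"
```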
We had something similar for sarge; this really shouldn't be necessary
for etch. That said, we never fixed #445248 for etch (I wasn't even
aware that the issue was still open).
[..snip..]
> Thanks for your help with this problem.
Thank you for testing.
> I don't know when we will get time to look at the problem on 'lenny',
Don't bother. Since the patch only modifies kpartx, it will work for
lenny too. It'd just be interesting to know whether you encounter any
boot-up problems caused by wrong script ordering on lenny as well. This
should be fixed for lenny, as noted in #445825, but it'd be nice to be sure.
Cheers and thanks again,
-- Guido