Bug#549691: lvm2: lvremove fails to remove open logical volume which is not opened

anis.ocellata anis.ocellata at gmail.com
Sun Nov 11 21:16:54 UTC 2012


Hi,

I probably have the same problem.

It occurs in about 20% of the calls to lvremove,
but can be reproduced reliably using the steps below.

Symptoms:

$ lvremove vg/lv-snap
Do you really want to remove active logical volume lv-snap? [y/n]: y
   /sbin/dmeventd: stat failed: No such file or directory
   Unable to deactivate open vg-lv-real (254:13)
   Failed to resume lv.
   Node /dev/mapper/vg-lv-snap-cow was not removed by udev. Falling back to direct node removal.

The dmeventd bug is unrelated.

Package versions:

wheezy/sid
linux-image-2.6.32-5-686   2.6.32-46
liblvm2app2.2:i386         2.02.95-4
lvm2                       2.02.95-4
libdevmapper1.02.1:i386    2:1.02.74-4
libudev0:i386              175-7
dmsetup                    2:1.02.74-4
udev                       175-7
udisks                     1.0.4-7

Steps to reproduce:

$ dd if=/dev/zero of=/tmp/data bs=1M count=64
$ losetup -f /tmp/data
$ dev="$(losetup -a | sed -n 's@^\(/dev/loop.\).*(/tmp/data)$@\1@p')"
$ vgcreate vg $dev
$ lvcreate -n lv -L 20m vg
$ for i in $(seq 1 50); do echo -n "[$i] "; date; lvcreate -n lvs -L 20m -s vg/lv || break; lvremove -f vg/lvs || break; sleep 1; done
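
When the loop above trips the bug, simply re-running lvremove by hand
usually succeeds, since the failure only hits about 20% of calls. A blunt
retry helper (my own sketch, not an lvm2 facility) automates that:

```shell
# retry N CMD...: run CMD until it succeeds, giving up after N attempts.
# Hypothetical helper; the name and interface are mine, not lvm2's.
retry() {
    n=$1; shift
    i=0
    while ! "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$n" ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}

# e.g.: retry 5 lvremove -f vg/lvs
```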

Before the fail:

$ dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID
vg-lvs-cow      254  11 L--w    1    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADbtS2ezxFQ80z6AyYJQVGrNrBm2KmYdiG-cow
vg-lv-real      254  10 L--w    2    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADPedDWlTT8u7iGzBUJatBw7XCaCJ2tazL-real
vg-lv           254   8 L--w    0    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADPedDWlTT8u7iGzBUJatBw7XCaCJ2tazL
vg-lvs          254   9 L--w    0    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADbtS2ezxFQ80z6AyYJQVGrNrBm2KmYdiG
$ dmsetup ls --tree
vg-lv (254:8)
  └─vg-lv-real (254:10)
     └─ (7:2)
vg-lvs (254:9)
  ├─vg-lv-real (254:10)
  │  └─ (7:2)
  └─vg-lvs-cow (254:11)
     └─ (7:2)
$ dmsetup table
vg-lvs-cow: 0 24576 linear 7:2 43008
vg-lv-real: 0 40960 linear 7:2 2048
vg-lv: 0 40960 snapshot-origin 254:10
vg-lvs: 0 40960 snapshot 254:10 254:11 P 8

After the fail:

$ dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID
vg-lv-real      254  10 L--w    0    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADPedDWlTT8u7iGzBUJatBw7XCaCJ2tazL-real
vg-lv           254   8 L--w    0    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADPedDWlTT8u7iGzBUJatBw7XCaCJ2tazL
vg-lvs          254   9 L--w    0    1      0 LVM-VXLpQ9Kays4OYlk4sa3YBygqFDzMXZADwd2xkixwuc6xoErS15OFrK22jInum17j
$ dmsetup ls --tree
vg-lv-real (254:10)
  └─ (7:2)
vg-lv (254:8)
  └─ (7:2)
vg-lvs (254:9)
  └─ (7:2)
$ dmsetup table
vg-lv-real: 0 40960 linear 7:2 2048
vg-lv: 0 40960 linear 7:2 2048
vg-lvs: 0 24576 linear 7:2 43008
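
To collect these dumps automatically on every iteration, I use something
like the following (the helper name and log directory are my own
invention; it assumes dmsetup is in PATH and run as root):

```shell
# log_state [DIR]: snapshot the current device-mapper state into DIR,
# timestamped, so the before/after tables above can be compared later.
log_state() {
    dir=${1:-/tmp/dm-state}
    stamp=$(date +%s)
    mkdir -p "$dir" || return 1
    dmsetup info -c > "$dir/info.$stamp"
    dmsetup table   > "$dir/table.$stamp"
}

# e.g., inside the reproduction loop: log_state; lvremove -f vg/lvs || break
```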

Commenting out
 > KERNEL=="dm-*", OPTIONS+="watch"
in /lib/udev/rules.d/80-udisks.rules did not help, but stopping the
udev daemon makes the symptoms disappear.
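
Since stopping udev makes the problem go away, as a stopgap I am
considering wrapping lvremove like this (untested sketch of my own; it
assumes `udevadm control --stop-exec-queue` approximates stopping the
daemon, which may not hold):

```shell
# with_udev_paused CMD...: pause udev event processing around CMD.
# Hypothetical wrapper, not an lvm2 or udev facility; needs root.
with_udev_paused() {
    udevadm control --stop-exec-queue || return 1
    "$@"
    rc=$?
    udevadm control --start-exec-queue
    return $rc
}

# e.g.: with_udev_paused lvremove -f vg/lvs
```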

Failing a fix, does anybody have a reasonable workaround?

Thanks.


