[Pkg-iscsi-maintainers] Bug#833917: open-iscsi: LVM fails to see iSCSI targets for non-mounted devices with systemd.

IOhannes m zmölnig (Debian/GNU) umlaeute at debian.org
Wed Aug 10 10:23:34 UTC 2016


Package: open-iscsi
Version: 2.0.873+git0.3b4b4500-8+deb8u1
Severity: normal

Dear Maintainer,

I'm experiencing boot problems with a virtual machine that uses a disk
(partition) which is an LVM volume whose physical volumes are on an iSCSI
target.

## so here's my setup:
- HOST was set up as a squeeze system, but has since been upgraded to
  jessie+backports (and systemd)
- HOST runs libvirtd/qemu to run a number of virtual machines
- HOST's system resides on a normal built-in disk
- additionally, HOST is connected to a SAN via iSCSI
- HOST has a volume group 'virthosts' that manages the disks for the virtual
  machines; originally the volume group contained only local physical devices
- HOST hosts a virtual machine FOO, which has a system disk
  /dev/virthosts/foo
- later on, an iSCSI LUN was added to the VG
- HOST also hosts a virtual machine BAR, which has a system disk
  /dev/virthosts/bar (which happens to be allocated on the local physical
  devices) and a data disk /dev/virthosts/bardata, which was forced to be
  allocated on the iSCSI LUN

## and here's the problem
When I reboot the HOST machine, LVM gets initialized, obviously *before* the
iSCSI LUN comes up.
This results in /dev/virthosts/bardata being missing from the VG, which in
turn means the virtual machine BAR does not start up.
(The virtual machine FOO, which has all its LVs on local PVs, comes up
without a problem.)

AFAICT, the "solution" to this problem would be to add the VG 'virthosts' to
the LVMGROUPS variable in /etc/default/open-iscsi.
Unfortunately (as you can see from my conf snippets) this is already the
case, and it did not help.
I suspect this is because the VG is not used (e.g. "mounted") by HOST itself;
instead its LVs are passed on as raw devices to the virtual machines.
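For what it's worth, my understanding of the LVMGROUPS mechanism (a sketch of
what I *assume* the init script does, not its actual code; the /tmp file is
just a stand-in so the snippet is self-contained) is that the defaults file is
sourced and each listed VG is activated after iSCSI login, roughly:

```shell
# Stand-in for /etc/default/open-iscsi, so the sketch runs anywhere:
cat > /tmp/open-iscsi.defaults <<'EOF'
LVMGROUPS="virthosts"
EOF

# Source the defaults and activate each listed VG. The vgchange call is
# echoed rather than executed, since it needs root and the real VG:
. /tmp/open-iscsi.defaults
for vg in $LVMGROUPS; do
    echo "would run: vgchange -a y $vg"
done
```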

Here's some relevant (I hope) output from `journalctl`:

> Aug 08 12:01:30 HOST systemd[1]: lvm2-activation-early.service: main process exited, code=exited, status=5/NOTINSTALLED
> Aug 08 12:01:30 HOST systemd[1]: Failed to start Activation of LVM2 logical volumes.
> Aug 08 12:01:30 HOST systemd[1]: Unit lvm2-activation-early.service entered failed state.
> Aug 08 12:01:30 HOST lvm[856]: Couldn't find device with uuid aLUEMH-r0mE-JubT-GB13-H6Hr-4ECS-wcHCBp.
> Aug 08 12:01:30 HOST lvm[856]: Refusing activation of partial LV virthosts/bardata.  Use '--activationmode partial' to ov
> Aug 08 12:01:30 HOST lvm[856]: 2 logical volume(s) in volume group "virthosts" now active
> Aug 08 12:01:30 HOST systemd[1]: lvm2-activation.service: main process exited, code=exited, status=5/NOTINSTALLED
> Aug 08 12:01:30 HOST systemd[1]: Failed to start Activation of LVM2 logical volumes.
> Aug 08 12:01:30 HOST systemd[1]: Unit lvm2-activation.service entered failed state.
> Aug 08 12:01:30 HOST lvm[858]: Couldn't find device with uuid aLUEMH-r0mE-JubT-GB13-H6Hr-4ECS-wcHCBp.
> Aug 08 12:01:30 HOST lvm[858]: 2 logical volume(s) in volume group "virthosts" monitored
> Aug 08 12:01:31 HOST kernel: igb 0000:02:00.0: changing MTU from 1500 to 9000
> Aug 08 12:01:31 HOST kernel: Bridge firewalling registered
> Aug 08 12:01:31 HOST kernel: device eth0 entered promiscuous mode
> Aug 08 12:01:31 HOST kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
> Aug 08 12:01:31 HOST kernel: IPv6: ADDRCONF(NETDEV_UP): br0: link is not ready
> Aug 08 12:01:33 HOST kernel: igb 0000:02:00.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
> Aug 08 12:01:33 HOST kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> Aug 08 12:01:33 HOST kernel: br0: port 1(eth0) entered forwarding state
> Aug 08 12:01:33 HOST kernel: br0: port 1(eth0) entered forwarding state
> Aug 08 12:01:33 HOST kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br0: link becomes ready
> Aug 08 12:01:42 HOST kernel: IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
> Aug 08 12:01:44 HOST kernel: igb 0000:02:00.1 eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
> Aug 08 12:01:44 HOST kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
> Aug 08 12:01:52 HOST networking[862]: Configuring network interfaces...done.
> Aug 08 12:01:52 HOST ifup[1077]: /sbin/ifup: interface eth0 already configured
> Aug 08 12:01:52 HOST kernel: Loading iSCSI transport class v2.0-870.
> Aug 08 12:01:52 HOST kernel: iscsi: registered transport (tcp)

After that, LVM does not attempt any activation anymore.

Instead, I must re-activate the VG manually:
    # vgchange -a y
      3 logical volume(s) in volume group "virthosts" now active

Only after that can I start the BAR virtual machine.
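As a stopgap, I could imagine automating that manual vgchange with a oneshot
unit ordered after the iSCSI login (an untested sketch; the unit name and
ordering are my own guesses, this is not something shipped by open-iscsi):

```ini
# /etc/systemd/system/iscsi-lvm-activate.service  (hypothetical unit)
[Unit]
Description=Re-activate LVM volume groups backed by iSCSI PVs
After=open-iscsi.service
Requires=open-iscsi.service

[Service]
Type=oneshot
ExecStart=/sbin/vgchange -a y virthosts
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

But of course the point of this report is that LVMGROUPS should already take
care of this.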

Given that open-iscsi already has the LVMGROUPS directive, I suspect that the
problem is (solvable) in open-iscsi. Please re-assign if this is not the
case.



-- System Information:
Debian Release: 8.5
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 3.16.0-4-amd64 (SMP w/8 CPU cores)
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages open-iscsi depends on:
ii  libc6  2.19-18+deb8u4
ii  udev   215-17+deb8u4

open-iscsi recommends no packages.

open-iscsi suggests no packages.

-- Configuration Files:
/etc/default/open-iscsi changed:
LVMGROUPS="virthosts"
HANDLE_NETDEV=1

/etc/iscsi/initiatorname.iscsi [Errno 13] Permission denied: u'/etc/iscsi/initiatorname.iscsi'
/etc/iscsi/iscsid.conf changed:
node.startup = automatic
node.leading_login = No
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.session.iscsi.FastAbort = Yes


-- no debconf information


