Bug#759849: multipath-tools: FTBFS: uxsock.c:20:31: fatal error: systemd/sd-daemon.h: No such file or directory

Ritesh Raj Sarraf rrs at debian.org
Wed Sep 3 09:28:54 UTC 2014


On Tuesday 02 September 2014 01:33 PM, Ritesh Raj Sarraf wrote:
>> Could you elaborate a bit more, why those are needed?
>> What is upstream doing about this?
>
> The block storage has many components that work closely with one another.
>
> Take an example, root fs on LVM on Multipath on iSCSI.
>
> The flow for such a setup is to:
> 1) Start iSCSI and discover the LUNs
> 2) Detect and create multipath maps for matching LUNs in DM Multipath
> 3) Detect and activate Volume Groups on the newly detected DM 
> Multipath Physical Volumes
> 4) Mount the file system.
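>
> As a rough sketch, this ordering can be expressed with the LSB init 
> headers that Debian init scripts already carry (the facility names 
> below are illustrative, not necessarily what our scripts declare):
>
>     ### BEGIN INIT INFO
>     # Provides:          multipath-tools
>     # Start only after the iSCSI initiator is up, and stop before it
>     # goes away on shutdown.
>     # Required-Start:    $local_fs open-iscsi
>     # Required-Stop:     $local_fs open-iscsi
>     # Default-Start:     2 3 4 5
>     # Default-Stop:      0 1 6
>     # Short-Description: Device-mapper multipath daemon
>     ### END INIT INFO
>
> LVM activation and the final mount would then declare the same kind 
> of dependency on multipath-tools, so that insserv orders steps 1-4 
> (and their reverse on shutdown) correctly.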
>
> The same applies to the shutdown sequence. You want proper checks in 
> place before initiating a shutdown of a service. One could argue that 
> a service should not stop while it still has active users.
>
> Many of these services (multipath and iscsi, for example) have a 
> two-part design: one component in the kernel and the other in 
> userspace. The kernel component will not terminate while anything is 
> still active, but the userspace side is not so forgiving.
>
> In open-iscsi, if you ask the daemon to shut down, it will. If there 
> are active sessions, the kernel component will keep them running, but 
> the userspace daemon will still be shut down. That means that on the 
> next state failure, open-iscsi has no way of determining that a LUN's 
> state has changed.
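>
> This is the kind of situation a stop-time check can at least warn 
> about. A minimal sketch, assuming we simply want to refuse the stop 
> (iscsiadm -m session lists active sessions and fails when there are 
> none):
>
>     # Do not stop iscsid while sessions are still logged in.
>     if iscsiadm -m session >/dev/null 2>&1; then
>         echo "WARNING: active iSCSI sessions present," \
>              "not stopping iscsid." >&2
>         exit 1
>     fi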
>
> The case is similar with DM Multipath. The userspace DM Multipath 
> daemon is responsible for polling the Device Mapper maps and keeping 
> their status up to date. If the userspace daemon is inactive and a 
> fabric state change happens underneath, there is no way to propagate 
> that error to the upper layers.
>
> Because these components are part of the core storage stack, such 
> design issues, if triggered, leave you with a machine that has no 
> access to its root disk. Any process at that point may end up in the 
> 'D' (uninterruptible sleep) process state or hit an immediate device 
> failure. The only remaining action is to hardware-reset the machine.
>
> This is why we do a lot of checks in the init scripts to warn the user.
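>
> A minimal sketch of the kind of check meant here (the real scripts do 
> more than this; dmsetup ls --target multipath lists the live 
> multipath maps and prints "No devices found" when there are none):
>
>     # Do not stop multipathd while multipath maps are still active.
>     if dmsetup ls --target multipath 2>/dev/null \
>             | grep -qv '^No devices found'; then
>         echo "WARNING: active multipath maps present," \
>              "not stopping multipathd." >&2
>         exit 1
>     fi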
>
>
> Similar approaches were taken in RHEL (5 and 6) and SLES (10 and 11). 
> I'm not sure what Red Hat or SUSE has chosen for their latest 
> releases, as I don't work on those products any more.
>
>
> My inclination is to ship both the systemd service files and the init 
> scripts in their current form, along with whatever limitations each 
> may have, and let the user choose.

Hi,

I did not get any comments on this. How are others doing similar 
things while migrating to systemd?

-- 
Ritesh Raj Sarraf | http://people.debian.org/~rrs
Debian - The Universal Operating System


