Multipath-tools

Eric Diologeant - XtremIOPS eric_diologeant at xtremiops.com
Wed Apr 22 21:32:56 UTC 2015


Hello,

 

I am contacting you regarding multipath-tools on Debian Wheezy; I hope you can
enlighten us on how it is expected to behave with the parameters we can set.

 

To summarize our configuration: a Debian Wheezy 7.8 server with two FC ports,
each connected to a different FC fabric, and attached to a clustered
FalconStor NSS solution (storage virtualization).

 

1) If one fabric/HBA goes down, we would like the server to switch to the
second HBA as fast as possible, rather than waiting out retries on the
faulty path.

2) If the NSS solution itself fails (before failover to the secondary node),
both storage paths are down for at least 30 seconds; we would like the
server to survive that failover without losing the devices.

 

To sum up our expectation: switch to a surviving port as fast as possible,
but if all paths are down, queue I/O forever.
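
For reference, here is a minimal multipath.conf sketch of what we are trying
to achieve (the values are illustrative, and the fast_io_fail_tmo and
dev_loss_tmo options may depend on the multipath-tools version shipped with
Wheezy):

    defaults {
        polling_interval   5
        # Queue I/O indefinitely when no path is left, so the host
        # survives a full NSS failover instead of losing the devices
        no_path_retry      queue
        # Fail I/O on a dead path quickly so dm-multipath can switch
        # to the surviving HBA instead of sitting in SCSI retries
        fast_io_fail_tmo   5
        # Keep the SCSI devices around long enough to ride out the
        # 30-second (or longer) NSS cluster failover
        dev_loss_tmo       600
    }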

 

We have tried increasing no_path_retry (from 1 up to 300, and also "queue"),
keeping polling_interval at its existing value of 5. In that case
dev_loss_tmo only grows to a maximum of 600 seconds; no_path_retry set to
120 seems to be the ceiling, which matches 120 retries times the 5-second
polling_interval. The values 300 and "queue" do not work and leave
dev_loss_tmo unchanged.
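
For what it is worth, we have been cross-checking the kernel-side timeouts
directly through sysfs. The rport name below is only an example from one of
our hosts, and as far as we understand the FC transport class refuses
dev_loss_tmo values above 600 seconds while fast_io_fail_tmo is unset:

    # Inspect the current FC transport timeouts for all remote ports
    grep . /sys/class/fc_remote_ports/rport-*/fast_io_fail_tmo
    grep . /sys/class/fc_remote_ports/rport-*/dev_loss_tmo

    # Setting a short fast_io_fail_tmo first should let the kernel
    # accept a dev_loss_tmo larger than 600 seconds
    echo 5   > /sys/class/fc_remote_ports/rport-0:0-1/fast_io_fail_tmo
    echo 900 > /sys/class/fc_remote_ports/rport-0:0-1/dev_loss_tmo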

 

We have also tried path_grouping_policy failover and multibus, but neither
changed the behaviour or solved our problem.
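
In case it matters, this is how we check which settings multipathd actually
applies (run against the daemon's interactive socket):

    # Dump the configuration multipathd is really using
    multipathd -k"show config"

    # Show the resulting maps, path groups and path states
    multipath -ll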

 

I know this is not a common requirement, but if there is a way to achieve
it, please advise.

 

Best Regards

 

Eric

 

 

Eric Diologeant               

Owner / General Manager

XtremIOPS

“In the Service of Data”

 

763 Rue de Cocherel

ZI de Netreville

27000 Evreux

 

Mobile: +33.763.210.284

Landline: +33.974.771.082

Fax: +33.974.771.081

Switchboard: +33.974.771.080

Eric_Diologeant at xtremiops.com

 

Follow me on social media!

Twitter: http://twitter.com/xtremiops
YouTube: http://www.youtube.com/user/xtremiops
Google+: https://plus.google.com/115793362950442566210
LinkedIn: http://fr.linkedin.com/in/ericdiologeant
Viadeo: http://www.viadeo.com/fr/profile/eric.diologeant

 
