[Pkg-openmpi-maintainers] preferred MPI: openmpi or mpich?

Alastair McKinstry alastair.mckinstry at sceal.ie
Fri Jul 7 08:22:52 UTC 2017


Yes, I agree.

The bigger question here is: how do we decide between them?
At $work, when we choose which default MPI to recommend for a new
cluster install, we test our standard codes and benchmark: first, does
it compile and work (nearly always)? Second, what's the speed /
latency / variance?
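
For context, the latency part of that can be as simple as a ping-pong
between two ranks. A minimal sketch (not our actual benchmark suite;
it assumes mpicc/mpirun from whichever MPI implementation is
installed):

/* pingpong.c - hypothetical minimal latency check: times the
 * round-trip of a small message between rank 0 and rank 1.
 * Build: mpicc pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 10000;
    char buf[8] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    /* mean one-way trip would be half this; we report the round-trip */
    if (rank == 0)
        printf("mean round-trip latency: %.2f us\n",
               (t1 - t0) / iters * 1e6);
    MPI_Finalize();
    return 0;
}

Run under both implementations on the same nodes, the variance across
runs is often as telling as the mean.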

Given our user base, the most useful criterion is probably: does it
have the necessary feature base? We can't benchmark, as we don't know
general users' hardware.  OpenMPI appears to have more complete
hardware support, and should be the default unless it is shown to be
'too buggy'.

How to test? Can we go through our package base and turn on MPI
threading in testing?
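
A cheap first probe: request MPI_THREAD_MULTIPLE at init and report
what the library actually grants. A rough sketch (threadcheck.c is a
name I've made up):

/* threadcheck.c - ask for full threading support and report the
 * level actually provided by the MPI library.
 * Build: mpicc threadcheck.c -o threadcheck
 * Run:   mpirun -np 1 ./threadcheck */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("requested level %d (MPI_THREAD_MULTIPLE), got %d\n",
           MPI_THREAD_MULTIPLE, provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("full thread support not provided by this MPI\n");
    MPI_Finalize();
    return 0;
}

That only tells us what's advertised, of course; shaking out actual
MPI_THREAD_MULTIPLE bugs would still need the packages' test suites
run with threading enabled.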

regards
Alastair
> It sounds like OpenMPI had more bugs in the past (possibly a
> consequence of trying to do more, with Infiniband and plugin support)
> but has now sorted out the more egregious ones. Threading seems to be
> OpenMPI's weak point at the moment.
>
> Drew
>
> On Thu, 2017-07-06 at 15:10 +0100, Alastair McKinstry wrote:
>> Having worked on OpenMPI, I'm slightly biased in favour of it over
>> MPICH.
>>
>> I'd still favour OpenMPI, as I believe it has better hardware
>> support - in particular for Infiniband and Intel Omnipath.
>>
>> Historically MPICH had better MPI-3 support and MPI_THREAD_MULTIPLE
>> support. OpenMPI now has full MPI-3, but MPI_THREAD_MULTIPLE is
>> still said to be 'only lightly tested and probably still has bugs'.
>> How that compares with MPICH in reality, I'm not so sure.
>>
>> It would be good to know what bugs PETSc upstream talks of, and if
>> they're logged. A "this is more buggy than that" conversation becomes
>> difficult otherwise.
>>
>> Best regards
>> Alastair
>>
>>
>> On 05/07/2017 08:57, Drew Parsons wrote:
>>> I'm curious (mainly just asking out of interest, but also to
>>> discuss whether we're configured the best way): which MPI would
>>> people recommend in 2017?
>>>
>>> Debian currently recommends openmpi, via mpi-defaults.  But if I
>>> remember the history correctly, this decision was made because at
>>> the time openmpi built successfully over a wider set of
>>> architectures, in particular on Debian's release architectures,
>>> while mpich failed to build on some arches.
>>>
>>> That criterion no longer applies: both openmpi and mpich now build
>>> successfully on all Debian arches (except sh4, where both fail to
>>> build).
>>>
>>> PETSc upstream, for comparison, supports openmpi but prefers mpich,
>>> asserting that mpich has a lot fewer bugs.
>>>
>>> Should we consider switching mpi-defaults to mpich on all arches?
>>>
>>> Drew
>>>
>>

-- 
Alastair McKinstry, <alastair at sceal.ie>, <mckinstry at debian.org>, https://diaspora.sceal.ie/u/amckinstry
Commander Vimes didn’t like the phrase “The innocent have nothing to fear,”
 believing the innocent had everything to fear, mostly from the guilty but in the longer term
 even more from those who say things like “The innocent have nothing to fear.”
 - T. Pratchett, Snuff
