[Pkg-openmpi-maintainers] preferred MPI: openmpi or mpich?

Drew Parsons dparsons at debian.org
Fri Jul 7 08:05:11 UTC 2017


I asked the PETSc maintainers to say more about their MPI experience.
One of them replied,

    "There were a bunch of bugs in Open MPI one-sided, but I think they all
    claim to be fixed now.  Some examples that persisted for many years
    include

    https://svn.open-mpi.org/trac/ompi/ticket/1905
https://svn.open-mpi.org/trac/ompi/ticket/2656

While these (MPI-2 bugs) are fixed in recent versions, there are still
lots of users on old versions.  I had to write a lot of code to work
around these despite Open MPI falsely claiming to provide MPI-2.  There
also tends to be a lot more valgrind noise than with MPICH and MPICH
threading is much more mature.

Meanwhile, Open MPI has better plugin support (though MPICH 3.3 will
have a new OFI layer) and the implementation of opaque types as
pointer-to-incomplete offers better safety and more convenient
debuggability than MPICH's typedef to int."
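
To make that last point concrete: the difference is visible in the two
implementations' public headers, where Open MPI declares handles as
pointers to incomplete structs while MPICH declares them as ints. A
small, deliberately broken sketch of the consequence - illustrative
code of mine, not anything from PETSc or the tickets:

    /* handle-confusion.c: pass the wrong opaque handle on purpose.
     * Open MPI (roughly):  typedef struct ompi_communicator_t *MPI_Comm;
     * MPICH    (roughly):  typedef int MPI_Comm;
     */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);

        MPI_Request req = MPI_REQUEST_NULL;

        /* Deliberate bug: a request handle where a communicator
         * belongs.  Open MPI's distinct pointer types make this a
         * compile-time diagnostic; under MPICH both handles are
         * plain ints, so it compiles silently and fails (if at
         * all) only at run time. */
        MPI_Comm_rank(req, &rank);

        MPI_Finalize();
        return 0;
    }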
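
And for anyone who hasn't used the interface those tickets concern,
MPI-2 one-sided (RMA) communication in its simplest form looks
something like the following - again only a minimal hypothetical
sketch, run with e.g. "mpirun -n 2":

    /* put-fence.c: rank 0 writes into rank 1's memory window without
     * rank 1 posting any matching receive. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, buf = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Expose one int per rank as an RMA window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);              /* open access epoch */
        if (rank == 0) {
            int val = 42;
            MPI_Put(&val, 1, MPI_INT, 1 /* target rank */,
                    0 /* displacement */, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);              /* close epoch; data visible */

        if (rank == 1)
            printf("rank 1 now holds %d\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

It is this corner of the standard that the tickets above were filed
against.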

If we want to hear more, they can dig up further examples. The PETSc
maintainers can be contacted at petsc-maint at mcs.anl.gov

It sounds like Open MPI had more bugs in the past (possibly a
consequence of trying to do more, with InfiniBand and plugin support)
but has now sorted out the more egregious ones. Threading seems to be
Open MPI's weak point at the moment.
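
For anyone who wants to probe that directly, the thread level an MPI
build actually grants can be checked in a few lines - a minimal
sketch, the same for either implementation:

    /* thread-level.c: request full multithreading, report what the
     * library actually provides. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* The library may grant less than the level requested. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE)
            printf("granted level %d, wanted MPI_THREAD_MULTIPLE (%d)\n",
                   provided, MPI_THREAD_MULTIPLE);
        else
            printf("MPI_THREAD_MULTIPLE granted\n");

        MPI_Finalize();
        return 0;
    }

Note that a granted MPI_THREAD_MULTIPLE only tells you the library
claims support; it says nothing about how well tested that path is,
which is the point at issue here.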

Drew

On Thu, 2017-07-06 at 15:10 +0100, Alastair McKinstry wrote:
> Having worked on OpenMPI, I'm slightly biased in favour of it over
> MPICH.
> 
> I'd still favour OpenMPI, as I believe it has better hardware support -
> in particular for InfiniBand and Intel Omni-Path.
> 
> Historically MPICH had better MPI-3 and MPI_THREAD_MULTIPLE support.
> OpenMPI now has full MPI-3, but MPI_THREAD_MULTIPLE is still said to
> be 'only lightly tested and probably still has bugs'. How this
> compares with MPICH in practice, I'm not so sure.
> 
> It would be good to know which bugs PETSc upstream is talking about,
> and whether they're logged. A "this is more buggy than that"
> conversation becomes difficult otherwise.
> 
> Best regards
> Alastair
> 
> 
> On 05/07/2017 08:57, Drew Parsons wrote:
> > I'm curious (mainly just asking out of interest, but also to
> > discuss whether we're configured in the best way): which MPI would
> > people recommend in 2017?
> > 
> > Debian currently recommends openmpi, via mpi-defaults.  But if I
> > remember the history correctly, this decision was made because at
> > the time openmpi built successfully on a wider set of architectures,
> > in particular on Debian's release architectures, while mpich failed
> > to build on some arches.
> > 
> > That criterion no longer applies: both openmpi and mpich now build
> > successfully on all Debian arches (except sh4, where both fail to
> > build).
> > 
> > PETSc upstream, for comparison, supports openmpi but prefers mpich,
> > asserting that mpich has a lot fewer bugs.
> > 
> > Should we consider switching mpi-defaults to mpich on all arches?
> > 
> > Drew
> > 
> 
> 


