[Pkg-openmpi-maintainers] preferred MPI: openmpi or mpich?

Alastair McKinstry alastair.mckinstry at sceal.ie
Thu Jul 6 14:10:07 UTC 2017


Having worked on OpenMPI, I'm slightly biased in favour of it over MPICH.

I'd still favour OpenMPI, as I believe it has better hardware support -
in particular for InfiniBand and Intel Omni-Path.

Historically, MPICH had better MPI-3 and MPI_THREAD_MULTIPLE support.
OpenMPI now has full MPI-3 support, but its MPI_THREAD_MULTIPLE is still
said to be 'only lightly tested and probably still has bugs'. How that
compares with MPICH in practice, I'm not sure.
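
For concreteness, here's a minimal C sketch (my own illustration, not
from PETSc or from either implementation) of how a program requests that
threading level; the 'provided' value reported back is an easy way to
check what OpenMPI or MPICH actually grants on a given system:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threaded support; the library may grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: requested MPI_THREAD_MULTIPLE, "
                        "got thread level %d\n", provided);

    MPI_Finalize();
    return 0;
}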

It would be good to know which bugs PETSc upstream is referring to, and
whether they're logged. A "this is more buggy than that" conversation is
difficult to have otherwise.

Best regards
Alastair


On 05/07/2017 08:57, Drew Parsons wrote:
> I'm curious (mainly just asking out of interest, but also to discuss
> whether we're configured in the best way): which MPI would people
> recommend in 2017?
>
> Debian currently recommends openmpi, via mpi-defaults.  But if I
> remember the history correctly, that decision was made because at the
> time openmpi built successfully on a wider set of architectures, in
> particular on Debian's release architectures, while mpich failed to
> build on some arches.
>
> That criterion no longer applies: both openmpi and mpich now build
> successfully on all Debian arches (except sh4, where both fail to
> build).
>
> PETSc upstream, for comparison, supports openmpi but prefers mpich,
> asserting that mpich has far fewer bugs.
>
> Should we consider switching mpi-defaults to mpich on all arches?
>
> Drew
>

-- 
Alastair McKinstry, <alastair at sceal.ie>, <mckinstry at debian.org>, https://diaspora.sceal.ie/u/amckinstry
Commander Vimes didn’t like the phrase “The innocent have nothing to fear,”
 believing the innocent had everything to fear, mostly from the guilty but in the longer term
 even more from those who say things like “The innocent have nothing to fear.”
 - T. Pratchett, Snuff



