mpi4py + OpenMPI 3.0.0 breaks subprocess launching of mpiexec
Importing `MPI` prevents `subprocess` from successfully launching `mpiexec`. This was reported by a user on Arch Linux and has been reproduced on macOS. OpenMPI 3.0.0 appears to be the common thread; the issue was not present with OpenMPI 2.1.1.
Steps to reproduce:
- Install OpenMPI 3.0.0 (e.g. `brew upgrade open-mpi`)
- `python3 -m pip install --no-binary mpi4py mpi4py` (I did this in a venv for isolation)
The following fails:

```python
from subprocess import check_call
from mpi4py import MPI

call = ["mpiexec", "-n", "1", "hostname"]
check_call(call)
```
with the following exception:

```
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
<ipython-input-1-9f0f7b8c0c96> in <module>()
      2 from mpi4py import MPI
      3 call = ["mpiexec", "-n", "1", "hostname"]
----> 4 check_call(call)

/usr/local/Cellar/python/3.6.4_4/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py in check_call(*popenargs, **kwargs)
    289         if cmd is None:
    290             cmd = popenargs[0]
--> 291         raise CalledProcessError(retcode, cmd)
    292     return 0
    293

CalledProcessError: Command '['mpiexec', '-n', '1', 'hostname']' returned non-zero exit status 1.
```
Doing any of the following removes the error:
- not importing `MPI`
- `call = ['hostname']`
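A possible workaround sketch, based on an assumption about Open MPI internals rather than anything confirmed in this thread: `MPI_Init` leaves markers in the process environment (variables with `OMPI_`/`PMIX_` prefixes), and a child `mpiexec` may refuse to launch when it sees them. Stripping those variables before spawning may sidestep the detection; the exact variable prefixes are an assumption.

```python
import os
import shutil
from subprocess import check_call


def launch_clean(cmd):
    """Run cmd with Open MPI / PMIx environment variables stripped.

    The OMPI_/PMIX_ prefixes are an assumption about how Open MPI
    marks processes that are already inside an MPI job.
    """
    env = {k: v for k, v in os.environ.items()
           if not k.startswith(("OMPI_", "PMIX_"))}
    return check_call(cmd, env=env)


# Only attempt the launch if mpiexec is actually on PATH.
if shutil.which("mpiexec"):
    launch_clean(["mpiexec", "-n", "1", "hostname"])
```

Whether this actually works will depend on the Open MPI version and launch environment; treat it as something to try, not a guaranteed fix.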
Comments (6)
- changed status to invalid
- I have the same problem: is there any workaround to avoid the issue? Is the only solution to avoid importing mpi4py?
- (reporter) We're switching to MPICH...
- @yger You should request this feature from the Open MPI folks. AFAIC, there is nothing I can do on the mpi4py side to prevent this annoyance. Maybe there is a way to make it work, but again, you will not get the answer from me but rather from the Open MPI developers.
- Thanks a lot for your quick answer, and for the amazing work. Indeed, I'll contact them; meanwhile, I just checked that everything works fine with MPICH.
AFAICT, Open MPI does not support recursive invocations of `mpiexec`. This is not actually an mpi4py issue; the problem is in the backend MPI implementation. Rather than importing mpi4py, try to `dlopen()` the MPI library with ctypes and call `MPI_Init()`; you should get the exact same error.
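The suggested ctypes check could be sketched as follows. The library name resolution is an assumption (`find_library("mpi")` resolves to e.g. `libmpi.so` on Linux or `libmpi.dylib` on macOS, depending on how Open MPI was installed), and `MPI_Init(NULL, NULL)` is valid per the MPI standard.

```python
import ctypes
import ctypes.util
from subprocess import call


def reproduce():
    """dlopen() the MPI library, call MPI_Init(), then try mpiexec.

    With Open MPI 3.0.0 the child launch should fail with the same
    nonzero exit status as the mpi4py-based snippet above.
    """
    libname = ctypes.util.find_library("mpi")  # platform-dependent lookup
    if libname is None:
        raise RuntimeError("MPI library not found; install Open MPI first")
    # RTLD_GLOBAL is commonly needed so the MPI library can resolve
    # its own plugin symbols.
    libmpi = ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)
    libmpi.MPI_Init(None, None)  # same side effect as `from mpi4py import MPI`
    return call(["mpiexec", "-n", "1", "hostname"])
```

If this fails the same way without mpi4py in the picture, that confirms the problem lives in the MPI implementation itself.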