mpi4py & numba example/demo/tutorial (CPU, nopython mode)

Issue #164 resolved
Sylwester Arabas created an issue

Hello,

Are there any examples/tutorials on using mpi4py from numba-jitted code?

Trying to do it using Numba’s built-in CFFI support, so far I have only managed to successfully call MPI_Initialized; every other routine call fails because Numba does not support passing void* pointers. This is explained, with a Numba-CFFI-mpi4py code example, here:

https://github.com/numba/numba/issues/4115#issuecomment-642474009
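
For reference, here is roughly the pattern that does work for MPI_Initialized (it only takes an int*, so no void* is involved); the library name "libmpi.so" is an installation-dependent assumption:

```python
import cffi
import numba
import numpy as np
from mpi4py import MPI  # importing mpi4py ensures MPI_Init has already been called

ffi = cffi.FFI()
ffi.cdef("int MPI_Initialized(int *flag);")
libmpi = ffi.dlopen("libmpi.so")  # name/path of the MPI library is installation-dependent
mpi_initialized_c = libmpi.MPI_Initialized

@numba.njit
def mpi_initialized():
    flag = np.zeros(1, dtype=np.int32)          # assuming a 32-bit C int
    mpi_initialized_c(ffi.from_buffer(flag))    # int* argument: handled by Numba's CFFI support
    return flag[0] != 0

print(mpi_initialized())  # expected: True (MPI already initialized via the mpi4py import)
```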

Numba does support calling Cython functions:

https://numba.pydata.org/numba-doc/latest/extending/high-level.html#importing-cython-functions
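
For context, the approach described on that page looks roughly like this (using scipy.special.cython_special.j0, the example from the Numba documentation); it relies on the Cython module exposing the function through its __pyx_capi__ dictionary:

```python
import ctypes
import numba
from numba.extending import get_cython_function_address

# look up the address of a C-level function exported by a Cython module
# (scipy.special.cython_special.j0 is the example used in the Numba docs)
addr = get_cython_function_address("scipy.special.cython_special", "j0")
functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)
j0 = functype(addr)

@numba.njit
def bessel_j0(x):
    return j0(x)  # ctypes function pointers are callable from nopython code

print(bessel_j0(0.0))  # 1.0
```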

Perhaps this would be the way ahead? Any hints or examples on how to use the mpi4py Cython code from outside of mpi4py?

Thanks!

Sylwester

Comments (6)

  1. Lisandro Dalcin

    This is not the appropriate place for such general questions. We use the issue tracker to report actual bugs found in mpi4py. Please use mpi4py’s mailing list hosted at Google Groups.

  2. Lisandro Dalcin

    So, it seems that you are trying to call MPI directly inside numba-jitted nopython code, effectively bypassing mpi4py.

    You will not be able to use mpi4py’s Cython code; it was not designed for such low-level usage, and I’m not interested in making it work that way, basically because I do not see the point. mpi4py can communicate NumPy array data just fine using regular Python code (see the example at the end of this thread). I would be really surprised if making low-level MPI communication calls in numba-jitted nopython code brought any measurable performance benefit.

  3. Sylwester Arabas reporter

    Thanks for the quick reply! It is not about benefiting from making low-level MPI calls - the issue is the huge overhead of going back and forth between Python and Numba-JIT code (i.e., even without MPI) at every timestep of a numerical solver. Disabling the @numba.njit decorator on the timestepping-logic function (while keeping everything below it @numba.njit-decorated) already causes at least a 10-fold drop in performance in small-scale benchmarks of the solver we work on (of course this gets less noticeable for larger domains). I will then try to access the Fortran API from numba/CFFI - it seems simpler (sic!), as one may be able to stick to PODs (see the sketch below).
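
A hypothetical sketch of that Fortran-API idea: since Fortran MPI handles and error codes are plain integers, only int* arguments are needed. The symbol name (mpi_comm_rank_, GNU-style name mangling) and the library name (libmpifort.so) are assumptions that vary between compilers and MPI implementations:

```python
import cffi
import numba
import numpy as np
from mpi4py import MPI  # initializes MPI and provides py2f() for the Fortran handle

ffi = cffi.FFI()
# hypothetical symbol: GNU-style mangling of the Fortran MPI_COMM_RANK binding;
# both the symbol and the library name differ between compilers/MPI implementations
ffi.cdef("void mpi_comm_rank_(int *comm, int *rank, int *ierr);")
libmpi = ffi.dlopen("libmpifort.so")
comm_rank_f = libmpi.mpi_comm_rank_

@numba.njit
def get_rank(comm_f):
    comm = np.zeros(1, dtype=np.int32)  # Fortran MPI handles are plain integers (PODs)
    comm[0] = comm_f
    rank = np.zeros(1, dtype=np.int32)
    ierr = np.zeros(1, dtype=np.int32)
    comm_rank_f(ffi.from_buffer(comm), ffi.from_buffer(rank), ffi.from_buffer(ierr))
    return rank[0]

print(get_rank(MPI.COMM_WORLD.py2f()))  # py2f() returns the Fortran integer handle
```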

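For completeness, the plain-Python mpi4py communication mentioned in the second comment uses the upper-case Send/Recv methods, which operate directly on NumPy buffers without pickling (run with, e.g., mpiexec -n 2):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.arange(10, dtype=np.float64)
if rank == 0:
    comm.Send(data, dest=1, tag=11)           # buffer-based send, no pickling involved
elif rank == 1:
    buf = np.empty(10, dtype=np.float64)
    comm.Recv(buf, source=0, tag=11)
    print(buf)
```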