Cannot build against openmpi 4.0.0

Issue #115 resolved
Robert Manson-Sawko created an issue


pip doesn't seem to be able to build mpi4py-3.0.0 against openmpi-4.0.0 on the ppc64le platform. Here are the first few relevant lines from my log file before it breaks:

MPI configuration: [mpi] from 'mpi.cfg'
  MPI C compiler:    /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpicc
  MPI C++ compiler:  /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpicxx
  MPI F compiler:    /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpifort
  MPI F90 compiler:  /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpif90
  MPI F77 compiler:  /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpif77
  checking for library 'lmpe' ... 
  /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpicc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
  /gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpicc -Wl,-rpath=/gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi
  /gpfs/paragon/local/apps/dev/core/gcc/6.4.0/bin/ld: cannot find -llmpe
  collect2: error: ld returned 1 exit status

Running the same pip command against openmpi 3.1.2 seems to work fine.

Comments (9)

  1. Lisandro Dalcin

    Are you 100% sure the build breaks? MPE support is optional; if MPE is not found, the build should continue gracefully. Can you please attach the full output of python setup.py build? In the meantime, I'll try to reproduce on my side.

  2. Lisandro Dalcin

    The maint branch built just fine with Open MPI 4.0.0 on my x86_64 platform. At this point, you'll have to show us the full build log from python setup.py build.

  3. Robert Manson-Sawko reporter

    Thanks for a very quick reply.

    I've just attached my build log. Note that I submit the build to the compute cluster via an LSF script; the script's content is at the end of the file. There are some software inconsistencies on our local clusters due to the multi-cluster design, so I've got into the habit of compiling everything in a job script.

    I think you're right... The actual error is probably at the end, perhaps related to this passage:

        src/mpi4py.MPI.c:76881:37: error: MPI_UB undeclared (first use in this function)
           __pyx_t_1 = ((__pyx_v_datatype == MPI_UB) != 0);
        In file included from src/MPI.c:4:0:
        src/mpi4py.MPI.c: In function PyInit_MPI:
        src/mpi4py.MPI.c:167040:62: error: MPI_UB undeclared (first use in this function)
           __pyx_t_3 = ((PyObject *)__pyx_f_6mpi4py_3MPI_new_Datatype(MPI_UB)); if (unlikely(!__pyx_t_3)) __PYX_ERR(21, 862, __pyx_L1_error)
        src/mpi4py.MPI.c:167054:62: error: MPI_LB undeclared (first use in this function)
           __pyx_t_3 = ((PyObject *)__pyx_f_6mpi4py_3MPI_new_Datatype(MPI_LB)); if (unlikely(!__pyx_t_3)) __PYX_ERR(21, 863, __pyx_L1_error)
        error: command '/gpfs/paragon/local/apps/dev/compiler/gcc/6.4/openmpi/4.0.0/bin/mpicc' failed with exit status 1
  4. Lisandro Dalcin

    Oh, sorry! I totally forgot. Yes, this is a known issue: Open MPI decided to remove legacy stuff from the MPI-1 standard. A workaround is already in place in the mpi4py git repository (on both the maint and master branches).

    Could you install mpi4py this quick way?

    $ pip install
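    For the curious, the workaround amounts to a preprocessor guard: if the MPI headers no longer define the legacy MPI_UB/MPI_LB markers (removed in MPI-3.0, and dropped by Open MPI 4.0), fall back to the null datatype handle so legacy code paths still compile. A minimal self-contained sketch of that pattern (the MY_MPI_* names are hypothetical stand-ins, since no real mpi.h is included here; this is not the actual mpi4py code):

    ```c
    #include <stdio.h>

    /* Stand-in for MPI_DATATYPE_NULL from <mpi.h> (hypothetical value). */
    #define MY_MPI_DATATYPE_NULL (-1)

    /* If the MPI implementation no longer defines the legacy marker,
     * map it to the null handle so old comparisons keep compiling. */
    #ifndef MY_MPI_UB
    #  define MY_MPI_UB MY_MPI_DATATYPE_NULL
    #endif

    int main(void) {
        /* Legacy comparisons like the generated code's
         * (datatype == MPI_UB) now resolve against the fallback. */
        int datatype = MY_MPI_DATATYPE_NULL;
        printf("%d\n", datatype == MY_MPI_UB);
        return 0;
    }
    ```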
  5. Robert Manson-Sawko reporter

    @dalcinl thanks, and sorry for the slow reply. I can confirm that this installed successfully. I had to download it first, since I install things on a cluster with no internet access. So it was

    pip download
    pip install
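    For reference, the offline workflow described here can be sketched as follows (the ./pkgs directory name and the <package-or-url> placeholder are illustrative, not the actual command used; substitute the real package URL from above):

    ```
    # On a machine with internet access: fetch the package and its
    # dependencies into a local directory.
    $ pip download <package-or-url> -d ./pkgs

    # After copying ./pkgs to the cluster: install without touching
    # the network, resolving from the local directory instead.
    $ pip install --no-index --find-links ./pkgs mpi4py
    ```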


  6. Robert Manson-Sawko reporter

    I'm happy to press resolve, but should I wait until this fix is part of a main release?

  7. Lisandro Dalcin

    Just mark it as resolved. I need to find some time to make a new release, maybe in a few days.
