Default test deadlocks with OpenMPI 1.7.3

Issue #56 resolved
Elliott Sales de Andrade created an issue

When running the default test (make PETSC_DIR=... PETSC_ARCH=... test) with OpenMPI 1.7.3, I find that the program deadlocks on exit. Here is the (shortened) trace:

(gdb) bt
#0  __lll_lock_wait ()
#1  0x000000349b81126a in _L_lock_55
#2  0x000000349b8111e1 in __lll_lock_elision
#3  0x000000349b80a02c in __GI___pthread_mutex_lock
#4  0x00000031a802c78c in opal_mutex_lock
#5  ompi_attr_get_c
#6  0x00000031a8052d67 in PMPI_Attr_get
#7  0x00000000004650cc in Petsc_DelComm_Outer
#8  0x00000031a802d2a0 in ompi_attr_delete_impl
#9  ompi_attr_delete
#10 0x00000031a8052c7c in PMPI_Attr_delete
#11 0x0000000000446d27 in PetscCommDestroy

I believe this is related to OpenMPI's change from two attribute locks to one. Ostensibly this is an OpenMPI bug, but I'm opening this issue here for two reasons:

  1. It appears from this mailing list message that PETSc could be refactored not to call MPI_Attr_get inside the delete callback, thereby avoiding the deadlock entirely.
  2. I'm not familiar enough with the inner workings of PETSc with respect to attributes and split communicators to cut this down into a small test case. I tried using the test case from the above mailing list thread, but it did not trigger the problem.

Comments (8)

  1. Jed Brown

    To reduce slightly, can you reproduce with this trivial program?
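    The program itself appears to have been lost from this comment. Judging from the backtrace in the next comment (main in test.c creating and destroying a single Vec), it was presumably something along these lines. This is a reconstruction, not Jed's original; the vector size, the creation routine, and the omitted error checking are guesses:

    ```c
    /* Minimal reproducer sketch: create and destroy one Vec, so that
     * VecDestroy -> PetscHeaderDestroy -> PetscCommDestroy runs the
     * MPI_Attr_delete path seen in the backtrace. */
    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec x;

      PetscInitialize(&argc, &argv, NULL, NULL);
      VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);
      VecDestroy(&x);   /* deadlocks here with the affected OpenMPI */
      PetscFinalize();
      return 0;
    }
    ```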


    I'm spinning up a new Open MPI to see if I can reproduce.

  2. Elliott Sales de Andrade reporter

    Yes, that deadlocks as well:

    #0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
    #1  0x000000349b81126a in _L_lock_55 () from /lib64/
    #2  0x000000349b8111e1 in __lll_lock_elision (futex=futex@entry=0x31a82c4470 <attribute_lock+16>, 
        adapt_count=adapt_count@entry=0x31a82c4486 <attribute_lock+38>, private=0) at ../nptl/sysdeps/unix/sysv/linux/x86/elision-lock.c:94
    #3  0x000000349b80a02c in __GI___pthread_mutex_lock (mutex=mutex@entry=0x31a82c4470 <attribute_lock+16>) at ../nptl/pthread_mutex_lock.c:91
    #4  0x00000031a802c78c in opal_mutex_lock (m=0x31a82c4460 <attribute_lock>) at ../opal/threads/mutex_unix.h:109
    #5  ompi_attr_get_c (attr_hash=0xc995c0, key=key@entry=12, attribute=attribute@entry=0x7fffffffd6e0, flag=0x7fffffffd6fc)
        at attribute/attribute.c:758
    #6  0x00000031a8052d67 in PMPI_Attr_get (comm=0xc99290, keyval=12, attribute_val=0x7fffffffd6e0, flag=<optimized out>) at pattr_get.c:61
    #7  0x0000000000408a44 in Petsc_DelComm_Outer (comm=0x8e27a0 <ompi_mpi_comm_world>, keyval=11, attr_val=0xc99290, extra_state=0x0)
        at /home/elliott/code/petsc/src/sys/objects/pinit.c:406
    #8  0x00000031a802d2a0 in ompi_attr_delete_impl (predefined=false, key=11, attr_hash=0xaa1bf0, object=0x8e27a0 <ompi_mpi_comm_world>, 
        type=COMM_ATTR) at attribute/attribute.c:977
    #9  ompi_attr_delete (type=type@entry=COMM_ATTR, object=object@entry=0x8e27a0 <ompi_mpi_comm_world>, attr_hash=0xaa1bf0, key=11, 
        predefined=predefined@entry=false) at attribute/attribute.c:1018
    #10 0x00000031a8052c7c in PMPI_Attr_delete (comm=0x8e27a0 <ompi_mpi_comm_world>, keyval=<optimized out>) at pattr_delete.c:59
    #11 0x000000000050d22f in PetscCommDestroy (comm=0xc97c10) at /home/elliott/code/petsc/src/sys/objects/tagm.c:256
    #12 0x00000000005121d2 in PetscHeaderDestroy_Private (h=0xc97c00) at /home/elliott/code/petsc/src/sys/objects/inherit.c:114
    #13 0x000000000043c517 in VecDestroy (v=0x7fffffffd948) at /home/elliott/code/petsc/src/vec/vec/interface/vector.c:550
    #14 0x00000000004066d3 in main (argc=1, argv=0x7fffffffda38) at test.c:9
  3. Barry F. Smith

    It looks like this problem has gotten worse, judging from recent email reports:

    [petsc-maint] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

    1. PetscCommDestroy calls MPI_Attr_delete
    2. MPI_Attr_delete acquires a lock
    3. MPI_Attr_delete calls Petsc_DelComm_Outer (through a callback)
    4. Petsc_DelComm_Outer calls MPI_Attr_get
    5. MPI_Attr_get tries to acquire the same lock taken in step 2, so the thread deadlocks against itself.
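    The cycle in these five steps can be demonstrated without MPI at all: a default (non-recursive) pthread mutex behaves the same way as OpenMPI's attribute_lock. The sketch below is my own illustration, not OpenMPI code; it uses pthread_mutex_trylock for the second acquisition so the program reports the conflict instead of hanging:

    ```c
    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        /* Stand-in for OpenMPI's global attribute_lock: a default,
         * non-recursive mutex. */
        pthread_mutex_t attribute_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Step 2: MPI_Attr_delete acquires the lock. */
        pthread_mutex_lock(&attribute_lock);

        /* Step 5: inside the delete callback, MPI_Attr_get tries to
         * take the same lock.  A real pthread_mutex_lock would block
         * forever here; trylock lets us observe the conflict. */
        int rc = pthread_mutex_trylock(&attribute_lock);
        printf("second acquisition: %s\n",
               rc == EBUSY ? "EBUSY (would deadlock)" : "ok");

        pthread_mutex_unlock(&attribute_lock);
        return 0;
    }
    ```

    A recursive mutex (PTHREAD_MUTEX_RECURSIVE) would allow the re-acquisition, which is one way an MPI implementation can make attribute callbacks re-entrant.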

    Looking at the OpenMPI source code, it appears you cannot call any MPI_Attr_* function from inside a registered deletion callback: every attribute function acquires the same global lock, which is where the deadlock comes from. Here are the comments and the lock definition, in ompi/attribute/attribute.c of OpenMPI 1.8.3:

    /*
     * We used to have multiple locks for semi-fine-grained locking.  But
     * the code got complex, and we had to spend time looking for subtle
     * bugs.  Craziness -- MPI attributes are not high performance, so
     * just use a One Big Lock approach: there is no concurrent access.
     * If you have the lock, you can do whatever you want and no data will
     * change/disapear from underneath you.
     */
    static opal_mutex_t attribute_lock;
