MeshFunction does not load from file when called from different processes

Issue #738 resolved
matteo croci created an issue

I need to solve independent PDEs on different processes, as in https://fenicsproject.org/qa/2682/solve-independent-equations-on-different-cores . MeshFunction crashes when loading from an external file with the error "Unable to get shared mesh entities. Shared mesh entities have not been computed for dim 2.".

The following code ('bug_test.py') reproduces the bug. Run it with 'python bug_test.py && mpirun -n 2 python bug_test.py'.

from dolfin import *

mpiSize = MPI.size(mpi_comm_world())

# Each process builds its own independent copy of the mesh
mesh = UnitSquareMesh(mpi_comm_self(), 5, 5)

if mpiSize == 1:
    # Serial run: create the markers and write them to DOLFIN XML
    markers = MeshFunction('size_t', mesh, 2)
    File('markers_test.xml.gz') << markers
else:
    # Parallel run: reading the same file back crashes
    markers = MeshFunction('size_t', mesh, './markers_test.xml.gz')

The workaround for the moment is to load 'markers_test.xml.gz' on a single core, extract the array with markers.array(), and save the array to a file. Then, on each process:

import pickle

markers = MeshFunction('size_t', mesh, 2)
with open('markers_array.pkl', 'rb') as f:  # hypothetical filename for the pickled array
    m = pickle.load(f)
markers.array()[:] = m
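
For the first step, something like this (a minimal sketch, run in serial; the filename 'markers_array.pkl' is an assumption chosen to match the snippet above) would produce the pickled array:

from dolfin import *
import pickle

# Serial step of the workaround: read the XML markers once,
# then dump the raw numpy array to disk
mesh = UnitSquareMesh(mpi_comm_self(), 5, 5)
markers = MeshFunction('size_t', mesh, './markers_test.xml.gz')
with open('markers_array.pkl', 'wb') as f:  # hypothetical filename
    pickle.dump(markers.array(), f)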

Comments (5)

  1. Jan Blechta

What about using HDF5 I/O? It's much better suited to parallel use. The DOLFIN XML format is being considered for removal. Let's mark it WONTFIX unless @chris_richardson wants to fix it.

    This should do the job. Could you test it?

    from dolfin import *
    
    mpiSize = MPI.size(mpi_comm_world())
    
    mesh = UnitSquareMesh(mpi_comm_self(), 5, 5)
    
    if mpiSize == 1:
        # Serial run: write the markers to HDF5 instead of DOLFIN XML
        markers = MeshFunction('size_t', mesh, 2)
        f = HDF5File(mesh.mpi_comm(), 'markers_test.h5', 'w')
        f.write(markers, 'mf')
        f.close()
    else:
        # Parallel run: read the markers back from HDF5
        markers = MeshFunction('size_t', mesh)
        f = HDF5File(mesh.mpi_comm(), 'markers_test.h5', 'r')
        f.read(markers, 'mf')
        f.close()
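    
    Run it the same way as before: python bug_test.py && mpirun -n 2 python bug_test.py.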
    