MeshEditor and Mesh.init() not initializing parallel connectivity

Issue #403 wontfix
Stephan Schmidt created an issue

Sorry if this stems from a misuse of the MeshEditor on my part. I am using DOLFIN 1.4.0+ and Python.

I am using the MeshEditor to create, in parallel on two processes, a very simple 2D unit square mesh consisting of just 4 vertices and 2 cells.

When I try to solve Laplace's equation on this mesh, I receive the error message:

Error: Unable to get shared mesh entities.
Reason: Shared mesh entities have not been computed for dim 0.
Where: This error was encountered inside MeshTopology.cpp.
Process: unknown

The entire Python script is attached and needs to be executed on exactly two processes.

Comments (24)

  1. Prof Garth Wells

    MeshEditor creates a local Mesh only. From C++, a local Mesh can be distributed by:

    MeshPartitioning::build_distributed_mesh(mesh);
    

    MeshPartitioning isn't exposed in the Python interface. If there is demand, we could add it.

  2. Stephan Schmidt reporter

    That would be really helpful for the geometry/shape optimization toolbox and much appreciated if it's not too much work.

  3. Jan Blechta

    Fix in branch jan/fix-issue-403.

    DOLFIN 1.5 workaround:

    code="""
    #include <dolfin/mesh/MeshPartitioning.h>
    
    namespace dolfin {
      void build_distributed_mesh(std::shared_ptr<Mesh> mesh)
      {
        MeshPartitioning::build_distributed_mesh(*mesh);
      }
    };
    """
    
    build_distributed_mesh = compile_extension_module(code).build_distributed_mesh
    
  4. Jaroslav Hron

    Just to note that this doesn't seem to solve the original problem. MeshPartitioning.build_distributed_mesh(mesh) distributes the mesh from rank==0 to all ranks, so one should use the MeshEditor to create the mesh on the rank==0 process only.

  5. Jaroslav Hron

    It works as follows; after the MeshPartitioning.build_distributed_mesh(mesh) call the resulting mesh is distributed and works fine with subsequent parallel operations.

    from dolfin import *
    import numpy as np
    
    code="""
    #include <dolfin/mesh/MeshPartitioning.h>
    
    namespace dolfin {
    void build_distributed_mesh(std::shared_ptr<Mesh> mesh)
    {
    MeshPartitioning::build_distributed_mesh(*mesh);
    }
    };
    """
    
    build_distributed_mesh = compile_extension_module(code).build_distributed_mesh
    
    rank = MPI.rank(mpi_comm_world())

    # Create the mesh with the MeshEditor on rank 0 only
    m = Mesh()

    if rank == 0:
        editor = MeshEditor()
        editor.open(m, 2, 2)                # topological and geometric dimension 2
        editor.init_vertices_global(4, 4)   # local and global number of vertices
        editor.init_cells_global(2, 2)      # local and global number of cells
        editor.add_vertex_global(0, 0, Point(0.0, 0.0))
        editor.add_vertex_global(1, 1, Point(1.0, 0.0))
        editor.add_vertex_global(2, 2, Point(1.0, 1.0))
        editor.add_vertex_global(3, 3, Point(0.0, 1.0))
        editor.add_cell(0, 0, np.array([0, 1, 2], dtype=np.uintp))
        editor.add_cell(1, 1, np.array([0, 2, 3], dtype=np.uintp))
        editor.close()
        m.init()
        m.order()
        info(m)

    # Distribute the rank-0 mesh across all processes
    build_distributed_mesh(m)
    info(m)
    plot(m, interactive=True)
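
    As a quick sanity check (a minimal sketch, assuming the standard DOLFIN 1.5 Python API; the names below are illustrative), a Poisson solve on the distributed mesh should no longer trigger the shared-entities error from the original report:

    V = FunctionSpace(m, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0.0), "on_boundary")
    uh = Function(V)
    # This assembly/solve is the step that previously failed with
    # "Shared mesh entities have not been computed for dim 0"
    solve(inner(grad(u), grad(v))*dx == Constant(1.0)*v*dx, uh, bc)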
    
  6. Jan Blechta

    I was thinking that with Stephan's example the add_cell overload with the signature

       template<typename T>
       void add_cell(std::size_t local_index, std::size_t global_index,
                     const T& v)
    

    should rather be used, which is not available in Python. But this snippet did not help:

    code = """
    #include <dolfin/mesh/MeshEditor.h>
    
    namespace dolfin {
      void mesh_ed_add_cell(MeshEditor& ed, std::size_t loc, std::size_t glob,
                            const std::vector<std::size_t>& vert)
      {
        ed.add_cell(loc, glob, vert);
      }
    };
    """
    mesh_ed_add_cell = compile_extension_module(code).mesh_ed_add_cell
    ...
    if Rank == 0:
        mesh_ed_add_cell(editor, 0, 0, numpy.array((0, 1, 2), dtype='uintp'))
    if Rank == 1:
        mesh_ed_add_cell(editor, 0, 1, numpy.array((0, 1, 2), dtype='uintp'))
    

    The canonical example of using the MeshEditor to build a distributed mesh is in MeshPartitioning::build_mesh(). Nevertheless, even there the use of MeshEditor is followed by

    mesh.topology().shared_entities(0) = shared_vertices;
    

    where shared_vertices is computed elsewhere. So it seems like everything is there, but it just lacks an interface...
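
    A possible stop-gap for inspecting this from Python (an illustrative helper only, not part of the DOLFIN interface; the function name is made up here) would be to expose the size of the shared-vertex map through the same compile_extension_module mechanism as above:

    code = """
    #include <dolfin/mesh/Mesh.h>

    namespace dolfin {
      std::size_t num_shared_vertices(std::shared_ptr<Mesh> mesh)
      {
        // Map from local vertex index to sharing processes; it is
        // empty until the parallel connectivity has been built
        return mesh->topology().shared_entities(0).size();
      }
    };
    """
    num_shared_vertices = compile_extension_module(code).num_shared_vertices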

  7. Chris Richardson

    I think the solution could be in LocalMeshData. build_distributed_mesh will do everything correctly if it is presented with the correct LocalMeshData.

    So it is just a question of how to build the LocalMeshData on each process. If you look at my chris/boxmesh-parallel branch, you can see how I do this.

    At the moment, the constructor LocalMeshData(Mesh) assumes the existing Mesh is on the root process. I think this could be fixed to accept distributed meshes.
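
    To illustrate the idea (a rough sketch only; it assumes the LocalMeshData(Mesh) constructor and a build_distributed_mesh overload taking LocalMeshData behave as described above, and the helper name is made up), the repartitioning path could be wrapped like this:

    code = """
    #include <dolfin/mesh/Mesh.h>
    #include <dolfin/mesh/LocalMeshData.h>
    #include <dolfin/mesh/MeshPartitioning.h>

    namespace dolfin {
      // Gather LocalMeshData from the mesh on the root process and
      // build a new distributed mesh from it
      void repartition(std::shared_ptr<Mesh> root_mesh,
                       std::shared_ptr<Mesh> dist_mesh)
      {
        LocalMeshData data(*root_mesh);
        MeshPartitioning::build_distributed_mesh(*dist_mesh, data);
      }
    };
    """
    repartition = compile_extension_module(code).repartition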

  8. Chris Richardson

    I had a look at this, but it is not a straightforward fix. It would be possible, and maybe desirable, for LocalMeshData(Mesh) to extract data from meshes on all processes. The problem is that if it is used like this for the current RectangleMesh etc., some processes have no data, which causes a problem for the partitioner. Also, what is the desired behaviour? Do you actually want to call the partitioner, or just set the topology and geometry yourself on each process explicitly? I think the latter. So I am going to close this issue in favour of issue #845, which should do that...
