- edited description
MeshEditor and Mesh.init() not initializing parallel connectivity
Sorry if this stems from a misuse of the MeshEditor on my part. I am using dolfin 1.4.0+ and Python.
I am using the MeshEditor to create a very simple 2D unit square mesh, consisting of just 4 vertices and two elements, in parallel on two processes.
When I try to solve Laplace's equation on this mesh, I receive the error message:
```
Error:   Unable to get shared mesh entities.
Reason:  Shared mesh entities have not been computed for dim 0.
Where:   This error was encountered inside MeshTopology.cpp.
Process: unknown
```
The entire Python script is attached and needs to be executed on exactly two processes.
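For reference, the reported setup can be sketched in plain Python (illustrative only, not the DOLFIN API): four vertices of the unit square, two triangular cells, and the vertices the two cells have in common. Once the two cells live on different processes, these common vertices are exactly the "shared mesh entities of dim 0" that the error message says were never computed.

```python
# Plain-Python sketch of the mesh described in the report (not DOLFIN API).
# Four vertices of the unit square and two triangular cells.
vertices = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
cells = {0: (0, 1, 2), 1: (0, 2, 3)}

# If cell 0 lives on rank 0 and cell 1 on rank 1, the vertices common to
# both cells sit on the inter-process boundary; these are the "shared
# mesh entities of dim 0" the error refers to.
shared = set(cells[0]) & set(cells[1])
print(sorted(shared))
```

Solving on the distributed mesh requires DOLFIN to know this shared-vertex information on each process, which a purely local `MeshEditor` build never computes.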
Comments (24)
reporter - removed milestone
- changed status to invalid
MeshEditor creates a local Mesh only. From C++, a local Mesh can be distributed by:

```cpp
MeshPartitioning::build_distributed_mesh(mesh);
```

`MeshPartitioning` isn't exposed in the Python interface. If there is demand, we could add it.
reporter: That would be really helpful for the geometry / shape optimization toolbox, and much appreciated if it's not too much work.
Fix in branch jan/fix-issue-403.

DOLFIN 1.5 workaround:

```python
code = """
#include <dolfin/mesh/MeshPartitioning.h>

namespace dolfin {
  void build_distributed_mesh(std::shared_ptr<Mesh> mesh)
  {
    MeshPartitioning::build_distributed_mesh(*mesh);
  }
}
"""
build_distributed_mesh = compile_extension_module(code).build_distributed_mesh
```
@blechta Nice workaround!
Fix in master. I.e. one can call `MeshPartitioning.build_distributed_mesh(mesh)` from Python.
- changed milestone to 1.6
assigned issue to
- changed version to 1.5
- marked as trivial
Just check that the Python docstring is generated.
- changed status to open
- changed status to resolved
The Python docstring is generated also for ignored overloads. This seems to be a known issue.
Just to note that this doesn't seem to solve the original problem. `MeshPartitioning.build_distributed_mesh(mesh)` distributes the mesh from `rank == 0` to all ranks, so one should use the `MeshEditor` to create the mesh on the `rank == 0` process only.
reporter: It's a great bummer if you still can't build meshes this way in parallel...
There seem to be members `MeshEditor::*_global` for use in a distributed fashion. Don't they do what is expected?
- changed status to open
Ok, now I see that the original report demonstrated that this approach does not work.
It works as follows, and the resulting mesh after the `MeshPartitioning.build_distributed_mesh(mesh)` call is parallel and works fine with subsequent parallel operations.

```python
from dolfin import *
import numpy as np

code = """
#include <dolfin/mesh/MeshPartitioning.h>

namespace dolfin {
  void build_distributed_mesh(std::shared_ptr<Mesh> mesh)
  {
    MeshPartitioning::build_distributed_mesh(*mesh);
  }
}
"""
build_distributed_mesh = compile_extension_module(code).build_distributed_mesh

rank = MPI.rank(mpi_comm_world())

# Create the mesh with MeshEditor on rank 0 only
m = Mesh()
if rank == 0:
    editor = MeshEditor()
    editor.open(m, 2, 2)
    editor.init_vertices_global(4, 4)
    editor.init_cells_global(2, 2)
    editor.add_vertex_global(0, 0, Point(0.0, 0.0))
    editor.add_vertex_global(1, 1, Point(1.0, 0.0))
    editor.add_vertex_global(2, 2, Point(1.0, 1.0))
    editor.add_vertex_global(3, 3, Point(0.0, 1.0))
    editor.add_cell(0, 0, np.array([0, 1, 2], dtype=np.uintp))
    editor.add_cell(1, 1, np.array([0, 2, 3], dtype=np.uintp))
    editor.close()
    m.init()
    m.order()

info(m)
build_distributed_mesh(m)
info(m)
plot(m, interactive=True)
```
assigned issue to
Seems like Chris' area of interest...
assigned issue to
I was thinking that with Stephan's example, `add_cell` with the signature

```cpp
template<typename T> void add_cell(std::size_t local_index, std::size_t global_index, const T& v)
```

should rather be used, which is not available in Python. But this snippet did not help:

```python
code = """
#include <dolfin/mesh/MeshEditor.h>

namespace dolfin {
  void mesh_ed_add_cell(MeshEditor& ed, std::size_t loc, std::size_t glob,
                        const std::vector<std::size_t>& vert)
  {
    ed.add_cell(loc, glob, vert);
  }
}
"""
mesh_ed_add_cell = compile_extension_module(code).mesh_ed_add_cell
...
if Rank == 0:
    mesh_ed_add_cell(editor, 0, 0, numpy.array((0, 1, 2), dtype='uintp'))
if Rank == 1:
    mesh_ed_add_cell(editor, 0, 1, numpy.array((0, 1, 2), dtype='uintp'))
```
The canonical example of using `MeshEditor` for building a distributed mesh is in `MeshPartitioning::build_mesh()`. Nevertheless, even there the usage of `MeshEditor` is followed by

```cpp
mesh.topology().shared_entities(0) = shared_vertices;
```

where `shared_vertices` is computed elsewhere. So it seems like everything is there, but it just lacks an interface...
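The shared-vertex information mentioned above can be illustrated with a plain-Python sketch (illustrative only, not the DOLFIN API). Given each rank's cells as tuples of global vertex indices, a vertex is shared if it appears on more than one rank, and each rank needs to know which other ranks it shares each such vertex with:

```python
# Sketch: computing a shared-vertices map analogous to what
# MeshPartitioning::build_mesh() installs via
# mesh.topology().shared_entities(0). Plain Python, not DOLFIN API.
cells_on_rank = {
    0: [(0, 1, 2)],   # rank 0 owns the first triangle
    1: [(0, 2, 3)],   # rank 1 owns the second triangle
}

# Map each global vertex index to the set of ranks that touch it
vertex_ranks = {}
for rank, cells in cells_on_rank.items():
    for cell in cells:
        for v in cell:
            vertex_ranks.setdefault(v, set()).add(rank)

# shared_vertices[rank] = {global vertex: other ranks sharing it}
shared_vertices = {
    rank: {v: ranks - {rank} for v, ranks in vertex_ranks.items()
           if rank in ranks and len(ranks) > 1}
    for rank in cells_on_rank
}
print(shared_vertices)
```

In a real MPI setting this map cannot be computed locally (each rank only sees its own cells), which is why some collective step, or an interface to set it explicitly, is needed.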
I think the solution could be in `LocalMeshData`. `build_distributed_mesh` will do everything correctly if it is presented with the correct `LocalMeshData`.

So it is just a question of how to build the `LocalMeshData` on each process. If you look at my chris/boxmesh-parallel branch, you can see how I do this.

At the moment, the constructor `LocalMeshData(Mesh)` assumes the existing `Mesh` is on the root process. I think this could be fixed to accept distributed meshes.
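To make the role of `LocalMeshData` concrete, here is a plain-Python sketch (the structure and field names are illustrative, not the actual DOLFIN class) of the kind of per-process data it holds: each rank gets a chunk of the cells plus the global mesh sizes, mimicking the current behaviour where the full mesh sits on the root process and is dealt out before partitioning:

```python
# Illustrative sketch of per-process mesh data, not the DOLFIN class.
num_ranks = 2
global_cells = [(0, 1, 2), (0, 2, 3)]                        # full mesh (root only)
global_vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def local_mesh_data(rank):
    # Deal cells out round-robin; every rank records the global sizes,
    # which the partitioner needs even on ranks holding little data.
    my_cells = [c for i, c in enumerate(global_cells) if i % num_ranks == rank]
    return {
        "num_global_cells": len(global_cells),
        "num_global_vertices": len(global_vertices),
        "cells": my_cells,
    }

for r in range(num_ranks):
    print(r, local_mesh_data(r))
```

Accepting distributed meshes would then mean filling such a structure from each rank's existing local data instead of scattering from root.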
- removed milestone
Removing milestone: 1.7 (automated comment)
- changed milestone to 2016.2
- changed milestone to 2017.1
@chris_richardson Did you push a fix for this?
- changed status to wontfix
I had a look at this, but it is not a straightforward fix. It would be possible, and maybe desirable, for `LocalMeshData(Mesh)` to extract from meshes on all processes. The problem is that if it is used like this for the current `RectangleMesh` etc., then some processes have no data, causing a problem with the partitioner. Also, what is the desired behaviour? Do you actually want to call the partitioner, or just set the topology and geometry yourself on each process explicitly? I think the latter. So, I am going to close this issue in favour of issue #845, which should do that...
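The two behaviours contrasted in the closing comment can be sketched in plain Python (names here are illustrative, not DOLFIN API): either gather all ranks' cells and let a partitioner deal them back out, or trust each rank's local data as given:

```python
# Illustrative sketch of the two options discussed; not DOLFIN API.
def repartition(cells_on_rank, num_ranks):
    # Partitioner path: pool every rank's cells and redistribute them
    # (here with a trivial round-robin stand-in for a real partitioner).
    all_cells = [c for cells in cells_on_rank.values() for c in cells]
    return {r: all_cells[r::num_ranks] for r in range(num_ranks)}

def keep_local(cells_on_rank, num_ranks):
    # Explicit path: the caller's distribution is taken as-is
    # (the behaviour the follow-up issue asks for).
    return dict(cells_on_rank)

initial = {0: [(0, 1, 2), (0, 2, 3)], 1: []}   # all data on rank 0
print(repartition(initial, 2))
print(keep_local(initial, 2))
```

The repartition path balances the two cells across both ranks, while the explicit path leaves rank 1 empty, which is exactly why the choice of behaviour matters.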