Shared vertices are required in create_submesh for higher order spaces

Issue #6 new
Francesco Ballarin created an issue

Dear cbcpost developers,

I am interested in creating submeshes in parallel using your create_submesh utility.

Near the end of the create_submesh utility there is this comment:

    # FIXME: Set up shared entities
    # What damage does this do?

I think the answer is: DOFs of higher-order spaces are not mapped properly. Please see the attached shared_vertices_are_required_figure_1.png for an example (using three processors; the DOF plot for proc 1 is not shown as it is not interesting). DOFs on shared vertices are correct, but the DOF at the midpoint of the shared facet is duplicated.
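For context, here is a minimal sketch of the kind of setup that exhibits the duplication (this is not the attached script; it assumes legacy DOLFIN and that create_submesh is called as create_submesh(mesh, markers, 1) with the parent mesh, a CellFunction and the marker value, so adjust to the actual signature):

    # Run with e.g.: mpirun -n 3 python reproduce_duplicated_dof.py
    from dolfin import UnitSquareMesh, CellFunction, AutoSubDomain, FunctionSpace
    from cbcpost.utils import create_submesh

    mesh = UnitSquareMesh(8, 8)

    # Mark a small strip of cells; with enough processes the marked cells
    # may all end up on a single rank, which is the problematic case.
    markers = CellFunction("size_t", mesh)
    markers.set_all(0)
    AutoSubDomain(lambda x, on_boundary: x[0] <= 0.25).mark(markers, 1)

    submesh = create_submesh(mesh, markers, 1)

    # P2 space: one DOF per vertex plus one DOF per facet midpoint.
    V = FunctionSpace(submesh, "CG", 2)

    # Without shared entities on the submesh, the midpoint DOF of a facet
    # shared between two ranks is counted twice, so V.dim() comes out too
    # large compared to the same submesh extracted in serial.
    print("P2 space dimension on submesh:", V.dim())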

In shared_vertices_are_required.py I provide a very naive way (lots of communication) of fixing this (function create_submesh_with_shared_vertices), and I hope you can improve it and include it in the library. Please note that, even though I copied some of the code that is currently commented out on master, simply uncommenting it will not make that implementation work for this particular case, which is characterized by marked cells on only one processor. I think the problem in the currently commented-out implementation is that shared vertices are determined from the original mesh (see line 54), which does not take into account that distribute_meshdata (on line 105) may create additional shared vertices that were not shared on the original mesh. Figure shared_vertices_are_required_figure_2.png shows that this problem is fixed using the modified function I provide.
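To make the idea behind the naive fix explicit: after distribute_meshdata has built and distributed the submesh, shared vertices have to be recomputed on the submesh itself rather than copied from the original mesh. Below is a sketch of just the communication pattern, as a hypothetical mpi4py helper (the step that stores the result back into the submesh topology is omitted):

    from mpi4py import MPI

    def compute_shared_vertices(comm, local_to_global):
        """Recompute shared vertices on the distributed submesh.

        local_to_global: global vertex index of each local vertex on this rank.
        Returns {local vertex index: set of other ranks that also hold it}.
        Uses an allgather of all vertex indices, hence "lots of communication".
        """
        rank = comm.Get_rank()
        all_indices = comm.allgather(list(local_to_global))

        # Record which ranks hold a copy of each global vertex index.
        owners = {}
        for r, indices in enumerate(all_indices):
            for gi in indices:
                owners.setdefault(gi, set()).add(r)

        # A local vertex is shared if some other rank also holds its global index.
        shared = {}
        for local, gi in enumerate(local_to_global):
            other_ranks = owners[gi] - set([rank])
            if other_ranks:
                shared[local] = other_ranks
        return shared

    # Usage (hypothetical): after distribute_meshdata has built the submesh,
    # pass its local-to-global vertex map and feed the returned map into the
    # submesh's shared entities of dimension 0.
    # shared = compute_shared_vertices(MPI.COMM_WORLD, global_vertex_indices)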

As a minor question, is there a way to modify distribute_meshdata to handle the case where there are more processors than cells? In that case I found that it gets stuck in the loop.
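Until that is handled inside distribute_meshdata itself, a possible workaround is to detect the situation up front and fail loudly instead of hanging; a hypothetical guard, assuming legacy DOLFIN's MPI wrappers:

    from dolfin import MPI, mpi_comm_world

    def check_enough_marked_cells(num_marked_cells_local):
        """Abort early if there are fewer marked cells than MPI processes,
        the situation in which distribute_meshdata was observed to hang."""
        comm = mpi_comm_world()
        total = int(MPI.sum(comm, num_marked_cells_local))
        nprocs = MPI.size(comm)
        if total < nprocs:
            raise RuntimeError(
                "Only %d marked cells for %d processes: some ranks would "
                "receive no cells." % (total, nprocs))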

PS: in order to print DOF plots in the attached script you need to patch fenicstools to plot global DOF ids: in the file fenicstools/dofmapplotter/dofhandler.py, replace line 155 with

    dof = self.dofmaps[j].local_to_global_index(dof)

Thanks!

Francesco
