parallel assembly broken with CGAL unstructured meshes

Issue #315 wontfix
Corrado Maurini created an issue

The minimal example below shows that parallel assembly is broken when using CGAL-generated meshes. Everything works correctly with the structured UnitSquareMesh, or on a single process.

from dolfin import *

# Generate an unstructured mesh of a 2x1 rectangle with CGAL
geom = Rectangle(0, 0, 2, 1)
mesh = Mesh()
mesh_generator = CSGCGALMeshGenerator2D(geom)
mesh_generator.generate(mesh)

#mesh = UnitSquareMesh(32, 32) # works with this structured mesh

# Assemble a simple mass matrix
V = FunctionSpace(mesh, 'CG', 1)
u = TrialFunction(V)
v = TestFunction(V)
a = u*v*dx
A = assemble(a)

The output of mpiexec -n 2 python test.py is:

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics@fenicsproject.org
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to successfully call PETSc function 'MatAssemblyEnd'.
*** Reason:  PETSc error code is: 63.
*** Where:   This error was encountered inside /opt/HPC/src/FEniCS/dolfin/dolfin/la/PETScMatrix.cpp.
*** Process: unknown
*** 
*** DOLFIN version: 1.3.0+
*** Git changeset:  2c00d40e3e6926b70140747ad38aab3bf8778d9a
*** -------------------------------------------------------------------------

I am using current dolfin master and petsc master. (For reference, PETSc error code 63 is PETSC_ERR_ARG_OUTOFRANGE, "Argument out of range".)
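For what it's worth, a quick per-rank diagnostic I would run next is sketched below (a sketch only; it assumes the dolfin 1.3-era MPI.process_number() helper for querying the MPI rank):

from dolfin import *

geom = Rectangle(0, 0, 2, 1)
mesh = Mesh()
CSGCGALMeshGenerator2D(geom).generate(mesh)

# Local sizes of the distributed mesh on each process; inconsistent or
# wildly uneven counts across ranks would point at the mesh generator
print("rank %d: %d local cells, %d local vertices"
      % (MPI.process_number(), mesh.num_cells(), mesh.num_vertices()))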

Comments (2)

  1. Benjamin Dam Kehlet

    Yes, please check whether this is a problem with the generated mesh, e.g. by checking the output of dolfin.MeshQuality.radius_ratio_min_max(mesh); see the sketch at the end of this comment.

    We had a bug (#224) which could cause degenerate cells in 2D, but it should have been fixed in the version of Dolfin you are running.
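    A minimal sketch of that check, reusing the mesh-generation lines from the report, could look like this:

    from dolfin import *

    geom = Rectangle(0, 0, 2, 1)
    mesh = Mesh()
    CSGCGALMeshGenerator2D(geom).generate(mesh)

    # radius_ratio_min_max returns the (min, max) radius ratio over all
    # cells; a minimum at (or very near) zero indicates degenerate cells
    qmin, qmax = MeshQuality.radius_ratio_min_max(mesh)
    print("radius ratio: min = %g, max = %g" % (qmin, qmax))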
