Assembling a scalar with split MPI communicator
There are currently two ways to assemble a scalar, but only one works with split MPI communicators:
form = UFL_form
a = assemble(form)
or
s = Scalar()
assemble(form, s)
With the second you can easily give an MPI communicator: s = Scalar(mpi_comm_self()). The first assumes mpi_comm_world() and so hangs if run using split communicators.
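For illustration, a minimal sketch of the second way (assuming the DOLFIN Python API of the time: the communicator argument to the mesh constructor and the tensor keyword of assemble):

from dolfin import *

mesh = UnitIntervalMesh(mpi_comm_self(), 10)  # mesh built on this rank only
V = FunctionSpace(mesh, "CG", 1)
u = interpolate(Constant(1.0), V)
form = u*dx  # rank-0 form (a functional)

# second way: the communicator of the result is explicit
s = Scalar(mpi_comm_self())
assemble(form, tensor=s)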
Comments (10)
-
reporter
Simplest fix would be to deprecate the first case.
-
The first case should take the comm from the mesh.
Can you provide an MWE?
-
reporter
from dolfin import *

rank = MPI.rank(mpi_comm_world())
if rank == 0:
    mesh = UnitIntervalMesh(10)
    V = FunctionSpace(mesh, "CG", 1)
    v = TrialFunction(V)
    form = v*dx
    a = assemble(form)
info("reached end")
-
reporter
I think the problem is line 77 of assemble.cpp. It creates

Scalar s

which in turn uses MPI_COMM_WORLD.
-
I would expect this to deadlock long before assemble; mpi_comm_self() should be passed to the mesh constructor.
-
reporter
The Python example I gave can be fixed by giving the correct communicator to the mesh. I'm trying to see if I can simplify the C++ example where I originally found the problem.
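A minimal sketch of the fixed example (the only change is the communicator argument to the mesh constructor):

from dolfin import *

rank = MPI.rank(mpi_comm_world())
if rank == 0:
    mesh = UnitIntervalMesh(mpi_comm_self(), 10)  # comm passed explicitly
    V = FunctionSpace(mesh, "CG", 1)
    v = TrialFunction(V)
    form = v*dx
    a = assemble(form)
info("reached end")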
-
reporter
I have made a C++ example that I think hangs in the assemble step:
#include <dolfin.h>
#include "integral.h"

using namespace dolfin;

int main()
{
  SubSystemsManager::init_petsc();
  MPI_Comm comm = MPI_COMM_WORLD;
  int rank;
  MPI_Comm_rank(comm, &rank);
  if (rank == 0)
  {
    auto mesh1 = std::make_shared<UnitSquareMesh>(MPI_COMM_SELF, 2, 2);
    auto V1 = std::make_shared<integral::FunctionSpace>(mesh1);
    Function u(V1);
    u.interpolate(Constant(1.0));
    integral::Form_int integral(mesh1, std::make_shared<Function>(u));
    info("made form");
    double ints = assemble(integral);
  }
  info("end");
}
with the UFL in integral.ufl:

element = FiniteElement("Lagrange", triangle, 1)
v = TestFunction(element)
u = Coefficient(element)
a = v*dx
int = u*dx
forms = [int, a]
I had a look at the Python code in assembling.py. There is an extra line

comm = dolfin_form.mesh().mpi_comm()

that is used to make the tensor, which I think could explain why the Python version works and the C++ one does not.
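For context, a paraphrased sketch of that logic (names approximate; not the verbatim assembling.py source):

comm = dolfin_form.mesh().mpi_comm()     # take the communicator from the mesh
tensor = Scalar(comm)                    # rank-0 form: build the Scalar on it
assembler.assemble(tensor, dolfin_form)  # assemble into that tensor

The C++ shortcut, by contrast, constructs a default Scalar, which uses MPI_COMM_WORLD.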
-
changed status to resolved
Get MPI communicator from mesh when using shortcut assembly for functional. Fixes #901. <<cset 82e3d403a4ed>>
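A hedged Python paraphrase of that change (the helper name and the get_scalar_value call are assumptions for illustration, not the actual patch):

def assemble_scalar(dolfin_form):
    # hypothetical helper mirroring the fix: create the Scalar on the
    # mesh's communicator instead of defaulting to mpi_comm_world()
    s = Scalar(dolfin_form.mesh().mpi_comm())
    assemble(dolfin_form, tensor=s)
    return s.get_scalar_value()  # assumed accessor for the reduced value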
-
@ettieunwin Can you test the branch https://bitbucket.org/fenics-project/dolfin/branch/garth/fix-issue-901?