GenericMatrix.compress fails in parallel with non-square matrices
Illustrated with the following example:
```python
from dolfin import *
mesh = UnitSquareMesh(10, 10)
space1 = FunctionSpace(mesh, "CG", 1)
space2 = FunctionSpace(mesh, "CG", 2)
a = inner(TestFunction(space1), TrialFunction(space2)) * dx
mat = assemble(a)
mat.compress()
```
Leads to a segmentation fault when executed on 4 processes (with FEniCS 1.2).
The issue is possibly caused by the line

```cpp
local_range[1] = this->local_range(0);
```

in GenericMatrix.cpp.
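For readers unfamiliar with `compress`: conceptually it drops stored near-zero entries from an already-assembled matrix, shrinking its sparsity pattern. A minimal stand-in sketch using `scipy.sparse` (this is illustrative only, not the DOLFIN implementation; the function name and tolerance are made up here):

```python
import numpy as np
from scipy.sparse import csr_matrix

def compress(A, tol=1e-15):
    """Return a copy of CSR matrix A with near-zero stored entries removed.

    Hypothetical stand-in for GenericMatrix::compress semantics."""
    B = A.copy()
    B.data[np.abs(B.data) < tol] = 0.0
    B.eliminate_zeros()  # rebuilds indptr/indices without the zeroed entries
    return B

# Assembled matrix with two round-off-sized entries in its stored pattern
A = csr_matrix(np.array([[1.0, 1e-20, 0.0],
                         [0.0, 2.0,   1e-18],
                         [3.0, 0.0,   4.0]]))
B = compress(A)
print(A.nnz, B.nnz)  # stored entries before and after: 6 4
```

The compressed matrix represents (numerically) the same operator, but a matrix-vector product touches fewer stored entries.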
Comments (27)
-
reporter - We shouldn't really support `local_range[1]` - it makes assumptions on the linear algebra backend.
-
But the column range is used in creating the matrix, so why not allow retrieving it? The `compress` function needs the column range to work properly, and I just tested that the following change to `local_range()` in `PETScBaseMatrix` fixes the problem for the PETSc backend:

```cpp
if (_A)
{
  PetscInt m(0), n(0);
  if (dim == 0)
    MatGetOwnershipRange(*_A, &m, &n);
  else
    MatGetOwnershipRangeColumn(*_A, &m, &n);
  return std::make_pair(m, n);
}
```
There's probably a similar fix for Epetra.
-
I would suggest testing the PETSc matrix option `MAT_IGNORE_ZERO_ENTRIES`, and if it works, removing `GenericMatrix::compress`.
-
reporter - This wouldn't give exactly the same behaviour, as `compress` uses a tolerance.
-
Agree with James, and I don't quite see how `MAT_IGNORE_ZERO_ENTRIES` would work? I use `compress` as a postprocessing tool to reduce the number of near-zero entries in the fully assembled matrix. However, I still need the fully assembled matrix with the regular sparsity pattern, because that is required for efficient axpy operations with other matrices. I typically just call `compress` before a solve to reduce the time of costly matrix-vector products. I don't think `MAT_IGNORE_ZERO_ENTRIES` could replace this functionality, since it seems to be set before assembling and thus would directly lead to a matrix with a different sparsity pattern from other matrices assembled on the same function space.
-
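A small `scipy.sparse` sketch of the point above (a stand-in, not DOLFIN code; the matrices are made up): matrices assembled with the same pattern share a storage layout that axpy-style updates can reuse, while dropping zeros up front, as a pre-assembly option would, yields a strictly smaller pattern that no longer matches:

```python
import numpy as np
from scipy.sparse import csr_matrix

# "Diffusion"-like matrix assembled normally
A = csr_matrix(np.array([[2.0, -1.0, 0.0],
                         [-1.0, 2.0, -1.0],
                         [0.0, -1.0, 2.0]]))

# A second matrix with the SAME stored pattern but mostly zero-valued
# entries, mimicking assembly on the same function space
B_same = csr_matrix((np.zeros_like(A.data), A.indices.copy(), A.indptr.copy()),
                    shape=A.shape)
B_same.data[0] = 5.0  # only one entry is actually nonzero

# Identical patterns: storage layouts match entry for entry
assert (B_same.indptr == A.indptr).all()
assert (B_same.indices == A.indices).all()

# Dropping zeros up front gives a different (smaller) pattern
B_dropped = B_same.copy()
B_dropped.eliminate_zeros()
print(A.nnz, B_same.nnz, B_dropped.nnz)  # 7 7 1
```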
@garth-wells This is quite old; is it still relevant after all the changes to the parallel implementation over the last year?
-
The code in question breaks abstractions, which makes it an ongoing maintenance problem. I would like to remove it.
-
Please don't. I didn't know there was a maintenance problem though. What exactly are you thinking about? I wrote it so I'm happy to fix it.
-
@mikael_mortensen The current code should go into user-developed libraries. In DOLFIN we can support the PETSc option `MAT_IGNORE_ZERO_ENTRIES`. If a user wants a tolerance on which entries to add, it should be done during assembly and not post-assembly. Sparse matrices should not be re-arranged post-assembly.
-
`MAT_IGNORE_ZERO_ENTRIES` cannot do the same as `compress`. It is an important feature that `compress` is applied post-assembly, because then the assembled matrix will have the same sparsity pattern as the other matrices and may be used in constructing composite coefficient matrices (e.g., convection plus pre-assembled diffusion). If applied just before solve, `compress` can speed up a Krylov solver on a `VectorFunctionSpace` by a factor of three, because the matrix-vector product is significantly faster with a much smaller sparsity pattern. For example, a Navier-Stokes solver with a `VectorFunctionSpace` can benefit greatly from this. I don't mind putting this in `fenicstools`, but I believe others are (or should be) using it. It's been there for quite a while.
-
There are better ways to achieve what you want without using `compress`. Using `MAT_IGNORE_ZERO_ENTRIES` is no different from `GenericMatrix::compress` other than using `DOLFIN_EPS` rather than zero for the check. Assuming anything about the sparsity pattern post-`compress` is not good - it's ripe for bugs.
-
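The tolerance-versus-zero distinction under discussion can be illustrated with a `scipy.sparse` stand-in (not DOLFIN code; values are made up): an exact-zero check keeps tiny round-off entries such as 1e-20, while a `DOLFIN_EPS`-style tolerance also removes them:

```python
import numpy as np
from scipy.sparse import csr_matrix

rows, cols = [0, 0, 1, 1], [0, 1, 0, 1]
data = np.array([1.0, 1e-20, 0.0, 2.0])
A = csr_matrix((data, (rows, cols)), shape=(2, 2))  # 4 stored entries

exact = A.copy()
exact.eliminate_zeros()  # zero check: drops only the exact 0.0

tol = A.copy()
tol.data[np.abs(tol.data) < 1e-15] = 0.0  # DOLFIN_EPS-like tolerance
tol.eliminate_zeros()    # also drops the 1e-20 round-off entry

print(A.nnz, exact.nnz, tol.nnz)  # 4 3 2
```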
`MAT_IGNORE_ZERO_ENTRIES` is applied pre-assembly, hence it is different. What other options are there?
-
Pre-matrix-assembly/apply is good.
When adding convection and diffusion matrices, the sparsity patterns should be the same and the matrix shouldn't contain any zeros, so there is no need to compress.
For a vector function space with zeroes that can be removed + Krylov solvers, use `MAT_IGNORE_ZERO_ENTRIES`, implement (or wait for the imminent arrival of support for) nested matrices with PETSc, or implement a mat-vec product blockwise.
-
Imminent arrival of support for nested matrices with PETSc sounds interesting. Is there a branch including a demo somewhere?
-
Pre-assembly is no good for me. Like you say, sparsity patterns need to be the same for diffusion and convection matrices, and they will not be with `MAT_IGNORE_ZERO_ENTRIES`. Nested matrices sound interesting. Block matrices are nice as well. However, they may still contain a lot of zero entries that can only be removed with `compress`.
-
@meg Not yet - Chris and I have a new project to implement support for it. We'll start any day now.
There are some sub-optimal (in terms of performance) C++ examples at https://bitbucket.org/magma-dynamics/preconditioning
-
I agree with @mikael_mortensen's point on why `GenericMatrix::compressed` is useful. When you have two matrices with distinct sparsity patterns, wouldn't it be feasible to "sum" them using `LinearOperator` (see its unit test for usage), which defines just the matvec? (A convenient `MatrixAXPY` class could be added to support linear combinations of matrices implementing `LinearOperator`.) This framework would fit Garth's approach of pre-assembly compression resulting in incompatible sparsity patterns.
-
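A sketch of this idea using scipy's `LinearOperator` as a rough stand-in for the DOLFIN class of the same name (illustrative only; the matrices and the weight are made up): two matrices with different sparsity patterns are "summed" through their matvecs without ever forming the combined matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator

K = csr_matrix(np.array([[ 2.0, -1.0,  0.0],
                         [-1.0,  2.0, -1.0],
                         [ 0.0, -1.0,  2.0]]))  # "diffusion"-like, symmetric pattern
C = csr_matrix(np.array([[0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.0, 0.0, 0.0]]))     # "convection"-like, different pattern

# axpy-style linear combination exposed only as an operator: y = (K + 0.5*C) x
op = LinearOperator(K.shape, matvec=lambda x: K @ x + 0.5 * (C @ x))

x = np.ones(3)
print(op.matvec(x))           # same result as forming the sum explicitly
print((K + 0.5 * C) @ x)
```

The trade-off: Krylov solvers only need the matvec, so this avoids the pattern-compatibility problem entirely, at the cost of not having an explicit combined matrix (e.g., for direct solvers or some preconditioners).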
@blechta My point is not so much on usefulness, but on whether or not it should be in DOLFIN. My objection to having it in DOLFIN is that it breaks abstractions by delving into the details of how a matrix is stored, which in turn makes it a maintenance problem.
-
In merging PR https://bitbucket.org/fenics-project/dolfin/pull-request/193, another issue with `compress` came up: it loses/destroys any block size present in the original matrix, which hurts performance and messes up AMG.
-
Unless someone volunteers to fix this, I'll remove `GenericMatrix::compress` to 'fix' this issue.
-
- changed milestone to 1.6
-
This tiny patch fixes the issue for PETSc matrices. It uses the matrix column range, and I do not think there's another way. If you still object to returning the column range @garth-wells, then close the issue as won't fix and I will put the `compress` routine in `fenicstools`.
-
@mikael_mortensen If it only works for PETSc, then it can't be in `GenericMatrix`. I would prefer to see this in a tools package. Have you asked on the PETSc list if they can add this type of function? The functionality belongs in the backend library. If it's in the backend, then DOLFIN can support it.
-
-
- changed status to closed
@mikael_mortensen to put in fenicstools
-
- changed status to resolved
Fix Issue #61 (`GenericMatrix::compressed()`). → <<cset 60d244f1c431>>
-
- removed milestone
Removing milestone: 1.6 (automated comment)
`this->local_range(1)` seems to be invalid in parallel, and I don't personally know how to retrieve this information by other means.