Static condensation

Issue #387 new
Corrado Maurini created an issue

The unknowns of DG fields are currently treated as ordinary degrees of freedom, which increases the size of the problem to solve. Many finite element codes exploit the fact that DG fields can be eliminated at the element level during assembly. This procedure is usually known as static condensation. The key advantage is that the DG fields do not contribute to the problem size. This can be important for reducing memory requirements in large 3D problems, where the DG fields may be tensorial quantities, for example.
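To make the idea concrete, here is a minimal numpy sketch of element-level static condensation, assuming the element matrix is split into retained (r) and condensable DG (d) blocks; the function name and block layout are illustrative, not an existing FEniCS API:

```python
import numpy as np

def condense(K_rr, K_rd, K_dr, K_dd, f_r, f_d):
    """Eliminate the d-block from the element system
        [[K_rr, K_rd], [K_dr, K_dd]] [x_r; x_d] = [f_r; f_d]
    returning the condensed matrix and vector
        K_c = K_rr - K_rd K_dd^{-1} K_dr,
        f_c = f_r  - K_rd K_dd^{-1} f_d.
    """
    # One factorisation of K_dd serves both the matrix and vector updates.
    X = np.linalg.solve(K_dd, np.column_stack([K_dr, f_d]))
    K_c = K_rr - K_rd @ X[:, :-1]
    f_c = f_r - K_rd @ X[:, -1]
    return K_c, f_c
```

Solving the condensed system for x_r gives exactly the retained part of the solution of the full block system, which is what makes the elimination safe to do cell by cell before global assembly.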

Static condensation is currently not an option in FEniCS, at least from the python interface, because the user does not have access to the assembling process.

I think it would be useful to add this functionality. More generally, it would be nice to be able to eliminate some local variables during assembly according to user-defined criteria. One may want specific local equations to be satisfied at specific points inside the elements and use them to eliminate some or all of the local variables. In general, the evaluation points need not be the integration points, and the local equations may be linear or non-linear.

I do not have a precise idea of how this could be implemented in FEniCS. Any thoughts?

Comments (23)

  1. Martin Sandve Alnæs

    Please don't assign features to specific developers and set milestones for us.

    I think this is a rather nontrivial undertaking.

  2. Corrado Maurini reporter

    Sorry about that. FYI, this issue follows a preliminary informal discussion with Garth, who should have some ideas about a possible implementation.

  3. Martin Sandve Alnæs

    Ok, didn't know you already talked to Garth. It might be possible to implement specifically at the assembler level, avoiding code generation work.

  4. Jack Hale

    We now have a reasonably clean implementation of this for MITC plate elements within fenics-shells which could be generalised (very easily) and added into DOLFIN.

    The call is:

    A = mitc_assemble([a_00, a_01, a_10, a_11])
    

If a_11 is not passed, no inversion is performed (this is possible for certain orthogonal projections).

As it is tagged for 1.7 there is no rush, so let's discuss it during and after the conference.

  5. Jack Hale

Our code is below. I have written it against the v1.5 interface (we have decided to track the stable release of FEniCS), so quite a few changes would be needed for v1.6, but I have noted these in comments in the C++ code.

    https://bitbucket.org/unilucompmech/fenics-shells/src/2f90bc3635b52e823ea33c3a21c6ad66a95d83e6/test/test_mitc_assembler.py?at=master
    https://bitbucket.org/unilucompmech/fenics-shells/src/2f90bc3635b52e823ea33c3a21c6ad66a95d83e6/fenics_shells/fem/assembling.py?at=master
    https://bitbucket.org/unilucompmech/fenics-shells/src/2f90bc3635b52e823ea33c3a21c6ad66a95d83e6/fenics_shells/fem/MITCAssembler.h?at=master

I haven't written the wrapper for the inversion of a_11 yet, but the C++ code is there.

  6. Jack Hale

It is still on our TODO list to merge some static condensation functionality into DOLFIN. After thinking about the algebra recently, it turned out that MITC can be expressed uniformly as a triple block elimination on the Jacobian system, so it is significantly more complex than we originally anticipated.

  7. Marie Elisabeth Rognes

    I'm running into an application where static condensation would be good. What is the current status on this topic?

  8. Jack Hale

    We have a working implementation for static condensation of non-linear problems with what I think is a decent interface.

Conceptually, we keep track of two FunctionSpaces: the primal space, e.g. for Stokes the velocity (u), and the full space, e.g. velocity and pressure (u, p). You write your form on the full space, and the assembler (C++) generates the assembled tensors on the primal space using, say, a Schur complement operation. At each Newton iteration, the full-space update is reconstructed from the primal-space update, and the Newton procedure continues.
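    The primal/full-space logic above can be sketched for one Newton step as follows, with a hedged caveat: this is a dense numpy illustration of the block algebra only (r = primal/retained, d = eliminated block), not the fenics-shells ProjectedAssembler implementation, and the function name is invented:

    ```python
    import numpy as np

    def condensed_newton_update(J_rr, J_rd, J_dr, J_dd, r_r, r_d):
        """One Newton step J du = -r with the d-block eliminated.
        Solve the condensed (Schur complement) system for the primal update,
        then reconstruct the eliminated update from it."""
        # Schur complement and condensed right-hand side.
        S = J_rr - J_rd @ np.linalg.solve(J_dd, J_dr)
        rhs = -r_r + J_rd @ np.linalg.solve(J_dd, r_d)
        du_r = np.linalg.solve(S, rhs)
        # Reconstruct the full-space update from the primal one.
        du_d = np.linalg.solve(J_dd, -r_d - J_dr @ du_r)
        return du_r, du_d
    ```

    The pair (du_r, du_d) agrees with the update obtained by solving the full Jacobian system directly, which is why the reconstruction step lets the Newton loop proceed as usual on the full space.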

As a side note, I think the design of Firedrake, with its internal splits into blocks relating to each field, makes this type of feature much easier to implement. It would probably even be possible to write a domain-specific language for problem-specific static condensation on top of Firedrake's design.

Implementation-wise in DOLFIN, you need to write a SystemAssembler-like class for your problem which does the appropriate splitting of the cell tensor using Eigen and then performs the condensation.

    Usage/API:

    https://bitbucket.org/unilucompmech/fenics-shells/src/b519f8815e9b669a8d70b6f3d125de6fc77f2f1a/demo/vonkarman/demo_vonkarman-mitc-clamped.py?at=master&fileviewer=file-view-default

    Custom NonLinearProblem class:

    https://bitbucket.org/unilucompmech/fenics-shells/src/b519f8815e9b669a8d70b6f3d125de6fc77f2f1a/fenics_shells/fem/solving.py?at=master&fileviewer=file-view-default

    Assembler (most of this is boiler-plate code):

    https://bitbucket.org/unilucompmech/fenics-shells/src/b519f8815e9b669a8d70b6f3d125de6fc77f2f1a/fenics_shells/fem/ProjectedAssembler.h?at=master&fileviewer=file-view-default

    This could be adjusted to a different type of static condensation.

  9. David Ham

    Funny you should mention that, Jack. Thomas Gibson has been putting hybridisation (which is very close to static condensation) into Firedrake exactly by providing a small DSL for the condensation operations. You can then use this to build a PETSc preconditioner which takes the original ufl problem and produces the condensed problem, which is handed on to whichever solver the user selects. Expect a talk at FEniCS '17 (indeed there was a talk at CSE).

  10. Martin Genet

Sorry if I missed it, but what is the status of static condensation in FEniCS? I see the (outstanding) implementation of ProjectedFunctionSpace, NonLinearProblem, etc. in fenics-shells. Are there similar classes in FEniCS? And they can be used for 3D problems, right? Thank you for the update. Martin

  11. Jack Hale

@Martin Genet We won’t be revisiting this issue in the context of dolfin-old (in this repository). As @Michal Habera has already said, these things are now relatively easy to accomplish in dolfinx using numba.

    fenics-shells will be updated to work with dolfinx later in the year.

Once common aspects are identified between problems, we will put some high-level polish on these features (e.g. automatic block splitting and local sizing) so that users don’t need to do all of the low-level manipulations themselves. I think, though, that for ‘advanced’ users what is in dolfinx is already quite usable.

  12. Jack Hale

I wanted to add one more small point: the Numba approach has the potential to be much faster than the generic Eigen code in fenics-shells, because the sizes of the linear algebra objects are known at run time when the kernels are just-in-time compiled.
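    For illustration, this is the kind of per-cell condensation loop that benefits from JIT specialisation; with numba installed one would decorate it with @numba.njit so it is compiled for the concrete block sizes. A hedged sketch in plain numpy (the function name, batch layout, and sizes are hypothetical, not the dolfinx custom-assembler API):

    ```python
    import numpy as np

    # With numba: @numba.njit  (compiles for the run-time block sizes)
    def condense_cells(A_batch, n_r):
        """For each cell tensor in A_batch (shape: n_cells x n x n), retain
        the first n_r dofs and eliminate the rest via the local Schur
        complement, returning the n_cells x n_r x n_r condensed tensors."""
        n_cells = A_batch.shape[0]
        out = np.empty((n_cells, n_r, n_r))
        for c in range(n_cells):
            K_rr = A_batch[c, :n_r, :n_r]
            K_rd = A_batch[c, :n_r, n_r:]
            K_dr = A_batch[c, n_r:, :n_r]
            K_dd = A_batch[c, n_r:, n_r:]
            out[c] = K_rr - K_rd @ np.linalg.solve(K_dd, K_dr)
        return out
    ```

    Because the loop body works on small, fixed-size blocks, a JIT compiler can specialise it per problem, which is the advantage over a fully generic dynamically-sized Eigen path.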
