Avoid quadrature representation in norm or silence the warning

Issue #949 resolved
Jan Blechta created an issue

MWE:

from dolfin import *
mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, "P", 1)
u = Function(V)
norm(u)

stdout:

/vivid/home/jan/dev/fenics-master/src/ffc/ffc/jitcompiler.py:235: QuadratureRepresentationDeprecationWarning: 
*** ===================================================== ***
*** FFC: quadrature representation is deprecated! It will ***
*** likely be removed in 2018.1.0 release. Use uflacs     ***
*** representation instead.                               ***
*** ===================================================== ***
  issue_deprecation_warning()
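In the meantime, the warning can be filtered on the user side with Python's standard warnings machinery. A sketch: a stand-in class is defined here so the snippet is self-contained, whereas in real code one would import `QuadratureRepresentationDeprecationWarning` from FFC (the class name is taken from the traceback above; the exact import path is not shown there).

```python
import warnings

# Stand-in for FFC's warning class so this sketch is self-contained;
# in real code, import QuadratureRepresentationDeprecationWarning from FFC.
class QuadratureRepresentationDeprecationWarning(DeprecationWarning):
    pass

# Ignore only this category; other warnings stay visible.
warnings.simplefilter("ignore", QuadratureRepresentationDeprecationWarning)

# Demonstrate that the filter is selective:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.simplefilter("ignore", QuadratureRepresentationDeprecationWarning)
    warnings.warn("quadrature is deprecated",
                  QuadratureRepresentationDeprecationWarning)
    warnings.warn("unrelated warning", UserWarning)

# Only the unrelated warning got through.
assert [w.category for w in caught] == [UserWarning]
```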

Comments (11)

  1. Jan Blechta reporter

    The deprecation warning revealed that quadrature is forced because of its better stability compared to tensor. Can we be sure that uflacs will provide an adequately stable replacement? Doesn't it use the tensor-contraction trick too?

    Opinions @martinal, @miklos1, @garth-wells?

    If we don't reach a conclusion before the release, we can just silence the warning and keep using quadrature for some time.

  2. Miklós Homolya

    Is norm(u) just sqrt(assemble(u**2))? If so, then I don't see any reason not to use uflacs instead of quadrature.

  3. Miklós Homolya

    Quadrature was probably explicitly selected because FFC would otherwise have picked tensor, which is disastrous especially in the norm(u - v) case, since tensor has to rearrange (u - v)**2 into u**2 - 2*u*v + v**2, but that expansion is not valid in floating-point arithmetic and might give a negative number even though (u - v)^2 >= 0. Taking the square root of a negative number then fails...
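A tiny plain-Python sketch of that failure mode, with values chosen to force the cancellation (here the expanded form collapses to zero; with other inputs it can even go negative):

```python
# Two nearby values; their difference is tiny but exactly representable.
u = 1.0 + 2.0**-30
v = 1.0

direct = (u - v) * (u - v)        # evaluates to 2**-60 exactly, always >= 0
expanded = u*u - 2*u*v + v*v      # the rounded u*u loses the cross terms

assert direct == 2.0**-60
assert expanded == 0.0            # every significant digit is lost
```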

  4. Miklós Homolya

    I might be wrong, but I am not aware of uflacs applying the tensor contraction trick for 0-forms. But as far as I can tell, tensor representation won't give you a drastic performance improvement even in the u^2 case, and is outright dangerous for (u - v)^2.

  5. Jan Blechta reporter

    Thanks for the comments, @miklos1.

    Yes, a user might call norm(u - uh) and disaster can happen. I will check whether uflacs can be told to avoid tensor contraction.

    On the other hand, one can call errornorm(u, uh), which will

    1. interpolate u and uh into the same higher-order space,
    2. compute the function e = u_interpolant - uh_interpolant and call norm(e) (with the single Argument e).

    Is further cancellation possible when e**2*dx with small e is expanded into tensor contractions?

  6. Miklós Homolya

    For polynomial degree p and topological dimension d, both quadrature and tensor take O(p^(2d)) operations for both norm(u) and norm(u - v), but the constant factors likely differ between the two approaches, and it is hard to tell in general which one is faster.

  7. Jan Blechta reporter

    I am worried about stability rather than speed, i.e. whether cancellation can happen for u**2*dx, with u of no fixed sign, when expanded into tensor contractions.

    Consider the integral u**2*dx. The quadrature form sum_k ( sum_i u_i phi_ik )**2 q_k is non-negative by construction. But the tensor contraction sum_ij C_ij u_i u_j can suffer from cancellation when u does not have a fixed sign. Is that correct?

    So I speculate that tensor can be unstable even when integrating a single argument; I am just saying that the errornorm() trick does not save the day.
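The two strategies in the formula above can be sketched in plain Python; the basis values phi[i][k] = phi_i(x_k) and weights q[k] below are made up for illustration, not a real finite element tabulation:

```python
import math

# Illustrative tabulation: phi[i][k] = phi_i(x_k), quadrature weights q[k].
phi = [[0.8, 0.2, 0.0],
       [0.2, 0.6, 0.2],
       [0.0, 0.2, 0.8]]
q = [1.0 / 3.0] * 3
u = [1e8, -1e8, 1e-8]          # coefficients without a fixed sign

# Quadrature: sum_k q_k * ( sum_i u_i phi_ik )**2 -- every term is >= 0.
quad = sum(qk * sum(ui * pik for ui, pik in zip(u, col)) ** 2
           for qk, col in zip(q, zip(*phi)))

# Tensor contraction: C_ij = sum_k q_k phi_ik phi_jk, then sum_ij C_ij u_i u_j
# -- the individual terms carry both signs.
C = [[sum(qk * pik * pjk for qk, pik, pjk in zip(q, pi, pj)) for pj in phi]
     for pi in phi]
tensor = sum(C[i][j] * u[i] * u[j] for i in range(3) for j in range(3))

assert quad > 0.0                                # guaranteed by construction
assert math.isclose(quad, tensor, rel_tol=1e-9)  # equal here, but the signed
# terms C_ij u_i u_j can cancel catastrophically, and with heavier
# cancellation `tensor` may lose all accuracy or even go negative.
```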

  8. Lawrence Mitchell

    If the terms (n of them) all have the same sign, then any ordering of the summation gives a relative error of at most n*u (where u is the unit roundoff). For heavy cancellation (sum_i |x_i| >> |sum_i x_i|), summing in decreasing order of magnitude is OK, but the error bounds are much worse. See Higham, Accuracy and Stability of Numerical Algorithms (pp. 79-90).
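A minimal same-sign illustration (plain Python, IEEE doubles) of how the ordering changes the rounded result:

```python
# One large term plus four tiny same-sign terms.
xs = [2.0**-53] * 4 + [1.0]

ascending = 0.0
for x in sorted(xs):                 # small terms first: they accumulate
    ascending += x                   # to 2**-51 before 1.0 is added

descending = 0.0
for x in sorted(xs, reverse=True):   # 1.0 first: each 2**-53 is rounded
    descending += x                  # away (ties-to-even), so nothing sticks

assert ascending == 1.0 + 2.0**-51   # exact in this case
assert descending == 1.0             # all four small terms are lost
```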

  9. Jan Blechta reporter

    > I might be wrong, but I am not aware of uflacs applying the tensor contraction trick for 0-forms.

    I didn't find it either.
