Add 'Linf' norm for Functions
norm(v)
supports a range of Sobolev norms but not the infinity (max) norm.
Implement this in the C++ layer and add it as a new norm type in the Python layer.
Comments (33)
-
reporter I think it is: by evaluating the function at the vertices and taking the maximum. This will not be the exact infinity-norm but a good approximation, in the same way as the current L2 etc norms are good approximations (and sometimes exact depending on which quadrature rule is being used).
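The proposal above can be sketched in a few lines of plain NumPy (a hypothetical stand-in, not the DOLFIN implementation): sample the function at vertex coordinates, take the max, and observe the roughly h^2 error under refinement.

```python
import numpy as np

def vertex_max_norm(f, vertices):
    """Approximate the L-infinity norm of f by its max over mesh vertices."""
    return float(np.max(np.abs(f(vertices))))

# f(x) = sin(pi*x) on [0, 1]; the exact max is 1.0, attained at x = 0.5.
# Odd cell counts are chosen so that x = 0.5 is never a vertex and the
# approximation error is visible; it shrinks like h^2 under refinement.
f = lambda x: np.sin(np.pi * x)
errors = [1.0 - vertex_max_norm(f, np.linspace(0.0, 1.0, n + 1))
          for n in (11, 101, 1001)]
# Each tenfold refinement reduces the error by roughly 100x.
```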
-
Why not just map it to max(abs(f.vector())) then? Easier and arguably better than vertex values in many cases.
-
reporter Because the dof values in the vector are different from point values. For Lagrange elements this would be the best solution, but not for vector elements (BDM etc.).
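A toy illustration of why max(abs(f.vector())) is only safe for nodal bases: below, a Legendre (modal) expansion stands in for a non-nodal element such as BDM, whose dofs are not point values. Names and setup are illustrative only.

```python
import numpy as np
from numpy.polynomial import legendre

# f(x) = x^2 on [-1, 1]: the true max of |f| is 1.0 (at x = +/-1).
# Nodal (Lagrange) dofs are point values, so max(abs(dofs)) recovers the
# max over the nodes; modal (Legendre) dofs are expansion coefficients,
# here x^2 = (1/3) P0 + (2/3) P2, and max|coefficient| = 2/3 != 1.0.
modal_coeffs = legendre.poly2leg([0.0, 0.0, 1.0])  # x^2 in the Legendre basis
modal_guess = float(np.max(np.abs(modal_coeffs)))  # 2/3, not the point max

nodes = np.array([-1.0, 0.0, 1.0])                 # P2 Lagrange nodes
nodal_guess = float(np.max(np.abs(nodes**2)))      # 1.0, matches the node max
```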
-
reporter The other option would be to implement it only in the Python layer and only support Lagrange elements.
-
reporter (We need this for the tutorial to enable easy checking of error in the maximum norm.)
-
An inf norm that isn't the inf norm... I don't like it.
-
reporter You mean it's not ok because it's only h^2 accurate?
-
reporter It could be called something else but I find it very useful. It enables things like this:
errornorm(u, u_h, 'Linf')
As long as the docstring explains that what is actually computed is the maximum taken over the vertices, I see no harm in adding it.
Plus its general usefulness.
Plus I already implemented it... ;-)
-
reporter - changed title to Add 'Linf' norm for Functions
-
reporter - changed status to resolved
Fix issue #663: Add 'Linf' norm for Functions → <<cset 80e221c3cb39>>
-
@logg I really don't like it because it's wrong. The inf norm of a function should be the inf norm, not some approximation. It wouldn't be acceptable for the DOLFIN 2-norm of a vector to be merely close to the 2-norm of that vector. Why should it be OK for a Function?
On the implementation, the norm type should be an enum. It makes tab-completion work nicely.
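A minimal sketch of the enum suggestion in Python (hypothetical names, not DOLFIN's actual API): with an enum, tab-completing NormType lists the valid choices rather than leaving users to guess magic strings.

```python
from enum import Enum
import numpy as np

class NormType(Enum):
    """Hypothetical norm-type enum; tab-completion lists the members."""
    L2 = "l2"
    H1 = "h1"
    LINF = "linf"

def norm(values, norm_type=NormType.L2):
    """Toy dispatcher over a plain array, standing in for the real norm()."""
    if not isinstance(norm_type, NormType):
        raise TypeError("norm_type must be a NormType member")
    v = np.asarray(values, dtype=float)
    if norm_type is NormType.LINF:
        return float(np.max(np.abs(v)))
    return float(np.sqrt(np.sum(v * v)))  # placeholder for the integral norms
```

Passing an invalid string then fails loudly instead of being silently accepted.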
-
Could you check the element type and print out a warning if the implementation returns only an approximation of the Linf norm?
-
reporter What do you mean by "it wouldn't be acceptable that the DOLFIN 2-norm of vector is kind of close to the 2-norm of a function"?
The 2-norm for a function is computed by an integral that approximates the actual L2-norm of the function. If the quadrature rule is good enough then it will be exact to machine precision.
For the L-infinity norm there is no simple way (at least none that comes to mind) to compute the exact maximum, but we can compute a very good approximation of it (which in a typical case will be accurate to within 4-5 digits).
We have the exact same issue already with errornorm. It tries its best to compute the norm of the difference between two functions, although that difference will also be inexact, but typically exact enough for most use cases (since knowing the size of the error to more than 2-3 digits is often not interesting).
-
@logg I meant 'vector'. I've edited my comment.
Indeed, there is no simple way to compute the Linf norm of a function, so DOLFIN shouldn't mislead users into thinking it has some clever way of doing it. If a user wants the max value from a vector, we provide that already, so let them use it.
I don't use (or advise the use of) errornorm either. I control the polynomial interpolation of the exact solution manually.
-
reporter @garth-wells It is not misleading if the docstring tells the user that it is an approximation (and a very good one). Everything we compute is an approximation, even l2 norms of vectors.
-
@logg Other approximations are to floating point precision (it is a computer after all), but are mathematically 'correct', and would be exact in exact arithmetic. The implementation of the Linf norm is not correct, and would not be correct in exact arithmetic and exact integration.
-
reporter @garth-wells I think you are being silly in opposing the addition of very useful functionality because it is not correct to machine precision. The use case for this functionality is to evaluate the accuracy of a computed solution against a known exact solution. Then it's enough to know that the error in the maximum norm is ca 1.1e-12. You don't need to know that it is exactly 1.14159265358979.
-
Silly? The concept is plain wrong. It's only valid for P1 elements.
-
reporter No, it is valid with good accuracy for all elements.
-
reporter @garth-wells Would you sleep better if I added two norms, "linf" and "linfapprox"? The first one would give an error message for functions other than Lagrange.
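A sketch of what such a guard might look like (all names hypothetical; this is not the DOLFIN implementation): the vertex-sampled max is the exact Linf norm only for continuous Lagrange elements, so anything else requires an explicit opt-in.

```python
def linf_norm(family, vertex_values, approx_ok=False):
    """Vertex-sampled max norm with an element-family guard.

    Exact only for (continuous) Lagrange elements; for other families
    the caller must explicitly accept an approximation.
    """
    if family != "Lagrange" and not approx_ok:
        raise ValueError(
            "vertex-sampled max is only an approximation for %s elements; "
            "request 'linfapprox' explicitly" % family)
    return max(abs(v) for v in vertex_values)

exact = linf_norm("Lagrange", [0.5, -2.0, 1.5])         # 2.0, exact for P1
approx = linf_norm("BDM", [0.5, -2.0], approx_ok=True)  # 2.0, opted in
```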
-
@logg, generally agree here with @garth-wells unless you provide reference for rigorous error estimates supporting the claim "it is valid with good accuracy for all elements".
-
@logg I slept just fine :). Yes, I would be happier with 'approx' being added. My preference would still be for
u_inf = u.vector().norm("inf")
-
reporter @blechta See attached figure.
@garth
u.vector().norm("inf")
only works for Lagrange elements.
-
There is a missing proof of $\|v - \pi_h v\|_\infty \le C h^2$; the rest is trivial. Anyway, one should check how $C$ behaves with the polynomial degree. I bet it can blow up pretty quickly. And the implementation is not correct for higher value dimensions.
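The concern that $C$ is not uniform is easy to probe numerically (plain NumPy, illustrative only): on a fixed mesh, linearly interpolate increasingly oscillatory functions, with frequency standing in for polynomial degree, and watch the max error grow like the square of the frequency.

```python
import numpy as np

def linear_interp_max_error(f, n, samples=20001):
    """Max error of piecewise-linear interpolation of f on n uniform cells."""
    x = np.linspace(0.0, 1.0, n + 1)
    xs = np.linspace(0.0, 1.0, samples)
    return float(np.max(np.abs(f(xs) - np.interp(xs, x, f(x)))))

n = 64  # fixed mesh, h = 1/64
errs = [linear_interp_max_error(lambda x, k=k: np.cos(k * np.pi * x), n)
        for k in (1, 4, 16)]
# The error behaves like (k*pi*h)^2 / 8: each 4x increase in the
# frequency k multiplies the effective constant by roughly 16.
```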
-
I also suspect that the implementation is incorrect for discontinuous Lagrange (and other elements without continuity around vertices). compute_vertex_values will take the vertex value from a random cell, right? One should rather do ufc_lagrange_element_deg_1::evaluate_dofs cell-by-cell. But one would need a precompiled Lagrange 1 element, possibly for different value dimensions. (Easy from Python.)
-
reporter @blechta You don't believe that linear interpolation of a polynomial is 2nd order accurate? If not, I leave it as a simple exercise.
Why is it not correct for higher value dimensions? I believe it is.
-
Have you checked what $C$ looks like? With increasing polynomial degree of the interpolant one gets worse and worse values. Anyway, one easily gets zero vertex values for bubbles, and I'm afraid that typical errors $u - u_h$ are bubble-like. For vector-valued $u$, $\|u\|_\infty = \| (\sum_i |u_i|^2)^{\frac12} \|_\infty$ is the most relevant in practice due to its rotational invariance. The implementation is rather doing $\|u\|_\infty := \max_i \|u_i\|_\infty$. Agreed that this depends on a chosen convention (of which I strongly prefer the former), but it should at least be stated in the documentation. Are you sure also about discontinuous elements?
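The difference between the two conventions shows up already on a toy sampled vector field (plain NumPy, illustrative only): the $L^\infty(l^2)$ value is unchanged by a rotation, while the componentwise max is not.

```python
import numpy as np

# A vector field sampled at two points; rows = points, columns = components.
u = np.array([[3.0, 4.0],
              [1.0, 0.0]])

# Convention A, L^inf(l^2): max over points of the Euclidean length.
norm_l2 = float(np.max(np.linalg.norm(u, axis=1)))   # 5.0

# Convention B, componentwise: max_i ||u_i||_inf.
norm_comp = float(np.max(np.abs(u)))                 # 4.0

# Rotating the field leaves convention A unchanged but changes B.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u_rot = u @ R.T
norm_l2_rot = float(np.max(np.linalg.norm(u_rot, axis=1)))  # still 5.0
norm_comp_rot = float(np.max(np.abs(u_rot)))                # 7/sqrt(2), changed
```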
-
reporter Yes, for bubbles you get zero, and the error will be ~ h^2 * u'' where u'' is some norm of the 2nd derivative of the bubble. Errors will typically not be bubble-like if you are estimating the error of a computed finite element solution, since the finite element solution is not the nodal interpolant. Of course the error will be misleading if you use it to estimate the error of the nodal interpolant itself.
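The bubble failure mode is easy to reproduce with a stand-in (plain NumPy, illustrative): a cellwise bubble vanishes at every mesh vertex, so vertex sampling reports zero while the true maximum is 1.

```python
import numpy as np

h = 0.1
vertices = np.linspace(0.0, 1.0, 11)  # mesh vertices on [0, 1], cell size h

def bubble(x):
    """A cellwise bubble supported on the cell [0.4, 0.5], scaled so its
    maximum is 1 (at the cell midpoint); it vanishes at every vertex."""
    y = 4.0 * (x - 0.4) * (0.5 - x) / h**2
    return np.where((x >= 0.4) & (x <= 0.5), y, 0.0)

vertex_max = float(np.max(np.abs(bubble(vertices))))  # 0: vertex values vanish
true_max = float(np.max(np.abs(bubble(np.linspace(0.0, 1.0, 10001)))))  # ~1.0
```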
Yes, I use the $L^{\infty}(\Omega; l_{\infty})$ norm which I think is more natural and also the easiest to implement.
For DG elements, the norm will be just as accurate as what we currently have for compute_vertex_values and for plotting.
-
"is more natural"
I definitely don't agree with that. $L^\infty(l^2)$ is significant for its rotational invariance and is therefore optimal in any estimate. Switching to $L^\infty(l^q)$ typically introduces an additional factor $d^{|\frac12 - \frac1q|}$. Could you please state the chosen convention in the docstring? And also the DG (CR, RT, BDM, ...) flaw?
-
@logg The FE solution in 1D is the nodal interpolant.
Using the inaccuracy of compute_vertex_values as an argument for adding something that is wrong is flawed (we have plans to fix compute_vertex_values for DG elements).
-
The comments from Jan have hardened my resolve that this 'feature' should not be added.
-
reporter - removed milestone
Removing milestone: 1.7 (automated comment)
-
We've discussed this before, and I recall the conclusion being that it's not feasible other than for P1 elements.