Function eval does not find a point that is clearly inside the domain

Issue #97 resolved
Mikael Mortensen created an issue

The following code tries to evaluate a Function at a point that is clearly inside the domain, but the point is not found. Running the code

from dolfin import *
from numpy import array

mesh = UnitCubeMesh(3, 3, 3)
V = FunctionSpace(mesh, 'CG', 1)
x0 = interpolate(Expression('x[0]'), V)
x = array([0.625, 0.625, 0.65625])
try:
    x0(x)
except:
    print 'Not finding ', x, " inside a UnitCubeMesh"

leads to output

Computed bounding box tree with 323 nodes for 162 entities.
Evaluating at x = <Point x = 0.625 y = 0.625 z = 0.65625>
Not finding  [ 0.625    0.625    0.65625]  inside a UnitCubeMesh

So far I have tracked it down to a problem with the new intersection code. If I replace line 380 in Function.cpp like this

    //unsigned int id = mesh.bounding_box_tree()->compute_first_entity_collision(point, mesh);
    unsigned int id = mesh.bounding_box_tree()->compute_first_collision(point);

then it works, but I don't really understand why. Maybe someone else does?

I'm on the bitbucket master branch.

Comments (15)

  1. Jan Blechta

    Isn't it caused by issue #92? I originally started with the same problem - failing evaluation inside the mesh - but when I tracked the problem down to Triangle::contains, I changed the description of that issue. Tetrahedron::contains is very likely subject to this as well.

  2. Anders Logg (Chalmers)

    Fixed now by using a less strict tolerance. The tests cannot be expected to hold to within machine precision, so the less strict DOLFIN_EPS_LARGE = 1e-14 is now used.
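
    A minimal illustration of why the relaxed tolerance matters, assuming a sign-based containment test; the values and the inside helper below are illustrative, not DOLFIN's actual implementation:

    # Illustrative only: the effect of relaxing the tolerance in a
    # sign-based containment test (not DOLFIN's actual code).
    DOLFIN_EPS = 3e-16        # machine-precision-sized tolerance
    DOLFIN_EPS_LARGE = 1e-14  # the relaxed tolerance mentioned above

    def inside(signed_tests, eps):
        # Accept the point if every signed test value is non-negative within eps.
        return all(t >= -eps for t in signed_tests)

    # For a point geometrically inside a cell, rounding can push one of the
    # computed test values slightly below zero:
    tests = [0.25, 0.5, -5e-15, 0.25]
    print(inside(tests, DOLFIN_EPS))        # False: machine precision is too strict
    print(inside(tests, DOLFIN_EPS_LARGE))  # True: accepted with the relaxed tolerance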

  3. Mikael Mortensen reporter
    • changed status to open

    The DOLFIN_EPS_LARGE constant needs to be used in the point_in_bbox function as well as in the contains functions. Otherwise the following code fails:

    from dolfin import *
    from numpy import array
    
    N = 2
    L = 10.
    mesh = BoxMesh(0., 0., 0., L, L, L, N, N, N)
    
    V = FunctionSpace(mesh, 'CG', 1)
    x0 = interpolate(Expression('x[0]'), V)
    x = array([L/2., L/2., L/2.])
    print x0(x)
    

    There is no error for L < 9.

    I would make a fork and a pull request, but there seems to be quite a lot of work on the bounding box files in the logg/topic-ccfem branch and I get a bunch of conflicts. So for now I'm just reporting it.

  4. Mikael Mortensen reporter

    Great!

    But unfortunately it might not be the end of it, as I'm still running into some problems. Change L in the code above to 1000 and it breaks again unless you change DOLFIN_EPS_LARGE to 1e-13 in point_in_bbox. I get similar issues with another test, which are fixed by increasing the constant in contains. DOLFIN_EPS_LARGE should probably be scaled by the mesh or cell size? Perhaps use the vectors in contains for scaling?

    Just one more comment: squared_distance seems to be more robust than contains (which I now see will be renamed collides).
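
    For what it's worth, here is a rough numpy sketch of the kind of geometry-scaled tolerance being suggested; point_in_bbox below is a hypothetical stand-in, not DOLFIN's function, and the choice of scaling is an assumption:

    import numpy as np

    def point_in_bbox(p, bbox_min, bbox_max, eps=1e-14):
        # Scale the tolerance by the box size so the test behaves the same for
        # a unit cube and for an L = 1000 box (sketch, not DOLFIN's code).
        tol = eps * np.maximum(np.abs(bbox_max - bbox_min), 1.0)
        return np.all(p >= bbox_min - tol) and np.all(p <= bbox_max + tol)

    L = 1000.0
    bbox_min = np.zeros(3)
    bbox_max = np.array([L/2, L/2, L/2])
    # A point meant to lie on the box boundary, carrying rounding error ~ L*eps:
    p = np.array([L/2*(1 + 4e-15), L/4, L/4])
    print(point_in_bbox(p, bbox_min, bbox_max))  # True with the scaled tolerance
    print(np.all(p <= bbox_max + 1e-14))         # False with an absolute 1e-14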

  5. Jan Blechta

    It is possible that rounding errors cause arbitrarily large errors for ill-conditioned geometry - this may be the reason why exact arithmetic is used in CGAL. I think I saw a numerical analysis of the ill-conditioning of computing a simplex volume (which is just the determinant in the denominator of the algorithm) somewhere, but I can't find it.

    There is also a possibility that the respective linear system would be better solved by Gaussian elimination (with pivoting) rather than by the Cramer's rule currently used.
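
    As an illustration of that alternative, the barycentric coordinates of a point in a tetrahedron can be obtained from a single pivoted linear solve instead of ratios of determinants. The numpy sketch below is not DOLFIN's code, and the tolerance value is assumed:

    import numpy as np

    def barycentric_coords(p, v0, v1, v2, v3):
        # Solve T * lam = p - v0 with numpy's LU-based (partially pivoted)
        # solver instead of forming the determinant ratios of Cramer's rule.
        T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
        lam = np.linalg.solve(T, p - v0)
        return np.concatenate(([1.0 - lam.sum()], lam))

    def tet_contains(p, vertices, eps=1e-14):
        # Inside (or on the boundary) if all coordinates are non-negative
        # within the tolerance.
        return np.all(barycentric_coords(p, *vertices) >= -eps)

    verts = [np.array(v, dtype=float) for v in
             [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
    print(tet_contains(np.array([0.25, 0.25, 0.25]), verts))  # True
    print(tet_contains(np.array([0.6, 0.6, 0.6]), verts))     # False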

  6. Prof Garth Wells

    I think a tolerance should be passed as an argument, with a default. It looks silly having DOLFIN_EPS_LARGE. Where does it stop - DOLFIN_EPS_SMALL, DOLFIN_EPS_VERY_SMALL, DOLFIN_EPS_EVEN_SMALLER, DOLFIN_EPS_LARGER, DOLFIN_EPS_EVEN_LARGE, DOLFIN_EPS_NOT_SO_SMALL_BUT_NOT_VERY_LARGE, . . .

  7. Anders Logg (Chalmers)

    I think it stops there. EPS is machine precision, but we can seldom hope to get within machine precision for complex operations, which is why we need something slightly larger (EPS_LARGE). We also have SQRT_EPS, which is the right size to use for finite differencing, but that's a remnant from the ODE solvers, which have been removed, so it could likely also be removed. So in summary there will be only EPS and EPS_LARGE, and EPS_LARGE is there so we don't need to write 100*DOLFIN_EPS or 10*DOLFIN_EPS in various places and be inconsistent. Regardless of this, I think setting it as a parameter (specific to the collision detection algorithms) is a good idea. And the tests should definitely be scaled by the geometry size.

  8. James R. Maddison

    I have encountered exactly this problem before (with Fluidity). It was solved via the following algorithm (linear simplex meshes only):

    1. Construct an R-tree using marginally extended bounding boxes

    2. To determine the "owner" of a given point:

    a. Query the R-tree for the owner, likely returning many false positives

    b. Select, from those returned, the element with the largest minimum barycentric coordinate for the point

    This guarantees that, if a point is within the domain, it is assigned a unique owning element. For points near the domain boundary, the use of a tolerance or arbitrary-precision arithmetic is still required. Otherwise the algorithm is robust to finite precision, assuming the original bounding box expansion is big enough.
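
    A numpy sketch of step 2b under the stated assumptions (linear simplices, and a list of candidate cells already returned by the R-tree query of step 2a); the names below are illustrative:

    import numpy as np

    def barycentric_coords(p, verts):
        # Barycentric coordinates of p in the tetrahedron with vertices verts[0..3].
        T = np.column_stack([verts[i] - verts[0] for i in (1, 2, 3)])
        lam = np.linalg.solve(T, p - verts[0])
        return np.concatenate(([1.0 - lam.sum()], lam))

    def owning_cell(p, candidate_cells):
        # candidate_cells: list of (cell_id, 4x3 vertex array) from the
        # bounding-box/R-tree query (step 2a), possibly with false positives.
        # Pick the cell whose minimum barycentric coordinate is largest; a
        # point inside the domain then gets an owner without any containment
        # tolerance, even if it lies exactly on a facet shared by two cells.
        best_id, best_score = None, -np.inf
        for cell_id, verts in candidate_cells:
            score = barycentric_coords(p, verts).min()
            if score > best_score:
                best_id, best_score = cell_id, score
        return best_id

    # Two tetrahedra sharing the facet x + y + z = 1; the query point lies on it.
    tet_a = np.array([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)], dtype=float)
    tet_b = np.array([(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)], dtype=float)
    p = np.array([1.0/3, 1.0/3, 1.0/3])
    print(owning_cell(p, [(0, tet_a), (1, tet_b)]))  # one owner is picked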

  9. Anders Logg (Chalmers)

    I think this issue should now be fixed (in logg/topic-ccfem), which will be merged in shortly. I have added Mikael's test case as a working unit test (with L = 1000). If there are more issues, please provide failing unit tests.
