Pull requests

#20 Open

Use numpy.testing.assert_almost_equal in math_test

The commits that make up this pull request have been removed.

Bitbucket cannot automatically merge this request due to conflicts.

Review the conflicts on the Overview tab. You can then either decline the request or merge it manually on your local system using the following commands:

hg update default
hg pull -r assert_almost_equal https://bitbucket.org/takluyver/pygame
hg merge a2d31e438e70
hg commit -m 'Merged in takluyver/pygame/assert_almost_equal (pull request #20)'

  1. Thomas Kluyver

There were four failures due to floating point inaccuracies. Using assert_almost_equal clears them all up.

As currently written, this requires numpy to run these tests. If that's an issue, we can easily provide our own implementation of assert_almost_equal.
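
For illustration, a sketch of what the change amounts to; the values here are hypothetical stand-ins for the failing comparisons:

import numpy.testing

# Passes: the values agree to 7 decimal places (the default)
numpy.testing.assert_almost_equal(38.71999999999999, 38.72)

# A numpy-free fallback along the same lines would be short, e.g.:
def assert_almost_equal(actual, desired, decimal=7):
    assert round(actual - desired, decimal) == 0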

Comments (9)

  1. illume

    I would want to check why the results are slightly different. Is precision being lost, and does that matter? If so, how much precision, and is it significant? Is the type being used different (long double or float)? It could be that something is being cast incorrectly and precision is being lost, which might matter for some applications... even if it is only a tiny bit.

  2. Thomas Kluyver author

    Some precision is being lost, but not a lot, e.g. 38.71999999999999 != 38.72. Whether that could be significant, I don't know. Probing further, the second number (from the Python code path) is, more precisely, 38.719999999999998863131622783839702606201171875.
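
    That exact expansion is just the double printed in full; it can be reproduced with the stdlib decimal module (plain Python, nothing pygame-specific):

    from decimal import Decimal

    # Decimal(float) captures the exact binary value stored in the double
    print(Decimal(38.72))
    # 38.719999999999998863131622783839702606201171875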

    It appears the C code is losing slightly more precision than the equivalent Python code.

    Looking at this line in _vector_distance_helper:

    tmp = PySequence_GetItem_AsDouble(other, i) - self->coords[i];

    tmp is a double, and ...AsDouble should return a double, but is it possible that self->coords[i] is losing precision? The relevant line at vector creation is vec->coords = PyMem_New(double, vec->dim);.
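
    (As a sanity check, pure Python rather than the pygame C code: a Python float and a C double are the same IEEE 754 64-bit type, so merely storing the value cannot lose precision:)

    import struct

    x = 38.71999999999999
    # Pack into a C double and back; the round trip is exact
    assert struct.unpack('d', struct.pack('d', x))[0] == x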

  3. illume

    That line is interesting, because it does not check for an error. PySequence_GetItem_AsDouble returns -1 on error... self->coords[i] is a double too, so that should be fine.

    diff.x * diff.x + diff.y * diff.y

    is different from:

    distance_squared = 0
    distance_squared += diff.x * diff.x
    distance_squared += diff.y * diff.y

    The precision is being lost in the accumulator. So we should just special-case the vector lengths we support, with a fallback to the current code for other lengths. I think the dot implementation has the same problem. I haven't tested this, but I reckon that is it.

  4. Thomas Kluyver author

    The Python code with an accumulator produces the same result for me:

    In [38]: v1 = Vector2((1.2, 3.4))
    In [39]: v2 = Vector2((5.6, 7.8))
    In [40]: diff = v1 - v2
    In [41]: diff.x*diff.x + diff.y*diff.y
    Out[41]: 38.72
    # 42-43 I made a mistake
    In [44]: ds = 0
    In [45]: ds += diff.x * diff.x
    In [46]: ds += diff.y * diff.y
    In [47]: ds
    Out[47]: 38.72

  5. donLorenzo

    Sorry for being late to respond. I'm currently moving and my computer is still somewhere in a box :/

    The pull request requires numpy. Why? What is wrong with Python's stdlib unittest.TestCase.assertAlmostEqual()?
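
    (For illustration, a minimal sketch of the stdlib route; the test class and values are hypothetical:)

    import unittest

    class MathTest(unittest.TestCase):
        def test_distance_squared(self):
            # assertAlmostEqual rounds the difference to 7 decimal
            # places by default, so this passes despite the mismatch
            self.assertAlmostEqual(38.71999999999999, 38.72)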

    On the general discussion about precision: I did not pay attention to the numeric stability of the code. It is a pretty straightforward, naive implementation. I hoped that for the typical use case (games) it wouldn't become an issue, but I guess Murphy's Law will strike at some point.
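
    (As a tiny illustration of how the accumulation strategy affects precision, again plain stdlib Python rather than pygame code:)

    import math

    values = [0.1] * 10
    print(sum(values))        # 0.9999999999999999, naive left-to-right accumulation
    print(math.fsum(values))  # 1.0, tracks exact partial sums (Shewchuk's algorithm)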