intermittent test failures in test_char_pointer_conversion, test_opaque_enum and test_dlopen_filename

Issue #384 resolved
Matěj Cepl
created an issue

When running tests on openSUSE, from time to time, some of them fail. For example (the full build logs are attached) this:

[  279s] =================================== FAILURES ===================================
[  279s] _________________________ test_char_pointer_conversion _________________________
[  279s] 
[  279s]     def test_char_pointer_conversion():
[  279s]         import warnings
[  279s]         assert __version__.startswith(("1.8", "1.9", "1.10", "1.11")), (
[  279s]             "consider turning the warning into an error")
[  279s]         BCharP = new_pointer_type(new_primitive_type("char"))
[  279s]         BIntP = new_pointer_type(new_primitive_type("int"))
[  279s]         BVoidP = new_pointer_type(new_void_type())
[  279s]         BUCharP = new_pointer_type(new_primitive_type("unsigned char"))
[  279s]         z1 = cast(BCharP, 0)
[  279s]         z2 = cast(BIntP, 0)
[  279s]         z3 = cast(BVoidP, 0)
[  279s]         z4 = cast(BUCharP, 0)
[  279s]         with warnings.catch_warnings(record=True) as w:
[  279s]             newp(new_pointer_type(BIntP), z1)    # warn
[  279s] >           assert len(w) == 1
[  279s] E           assert 0 == 1
[  279s] E            +  where 0 = len([])
[  279s] 
[  279s] c/test_c.py:3925: AssertionError
[  279s] _________________________ TestCTypes.test_opaque_enum __________________________
[  279s] 
[  279s] self = <testing.cffi0.test_ctypes.TestCTypes instance at 0x7f901bafb1b8>
[  279s] 
[  279s]     def test_opaque_enum(self):
[  279s]         import warnings
[  279s]         ffi = FFI(backend=self.Backend())
[  279s]         ffi.cdef("enum foo;")
[  279s]         with warnings.catch_warnings(record=True) as log:
[  279s]             n = ffi.cast("enum foo", -1)
[  279s]             assert int(n) == 0xffffffff
[  279s] >       assert str(log[0].message) == (
[  279s]             "'enum foo' has no values explicitly defined; "
[  279s]             "guessing that it is equivalent to 'unsigned int'")
[  279s] E       IndexError: list index out of range
[  279s] 
[  279s] testing/cffi0/backend_tests.py:1391: IndexError
[  279s] ___________________________ TestFFI.test_opaque_enum ___________________________
[  279s] 
[  279s] self = <testing.cffi0.test_ffi_backend.TestFFI object at 0x7f901d303050>
[  279s] 
[  279s]     def test_opaque_enum(self):
[  279s]         import warnings
[  279s]         ffi = FFI(backend=self.Backend())
[  279s]         ffi.cdef("enum foo;")
[  279s]         with warnings.catch_warnings(record=True) as log:
[  279s]             n = ffi.cast("enum foo", -1)
[  279s]             assert int(n) == 0xffffffff
[  279s] >       assert str(log[0].message) == (
[  279s]             "'enum foo' has no values explicitly defined; "
[  279s]             "guessing that it is equivalent to 'unsigned int'")
[  279s] E       IndexError: list index out of range
[  279s] 
[  279s] testing/cffi0/backend_tests.py:1391: IndexError
[  279s] = 3 failed, 1874 passed, 88 skipped, 4 deselected, 4 xfailed in 269.95 seconds =
[  280s] error: Bad exit status from /var/tmp/rpm-tmp.z3LJkf (%check)

Comments (9)

  1. Armin Rigo

    The patch is not useful: it's skipping the tests, hiding failures. What I need is a way to reproduce the failures. The tests always pass on the various Linux boxes I run them on. May be related to the exact way in which the tests are run. Can you paste the complete command line? Thanks!

  2. Matěj Cepl reporter

    The attached _log file contains the complete build logs from building the openSUSE packages, so every command line etc. you could dream of is in there. In particular, the command line we used is

    PYTHONPATH=/home/abuild/rpmbuild/BUILDROOT/python-cffi-1.11.5-2.16.i386/usr/lib/python2.7/site-packages \
    py.test-2.7 -k 'not test_init_once_multithread' c/ testing/
    

    (and the equivalent for Python 3). And yes, that patch is nothing more than an oddly formatted list of the unit tests that are failing for us; I have never claimed it was anything more than that. For example, I have no idea how to reproduce the failures consistently (that would be awesome).

  3. Armin Rigo

    Note that you already skip a test by calling pytest with the arguments -k 'not test_init_once_multithread'. It looks like the new failures are of a similar kind. I'm trying to reproduce them by running the tests with the same version of py.test in a loop, but without success so far. I fear I won't be able to do anything more and you will need to investigate on your side.

  4. Armin Rigo

    Ah, maybe some values of the PYTHONWARNINGS environment variable could have this effect. Is there a way to know if this variable happens to be set when the tests fail?
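    [Editorial note: a minimal sketch of the suspected failure mode. `warnings.catch_warnings(record=True)` saves and restores the filter state but does not reset it, so a process-wide "ignore" filter, such as one inherited from PYTHONWARNINGS=ignore, still suppresses the warning inside the block and the recorded list stays empty.]

```python
import warnings

# Stand-in for launching the interpreter with PYTHONWARNINGS=ignore:
# install a process-wide filter that drops all warnings.
warnings.simplefilter("ignore")

with warnings.catch_warnings(record=True) as w:
    # The inherited "ignore" filter is still in effect here, so this
    # warning is discarded instead of being recorded in `w`.
    warnings.warn("deprecated usage", UserWarning)

print(len(w))  # 0 -- exactly the `assert len(w) == 1` failure above
```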

  5. Armin Rigo

    Fixed in e2e324a2f13e for the catch_warnings issues. There are still some issues involving test_init_once_multithread though. Are they also random or do they show up always (if you let them run, by not giving -k 'not test_init_once_multithread')?
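    [Editorial note: the standard way to make such tests immune to external filters, and presumably what the fix amounts to (the commit itself has not been inspected here), is to call `simplefilter("always")` inside the `catch_warnings` block.]

```python
import warnings

with warnings.catch_warnings(record=True) as w:
    # Override any inherited filters (PYTHONWARNINGS, filters set by
    # earlier tests, the once-per-location default) so that every
    # warning emitted in this block is delivered and recorded.
    warnings.simplefilter("always")
    warnings.warn("deprecated usage", UserWarning)

print(len(w))  # 1, regardless of the environment's warning settings
```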
