Test output changes every time some tests are run

Issue #727 open
Ian Hinder created an issue

Each time a new run of the test suites is performed, the test output of IDAxiOddBrillBH and QuasiLocalMeasures changes slightly. The changes are below the tolerances set in the test.ccl files, so the tests pass, but there should be no change at all when the tests are run on the same machine.
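For reference, these tolerances live in each thorn's test.ccl file. A minimal sketch (the values below are hypothetical, not taken from either thorn) looks like:

```
# test.ccl -- tolerances used when comparing test output against stored data
ABSTOL 1e-12   # absolute tolerance
RELTOL 1e-12   # relative tolerance
```

Setting the tolerances to zero would make the comparison demand bitwise-identical output, which is what one would expect from repeated runs on the same machine.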

An example of the difference from one test run to the next is shown at http://git.barrywardell.net/EinsteinToolkitTestResults.git/blobdiff/95fe088e38298cac66df15223daaef5836e42356..d78c6b784027cf218ee51644004f50fe7cc3e1e5:/test_1/IDAxiOddBrillBH/E2xeon_test_axioddbh/gxx.xl.

Another example, from QuasiLocalMeasures, is at http://git.barrywardell.net/EinsteinToolkitTestResults.git/blobdiff/95fe088e38298cac66df15223daaef5836e42356..d78c6b784027cf218ee51644004f50fe7cc3e1e5:/test_2/QuasiLocalMeasures/qlm-ks-shifted/admbase::metric.average.asc.

The only thorns to exhibit this behaviour are IDAxiOddBrillBH and QuasiLocalMeasures; all other test data remains unchanged from one test run to the next. Note that the two test runs are performed with different executables, since a new configuration is created for each run. If I remember correctly, simply re-running the same executable produced identical test output.

Is it possible that, in these thorns, a small number of points read uninitialised memory whose contents change from run to run?
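Both hypotheses can be checked directly. A sketch of the determinism test (the Cactus binary and parameter file are placeholders; two stand-in "runs" are simulated here with printf so the snippet is self-contained):

```shell
# Check whether a single executable is bitwise-deterministic by running it
# twice and comparing the output files. run_test is a stand-in for something
# like: ./cactus_sim test_axioddbh.par
run_test() {
    printf '0.0 1.2345678901234\n' > "$1"
}
run_test run1.xl
run_test run2.xl
if cmp -s run1.xl run2.xl; then
    echo "bitwise identical"
else
    echo "output differs"
fi
# Reads of uninitialised memory can be hunted with valgrind, which reports
# each such read and (with --track-origins=yes) where the memory came from:
#   valgrind --track-origins=yes ./cactus_sim test_axioddbh.par
```

If the same executable is deterministic but two freshly built executables disagree, the compiler rather than uninitialised memory becomes the prime suspect.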

Keyword: testsuites
Keyword: QuasiLocalMeasures
Keyword: IDAxiOddBrillBH

Comments (6)

  1. Erik Schnetter

    Since the output changes only when the executable changes, it may be that the compiler is generating different code. Certain optimisations may depend on the order in which internal compiler data structures are laid out, which may in turn depend on, e.g., the time and date.

    Can you save two executables? I would like to compare the disassembled output of the routine generating these data, and see whether they differ.
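    One way to do that comparison, assuming standard binutils (the real Cactus executables are not available here, so /bin/sh is copied twice as a stand-in for two builds):

    ```shell
    # Disassemble both executables and strip the per-line addresses so that
    # only the instruction streams are diffed.
    cp /bin/sh exe_run1
    cp /bin/sh exe_run2
    strip_addr() { sed -e '/file format/d' -e 's/^ *[0-9a-f]*:[[:space:]]*//' ; }
    objdump -d --no-show-raw-insn exe_run1 | strip_addr > run1.asm
    objdump -d --no-show-raw-insn exe_run2 | strip_addr > run2.asm
    diff run1.asm run2.asm && echo "instruction streams identical"
    ```

    Note that addresses embedded in jump and call operands still appear, so code that is merely relocated between the two builds will still show differences; for the routine of interest, an operand-level comparison may need to be done by eye.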

  2. Ian Hinder reporter

    I have removed the "rm" command, so executables will be saved from now on in the automated system. I will make them available when they appear.

  3. Ian Hinder reporter

    I have seen something similar to this recently as well. The test GRHydro_test_tov_ppm_ML actually passed on one run and failed when it was run again. The only commit between these two runs modified unrelated test data and parameter files, and could not possibly have changed the behaviour of GRHydro.

    The first run was http://git.barrywardell.net/EinsteinToolkitTestResults.git/tree/10647caa473ab21d17afc6030c2f65189bc5169f and the test passed (http://git.barrywardell.net/EinsteinToolkitTestResults.git/blob/10647caa473ab21d17afc6030c2f65189bc5169f:/test_2/GRHydro/GRHydro_test_tov_ppm_ML_disable_internal_excision.diffs). The second run was http://git.barrywardell.net/EinsteinToolkitTestResults.git/tree/d7442650e1d7cfe5dca3a23a0eaa2df734c0035a and the tests failed (http://git.barrywardell.net/EinsteinToolkitTestResults.git/blob/d7442650e1d7cfe5dca3a23a0eaa2df734c0035a:/test_2/GRHydro/GRHydro_test_tov_ppm_ML.diffs).

    The only commit between these two runs was https://github.com/barrywardell/EinsteinExact/commit/26f0b04b7af196673b10043ca6cd44edda7cffaf which could not change the results for the GRHydro test.

    Note that this test has been consistently passing when run using the old test system (http://damiana2.aei.mpg.de/~ianhin/testsuites/einsteintoolkit/).

    The executables are available in datura:/lustre/datura/ianhin/projects/temp.
