Canuda/Proca and Canuda/Scalar test failures after commits that "remove parameter eta_beta_dynamic"
Hello all, I am not sure if this is already tracked internally by Canuda, but the commits that "remove parameter eta_beta_dynamic" seem to have made tests fail. See https://einsteintoolkit.github.io/tests/build_683.html, which lists commit 12 and the failed tests. Unfortunately, the ET test system had been offline for a bit (due to GitHub removing python2 from the system), so the change was not visible in the test results right away, and that report therefore contains many (unrelated) changes. What I would guess are the relevant changes are:
updated submodules
Submodule repos/lean_public 0faaf83..7457e18:
LeanBSSNMoL: remove parameter eta_beta_dynamic from test parameter files
NPScalars [schedule.ccl]: fix scheduling of CalcNPScalars
LeanBSSNMoL: remove unused parameters eta_beta_dynamic and moving_eta_transition
LeanBSSNMoL [schedule.ccl]: remove sync call in LeanBSSN_adm2bssn
LeanBSSNMoL [schedule.ccl]: make sure that ApplyBCs at CCTK_INITIAL is only called after Boundary_SelectGroupForBC are set
Comments (8)
-
reporter -
This was indeed a problem, but it should have been fixed with commit https://bitbucket.org/canuda/lean_public/commits/7457e1822e3a880efd67e11f163671b2f5a55c07
On my machine all tests are currently passing… Is this still a problem with the current master branch?
-
reporter Yes, this is still happening. You can take a look at the tests at https://einsteintoolkit.github.io/tests/
The LeanBSSN_Ei_mu0.4_c0.05 test (you can click on the “log” link to access the output file) fails with
Major error in parameter file '/home/runner/simulations/TestJob01_temp_1/output-0000/arrangements/Proca/NPScalars_Proca/test/LeanBSSN_Ei_mu0.4_c0.05.par' line 150: Parameter 'LeanBSSNMoL::eta_beta_dynamic' not found
which should also fail on your system. Did you maybe forget to push all changes?
I just tried and I get a test failure for the LeanBSSN Ei test as well after a fresh checkout on my workstation:
./GetComponents --parallel --shallow https://bitbucket.org/einsteintoolkit/manifest/raw/master/einsteintoolkit.th
Please note that there are apparently multiple tests in Canuda named
LeanBSSN_Ei_mu0.4_c0.05.par
and the one that fails is
./repos/Proca/NPScalars_Proca/test/LeanBSSN_Ei_mu0.4_c0.05.par
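When a parameter has been removed from a thorn, any test parameter file that still references it will fail the same way, so it can help to scan the whole checkout for leftover references. The helper below is a hypothetical sketch of such a scan (the directory and parameter names come from this report; the function itself is not part of the ET tooling):

```python
import os
import re

def find_parameter_references(root, parameter):
    """Return (path, line_number, line) for each line in a .par file
    under `root` that mentions `parameter`."""
    hits = []
    pattern = re.compile(re.escape(parameter))
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".par"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    if pattern.search(line):
                        hits.append((path, lineno, line.rstrip()))
    return hits

# E.g., from the top of a fresh checkout:
# find_parameter_references("repos", "LeanBSSNMoL::eta_beta_dynamic")
```

Running this over repos/ would have surfaced the Proca and Scalar test files that still set the removed parameter.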
-
I see, sorry, I should have read the description properly; this is happening in the Scalar and Proca arrangements :-)
I've removed the parameter that was giving problems in the following commits:
- https://bitbucket.org/canuda/proca/commits/d24ddf835b0552410175fab2f6f84fe1a537575a
- https://bitbucket.org/canuda/scalar/commits/44df0f7bb4afa01b406a5876acd107ae2bf439d9
I hope this is now fixed.
-
reporter Still fails one (1) test: https://einsteintoolkit.github.io/tests/build_689.html
Note that the table lists 1 Failed Test and 4 Newly Passing Tests, i.e. your commit fixed 4 tests.
There was a bit of a delay since I had to do some emergency cleanup so that the tests do not run out of quota on GitHub (we had accumulated ~20 GB of test results in the repository, and GitHub does not like this).
The tests all run right now, but the failing one (teukolsky in NPScalars) shows significant (i.e. higher than the set threshold) differences from the recorded known-good values.
See e.g. the diffs file linked (the link labelled “diffs” in the table) on the website for the build: https://github.com/EinsteinToolkit/tests/blob/gh-pages/records/version_689/sim_689_1/NPScalars/teukolsky.diffs
I have (obviously) no clue whether this is due to data needing to be regenerated because of the committed code changes, or whether this is some sort of roundoff-level sensitivity that would require a less stringent error threshold to pass on multiple machines with different compiler settings.
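For context on what “higher than the set threshold” means: Cactus test comparisons accept a value when it matches the recorded one to within an absolute or a relative tolerance. The snippet below is only an illustrative sketch of that style of check, not the actual Cactus test harness, and the default tolerance values are made up:

```python
def within_tolerance(actual, expected, abstol=1e-12, reltol=1e-12):
    """Accept `actual` if it matches `expected` to within either the
    absolute or the relative tolerance (the usual abstol/reltol style
    of numerical test comparison)."""
    diff = abs(actual - expected)
    if diff <= abstol:
        # Roundoff-level absolute differences pass.
        return True
    # Otherwise compare relative to the magnitude of the values.
    scale = max(abs(actual), abs(expected))
    return diff <= reltol * scale
```

A roundoff-level difference (say 1e-13 on a value of order 1) passes such a check, while a genuine change in the output data does not; loosening reltol is what “a less stringent error threshold” would amount to.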
-
That one was due to another commit, and indeed the data needed to be regenerated. I’ve done so here: https://bitbucket.org/canuda/lean_public/commits/e13352d877a5a5b6699a9ec668df4f677deccf93
I hope this fixes it. The test now passes on my machine, but at the moment I have no other machine to try.
-
reporter Tests are all passing again.
https://einsteintoolkit.github.io/tests/build_693.html
Thank you!
I’ll close the ticket.
-
reporter - changed status to resolved
@Miguel Zilhão @helvi witek some more details. The Einstein Toolkit tests still all pass in test run 682 on https://einsteintoolkit.github.io/tests/ and first fail in run 683: https://einsteintoolkit.github.io/tests/build_683.html
The HTML page shows a list of changes that happened to the full ET repositories since the last run (i.e. from 682 to 683). Most of them are for nrpytutorial, so they are unlikely to affect anything in Canuda or Lean_public. The last recorded change, though, is in LeanBSSNMoL and reports on the changes in git commits 0faaf83..7457e18 of lean_public (you can see them using
git log 0faaf83..7457e18
in your checkout):
The HTML page contains links to the log files for each failing test in the “Failed Tests and Changes” table. E.g. https://github.com/EinsteinToolkit/tests/blob/gh-pages/records/version_683/sim_683_2/NPScalars/teukolsky.log for the failing “teukolsky” test in NPScalars.
The teukolsky test seems to run without an error message and then fails due to too-large differences (see the “diff” link in the table, which shows the diff file that the Cactus test system produces: https://github.com/EinsteinToolkit/tests/blob/gh-pages/records/version_683/sim_683_1/NPScalars/teukolsky.diffs ).
The others seem to fail with a parameter file error (visible in their log files)
which, if one scrolls to the right, refers to a parameter
LeanBSSNMoL::eta_beta_dynamic
which is not found. The severity is “major” since this should be fixed before the next release and no known workaround (to make the tests pass) exists.
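The offending parameter name can be pulled out of such log lines mechanically, which is handy when several tests fail at once. This is a small sketch of my own; the regex is tailored to the “Parameter '...' not found” message format quoted earlier in this ticket and is not part of the Cactus tooling:

```python
import re

# Matches the "Major error in parameter file ... Parameter '...' not
# found" message format that Cactus prints for unknown parameters.
PARAM_NOT_FOUND = re.compile(r"Parameter '([^']+)' not found")

def missing_parameters(log_text):
    """Return the distinct parameter names reported as not found,
    in order of first appearance."""
    seen = []
    for match in PARAM_NOT_FOUND.finditer(log_text):
        name = match.group(1)
        if name not in seen:
            seen.append(name)
    return seen
```

Feeding it the teukolsky-style logs from the failing runs would list LeanBSSNMoL::eta_beta_dynamic as the culprit.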