Remove support of StaticConformal from ADMMacros
The attached patch removes support for StaticConformal from ADMMacros. The patch does this in a minimal way, e.g. it sets the macros to 0. or 1. Optimizing this isn't worth it IMHO; we should instead look into replacing the thorn itself in the long run. In the meantime, however, this patch lets us remove the dependency on StaticConformal from thorns that use ADMMacros.
This causes test cases that actually use a static conformal metric (from ADM, ADMConstraints, Extract, and IDAnalyticBH) to fail, which is expected. All others pass.
Comments (13)
-
-
- changed status to open
- removed comment
Please apply (after the release). I agree that we should try and phase out ADMMacros.
-
reporter - removed comment
Ok - but this really is only about phasing out support of a static conformal metric first.
-
- removed comment
Yes. Actually I don't mind ADMMacros. I mind ADMCoupling.
-
- removed comment
This causes 20 test cases to fail (https://build.barrywardell.net/job/EinsteinToolkit/lastCompletedBuild/testReport/). These tests seem to use the static conformal metric. If they are designed just to test this, they can probably be removed. Otherwise, they should be converted to using the physical metric.
-
- changed status to open
- removed comment
-
reporter - removed comment
Indeed. Most (if not all) of them are expected to fail. I will also soon propose removing some of the affected thorns from the toolkit. Others we need to fix, which most likely means me. But none of these seem urgent.
-
- removed comment
I agree this is not urgent in and of itself, but the danger is that if we get used to having failing tests, we might be less likely to notice new failures. It also complicates testing the code on a new machine, since you then have both expected and unexpected failures.
-
- removed comment
It is customary to have test cases that are (temporarily, or on certain machines) expected to fail. These are usually clearly marked as such, and do not count towards overall failure. (In fact, if they unexpectedly succeed it would be an overall failure.) We should introduce such a mechanism. Otherwise, it is not acceptable to have failing test cases.
-
- removed comment
I have looked in the past for such an annotation in Jenkins, but I have not found a way to do this. If we know that a test will fail on all machines, and we know why, we could disable the test, and prevent it from running. The low-tech solution would be to delete the parameter file from SVN until the test is fixed. A better approach might be to disable the test in the test.ccl file. The Cactus test system could report these as "Disabled tests", and we could parse this information and display it in Jenkins, as "N tests disabled", so that we don't forget about them. It would also be good to have a message which explains why the test is disabled.
I am less sure about how to deal with tests which fail on certain machines. I can't think of a legitimate reason for this; it seems there is either a problem in the code, the test or on the machine in that case, and the tests //should// fail.
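One possible shape for such an annotation (purely hypothetical: the `DISABLED` keyword and this syntax do not exist in test.ccl today, this is just a sketch of the proposal) could be:

```
# Hypothetical test.ccl fragment -- not currently supported by Cactus.
TEST conformal_metric
{
  DISABLED "Static conformal support was removed from ADMMacros; see this ticket."
}
```

The test system could then report such tests under a separate "Disabled tests" count, which Jenkins could parse and display so they are not forgotten.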
-
reporter - removed comment
I think the right solution for the current issue is to remove most of the affected thorns anyway, because I believe nobody uses them anymore. I just suggested this on the users mailing list and will also raise it on one of our calls.
-
reporter - removed comment
And by the way: not all of these failures are due to this commit. Some fail because of missing parameters within Carpet, e.g., https://build.barrywardell.net/job/EinsteinToolkit/928/testReport/%28root%29/Carpet/outer_buffers_1procs/
-
reporter - changed status to resolved
- removed comment
All of the failures due to this commit have been dealt with.