#80 Merged at 7ef221c

Source repository: deleted repository
Source branch: week-of-code (38b7ff24e8a2)
Destination repository: enzo-dev
Destination branch: week-of-code

merge of fortran_types into week-of-code

Author: Daniel Reynolds
Reviewers:
Description

I've merged all of Matt's and my upgrades to the fortran subsystem that enforce precision without using compiler recasting. I've compile-tested on my laptop using GNU compilers for all possible values of CONFIG_PRECISION, CONFIG_PARTICLES and CONFIG_INTEGERS.
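
As an illustration of the source-level approach described above, here is a minimal sketch of enforcing precision via kind parameters selected by the preprocessor. The module, parameter, and macro names (precision_types, RKIND, IKIND, USE_REAL_64, USE_INT_64) are illustrative only, not the identifiers actually used in this PR:

    !  Precision is pinned in the source via kind parameters, so compiler
    !  recasting flags like -fdefault-real-8 are no longer needed.
    !  All names and macros here are hypothetical.
    module precision_types
      implicit none
    #ifdef USE_REAL_64
      integer, parameter :: RKIND = selected_real_kind(15, 307)  ! 64-bit reals
    #else
      integer, parameter :: RKIND = selected_real_kind(6, 37)    ! 32-bit reals
    #endif
    #ifdef USE_INT_64
      integer, parameter :: IKIND = selected_int_kind(18)        ! 64-bit integers
    #else
      integer, parameter :: IKIND = selected_int_kind(9)         ! 32-bit integers
    #endif
    end module precision_types

With something like this in place, declarations are written as real(RKIND) :: x and literals as 1.0_RKIND, so the declared precision travels with the source regardless of compiler defaults.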

Update: I've fixed a problem in interp3d.F that was causing GravityTest to fail, and merged with the tip of enzo-dev. From my initial tests, it looks like this now passes both GravityTest and PhotonShadowing, which had been the two main failures in previous answer testing. We should re-run everything and identify any other issues, but I think it's close to ready.

Update 2: I've found and fixed a few issues with integers in the fortran files. In a few spots, values were being accessed outside the declared length of an array; in other spots I needed to update how integer constants were passed through function/subroutine calls. I think it currently passes all the tests I was able to run with the test_runner, and should be ready for testing by the wider group.
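
To make the integer-constant issue concrete, here is a minimal sketch (the names itypes, IKIND, and set_count are hypothetical, not from the PR). With 64-bit integers configured, a bare literal such as 1 is still a default-kind (typically 32-bit) integer; under the implicit interfaces common in F77-style code, passing it to a routine expecting the wider kind silently corrupts the argument:

    module itypes
      implicit none
      integer, parameter :: IKIND = selected_int_kind(18)  ! 64-bit integers
    contains
      subroutine set_count(n)
        integer(IKIND), intent(in) :: n
        print *, 'count =', n
      end subroutine set_count
    end module itypes

    program demo
      use itypes
      implicit none
      ! call set_count(1)      ! kind mismatch: 1 is a default-kind literal
      !                        ! (a compile error here, but silent corruption
      !                        ! through an implicit F77-style interface)
      call set_count(1_IKIND)  ! correct: the literal carries the configured kind
    end program demo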

Comments (15)

  1. Sam Skillman

    I just ran the quick suite on 27e5dea compared to the tip of the testing fork (cd233051015e), with the following broad results:

        Test Summary
        Tests Passed:  1343
        Tests Failed:  115
        Tests Errored: 65

    Looks like the main culprits are: test_photon_shadowing, test_freefall, and test_gravity_test (of course!)

    I'm now going to merge the two forks (this one and enzo-dev-answer-testing) to see if it gets rid of these things. I won't push any of it, but I'm guessing it will get rid of many of the issues. This has also illuminated more issues with the test results summary, which I will attempt to fix up.

  2. Daniel Reynolds author

    Thanks Sam, that sounds great! It will be interesting to see if the merge fixes the remaining issues.

    Do you know how many of our test problems really exercise all of the chemistry machinery? Those were the hardest to convert to the new precision format. Given the sheer number of lines of code that I had to modify for this, I wouldn't be surprised if there was an error or two that crept in.

    I'm not sure what test_freefall does, but the one that concerns me the most out of the three you mentioned is test_photon_shadowing, since I imagine that it uses most of John's chemistry improvements. I don't have time for a sprint today or tomorrow, but I'd be happy to hammer on this as needed starting Wednesday.

    -Dan

    1. Britton Smith

      I just ran the freefall test and the answer is only marginally changed, but enough to cause failure when comparing with previous results. The freefall test should be considered passing, and we should regenerate the gold standard for it when this PR is accepted.
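
      (To make "marginally changed, but enough to fail" concrete, here is a generic sketch of a relative-tolerance comparison; this is not the test_runner's actual logic, and all names and values are illustrative. A bitwise comparison fails on any change at all, while a check like this would pass the small shifts described above:)

          program relcheck
            implicit none
            integer, parameter :: RKIND = selected_real_kind(15, 307)
            real(RKIND), parameter :: tol = 1.0e-3_RKIND  ! 0.1% relative tolerance
            real(RKIND) :: gold, new
            gold = 1000.0_RKIND   ! stored gold-standard value (illustrative)
            new  = 1000.4_RKIND   ! value from this PR's build (illustrative)
            if (abs(new - gold) <= tol * abs(gold)) then
               print *, 'PASS: within relative tolerance'
            else
               print *, 'FAIL: exceeds relative tolerance'
            end if
          end program relcheck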

  3. Sam Skillman

    Hi Dan,

    Just to follow up, I get the same behavior with the merged version, so this will probably require some additional eyes.

    I'm not sure which tests exercise all the bells and whistles in the chemistry machinery. Britton Smith, Matt Turk?

    I could work on this a bit on Wednesday as well.

    1. Britton Smith

      The OneZoneFreefallTest tests the chemistry over a pretty large range in densities and exercises the molecular chemistry as well. I'm not sure we have anything else that tests the chemistry as thoroughly.

    2. Daniel Reynolds author

      So the new fortran precision stuff (even after all merges) results in different answers for three test problems: test_photon_shadowing, test_freefall and test_gravity_test?

      When you say it "fails" what does that mean? Does it still run to completion? When checking the answer does it do a bitwise comparison or a difference norm? Does it compare against a previous solution or an analytical solution?

      Since this also changes how we handle fortran constants, many of which had values hard-coded to low precision (e.g. pi = 3.14), I'd be surprised if the results were identical to previous results. Or maybe I made an error (or many)? So I imagine that actual verification/debugging could be somewhat challenging. I'd be happy to help work on this soon. Can anyone think of a good test to focus on that (a) fortran_types fails on, and (b) has an analytical solution?
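
      (As a back-of-the-envelope illustration of the constants issue, with hypothetical names only: upgrading a hard-coded low-precision constant shifts every result that depends on it, which alone is enough to break bitwise comparisons:)

          program pi_demo
            implicit none
            integer, parameter :: RKIND = selected_real_kind(15, 307)
            real(RKIND) :: pi_old, pi_new
            pi_old = 3.14                     ! old hard-coded value
            pi_new = 3.14159265358979_RKIND   ! full-precision replacement
            print *, pi_new - pi_old          ! ~1.6e-3: a relative shift of ~5e-4
          end program pi_demo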

  4. Michael Kuhlen

    I just ran the most recent tip (38b7ff2) through the quick suite test_runner with -O2, and found:

    Number of errors: 2
    Number of failures: 40
    Number of successes: 1487
    

    We understand the errors; they're just due to GravityTestSphere and ExtremeAdvectionTest currently not being run with -DSAB.

    The failures all occur in OneZoneFreefallTest.

        1. Britton Smith

          I ran the OneZoneFreefallTest and the results, while different enough from the gold standard to fail, are totally reasonable. The temperatures are off by less than 0.1%. I think this is entirely due to some constants being redefined at different precision, not something being screwed up. I think we should consider that test passing and remake its gold standard after the PR is accepted.

          1. Daniel Reynolds author

            Awesome. I can't tell you how frustrating it was trying to track down those errors. I imagine that the documentation will need some adjustment. I know that I'll need to update the Programming Guide and the Variable Precision sections of the Developer's Guide. Any thoughts on what else needs modification?

            1. Matt Turk

              Hi Dan -- those two for sure, but I think we also really need to hit the Makefile documentation hard. That could be part of a dedicated email to the mailing list; I think one of the best user-facing changes is that the compiler precision specifiers, which have gotten particularly tricky with recent GCC releases, are no longer necessary.

              1. Daniel Reynolds author

                Well, to head off that kind of interference from the users, in the fortran types PR I also completely removed the functionality in which Make.<stuff> uses those user-supplied precision definitions. So with the new approach, you can set them all day long in your MACH_FFLAGS_INTEGER_X and MACH_FFLAGS_REAL_X definitions, but they won't be used. The only mechanism a user has for overriding this is to add those flags directly to their MACH_FFLAGS and MACH_F90FLAGS definitions.

                I also removed the precision-related definitions from all of the Make.mach.* files in the repository.
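
                (A hypothetical Make.mach fragment illustrating the new behavior described above; the flag values are examples only:)

                    # Under the new scheme these precision-specific variables
                    # are no longer consulted by the build:
                    MACH_FFLAGS_INTEGER_64 = -fdefault-integer-8   # ignored
                    MACH_FFLAGS_REAL_64    = -fdefault-real-8      # ignored

                    # The only way to force such flags back in is to add them
                    # to the general-purpose flag variables directly:
                    MACH_FFLAGS   = -fno-second-underscore
                    MACH_F90FLAGS = -fno-second-underscore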

                Of course, educating the user base is important, and I/we can put something together for distribution soon.