
#239 Merged at eedc9eb
Source repository: chummels (branch: week-of-code)
Destination repository: enzo (branch: week-of-code)

Specifying 17 digits of precision (the maximum precision of a double) for the conversion factors and units written to output parameter files. This aids the accuracy of field conversion to CGS when data are output and read into analysis programs (e.g. yt).

Author: chummels

Description

As mentioned on the mailing list, this PR increases the precision of the conversion factors written to the parameter file, from the default of 6 significant digits to 17, the maximum precision of a C++ double. Now one can follow things like mass conservation precisely in post-processing, without inaccuracies due to significant round-off errors in the CGS conversion.
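
For illustration, here is a minimal standalone sketch (not Enzo code; the density value below is made up) of why 17 significant digits are the round-trip threshold for a 64-bit double:

    // Minimal sketch (not Enzo code): the default "%g" keeps only 6
    // significant digits, while "%.17g" preserves enough digits for the
    // printed value to parse back to the original double exactly.
    #include <cstdio>

    int main() {
      double density_units = 1.8788200386793017e-29;   // made-up CGS factor
      printf("%g\n",    density_units);   // prints 1.87882e-29 (lossy)
      printf("%.17g\n", density_units);   // prints all 17 digits
      return 0;
    }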

This has the downside that all parameters involving time also get 17 digits of precision, but I figure the additional precision isn't bad to have. As an example, here is an AMRCosmology test problem output:

http://paste.yt-project.org/show/4906/

I didn't see any other parameters that necessitated higher precision, but if others are desired, I can modify them.

UPDATE1: I've updated the code with John Wise's suggestions so that the output precision is automatically tied to the precision chosen at compile time. An updated "output" parameter file (precision-64) is found here:

http://paste.yt-project.org/show/4907/

NOTE: When I attempt to test this PR with precision-32, I get a bunch of errors unrelated to my changes. It appears that people have only been testing their changesets with precision-64, so enzo currently will not compile with precision-32. Two sets of errors are in the FLD infrastructure, so Daniel Reynolds should take a look at those. Further errors appear in the CosmologyComputeExpansionFactor.C declaration. I could make the FLD errors go away by declaring things with the default precision of the sim, but I don't want to break any functionality.

I believe this needs to be addressed before the new version goes out. We want enzo to compile with 32-bit precision, I think. But I don't think it should hold up this PR.

UPDATE2: OK. I just made the modification that we discussed (going to GOUTSYM). I've tested it, and it works appropriately with both particles-32 and particles-64. I've also made a slight change in ReadParameterFile.C which enables the code to compile with precision-32. I think this is good to go.
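
For reference, the kind of change this amounts to, as a rough sketch (the actual lines in WriteParameterFile.C may differ; the parameter names here are just examples):

    // Hypothetical excerpt in the style of WriteParameterFile.C: the format
    // string is assembled from the GOUTSYM macro by string-literal
    // concatenation, so the printed precision tracks the compile-time float
    // configuration instead of a hard-coded width.
    fprintf(fptr, "DensityUnits = %" GOUTSYM "\n", DensityUnits);
    fprintf(fptr, "TimeUnits    = %" GOUTSYM "\n", TimeUnits);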

Comments (19)

  1. John Wise

    Can you use the macro GOUTSYM instead of hard coding the precision? I know that it depends on particles-XX instead of precision-XX. But if we really want to be consistent, you can define a new macro like GOUTSYM but make it dependent on CONFIG_BFLOAT_X instead of CONFIG_PFLOAT_X.
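
    A sketch of what the suggested macro could look like (BOUTSYM is a hypothetical name and the format strings are illustrative; the real GOUTSYM definitions live in macros_and_parameters.h):

        /* Hypothetical BOUTSYM mirroring GOUTSYM, but keyed to the baryon
           float size (CONFIG_BFLOAT_X) rather than the particle float size
           (CONFIG_PFLOAT_X). */
        #ifdef CONFIG_BFLOAT_4
        #define BOUTSYM ".9g"    /* 32-bit float: 9 digits round-trips   */
        #endif
        #ifdef CONFIG_BFLOAT_8
        #define BOUTSYM ".17g"   /* 64-bit double: 17 digits round-trips */
        #endif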

    Also, why the "*" on the density units? It adds all of those spaces between the value and the equals sign.

    1. Matt Turk

      I'm personally somewhat disinclined to have a separate macro here for conversions. My understanding is that things like Time are modified by the particle precision, not the baryon precision -- and those are then input into the computation of the conversion factors. I'd be more interested in modifying all of this if computing the conversion factors by hand from the redshift and box size were wrong for the output files. Are those components sufficiently well-described in the output?

      1. chummels author

        According to the description of the units here,

        length: ComovingBoxSize / HubbleConstantNow * Mpc / (1+z)

        ComovingBoxSize and HubbleConstantNow are specified at initialization by the user (and are FLOATs defined by the compile-time precision), but the Mpc definition in phys_const.def is set to 3.0857e24 -- only 5 significant digits. This appears to be the source of the 5-significant-digit limit that propagates to the unit conversion factors (prior to this PR). While it may be true that the true accuracy of our unit conversion at initialization is only known to those 5 significant digits, the internal CGS conversion is consistently known to higher precision -- that of a float or double, depending on the compile-time precision.

        So to answer your question, one can only calculate the conversion factors by hand to 5 significant digits based on the outputs and knowing the Mpc macro, whereas I think it's necessary to do it to higher precision for reasons mentioned in my email (e.g. inconsistent mass conservation when automatically converted using the output CGS conversion factors).
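
        As a back-of-the-envelope sketch of that bottleneck (standalone, with made-up cosmology inputs; the fuller Mpc value is the standard IAU-derived one, not anything from phys_const.def):

            // Sketch: hand-computing LengthUnits from parameter-file values.
            // With Mpc known only to 5 significant digits, the hand-computed
            // factor disagrees with the internal one from the 5th digit on.
            #include <cstdio>

            int main() {
              double box = 1.0, h = 0.71, z = 99.0;        // made-up inputs
              double Mpc_coarse = 3.0857e24;               // phys_const.def
              double Mpc_fine   = 3.0856775814913673e24;   // full precision
              printf("%.17g\n", box / h * Mpc_coarse / (1.0 + z));
              printf("%.17g\n", box / h * Mpc_fine   / (1.0 + z));
              return 0;
            }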

        But you do raise a good question about the calculation of the Time-related variables. As far as I can tell in WriteParameterFile.C, all of the Time variables (e.g. InitialTime, StopTime, TimeUnits) are output using the macro GSYM, which is defined in macros_and_parameters.h according to the precision of the particles, not the baryons.

        So I'm not sure what to do here. Perhaps leaving TimeUnits tied to the precision specified for the particles, but making the Density and Velocity CGS conversion (and Temperature Units?) tied to the precision for the baryons?

        Thoughts?

          1. chummels author

            Using GOUTSYM for TimeUnits or for all of the conversion factors? Because the CGS length conversion should be tied to baryon precision not particle precision, right? Particle precision seems like it applies to things like the mass of particles, but not the spatial size of individual grid cells, which more directly reflects baryon precision.

            Either way, I guess this isn't a huge deal, since typically when people jump from 32-bit to 64-bit, they do it for both particle and baryon precision simultaneously. In fact, it seems to me that perhaps in future versions of enzo we should remove this distinction and just have precision 32 or 64 apply to all of the datatypes, wiping out all of these confusing options (inits, io, precision, particles, particle_ids). But maybe there are good reasons for retaining this subtle distinction in precision between different elements.

            1. Matt Turk

              I think we definitely should remove the distinction.

              Additionally, I do not think it should be tied to baryon precision, particularly since your concern was not related to baryon precision at all, but particle mass. Particle mass itself is a baryon quantity, which is mandated to be modified by "particle" quantities -- time, as well as length scale. The spatial size of individual grid cells is a "particle" quantity, not a "precision" quantity.

                1. Brian O'Shea

                  No, I am in favor of using GOUTSYM. Go for it.

                  (p.s., sorry for delays in response - I'm in Germany, so it's East Coast + 6 hours, and in a hotel with wireless only in the lobby, and only until 11 p.m... because apparently that's when the router gets tired.)

  2. chummels author

    Oops. The "*" on the density units was leftover from an intermediate change where I added ".*" to the precision to try to get all of the significant digits, but it ended up printing all digits, significant or not (and bloating the output). I'll remove it.

    I'll make the other correction you suggest. Thanks for the feedback.

  3. Daniel Reynolds

    Cameron, thanks for the heads-up on the compilation error in FLD with precision-32. I had some difficulty reproducing your error since I typically compile with Intel (where it works fine), but the issue came up with GNU and precision-32. I'll set up a PR for that fix momentarily.

      1. chummels author

        For those following this: Dan has made pull request #240, which corrects the FLD issues and mostly allows compilation. If that is accepted, I can update this PR with the remaining tweak that enables precision-32 to work out of the box.

    1. chummels author

      Are you running with particles-64? Presumably things shouldn't fail the standard tests with the defaults, since the defaults use strict='low', which only tests things to within 1e-3 precision, and this should only affect things at the >1e-6 level. But you're right that bitwise tests will all fail.
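
      For concreteness, the tolerance logic in question, as a minimal sketch (not the actual test-suite code):

          // Minimal sketch (not the actual test-suite code): under
          // strict='low' a relative tolerance of 1e-3 is applied, so a
          // conversion-factor change at the ~1e-6 level passes, while a
          // bitwise comparison fails on any change at all.
          #include <cmath>

          bool within_tolerance(double a, double b, double rtol = 1e-3) {
            return std::fabs(a - b) <= rtol * std::fabs(b);
          }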

      1. Sam Skillman

        Automatic testing uses 64-bit everything for the push suite, with bitwise comparison. Please do not accept this until the test suite has been run on it and Daniel Reynolds' PR is accepted, as I'd rather not have to generate more gold standards until both of those PRs are ready.