This switches the calculation of a(t) and t(z) to interpolation from numerically integrated tables. This was originally done to make it possible to include the radiation component of the energy density of the universe (via the CosmologyOmegaRadiationNow parameter), but it can be used generally. By default, this uses a table of a vs. t with 1000 bins, evenly spaced in log(a). For the reverse calculation, I find the interpolation index by bisection and cache the last-used index so it can serve as the starting point for the next lookup.
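A minimal Python sketch of the scheme described above (the actual implementation is in Enzo's C++ source; the function and parameter names here, and the illustrative Omega values, are assumptions for demonstration only). It tabulates t(a) on a log-spaced grid by integrating the Friedmann equation, does the forward lookup by linear interpolation, and does the reverse lookup by bisection with the last-used index cached:

```python
import numpy as np

def build_table(omega_m=0.3, omega_l=0.7, omega_r=8.4e-5,
                a_min=1e-3, a_max=1.0, n_bins=1000):
    """Tabulate t(a) on a grid evenly spaced in log(a) by integrating
    dt = da / (a * H(a)) with the trapezoid rule. Times are in units
    of 1/H0. The radiation term omega_r is what the table-based
    approach makes easy to include."""
    omega_k = 1.0 - omega_m - omega_l - omega_r
    a = np.logspace(np.log10(a_min), np.log10(a_max), n_bins)
    H = np.sqrt(omega_m / a**3 + omega_r / a**4 +
                omega_k / a**2 + omega_l)
    dtda = 1.0 / (a * H)
    t = np.concatenate(([0.0], np.cumsum(
        0.5 * (dtda[1:] + dtda[:-1]) * np.diff(a))))
    return a, t

def t_of_a(a_query, a, t):
    """Forward lookup a -> t: linear interpolation in the table."""
    return np.interp(a_query, a, t)

class AofT:
    """Reverse lookup t -> a by bisection, caching the last bin index
    so consecutive queries (which usually land nearby) start warm."""
    def __init__(self, a, t):
        self.a, self.t = a, t
        self.last = 0

    def __call__(self, t_query):
        t = self.t
        if not (t[0] <= t_query <= t[-1]):
            raise ValueError("time outside tabulated range")
        i = self.last
        if i < len(t) - 1 and t[i] <= t_query <= t[i + 1]:
            lo = i                       # warm start: cached bin still valid
        else:
            lo, hi = 0, len(t) - 1
            while hi - lo > 1:           # standard bisection
                mid = (lo + hi) // 2
                if t[mid] <= t_query:
                    lo = mid
                else:
                    hi = mid
        self.last = lo
        # linear interpolation within the bracketing bin
        f = (t_query - t[lo]) / (t[lo + 1] - t[lo])
        return self.a[lo] * (1.0 - f) + self.a[lo + 1] * f
```

Because both directions interpolate linearly on the same table, a round trip a -> t -> a reproduces the input to machine precision within a bin.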
I ran tests for the three existing cosmologies for which analytical solutions exist in the code. The differences relative to the tabulated method are shown below.
- Omega_matter = 0.3, Omega_Lambda = 0.7
- Omega_matter = 1, Omega_Lambda = 0
- Omega_matter < 1, Omega_Lambda = 0
This looks great to me! The only minor point I can see: maybe raise a bit more noise if a value outside of the table is passed during the lookup?
@gbryan, OK, that's a good idea. I could also adjust the bounds of the table if they don't extend from CosmologySimulationInitialRedshift to CosmologySimulationFinalRedshift. What do you think of that?
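The bounds adjustment being proposed could look something like the sketch below (a hypothetical helper, not the actual Enzo code), using a = 1/(1 + z) to convert the simulation's redshift range into scale-factor bounds and widening the default table limits only when needed:

```python
def table_bounds(initial_redshift, final_redshift,
                 a_min=1e-3, a_max=1.0):
    """Widen the default tabulation bounds (a_min, a_max) so the table
    is guaranteed to cover the simulation's redshift range, using the
    relation a = 1 / (1 + z). Assumes initial_redshift > final_redshift."""
    a_init = 1.0 / (1.0 + initial_redshift)
    a_final = 1.0 / (1.0 + final_redshift)
    return min(a_min, a_init), max(a_max, a_final)
```

With this, an out-of-range lookup can only happen if a query falls outside the simulation's own redshift range, which makes the louder error suggested above a genuine red flag rather than a nuisance.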
I like that a lot!
I've just added this. I think this is ready to go, but I'm happy to wait until merging PR #392 so we can rerun the tests here and know which ones have changed.
Hi @brittonsmith @gbryan @jwise77, just a quick update: I was feeling particularly paranoid about this one, since a ton of tests were failing. As expected, all of the failed tests are due to the slight differences in timestep. Many of the failures are due to differences in stopping time in cosmological simulations, with differences typically at the ~10^-5 level or below (as would be expected from the graphs shown above). A couple of tests, in particular some MHD tests (MHDAdiabaticExpansion and MHDZeldovichPancake), showed differences at what appeared to be rather high fractional values, but upon closer inspection those numbers come from the way that projections and the like are computed in the test suite rather than from large cell-by-cell differences.
Anyway, my point is that after digging into the failing tests in some detail, I am now fully convinced that the code is doing the right thing and the differences in the results are acceptable. I'm going to merge the changes, and I think the next step is to update the gold standard!
Thanks for looking into this in detail, @gbryan! I'll issue a new PR now updating the gold standard.