Right now Enzo does not do flux correction on the energy fields when cooling is turned on. That is fine, I think. However, it still converts the energy fields to conservative form before flux correction and back to specific form afterwards. This is a problem: even though the energy fields aren't flux corrected, the density field is, so the specific energies are not guaranteed to stay constant across flux correction.
This pull request prevents the energy fields from being converted to conservative form. This means that the internal energy field is guaranteed to stay constant during flux correction, making it impossible for grid::CorrectForRefinedFluxes to create temperature fluctuations in simulations that include cooling.
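The mechanism can be sketched with a toy calculation in Python (illustrative numbers only; this is not Enzo code):

```python
# Toy illustration of the bug: with cooling on, flux correction updates
# density but skips the energy fields, so round-tripping the energy through
# conservative (per-volume) form changes the specific internal energy.

rho_old = 1.0        # cell density before flux correction (arbitrary units)
e_specific = 2.5     # specific internal energy (per unit mass)

# Old behavior: convert to conservative form before flux correction...
E_conservative = rho_old * e_specific

# ...flux correction changes the density but not the (skipped) energy...
rho_new = 1.1

# ...and converting back to specific form changes the internal energy:
e_after = E_conservative / rho_new   # ~2.27, not 2.5: a spurious change

# With this PR the energy stays in specific form and is left untouched:
e_after_fix = e_specific
```

Keeping the energy in specific form makes the round trip a no-op by construction, regardless of what flux correction does to the density.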
I've also included some updates to the NASA Pleiades makefile.
I've tested this extensively with my isolated galaxy simulations and no longer see high-temperature crashes with this code. That said, this may change the results of production simulations, so it should only be accepted after sufficient vetting.
This fails the push suite (http://srs.slac.stanford.edu/hudson/job/enzo-dev/74/testReport/), presumably because PhotonTestAMR has radiative cooling and flux correction turned on. Given that, I think we need to look at the resulting simulation and see whether this change improves the solution. @ngoldbaum Can you run Enzo before and after this change on the failing tests and figure out whether we shouldn't be worried about this, or whether we should just generate a new gold standard? @dcollins4096 , @bwoshea , thoughts on this change? @chummels does this fix any of the runs you had going? @devinsilvia can you test the new Pleiades makefile, and if you have time test this on one of your cosmo sims?
Hi @samskillman I actually no longer have an active Pleiades allocation, so I can't test out that makefile.
The two runs are qualitatively identical. There are small differences in the plots, but they are only visible to my eye when blinking between the plots.
I've uploaded the images produced by scripts.py to two imgur albums:
The differences in PhotonTestAMR are small enough not to be a concern. I don't think it should be a deal-breaker for approval.
I'll test this ASAP and get back to you.
Hi Cameron, following up here: what's the status of the tests?
I'm working on it today. I'll let you know. As you know, these tests are not trivial to run/interpret.
Cameron, just following up on your tests: any updates?
I've done extensive testing on this PR, and I have found that for runs with cooling, this appears to fix the hot-spot issue when FluxCorrection and CorrectParentBoundaryFlux are turned on. Prior to this PR, my tests would crash after about 0.3 Gyr (cosmo runs starting at z=99), but now the run continues without problems for about 1 Gyr (the longest I've run it).
However, the hot-spot issue still occurs with these two parameters on when ConservativeInterpolation is also turned on (which it is by default). Keep in mind, this is on Stampede using Intel compilers with the MVAPICH2 MPI implementation. @bwoshea and @ngoldbaum appear to have found different results using GNU + OpenMPI, which I have been unable to reproduce (simply because I cannot get GNU + OpenMPI + Enzo running on any of the machines to which I have access).
For reference, I've generated some plots demonstrating global mass loss across the simulation for three nearly identical sim setups. All are cosmo runs with Grackle metal cooling, star formation, feedback, 2 nested grids, and 8 levels of AMR, so nearly a production-level run. The total mass in the simulation is ~1e15 Msun. They are all using the PPM hydro method. The plots show the absolute magnitude of delta(mass) from DD0000 to DD0120 (~1 Gyr, to z=4.5) for each mass component (DM, stars, gas, etc.). Upward arrows indicate mass gain, whereas downward arrows indicate mass loss. The title of each sim shows the net mass change from DD0001 to DD0120.
As you can see, there is significant mass loss in gas when FluxCorrection is turned off, implying that one would lose ~1e12 Msun over a simulation run to z=0! When FluxCorrection is turned on, mass conservation is almost 40x better, but still not amazing. When FluxCorrection and CorrectParentBoundaryFlux are both turned on, we get substantially better mass conservation, implying losses of ~3e7 Msun in the total simulation from z=99 to z=0, which is totally reasonable.
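For concreteness, here is a minimal sketch of the bookkeeping behind those plots. The component names and mass values below are hypothetical placeholders, not data from the actual runs:

```python
# Hypothetical per-component total masses (in Msun) at the first and last
# data dumps; values are made up for illustration, not taken from the runs.
masses_first = {"gas": 1.60e14, "dm": 8.40e14, "stars": 0.0}
masses_last = {"gas": 1.59e14, "dm": 8.40e14, "stars": 5.0e11}

for comp in masses_first:
    delta = masses_last[comp] - masses_first[comp]
    arrow = "up (gain)" if delta > 0 else "down (loss)" if delta < 0 else "flat"
    print(f"{comp}: |delta M| = {abs(delta):.2e} Msun, arrow {arrow}")

# Net mass change across the whole box (the number quoted in each plot title):
net = sum(masses_last.values()) - sum(masses_first.values())
print(f"net mass change: {net:+.2e} Msun")
```

Plotting |delta M| per component with the arrow direction encoding gain vs. loss, as described above, makes small non-conservation visible even when the net change nearly cancels.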
In summary, this PR should be merged, and I think we should turn off ConservativeInterpolation by default and turn FluxCorrection and CorrectParentBoundaryFlux on by default. Furthermore, I don't think the problem of hot spots and mass conservation is totally resolved, as I still get crummy mass conservation in @bwoshea 's adiabatic runs, even with FluxCorrection and CorrectParentBoundaryFlux on (run to z~4.5), as per: http://i.imgur.com/Sx4lits.png . But that result doesn't pertain directly to this PR. Great detective work, @ngoldbaum . Thank you for this code!
@chummels , thanks for following up on this and giving really detailed feedback. I've just realized that the reason I didn't notice the huge mass change in my adiabatic simulations is that I was comparing two intermediate data dumps (DD0010 and DD0018 using the dtDataDump in the parameter file I gave you), rather than using the t=0 data dump as my baseline. We definitely need to look into that a bit more, and I agree that it's probably independent of this PR.
I tested this PR on top of the changes in PR #217 that fixed the temperature spikes. With both applied, the adiabatic simulation is still well-behaved. I haven't compared the absolute differences around the temperature spike, but I plan to test that more rigorously this week.
Hopefully they'll be bitwise identical; this change should only affect simulations that include cooling.
John, I'm following up on this comment: did you see anything alarming in terms of temperature differences?
Sorry I didn't follow up on this, but I didn't see any differences between the two versions with this change.
No problem, thank you. Want to hit 'approve'?
I'm going to hit merge on this PR momentarily. Thanks again to all for the vigorous discussion, and to many people (particularly @ngoldbaum , @chummels , @jwise77 ) for doing a bunch of investigation.