Sunrise / ConvergenceTests

Introduction

Unlike grid refinement in adaptive-mesh hydro codes, which can be driven by well-defined local criteria, the long-range propagation of radiation means there is no local criterion that can guarantee the solution is accurate. Grid refinement in Sunrise is based on several different criteria, and it is much harder to know how high a resolution you need for your results to be converged. This text, distilled from an email conversation with Chris Hayward, will hopefully help you understand what's going on and what you need to think about when deciding what parameters to use.

For context, here's Chris's original question:

"I'd appreciate some guidance on what parameters I need to explore, how I can tell what quantities are not converging (e.g., intensity vs. dust temperature), etc. Here's what I'm thinking so far:

The relevant grid parameters are metal_tolerance, nrays_estimated, opacity, tau_tolerance, and max_level. It's clear to me that metal_tolerance, tau_tolerance, and max_level could all be really important, and I should probably just vary these and see how the result changes. It seems like I don't need to change opacity, and nrays_estimated should be increased if I increase the number of rays. Is there some way I can tell I have "good" parameters besides just seeing if the SED converges (as the more steps, and thus parameters involved, the harder it will be to understand what is causing a problem)?

My general method for testing is going to be: 1. Change one of the parameters (perturb it a bit around the recommended value). 2. See how the output changes. 3. Repeat as necessary. Does this all sound okay?"
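
That one-parameter-at-a-time sweep is easy to script. Here is a minimal sketch; it assumes a baseline config file of "keyword value" lines, and the metal_tolerance values in it are purely illustrative, not recommendations:

```python
# Hypothetical sweep driver: copy a baseline config, overriding one keyword.
def write_config(template_path, out_path, overrides):
    """Copy a "keyword value" config file, replacing any keyword in overrides."""
    with open(template_path) as f:
        lines = f.readlines()
    with open(out_path, "w") as f:
        for line in lines:
            parts = line.split()
            if parts and parts[0] in overrides:
                f.write(f"{parts[0]} {overrides[parts[0]]}\n")
            else:
                f.write(line)

# Perturb one parameter around its baseline, keeping everything else fixed.
for mt in [4e-4, 2e-4, 1e-4]:          # illustrative values only
    write_config("sfrhist.config", f"sfrhist_mt{mt:g}.config",
                 {"metal_tolerance": mt})
```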

For the exact descriptions of what the different parameters do, see the sfrhist keyword descriptions.

Grid refinement and IR equilibrium parameters

As long as you are using dust/metals rather than dust/gas, it is correct that metal_tolerance, tau_tolerance, and max_level are the most important ones. nrays_estimated will be important in regions of lower density, and I don't think your present problems are in that regime.

For these, testing for insensitivity to higher resolutions is the only way to know you are converged. In general, tau_tolerance is important for the dust temperature, while metal_tolerance deals with the attenuation of the stellar light. This is because metal_tolerance emphasizes resolving changes in the density, i.e., describing the density field rho(x) to a certain accuracy. This is the only thing affecting the primary calculation of the attenuation of starlight; there is no upper limit on the optical depth in the cells as long as the grid describes the density correctly.

The density field will be uniform on scales smaller than the smoothing lengths, so in the absence of tau_tolerance you could in principle set max_level arbitrarily high and there would be no cells that small anyway, because there is no structure on those scales. This fact can be verified -- first set tau_tolerance to 1e100 and look at the number of cells in the different refinement levels. Now crank up max_level and set size_factor to 0.1. There should be no cells at those levels.
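
A quick way to do that check, assuming you can read the grid into a per-cell array of refinement levels. The file name, HDU name, and column name below are hypothetical, not the actual Sunrise output layout, so adapt them to what sfrhist actually writes:

```python
# Count cells per refinement level; all names below are placeholders.
import numpy as np
from astropy.io import fits

with fits.open("grid.fits") as hdus:                 # hypothetical file
    levels = hdus["GRIDSTRUCTURE"].data["level"]     # hypothetical HDU/column

for lev in range(int(levels.max()) + 1):
    print(f"level {lev:2d}: {np.sum(levels == lev):8d} cells")
# With tau_tolerance = 1e100 and max_level cranked up, the deepest levels
# should stay empty: there is no structure below the smoothing lengths.
```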

That's metal_tolerance. Now, tau_tolerance is an additional criterion that does not care about differences between cells but only about the optical depth through the cell. It will further subdivide cells that have a V-band optical depth greater than tau_tolerance. The rationale, as I said before, is that the radiation field (and thus the dust temperature) can in principle change from one end of a cell to the other if that cell is optically thick, because the cell itself absorbs the radiation. We therefore want to avoid cells that are very optically thick, because we could miss this structure in the temperature field even if the density field is described accurately.

Note that this is essentially an orthogonal criterion to using metal_tolerance. Once the emerging stellar radiation is converged, the heating radiation is correctly calculated. The problem is that the recorded radiation field is the average over a cell, and this can miss structure in the radiation field. tau_tolerance is there to divide cells where it can be suspected that this is the case.

(As an aside, it's also possible for the radiation field to change significantly across a cell even if that cell is optically thin, for example if the radiation is due to a single point source. Just the 1/r^2 falloff in radiation intensity will mean the temperature will differ from one end of the cell to the other. Clearly, it's impossible to know this without global knowledge of the problem, and this is why determining the correct grid resolution is such an ill-defined problem.)

So what I suggest you do is turn off the IR stage (nrays_ir=0 and nrays_intensity=0) and test for convergence in the L_lambda_scatteri SEDs, which record the attenuated source emission, as you change metal_tolerance (and do check what happens when you increase nrays_estimated by 10x).
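
In terms of the sweep sketch above, this test just adds a set of mcrx overrides; the split of keywords between the sfrhist and mcrx configs below is an assumption about your setup:

```python
# Grid resolution is set at sfrhist time; the IR stage is an mcrx setting.
for mt in [4e-4, 2e-4, 1e-4]:                        # illustrative values
    write_config("sfrhist.config", f"sfrhist_mt{mt:g}.config",
                 {"metal_tolerance": mt})
write_config("mcrx.config", "mcrx_noir.config",
             {"nrays_ir": 0, "nrays_intensity": 0})  # IR stage off
```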

When that is converged, see what happens to the primary dust emission (i.e., comment out the xxx_equilibrium_tolerance keywords) when you put tau_tolerance back and decrease it. This is where you begin to be on your own, because the sims used in Jonsson, Groves & Cox 2010 are too optically thin for this to matter.

Hopefully, the scatter SED should not change much when you do this (though it will change some, because you are changing the description of the density field). The thing to watch is whether the L_lambda_iri SEDs converge with respect to tau_tolerance and max_level.
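
A simple convergence metric for these comparisons is the maximum fractional change between successive resolutions; how you load an SED out of the mcrx output depends on your setup, so that part is left as a stub here:

```python
import numpy as np

def max_frac_change(sed_coarse, sed_fine):
    """Maximum fractional SED change; both arrays on the same wavelength grid."""
    return np.max(np.abs(sed_fine - sed_coarse) / np.abs(sed_fine))

# e.g. call it converged when max_frac_change(...) < 0.05, and then check
# that 5% against the MC noise level (see the number-of-rays section below).
```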

When you've done this, try the full run including the equilibrium tolerance. I suggest you retry the same tau_tolerance and max_level tests you tried without it, just to make sure nothing weird is happening. I'm not sure what to expect; it's conceivable that the dust self-absorption will be more sensitive, but at first glance I see no reason for that to be the case. If anything, it should be less sensitive, because things are much more transparent in the IR.

You should definitely test the equilibrium_tolerance parameters too. The numbers used in JGC10 were tested and make no difference for the isolated spirals, but self-absorption is essentially unimportant there, so that's not much of a test. Drop them by an order of magnitude at a time (and be prepared to potentially wait a long time for the run to finish, as it can take many iterations for the radiation field to converge in optically thick cases).
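
"An order of magnitude at a time" just means a geometric ladder of values; the baseline t0 below is a placeholder, not a recommendation:

```python
t0 = 1e-2                                     # placeholder baseline tolerance
tolerances = [t0 * 0.1**k for k in range(4)]  # t0, t0/10, t0/100, t0/1000
```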

Number of rays

Next, Chris wondered about the various numbers of rays:

For mcrx, there are the various nrays, the various intensity criteria (i_forced, i_split), and the IR tolerances. I recall you said we typically need nrays_ir = 10 x the number of grid cells, but I can easily (and probably should) check that this is sufficient. (It is of course straightforward, and worthwhile, to check all the other nrays.) Should I worry about changing i_forced, i_split, etc., or do you think the parameters used in JGC10 are sufficient?

These are likely to be of less importance and only affect the MC noise in the calculation. That is easy to evaluate: just run 5 realizations with different seeds and look at the variance of the output SED. In our spirals it's < 1%, and for most wavelengths substantially less. I don't think we're concerned with that kind of random error; it's the hard-to-bound systematic (i.e., unconverged) errors that are our main worry.

A tricky thing about testing convergence with MC calculations is that as you crank up the grid resolution, you will get uncorrelated random sequences. This means that to be fully certain of what's happening, you should in principle estimate the variance in all cases to make sure you are not looking at noise, as changes smaller than the random uncertainty are not significant. As you increase the resolution and get more grid cells, you will need more rays to keep signal in the cells, or the noise will go up. I suggest that when your convergence test indicates changes smaller than 5%, you do a set of independent realizations at that grid resolution to again estimate the noise level.
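
Estimating that noise floor is a one-liner once the SEDs are loaded; load_sed() below is a hypothetical stub for however you read an SED out of the mcrx output:

```python
import numpy as np

# Five realizations that differ only in the random seed (load_sed is a stub).
seds = np.array([load_sed(f"run_seed{i}.fits") for i in range(5)])
mean = seds.mean(axis=0)
std = seds.std(axis=0, ddof=1)
noise = std / mean   # fractional MC noise per wavelength
# A change between two grid resolutions is only significant where it
# clearly exceeds this noise level.
```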

Opacity

Opacity is a physical constant and should not be changed. It's the conversion from column density to V-band optical depth. Different dust models have slightly different opacities, but this is really only an order of magnitude number.
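
For concreteness, the conversion is just tau_V = kappa_V * Sigma; the kappa_V value below is an illustrative order-of-magnitude number, not the Sunrise default:

```python
kappa_V = 3.0e4          # cm^2 per gram of dust; illustrative value only
sigma = 1.0e-4           # dust column density in g/cm^2
tau_V = kappa_V * sigma  # = 3.0, i.e. this column is optically thick in V
print(tau_V)
```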

What does enough resolution mean?

One opinion that was suggested regarding the refinement was to never increase the resolution beyond what you believe in the Gadget run, since you would then be over-resolving the simulation at scales where you don't trust it. Does this make sense?

This is quite a difficult question to answer, because it shifts from trying to calculate the right answer based on what the hydro simulations say about the geometry to second-guessing what you think the answer from the simulations should be. My opinion is that what we've talked about here regarding the convergence of the IR emission is not about "resolving scales where you don't believe the simulations"; it's just a matter of calculating the correct solution for the emission resulting from the geometry in the simulations. (When it comes to attenuation of the primary emission, the results are essentially insensitive to resolving scales on which there is no structure from the simulations.) That the nature of the radiation-transfer problem requires the geometry to be resolved on scales smaller than those of the hydro simulation does not imply anything beyond the fact that we believe the density field from the simulations.

In hydrodynamics, structure on scales smaller than your resolved scale may not affect your solution on the scales you do resolve. Unfortunately for those of us calculating radiation transfer, that is not true: clumping on scales below the resolution will systematically change the global solution by changing the effective opacity of the dust on larger scales. This is just one of the facts that make radiation-transfer problems so tricky to solve.
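
The effect has a simple mathematical root: because exp(-tau) is convex, the mean transmission of a clumpy medium always exceeds that of a uniform medium with the same mean optical depth (Jensen's inequality), so unresolved clumps make the dust effectively more transparent. A quick numerical demonstration with an arbitrary lognormal clump distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_tau = 2.0

# Clumpy optical depths with an arbitrary lognormal scatter, rescaled so
# the mean optical depth matches the uniform case exactly.
tau_clumpy = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
tau_clumpy *= mean_tau / tau_clumpy.mean()

print(np.exp(-mean_tau))           # uniform medium:  ~0.135
print(np.exp(-tau_clumpy).mean())  # clumpy medium: noticeably larger
```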
