Commits

John Wise  committed ac7f69c Merge

Merging in changes from WoC.

  • Parent commits 8458f8a, 9f1830f
  • Branches active_particles

Files changed (374)

 DEPEND.bak
 auto_show_*.C
 enzo.exe
+inits.exe
+ring.exe
+anyl.exe
+enzohop.exe
 out.compile
 out.make.DEPEND
 svn_version.def
 Make.config.override
 *.mod
 
-src/inits/inits.exe
 src/mpgrafic/degraf/Makefile
 src/mpgrafic/degraf/config.h
 src/mpgrafic/degraf/config.log
 src/P-GroupFinder/P-GroupFinder
 
 doc/manual/build
+
+run/results.js

File doc/manual/source/EnzoParameters.rst

+.. _parameters:
+
 Enzo Parameter List
 ===================
 
     Causes the simulation to immediately stop when a specified level is
     reached. Default value 0 (off), possible values are levels 1
     through maximum number of levels in a given simulation.
+``NumberOfOutputsBeforeExit`` (external)
+    After this many datadumps have been written, the code will exit.  If 
+    set to 0 (default), this option will not be used.  Default: 0.
 ``StopCPUTime`` (external)
     Causes the simulation to stop if the wall time exceeds ``StopCPUTime``.
     The simulation will output if the wall time after the next
     methods). If using multiple species (i.e. ``MultiSpecies`` > 0), then
     this value is ignored in favor of a direct calculation (except for
    PPM LR). Default: 5/3.
+``Mu`` (external)
+    The mean molecular weight. Default: 0.6.
 ``ConservativeReconstruction`` (external)
     Experimental.  This option turns on the reconstruction of the
     left/right interfaces in the Riemann problem in the conserved
     completely baryon dominated. It is used to remove the discreteness
     effects of the few remaining dark matter particles. Not used if set
     to a value less than 0. Default: -1
+
+External Gravity Source
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+These parameters set up an external static background gravity source that is
+added to the acceleration field for the baryons and particles.
+
 ``PointSourceGravity`` (external)
-    This flag (1 - on, 0 - off) indicates if there is to be a
-    (constant) point source gravitational field. Default: 0
+    This parameter turns on a static gravitational field with a
+    point-source profile (``PointSourceGravity`` = 1) or an NFW
+    profile (``PointSourceGravity`` = 2). Default: 0
 ``PointSourceGravityConstant`` (external)
-    The magnitude of the point source acceleration at a distance of 1
-    length unit. Default: 1
+    If ``PointSourceGravity`` = 1, this is the magnitude of the point
+    source acceleration at a distance of 1
+    length unit (i.e. GM in code units). If ``PointSourceGravity`` =
+    2, then it takes the mass of the dark matter halo in CGS
+    units. ``ProblemType`` = 31 (galaxy disk simulation) automatically calculates
+    values for ``PointSourceGravityConstant`` and
+    ``PointSourceGravityCoreRadius``. Default: 1
+``PointSourceGravityCoreRadius`` (external)
+    For ``PointSourceGravity`` = 1, this is the radius inside which
+    the acceleration field is smoothed in code units. With ``PointSourceGravity`` =
+    2, it is the scale radius, rs, in CGS units (see Navarro, Frenk & White,
+    1997). Default: 0
 ``PointSourceGravityPosition`` (external)
     If the ``PointSourceGravity`` flag is turned on, this parameter
-    specifies the center of the point-source gravitational field.
-    Default: 0 0 0
+    specifies the center of the point-source gravitational field in
+    code units. Default: 0 0 0
+``ExternalGravity`` (external)
+    This fulfills the same purpose as ``PointSourceGravity`` but is
+    more aptly named. ``ExternalGravity = 1`` turns on an alternative
+    implementation of the NFW profile with properties defined via the
+    parameters ``HaloCentralDensity``, ``HaloConcentration`` and
+    ``HaloVirialRadius``; the box size is assumed to be 1.0 in this
+    case. ``ExternalGravity = 10`` gives a gravitational field defined
+    by the logarithmic potential in Binney & Tremaine, corresponding
+    to a disk with constant circular velocity. Default: 0
+``ExternalGravityConstant`` (external)
+    If ``ExternalGravity = 10``, this is the circular velocity of the disk in code units. Default: 0.0
+``ExternalGravityDensity`` 
+   Reserved for future use.
+``ExternalGravityPosition`` (external)
+    If ``ExternalGravity = 10``, this parameter specifies the center of the gravitational field in code units. Default: 0 0 0
+``ExternalGravityOrientation`` (external)
+    For ``ExternalGravity = 10``, this is the unit vector of the disk's angular momentum (e.g. a disk whose face-on view is oriented in the x-y plane would have ``ExternalGravityOrientation = 0 0 1``). Default: 0 0 0 
+``ExternalGravityRadius`` (external)
+   If ``ExternalGravity = 10``, this marks the inner radius of the disk in code units within which the velocity drops to zero. Default: 0.0
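+
+As a (hypothetical) illustration, a disk with constant circular velocity
+centered in the box and lying in the x-y plane might be set up as follows;
+the numerical values are chosen only for example:
+
+::
+
+    ExternalGravity            = 10
+    ExternalGravityConstant    = 0.2         // circular velocity in code units
+    ExternalGravityPosition    = 0.5 0.5 0.5
+    ExternalGravityOrientation = 0 0 1
+    ExternalGravityRadius      = 0.05        // velocity drops to zero inside this radius
+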
 ``UniformGravity`` (external)
     This flag (1 - on, 0 - off) indicates if there is to be a uniform
     gravitational field. Default: 0
 ``RadiativeCooling`` (external)
     This flag (1 - on, 0 - off) controls whether or not a radiative
     cooling module is called for each grid. There are currently several
-    possibilities, controlled by the value of another flag. Default: 0
+    possibilities, controlled by the value of another flag. See :ref:`cooling` 
+    for more information on the various cooling methods.  Default: 0
     
     -  If the ``MultiSpecies`` flag is off, then equilibrium cooling is
        assumed and one of the following two will happen. If the parameter
     is valid for temperatures greater than 10,000 K. This requires the
     file ``TREECOOL`` to execute. Default: 0
 ``MetalCooling`` (external)
-    This flag (0 - off, 1 - metal cooling from Glover & Jappsen 2007, 2
-    - Cen, 3 - Cloudy cooling from Smith, Sigurdsson, & Abel 2008)
-    turns on metal cooling for runs that track metallicity. Option 1 is
-    valid for temperatures between 100 K and 10\ :sup:`8`\  K because
-    it considers fine-structure line emission from carbon, oxygen, and
-    silicon and includes the additional metal cooling rates from
-    Sutherland & Dopita (1993). Option 2 is only valid for temperatures
-    above 10\ :sup:`4`\  K. Option 3 uses multi-dimensional tables of
-    heating/cooling values created with Cloudy and optionally coupled
-    to the ``MultiSpecies`` chemistry/cooling solver. This method is valid
-    from 10 K to 10\ :sup:`8`\  K. See the Cloudy Cooling parameters below.
-    Default: 0.
+    This flag (0 - off, 1 - metal cooling from Glover & Jappsen 2007,
+    2 - Cen et al. (1995), 3 - Cloudy cooling from Smith, Sigurdsson, &
+    Abel 2008) turns on metal cooling for runs that track
+    metallicity. Option 1 is valid for temperatures between 100 K and
+    10\ :sup:`8`\ K because it considers fine-structure line emission
+    from carbon, oxygen, and silicon and includes the additional metal
+    cooling rates from Sutherland & Dopita (1993). Option 2 is only
+    valid for temperatures above 10\ :sup:`4`\ K. Option 3 uses
+    multi-dimensional tables of heating/cooling values created with
+    Cloudy and optionally coupled to the ``MultiSpecies``
+    chemistry/cooling solver. This method is valid from 10 K to 10\
+    :sup:`8`\ K. See the Cloudy Cooling parameters below.  Default: 0.
 ``MetalCoolingTable`` (internal)
     This field contains the metal cooling table required for
     ``MetalCooling`` option 1. In the top level directory input/, there are
     ``MultiSpecies`` and ``RadiationFieldType`` are forced to 0 and
     ``RadiativeCooling`` is forced to 1.
     [Not in public release version]
-
+``PhotoelectricHeating`` (external)
+    If set to 1, Gamma_pe = 5.1e-26 erg/s will be added uniformly
+    to the gas without any shielding (Tasker & Bryan 2008). At the
+    moment this is still experimental. Default: 0
 ``MultiMetals`` (external)
     This was added so that the user could turn on or off additional
     metal fields - currently there is the standard metallicity field
     (Metal_Density) and two additional metal fields (Z_Field1 and
     Z_Field2). Acceptable values are 1 or 0, Default: 0 (off).
 
+.. _cloudy_cooling:
+
 Cloudy Cooling
 ~~~~~~~~~~~~~~
 
     are made with the intention that only the cooling rates are to be
     used. Default: 0 (off).
 
-``IncludeCloudyMMW`` (external)
-    An integer (0 or 1) specifying whether the additional mean
-    molecular weight contributed by the metals be used in the
-    conversion from internal energy to temperature. These values will
-    come from the Cloudy dataset. For metallicities less than solar,
-    this addition will be negligible. Default: 0 (off).
-
 ``CMBTemperatureFloor`` (external)
     An integer (0 or 1) specifying whether a temperature floor is
     created at the temperature of the cosmic microwave background
     code by subtracting the cooling rate at T\ :sub:`CMB`\  such that
     Cooling = Cooling(T) - Cooling(T\ :sub:`CMB`\ ). Default: 1 (on).
 
-``CloudyMetallicityNormalization`` (external)
-    A float value used in the conversion of metal density into
-    metallicity. This value will change depending on the specific
-    abundance patterns used to make the Cloudy dataset. The value of
-    this factor is calculated as the sum of (A\ :sub:`i`\  \*
-    m\ :sub:`i`\ ) over all elements i heavier than He, where
-    A\ :sub:`i`\  is the solar number abundance relative to H and
-    m\ :sub:`i`\  is the atomic mass. For the solar abundance pattern
-    from the latest version of Cloudy, using all metals through Zn,
-    this value is 0.018477. Default: 0.018477.
-
 ``CloudyElectronFractionFactor`` (external)
     A float value to account for additional electrons contributed by
     metals. This is only used with Cloudy datasets with dimension
     or 1.  See :ref:`distributed_feedback` for an illustration.
     Default: 0.
 
+``StarMakerTypeIaSNe`` (external)
+    This parameter turns on thermal and chemical feedback from Type Ia
+    supernovae.  The mass loss and luminosity of the supernovae are
+    determined from `fits of K. Nagamine
+    <http://www.physics.unlv.edu/~kn/SNIa_2/>`_.  The ejecta are
+    traced in a separate species field, ``MetalSNIa_Density``.  The
+    metallicity of star particles that comes from this ejecta is
+    stored in the particle attribute ``typeia_fraction``.  Can be used
+    with ``StarParticleCreation`` = 0, 1, 2, 5, 7, and 8.  Default:
+    0.
+
+``StarMakerPlanetaryNebulae`` (external) 
+    This parameter turns on thermal and chemical feedback from
+    planetary nebulae.  The mass loss and luminosity are taken from
+    the same `fits from K. Nagamine
+    <http://www.physics.unlv.edu/~kn/SNIa_2/>`_.  The chemical
+    feedback injects gas with the same metallicity as the star
+    particle, and the thermal feedback equates to a 10 km/s wind.  The
+    ejecta are not stored in their own species field.  Can be used
+    with ``StarParticleCreation`` = 0, 1, 2, 5, 7, and 8.  Default: 0.
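+
+As a (hypothetical) illustration, both feedback channels could be enabled
+on top of one of the supported star makers; the choice of star maker here
+is only an example:
+
+::
+
+    StarParticleCreation      = 2
+    StarMakerTypeIaSNe        = 1
+    StarMakerPlanetaryNebulae = 1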
+
 Normal Star Formation
 ^^^^^^^^^^^^^^^^^^^^^
 
     returned to the gas phase as thermal energy. Default: 1e-5
 ``StarEnergyToStellarUV`` (external)
     The fraction of the rest-mass energy of the stars created which is
-    returned as UV radiation with a young star spectrum. This is used when calculating the radiation background. Default: 3e-6
+    returned as UV radiation with a young star spectrum. This is used
+    when calculating the radiation background. Default: 3e-6
 ``StarEnergyToQuasarUV`` (external)
-    The fraction of the rest-mass energy of the stars created which is returned as UV radiation with a quasar spectrum. This is used when calculating the radiation background. Default: 5e-6
+    The fraction of the rest-mass energy of the stars created which is
+    returned as UV radiation with a quasar spectrum. This is used when
+    calculating the radiation background. Default: 5e-6
 
 Population III Star Formation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         #order: MBH mass (in Ms), MBH location[3], MBH creation time
         100000.0      0.48530579      0.51455688      0.51467896      0.0
 
+.. _radiation_backgrounds:
 
 Background Radiation Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ``RadiationFieldType`` (external)
     This integer parameter specifies the type of radiation field that
-    is to be used. It can currently only be used if ``MultiSpecies`` = 1
-    (i.e. no molecular H support). The following values are used.
-    Default: 0
+    is to be used. Except for ``RadiationFieldType`` = 9, which should
+    be used with ``MultiSpecies`` = 2, UV backgrounds can currently
+    only be used with ``MultiSpecies`` = 1 (i.e. no molecular H
+    support). The following values are used. Default: 0
 
    ::
   
      1. Haardt & Madau spectrum with q_alpha=1.5
      2. Haardt & Madau spectrum with q_alpha = 1.8
-     3. reserved for experimentation
+     3. Modified Haardt & Madau spectrum to match observations
+        (Kirkman & Tytler 2005).
     4. H&M spectrum (q_alpha=1.5) supplemented with an X-ray Compton heating
-         background from Madau & Efstathiou (see astro-ph/9902080)
+        background from Madau & Efstathiou (see astro-ph/9902080)
      9. a constant molecular H2 photo-dissociation rate
      10. internally computed radiation field using the algorithm of Cen & Ostriker
      11. same as previous, but with very, very simple optical shielding fudge
     hydrogen (H2) dissociation rate. There a normalization is performed
     on the rate by multiplying it with ``RadiationSpectrumNormalization``.
     Default: 1e-21
+``RadiationFieldRedshift`` (external)
+    This parameter specifies the redshift at which the radiation field
+    is calculated.  Default: 0
 ``RadiationShield`` (external)
     This parameter specifies whether the user wants to employ
     approximate radiative-shielding. This parameter will be
     automatically turned on when RadiationFieldType is set to 11. See
     ``calc_photo_rates.src``. Default: 0
+``RadiationRedshiftOn`` (external)
+    The redshift at which the UV background turns on. Default: 7.0.
+``RadiationRedshiftFullOn`` (external)
+    The redshift at which the UV background is at full strength.
+    Between z = ``RadiationRedshiftOn`` and z =
+    ``RadiationRedshiftFullOn``, the background is gradually ramped up
+    to full strength. Default: 6.0.
+``RadiationRedshiftDropOff`` (external)
+    The redshift at which the strength of the UV background begins to
+    reduce gradually, reaching zero by ``RadiationRedshiftOff``.
+    Default: 0.0.
+``RadiationRedshiftOff`` (external)
+    The redshift at which the UV background is fully off. Default: 0.0.
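+
+As a (hypothetical) illustration, a Haardt & Madau q_alpha = 1.8 background
+that turns on at z = 9 and reaches full strength by z = 8 could be requested
+with (the redshift values are chosen only for example):
+
+::
+
+    MultiSpecies            = 1
+    RadiationFieldType      = 2
+    RadiationRedshiftOn     = 9.0
+    RadiationRedshiftFullOn = 8.0
+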
 ``AdjustUVBackground`` (external)
     Add description. Default: 1.
 ``SetUVAmplitude`` (external)
     Add description. Default: 1.8.
 ``RadiationSpectrumSlope`` (external)
     Add description. Default: 1.5.
-``PhotoelectricHeating`` (external)
-    If set to be 1, Gamma_pe = 5.1e-26 erg/s will be added uniformly
-    to the gas without any shielding (Tasker & Bryan 2008). At the
-    moment this is still experimental. Default: 0
 
 Minimum Pressure Support Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ``UseMinimumPressureSupport`` (external)
     When radiative cooling is turned on, and objects are allowed to
-    collapse to very small sizes (i.e. a few cells), and they are
-    evolved for many, many dynamical times, then unfortunate things
-    happen. Primarily, there is some spurious angular momentum
-    generation, and possible some resulting momentum non-conservation.
-    To alleviate this problem, a very simple fudge was introduced: if
-    this flag is turned on, then a minimum temperature is applied to
-    grids with level == ``MaximumRefinementLevel``. This minimum
-    temperature is that required to make each cell Jeans stable
-    multiplied by the parameter below. If you use this, it is advisable
-    to also set the gravitational smoothing length in the form of
-    ``MaximumGravityRefineLevel`` to 2 or 3 less than
-    ``MaximumRefinementLevel``. Default: 0
+    collapse to very small sizes so that their Jeans length is no
+    longer resolved, then they may undergo artificial fragmentation
+    and angular momentum non-conservation.  To alleviate this problem,
+    as discussed in more detail in Machacek, Bryan & Abel (2001), a
+    very simple fudge was introduced: if this flag is turned on, then
+    a minimum temperature is applied to grids with level ==
+    ``MaximumRefinementLevel``. This minimum temperature is that
+    required to make each cell Jeans stable multiplied by the
+    parameter below.  More precisely, the temperature of a cell is set
+    such that the resulting Jeans length is the square root of the
+    parameter ``MinimumPressureSupportParameter`` times the cell size.
+    So, for the default value of 100 (see below), this ensures that
+    the ratio of the Jeans length to the cell size is at least 10.
+    Default: 0
 ``MinimumPressureSupportParameter`` (external)
-    This is the parameter alluded to above. Very roughly speaking, is
-    the number of cells over which a gravitationally bound small cold
-    clump, on the most refined level, will be spread over. Default:
-    100
+    This is the numerical parameter discussed above. Default: 100
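+
+Schematically, writing the textbook Jeans length as
+:math:`\lambda_{\rm J}^2 = \pi \gamma k_{\rm B} T / (G \mu m_{\rm H} \rho)`
+(the exact constants used in the code may differ slightly), the floor
+applied on the finest level corresponds to
+
+.. math::
+
+   T_{\rm min} = \frac{K \, G \mu m_{\rm H} \rho \, (\Delta x)^2}{\pi \gamma k_{\rm B}},
+
+where :math:`K` is ``MinimumPressureSupportParameter`` and :math:`\Delta x`
+is the cell width on the maximum refinement level.
+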
 
 Radiative Transfer (Ray Tracing) Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     depth. To interpolate, we need to calculate the vertex interpolated
     density fields. Default: 0.
 ``RadiativeTransferSourceClustering`` (internal)
-    Set to 1 to turn on ray merging. Not fully tested and may still be
-    buggy. Default: 0.
+    Set to 1 to turn on ray merging from combined virtual sources on a
+    binary tree. Default: 0.
 ``RadiativeTransferPhotonMergeRadius`` (internal)
     The radius at which the rays will merge from their SuperSource,
     which is the luminosity weighted center of two sources. This radius
     is in units of the separation of two sources associated with one
     SuperSource. If set too small, there will be angular artifacts in
-    the radiation field. Default: 10.
+    the radiation field. Default: 2.5
 ``RadiativeTransferTimestepVelocityLimit`` (external)
     Limits the radiative transfer timestep to a minimum value that is
     determined by the cell width at the finest level divided by this
     ``mbh_particle_io.dat``
 
 Conduction
-^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~
 
 Isotropic and anisotropic thermal conduction are implemented using the
 method of Parrish and Stone: namely, using an explicit, forward
     a value of 0.5 or less.
     Default: 0.5
 
+Shock Finding Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~
+For details on shock finding in Enzo see :ref:`shock_finding`.
+
+``ShockMethod`` (external)
+    This parameter controls the use and type of shock finding. Default: 0
+    
+    ::
+
+        0 - Off
+        1 - Temperature Dimensionally Unsplit Jumps
+        2 - Temperature Dimensionally Split Jumps
+        3 - Velocity Dimensionally Unsplit Jumps
+        4 - Velocity Dimensionally Split Jumps
+
+``ShockTemperatureFloor`` (external)
+    When calculating the mach number using temperature jumps, set the
+    temperature floor in the calculation to this value.
+
+``StorePreShockFields`` (external)
+    Optionally store the Pre-shock Density and Temperature during data output.
+
+
 .. _testproblem_param:
 
 Test Problem Parameters

File doc/manual/source/developer_guide/ModificationIntro.rst

 The Patch Directory
 --------------------
 
-If you are experimenting with a code change or just debugging, then the patch directory, found in the top level of your Enzo directory, may be of use. Files put in here are compiled in preference to those in ``/src/enzo``, so you can implement changes without overwriting the original code. To use this feature, run ``make`` from inside ``/patch``.
+If you are experimenting with a code change or just debugging, then
+the patch directory, found in the top level of your Enzo directory,
+may be of use. Files put in here are compiled in preference to those
+in ``/src/enzo``, so you can implement changes without overwriting the
+original code. To use this feature, run ``make`` from inside
+``/patch``. You may need to add ``-I../src/enzo`` to the
+``MACH_INCLUDES`` line of your machine makefile
+(e.g. ``Make.mach.triton``) to ensure the .h files are found when compiling.
 
 As an example, suppose you wish to check the first few values of the acceleration field as Enzo runs through ``EvolveLevel.C``. Copy ``EvolveLevel.C`` from ``/src/enzo`` into ``/patch`` and put the appropriate print statements throughout that copy of the routine. Then recompile Enzo from inside the patch directory. When you no longer want those changes, simply delete EvolveLevel.C from ``/patch`` and the next compile of the code will revert to using the original ``/src/enzo/EvolveLevel.C``. If you make adjustments you wish to keep, just copy the patch version of the code into ``/src/enzo`` to replace the original.
 

File doc/manual/source/physics/cooling.rst

 Cooling and Heating of Gas
 ==========================
 
-.. sectionauthor:: Britton Smith <brittonsmith@gmail.com>
+Enzo features a number of different methods for including radiative
+cooling.  These range from simple tabulated, analytical approximations
+to very sophisticated non-equilibrium primordial chemistry.  All of
+these methods require that the parameter ``RadiativeCooling`` be set to 1.
+Other parameters are required for using the various methods, which are
+described below.
+
+MultiSpecies = 0: Sarazin & White
+---------------------------------
+*Source: solve_cool.F, cool1d.F*
+
+``RadiativeCooling`` = 1
+
+``MultiSpecies`` = 0
+
+This method uses an analytical approximation from Sarazin & White
+(1987, ApJ, 320, 32) for a fully ionized gas with metallicity of 0.5
+solar.  This cooling curve is valid over the temperature range from
+10,000 K to 10\ :sup:`9`\  K.  Since this assumes a fully ionized gas, the
+cooling rate is effectively zero below 10,000 K.
+
+*Note: In order to use this cooling method, you must copy the file,
+cool_rates.in, from the input directory into your simulation directory.*
+
+MultiSpecies = 1, 2, or 3: Primordial Chemistry and Cooling
+-----------------------------------------------------------
+*Source: multi_cool.F, cool1d_multi.F*
+
+This method follows the nonequilibrium evolution of primordial
+(metal-free) gas.  The chemical rate equations are solved using a
+semi-implicit backward differencing scheme described by Abel et
+al. (1997, New Astronomy, 2, 181) and Anninos et al. (1997, New
+Astronomy, 2, 209).  Heating and cooling processes include atomic line
+excitation, recombination, collisional excitation, free-free
+transitions, Compton scattering of the cosmic microwave background and
+photoionization from a variety of metagalactic UV backgrounds.  For 
+``MultiSpecies`` > 1, molecular cooling is also included and UV
+backgrounds that include photodissociation may also be used.
+Numerous chemistry and cooling rates have been added or updated.  For
+the exact reference for any given rate, users are encouraged to
+consult *calc_rates.F*.
+
+#. Atomic
+
+   ``RadiativeCooling`` = 1
+
+   ``MultiSpecies`` = 1
+
+   Only atomic species, H, H\ :sup:`+`\, He, He\ :sup:`+`\, He\
+   :sup:`++`\, and e\ :sup:`-`\  are followed.  Since 
+   molecular species are not treated, the cooling is effectively zero for
+   temperatures below roughly 10,000 K.
+
+#. Molecular Hydrogen
+
+   ``RadiativeCooling`` = 1
+
+   ``MultiSpecies`` = 2
+
+   Along with the six species above, H\ :sub:`2`\, H\
+   :sub:`2`:sup:`+`\, and H\ :sup:`-`\  are also followed.
+   In addition to the rates described in Abel et al. (1997) and Anninos
+   et al. (1997), H2 formation via three-body reactions as described by
+   Abel, Bryan, and Norman (2002, Science, 295, 93) is also included.
+   This method is valid in the temperature range of 1 K to 10\
+   :sup:`8`\  K and up to number densities of roughly 10\ :sup:`9`\  cm\ :sup:`-3`\.
+   Additionally, three-body heating (4.48eV per molecule formed or dissociated)
+   is added as appropriate.
+
+#. Deuterium
+
+   ``RadiativeCooling`` = 1
+
+   ``MultiSpecies`` = 3
+
+   In addition to the nine species solved with ``MultiSpecies`` = 2,
+   D, D\ :sup:`+`\, and HD are also followed.  The range of validity
+   is the same as for ``MultiSpecies`` = 2.
+
+Metal Cooling
+-------------
+
+Three distinct methods exist to calculate the cooling from elements
+heavier than He.  These are selected by setting the ``MetalCooling``
+parameter to 1, 2, or 3.
+
+#. John Wise's metal cooling.
+
+   ``RadiativeCooling`` = 1
+
+   ``MetalCooling`` = 1
+
+#. Cen et al. (1995) cooling. This uses output from a Raymond-Smith
+   code to determine cooling rates for T > 10\ :sup:`4`\ K.  No ionizing
+   background is used in computing cooling rates.  This method has
+   not been extensively tested in the context of Enzo.
+
+   ``RadiativeCooling`` = 1
+
+   ``MetalCooling`` = 2
+
+#. Cloudy cooling.
+
+   *Source: cool1d_cloudy.F*
+
+   ``RadiativeCooling`` = 1
+
+   ``MetalCooling`` = 3
+
+   ``MultiSpecies`` = 1, 2, or 3
+
+   Cloudy cooling operates in conjunction with the primordial
+   chemistry and cooling from ``MultiSpecies`` set to 1, 2, or 3.
+   As described in Smith, Sigurdsson, & Abel (2008), Cloudy cooling
+   interpolates over tables of precomputed cooling data using the
+   Cloudy photoionization software (Ferland et al. 1998, PASP, 110,
+   761, http://nublado.org).  The cooling datasets can be one to
+   five dimensional.  The range of validity will depend on the
+   dataset used.
+
+   #. Temperature
+   #. Density and temperature.
+   #. Density, metallicity, and temperature.
+   #. Density, metallicity, electron fraction, and temperature.
+   #. Density, metallicity, electron fraction, redshift of UV
+      background, and temperature.
+
+   See :ref:`cloudy_cooling` for additional parameters that control
+   the behavior of the Cloudy cooling.  For more information on
+   obtaining or creating Cloudy cooling datasets, contact Britton
+   Smith (brittonsmith@gmail.com).
+
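+For example, the ``OneZoneFreefallTest`` parameter file in this changeset
+enables Cloudy cooling on top of the ``MultiSpecies`` = 2 network with:
+
+::
+
+    RadiativeCooling      = 1
+    MultiSpecies          = 2
+    MetalCooling          = 3
+    CloudyCoolingGridFile = solar_2008_3D_metals.h5
+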
+UV Meta-galactic Backgrounds
+----------------------------
+*Source: RadiationFieldCalculateRates.C*
+
+A variety of spatially uniform photoionizing and photodissociating
+backgrounds are available, mainly by setting the parameter
+``RadiationFieldType``.  These radiation backgrounds are redshift
+dependent and work by setting the photoionization and photoheating
+coefficients for H, He, and He\ :sup:`+`\.  See
+:ref:`radiation_backgrounds` for the additional parameters that
+control the UV backgrounds.
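+
+A minimal (hypothetical) selection of the Haardt & Madau q_alpha = 1.8
+background, for instance, would be:
+
+::
+
+    RadiativeCooling   = 1
+    MultiSpecies       = 1
+    RadiationFieldType = 2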

File doc/manual/source/physics/cosmic_rays.rst

-.. _cosmic_rays:
-
-Cosmic Rays
-==================
-.. sectionauthor:: Sam Skillman <Samuel.Skillman@Colorado.EDU>
-.. versionadded:: 2.1
-

File doc/manual/source/physics/index.rst

    hydro_methods.rst
    cooling.rst
    radiative_transfer.rst
-   cosmic_rays.rst
    shock_finding.rst
 

File doc/manual/source/physics/radiative_transfer.rst

 
 Radiative Transfer
 ==================
-.. sectionauthor:: John Wise <jwise@astro.princeton.edu>
+.. sectionauthor:: John Wise <jwise@physics.gatech.edu>
 .. versionadded:: 2.0
+
+Adaptive Ray Tracing
+--------------------
+
+The radiative transfer equation can be solved with adaptive ray
+tracing that is fully coupled with the hydrodynamics and energy /
+rate solvers.  The adaptive ray tracing uses the algorithm of Abel &
+Wandelt (2002) that is based on the HEALPix framework.
+
+For the time being, a detailed description and test suite can be found
+in the paper Wise & Abel (2011, MNRAS 414, 3458).
+
+Flux Limited Diffusion
+----------------------
+
+More details can be found in the paper Reynolds et al. (2009, Journal
+of Computational Physics 228, 6833).

File doc/manual/source/physics/shock_finding.rst

 
 Shock Finding
 ==================
-.. sectionauthor:: Sam Skillman <Samuel.Skillman@Colorado.EDU>
+.. sectionauthor:: Sam Skillman <samskillman@gmail.com>
 .. versionadded:: 2.1
+
+*Source: Grid_FindShocks.C*
 
+Shock finding is accomplished using one of four methods.  The primary
+method uses a coordinate-unsplit temperature jump (method 1), as described in
+`Skillman et al. 2008
+<http://adsabs.harvard.edu/abs/2008ApJ...689.1063S>`_ with the
+exception that instead of searching across multiple grids for the pre-
+and post-shock cells, we terminate the search at the edge of the ghost
+zones within each grid.  
+
+Shock finding is controlled by the ``ShockMethod`` parameter, which
+can take the following values:
+
+0 - Off
+
+1 - Unsplit Temperature Jumps
+
+2 - Dimensionally Split Temperature Jumps
+
+3 - Unsplit Velocity Jumps
+
+4 - Dimensionally Split Velocity Jumps
+
+When ``ShockMethod`` is nonzero, a "Mach" field is created in the
+output files.
+
+Note: Method 1 has been used the most by the developer, and therefore
+is the primary method.  Method 2 has been tested quite a bit, but the
+downsides of using a dimensionally split method are outlined in the
+above paper.  Methods 3 and 4 are more experimental and will run, but
+results may vary.
+
+Additional Shock Finding Parameters:
+
+``ShockTemperatureFloor`` - When calculating the mach number using temperature jumps, set the temperature floor in the calculation to this value.
+
+``StorePreShockFields`` - Optionally store the Pre-shock Density and Temperature during data output.
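+
+As a sketch (not part of the shock finder itself; assumes yt 2.x and mirrors
+the field-access pattern of run/Cooling/OneZoneFreefallTest/plot.py in this
+changeset), the resulting field can be inspected with:
+
+::
+
+    from yt.mods import *
+
+    # Hypothetical output from a run with ShockMethod > 0.
+    pf = load("DD0010/DD0010")
+
+    # The shock finder adds a "Mach" field to each grid.
+    print pf.h.grids[0]["Mach"].max()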
+

File doc/manual/source/physics/star_particles.rst

 A star may be formed from
 a cell of gas if all of the following conditions are met:
 
-  *. The cell is the most-refined cell at that point in space.
+#. The cell is the most-refined cell at that point in space.
   
-  *. The density of the cell is above a threshold.
+#. The density of the cell is above a threshold.
   
-  *. The cell of gas is in the region of refinement. For unigrid, or
-     AMR-everywhere simulations, this corresponds to the whole volume. But for
-     zoom-in simulations, this prevents star particles from forming in areas
-     that are not being simulated at high resolution.
+#. The cell of gas is in the region of refinement. For unigrid, or
+   AMR-everywhere simulations, this corresponds to the whole volume. But for
+   zoom-in simulations, this prevents star particles from forming in areas
+   that are not being simulated at high resolution.
 
 If a cell has met these conditions, then these quantities are calculated for
 the cell:
 
-  *. Cell star formation timescale (Eqn 21 from Springel & Hernquist).
+* Cell star formation timescale (Eqn 21 from Springel & Hernquist).
      :math:`t_0^{\ast}` and :math:`\rho_{\mathrm{th}}` are inputs to the model,
      and are the star formation time scale and density scaling value,
      respectively. Note that :math:`\rho_{\mathrm{th}}` is not the same as the
      critical density for star formation listed above. :math:`\rho` is the
      gas density of the cell.
+
      .. math::
-     
-        t_{\ast}(\rho)=t_0^{\ast}\left(\frac{\rho}{\rho_{\mathrm{th}}}\right)^{-1/2}
+
+        t_{\ast}(\rho)=t_0^{\ast}\left(\frac{\rho}{\rho_{\mathrm{th}}}\right)^{-1/2}
   
-  *. Mass fraction in cold clouds, :math:`x` (see Eqns. 16 and 18).
+* Mass fraction in cold clouds, :math:`x` (see Eqns. 16 and 18).
      :math:`y` is a dimensionless quantity
      calculated as part of the formulation;
      :math:`u_{\textrm{SN}}\equiv(1-\beta)\beta^{-1}\epsilon_{\textrm{SN}}` is
      :math:`\epsilon_{\textrm{SN}}` is the energy released from a nominal
      supernova and is set to 4e48 ergs; and finally :math:`\Lambda(\rho, T, z)`
      is the cooling rate of the cell of gas.
+
      .. math::
      
         y\equiv\frac{t_{\ast}\Lambda(\rho,T,z)}{\rho[\beta u_{\mathrm{SN}}-(1-\beta)u_{\mathrm{SN}}]}
 :math:`m` is the mass of gas in the cell;
 :math:`\Delta t` is the size of the simulation time step that
 is operative for the cell (which changes over AMR levels, of course).
+
 .. math::
 
-   p_{\ast}=\frac{m}{m_{\ast}}\left\{1-\exp\left[-\frac{(1-\beta)x\Delta t}{t_{\ast}}\right]\right\}\
+   p_{\ast}=\frac{m}{m_{\ast}}\left\{1-\exp\left[-\frac{(1-\beta)x\Delta t}{t_{\ast}}\right]\right\}
 
 If this star formula is used with AMR, some caution is required. Primarily,
 the AMR refinement can not be too aggressive. Values of ``OverDensityThreshold``

File doc/manual/source/user_guide/HierarchyFile.rst

 entry has a set number of fields that describe its position in
 space, as well as the fields that are affiliated with that grid:
 
+Note: We are in the process of transitioning to an `HDF5-formatted
+Hierarchy File`_.
+
 .. highlight:: none
 
 ::
 
 
 
+HDF5-formatted Hierarchy File
+-----------------------------
+
+We are transitioning to an HDF5-formatted hierarchy file. This is an
+improvement because reading a large (many thousand grid) ASCII
+hierarchy file takes a long time. [Other improvements?]
+
+The structure of the file:
+
+Although HDF5 tools like 'h5ls' and 'h5dump' can be used to explore
+the structure of the file, it's probably easiest to use python and
+h5py. This is how to open an example hierarchy file (from
+run/Cosmology/Hydro/AMRCosmologySimulation) in python.
+
+::
+
+     >>> import h5py
+     >>> f = h5py.File('RD0007/RedshiftOutput0007.hierarchy.hdf5','r')
+
+The root group ('/') contains a number of attributes.
+
+::
+
+     >>> f.attrs.keys()
+     ['Redshift', 'NumberOfProcessors', 'TotalNumberOfGrids']
+     >>> f.attrs['Redshift']
+     0.0
+     >>> f.attrs['NumberOfProcessors']
+     1
+     >>> f.attrs['TotalNumberOfGrids']
+     44
+
+So we see that this is a z=0 output from a simulation run on a single
+core and it contains a total of 44 grids.
+
+Now let's look at the groups contained in this file.
+
+::
+
+     >>> f.keys()
+     ['Level0', 'Level1', 'Level2', 'LevelLookupTable']
+
+The simulation has two levels of refinement, so there are a total of
+three HDF5 groups that contain information about the grids at each
+level. Additionally, there is one more dataset ('LevelLookupTable')
+that is useful for finding which level a given grid belongs to. Let's
+have a closer look.
+
+::
+
+     >>> level_lookup = f['LevelLookupTable']
+     >>> level_lookup.shape
+     (44,)
+     >>> level_lookup[:]
+     array([0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
+
+This shows you that the first grid is on level 0, the second on level
+1, and all the remaining grids on level 2. Let's have a look at the
+'Level2' group.
+
+::
+
+     >>> g = f['Level2']
+     >>> g.keys()
+    ['Grid00000003', 'Grid00000004', 'Grid00000005', ..., 'Grid00000043', 'Grid00000044']
+
+Each level group also has one attribute, 'NumberOfGrids'.
+
+::
+
+     >>> g.attrs['NumberOfGrids']
+     42
+
+The hierarchy information about each of the grids is stored as both
+attributes and datasets.
+
+::
+
+     >>> grid = g['Grid00000003']
+     >>> grid.attrs.keys()
+     ['Task', 'GridRank', 'Time', 'OldTime', 'SubgridsAreStatic', 'NumberOfBaryonFields', 'FieldType',
+      'BaryonFileName', 'CourantSafetyNumber', 'PPMFlatteningParameter', 'PPMDiffusionParameter',
+      'PPMSteepeningParameter', 'ParticleFileName', 'GravityBoundaryType', 'NumberOfDaughterGrids',
+      'NextGridThisLevelID', 'NextGridNextLevelID']
+     >>> grid.keys()
+     ['GridDimension', 'GridEndIndex', 'GridGlobalPosition',
+      'GridLeftEdge', 'GridRightEdge', 'GridStartIndex', 'NumberOfParticles']
+
+Besides the parameters that have been described above, there are few
+new elements:
+
+``GridGlobalPosition`` is LeftGridEdge[] expressed in integer indices
+of this level, i.e. running from 0 to RootGridDimension[] *
+RefinementFactors[]**level - 1. This may be useful for re-calculating
+positions in long double precision (which is not universally supported
+by HDF5) at runtime.
+
+
+``NumberOfDaughterGrids`` gives you the number of daughter grids.
+
+
+``DaughterGrids`` is a group that contains HDF5-internal soft links to
+the daughter datasets. Example:
+
+::
+
+     >>> daughters = grid['DaughterGrids']
+     >>> daughters.keys()
+     ['DaughterGrid0000', 'DaughterGrid0001', 'DaughterGrid0002', ..., 'DaughterGrid0041']
+     >>> daughters.get('DaughterGrid0000', getlink=True)
+     <SoftLink to "/Level2/Grid00000003">
+
+In this case there are 42 daughter grids.
+
+
+``ParentGrids`` is a group that contains HDF5-internal soft links to
+parent grids on all levels above the present grid's level. Example for
+a level 2 grid:
+
+::
+
+     >>> grid = f['Level2']['Grid00000044']
+     >>> parents = grid['ParentGrids']
+     >>> parents.keys()
+     ['ParentGrid_Level0', 'ParentGrid_Level1']
+     >>> parents.get('ParentGrid_Level0', getlink=True)
+     <SoftLink to "/Level0/Grid00000001">
+
+Lastly, there's one additional (experimental) feature that is
+available only if you've compiled with version 1.8+ of HDF5. In that
+case you can set '#define HAVE_HDF5_18' in
+Grid_WriteHierarchyInformationHDF5.C [perhaps this should become a
+Makefile configuration option?], and then there will be an external
+HDF5 link to the HDF5 file containing the actual data for that grid. Example:
+
+::
+
+     >>> grid.get('GridData', getlink=True)
+     <ExternalLink to "Grid00000002" in file "./RD0007/RedshiftOutput0007.cpu0000">
+
+
+Controlling the Hierarchy File Output Format
+--------------------------------------------
+
+There are two new parameters governing the format of the hierarchy
+file:
+
+``[OutputControl.]HierarchyFileInputFormat = 0, 1``
+
+  This specifies the format of the hierarchy file to be read in: 0 =
+  ASCII, 1 = HDF5. Default set to 0 for now, but will change to 1 in the
+  future.
+
+``[OutputControl.]HierarchyFileOutputFormat = 0, 1, 2``
+
+  This specifies the format of the hierarchy file to be written out: 0
+  = ASCII, 1 = HDF5, 2 = both. Default set to 2 for now, but will change
+  to 1 in the future.
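+
+For example, the ``AMRCosmology`` parameter file in this changeset requests
+HDF5 input and both output formats:
+
+::
+
+    HierarchyFileInputFormat  = 1    // HDF5
+    HierarchyFileOutputFormat = 2    // both HDF5 and ASCII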

File doc/manual/source/user_guide/RunningLargeSimulations.rst

 
 Here we describe how to efficiently run a large simulation on a high
 number of processors, such as particular parameters to set and
-suggested number of MPI tasks for a given problem size.
+suggested number of MPI tasks for a given problem size.  For a problem
+to be scalable, most of the code must be parallel to achieve high
+performance numbers on large MPI process counts (see `Amdahl's
+Law`__).  In general, the user wants to pick the number of processors
+so that computation is still dominant over communication time.  If the
+processor count is too high, communication time will become too large
+and might even *slow* down the simulation!
+
+For picking the number of processors for an Enzo run, a good starting
+point is putting a 64\ :sup:`3` box on each processor for both AMR and
+unigrid setups.  For example, a 256\ :sup:`3` simulation would run
+well on (256/64)\ :sup:`3` = 64 processors.  For nested grid
+simulations, the outer boxes usually require little computation
+compared to the "zoom-in" region, so the processor count should be
+based on the inner-most nested grid size.  The user can experiment
+with increasing the processor count from this suggestion, but strong
+scaling (i.e. linear speedup with processor count) is not to be
+expected.  Little performance gain (as of v2.0) can be expected
+beyond assigning a 32\ :sup:`3` cube per processor.
+
+.. note:: 
+
+   The level-0 grid is only partitioned during the problem
+   initialization.  It will *never* be re-partitioned if the user
+   restarts with a different number of processors.  However, some
+   performance gains can be expected even if a processor does not
+   contain a level-0 grid because of the work on finer levels.
+
+Important Parameters
+--------------------
+
+* ``LoadBalancing``: Default is 1, which moves work from overloaded to
+  underutilized processes, regardless of the grid position.  **New for
+  v2.1**: In some cases but not always, speedups can be found in load
+  balancing on a `space filling curve`_ (``LoadBalancing = 4``).  Here
+  the grids on each processor will be continuous on the space filling
+  curve.  This results in a grouped set of grids, requiring less
+  communication from other processors (and even other compute nodes).
+
+* ``SubgridSizeAutoAdjust`` and ``OptimalSubgridsPerProcessor``: **New for
+  v2.1** Default is ON and 16, respectively.  The maximum subgrid size
+  and edge length will be dynamically adjusted on each AMR level
+  according to the number of cells on the level and number of
+  processors.  The basic idea behind increasing the subgrid sizes
+  (i.e. coalescing grids) is to reduce communication between grids.
+
+* ``MinimumSubgridEdge`` and ``MaximumSubgridSize``: *Unused if
+  SubgridSizeAutoAdjust is ON*.  Increase both of these parameters to
+  increase the average subgrid size, which might reduce communication
+  and speed up the simulation.
+
+* ``UnigridTranspose``: Default is 0, which employs blocking MPI
+  communication to transpose the root grid before and after the FFT.
+  For level-0 grids |ge| 1024\ :sup:`3`, this becomes the most
+  expensive part of the calculation.  In these types of large runs,
+  Option 2 is recommended, which uses non-blocking MPI calls; however,
+  it has some additional memory overhead, which is the reason it is
+  not used by default.
+
+Compile-time options
+--------------------
+
+* ``max-subgrids``: If the number of subgrids in a single AMR level
+  exceeds this value, then the simulation will crash.  Increase as
+  necessary.  Default: 100,000
+
+* ``ooc-boundary-yes``: Stores the boundary conditions out of core,
+  i.e. on disk.  Otherwise, each processor contains a complete copy of
+  the external boundary conditions.  This becomes useful in runs with
+  large level-0 grids.  For instance in a 1024\ :sup:`3` simulation
+  with 16 baryon fields, each processor will contain a set of
+  boundary conditions on 6 faces of 1024\ :sup:`2` with 16 baryon
+  fields.  In single precision, this requires 402MB!  Default: OFF
+
+* ``fastsib-yes``: Uses a chaining mesh to help locate sibling grids
+  when constructing the boundary conditions.  Default: ON
+
+.. |ge| unicode:: 0x2265
+
+.. _space filling curve: http://en.wikipedia.org/wiki/Hilbert_curve
+
+.. __: http://en.wikipedia.org/wiki/Amdahl's_law

File doc/manual/source/user_guide/index.rst

    EmbeddedPython.rst
    HierarchyFile.rst
    FlowChart.rst
-   Presentations.rst

File run/Cooling/CoolingTest/CoolingTest.enzotest

 gravity = True
 cooling = True
 dimensionality = 3
+fullsuite = False
+pushsuite = False
+quicksuite = False

File run/Cooling/OneZoneFreefallTest/OneZoneFreefallTest.enzo

 #
 TopGridRank = 2
 
-# Dim 0 - H number density
-# Dim 1 - metallicity
-# Dim 2 - temperature
-TopGridDimensions = 4 7
+TopGridDimensions = 3 5
 
 OneZoneFreefallTestInitialDensity = 1.0
 OneZoneFreefallTestMinimumEnergy = 1.0
-OneZoneFreefallTestMaximumEnergy = 1000.0
-OneZoneFreefallTestMinimumMetallicity = 1e-6
-OneZoneFreefallTestMaximumMetallicity = 1
+OneZoneFreefallTestMaximumEnergy = 10.0
+OneZoneFreefallTestMinimumMetallicity = 1.0e-7
+OneZoneFreefallTestMaximumMetallicity = 1.0e-3
 
 # Set timestep as a fraction of free-fall time
-OneZoneFreefallTimestepFraction = 1e-3
+OneZoneFreefallTimestepFraction = 1e-2
 
 #
 #  set I/O and stop/start parameters
 #
 StopTime                  = 50
-StopCycle                 = 20000
-CycleSkipDataDump         = 200
+StopCycle                 = 3000
+CycleSkipDataDump         = 20
 DataDumpDir               = DD
 DataDumpName              = DD
 
 #
 OutputCoolingTime         = 1
 OutputTemperature         = 1
+OutputDustTemperature     = 1
 
 #
 # Units
 #
 RadiativeCooling          = 1
 MultiSpecies              = 2
+H2FormationOnDust         = 1
 MetalCooling              = 3             // cloudy cooling
 CloudyCoolingGridFile     = solar_2008_3D_metals.h5 // 3D metals only
-CloudyMetallicityNormalization = 0.018477 // Solar pattern, all metals
-ConstantTemperatureFloor  = 0
 CMBTemperatureFloor       = 0
 IncludeCloudyHeating      = 0
-
+SolarMetalFractionByMass  = 0.02041
 TestProblemUseMetallicityField = 1
-TestProblemMetallicityNormalization = 0.01424544 // consistent with Cloudy 
-				      		 // solar abundances
 
 # Initial species fractions, fiddle at own risk,
-#TestProblemInitialHIFraction      = 0.998
+TestProblemInitialHIFraction      = 0.999
 #TestProblemInitialHIIFraction     = 1e-10
 #TestProblemInitialHeIFraction     = 1.0
 #TestProblemInitialHeIIFraction    = 1.0e-20
 #TestProblemInitialHeIIIIFraction  = 1.0e-20
 #TestProblemInitialHMFraction      = 1.e-20
-#TestProblemInitialH2IFraction     = 1.e-3
+TestProblemInitialH2IFraction     = 1.e-5
 #TestProblemInitialH2IIFraction    = 1.e-20

File run/Cooling/OneZoneFreefallTest/OneZoneFreefallTest.enzotest

 cooling = True
 chemistry = True
 dimensionality = 2
-
-
-
+fullsuite = False
+pushsuite = False
+quicksuite = False

File run/Cooling/OneZoneFreefallTest/notes.txt

 the free-fall time.  Since the timestep continually decreases, outputs are 
 done on cycles.  This test problem can be used to test chemistry and 
 cooling routines.
+
+The script, plot.py, will create plots of n vs. T (and Tdust), n
+vs. f_H2, and n vs. t_cool/t_dyn.  If using H2 formation on 
+dust grains, set dust=True on line 10.  Run this script like this:
+
+python plot.py OneZoneFreefallTest.enzo

File run/Cooling/OneZoneFreefallTest/plot.py

+from matplotlib import pylab
+import sys
+
+from yt.mods import *
+from yt.analysis_modules.simulation_handler.api import EnzoSimulation
+
+do_fH2 = True
+do_t_cool = True
+
+dust = True
+if dust:
+    keyword = 'with_dust'
+else:
+    keyword = 'without_dust'
+
+par_file = sys.argv[1]
+es = EnzoSimulation(par_file, get_data_by_force=True, initial_time=0)
+
+T = []
+n = []
+Z = []
+fH2 = []
+Tdust = []
+t_cool = []
+t_dyn = []
+
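+# Loop over every output in the simulation and collect grid quantities.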
+for output in es.allOutputs:
+    print output['filename']
+    pf = load(output['filename'])
+    T.append(pf.h.grids[0]['Temperature'])
+    n.append(pf.h.grids[0]['NumberDensity'][0,0,0])
+    Z.append(pf.h.grids[0]['Metallicity'])
+    if do_fH2: fH2.append(pf.h.grids[0]['H2I_Fraction'])
+    if do_t_cool:
+        t_cool.append(pf.h.grids[0]['Cooling_Time'])
+        t_dyn.append(pf.h.grids[0]['DynamicalTime'][0,0,0])
+    if dust: Tdust.append(pf.h.grids[0]['Dust_Temperature'])
+    del pf
+
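+# Stack into arrays: axis 0 runs over outputs, axis 2 over the metallicity bins.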
+T = na.array(T)
+n = na.array(n)
+Z = na.array(Z)
+fH2 = na.array(fH2)
+Tdust = na.array(Tdust)
+t_cool = na.array(t_cool)
+t_dyn = na.array(t_dyn)
+
+colors = ['black', 'purple', 'blue', 'green', 'orange', 'red']
+
+met = na.round(na.log10(Z[0,0,:,0]))
+for i in range(T.shape[2]):
+    pylab.loglog(n, T[:, 0, i, 0], label='log (Z/Z$_{\odot}$) = %d' % met[i],
+                 color=colors[i], linestyle='-')
+    if dust:
+        pylab.loglog(n, Tdust[:, 0, i, 0], color=colors[i], linestyle='--')
+pylab.xlim(xmin=1.0)
+pylab.ylim(1e0, 1e4)
+pylab.xlabel('n [cm$^{-3}$]')
+pylab.ylabel('T [K]')
+pylab.legend(labelspacing=0.0, loc='upper left')
+pylab.savefig('n-T_%s.png' % keyword)
+pylab.clf()
+
+if do_fH2:
+    for i in range(T.shape[2]):
+        pylab.loglog(n, fH2[:, 0, i, 0], label='log (Z/Z$_{\odot}$) = %d' % met[i],
+                     color=colors[i])
+    pylab.xlim(xmin=1.0)
+    pylab.xlabel('n [cm$^{-3}$]')
+    pylab.ylabel('f$_{H_{2}}$')
+    pylab.ylim(1e-5, 1)
+    pylab.legend(labelspacing=0.0, loc='lower right')
+    pylab.savefig('n-fH2_%s.png' % keyword)
+    pylab.clf()
+
+if do_t_cool:
+    for i in range(T.shape[2]):
+        pylab.loglog(n, (t_cool[:, 0, i, 0]/t_dyn), label='log (Z/Z$_{\odot}$) = %d' % met[i],
+                     color=colors[i])
+    pylab.xlim(xmin=1.0)
+    pylab.xlabel('n [cm$^{-3}$]')
+    pylab.ylabel('t$_{cool}$ / t$_{dyn}$')
+    pylab.legend(labelspacing=0.0, loc='lower right')
+    pylab.savefig('n-t_cool_t_dyn_%s.png' % keyword)
+    pylab.clf()

File run/Cosmology/Hydro/AMRCosmology/AMRCosmology.enzo

+#
+# AMR PROBLEM DEFINITION FILE: Cosmology Simulation (amr version)
+#
+#  define problem
+#
+ProblemType                = 30      // cosmology simulation
+TopGridRank                = 3
+TopGridDimensions          = 16 16 16
+SelfGravity                = 1       // gravity on
+TopGridGravityBoundary     = 0       // Periodic BC for gravity
+LeftFaceBoundaryCondition  = 3 3 3   // same for fluid
+RightFaceBoundaryCondition = 3 3 3
+#
+#  problem parameters
+#
+CosmologySimulationOmegaBaryonNow       = 0.04
+CosmologySimulationOmegaCDMNow          = 0.26
+CosmologySimulationDensityName          = GridDensity
+CosmologySimulationVelocity1Name        = GridVelocities
+CosmologySimulationVelocity2Name        = GridVelocities
+CosmologySimulationVelocity3Name        = GridVelocities
+CosmologySimulationParticlePositionName = ParticlePositions
+CosmologySimulationParticleVelocityName = ParticleVelocities
+#
+#  define cosmology parameters
+#
+ComovingCoordinates        = 1       // Expansion ON
+CosmologyOmegaMatterNow    = 0.3
+CosmologyOmegaLambdaNow    = 0.7
+CosmologyHubbleConstantNow = 0.5     // in 100 km/s/Mpc
+CosmologyComovingBoxSize   = 16.0    // in Mpc/h
+CosmologyMaxExpansionRate  = 0.015   // maximum allowed delta(a)/a
+CosmologyInitialRedshift   = 30      // 
+CosmologyFinalRedshift     = 0       //
+GravitationalConstant      = 1       // this must be true for cosmology
+#
+#  set I/O and stop/start parameters
+#
+StopCycle              = 1000          // stop after this many cycles
+dtDataDump             = 10.0         // dump at beginning and end
+#CycleSkipDataDump      = 20
+DataDumpName           = output_
+ParallelRootGridIO     = 1
+CosmologyOutputRedshift[0] = 10
+CosmologyOutputRedshift[1] = 8
+CosmologyOutputRedshift[2] = 6
+CosmologyOutputRedshift[3] = 4
+CosmologyOutputRedshift[4] = 3
+CosmologyOutputRedshift[5] = 2
+CosmologyOutputRedshift[6] = 1
+CosmologyOutputRedshift[7] = 0
+#
+#  set hydro parameters
+#
+Gamma                  = 1.6667
+PPMDiffusionParameter  = 0       // diffusion off
+DualEnergyFormalism    = 1       // use total & internal energy
+InterpolationMethod    = 1     // SecondOrderA
+CourantSafetyNumber    = 0.5
+ParticleCourantSafetyNumber = 0.8
+RadiativeCooling            = 0
+MultiSpecies                = 0
+#
+#  set grid refinement parameters
+#
+StaticHierarchy           = 0    // dynamic hierarchy
+MaximumRefinementLevel    = 2    // use up to 2 levels
+RefineBy                  = 2    // refinement factor
+CellFlaggingMethod        = 2 4    // use baryon mass for refinement 
+MinimumEfficiency         = 0.4  // fraction efficiency
+MinimumOverDensityForRefinement = 4.0 // times the initial density
+MinimumMassForRefinementLevelExponent = -0.3
+MinimumEnergyRatioForRefinement = 0.4 // min Egas/Etot for shock refinement
+#RefineRegionLeftEdge            = 0.15 0.20 0.41
+#RefineRegionRightEdge           = 0.35 0.45 0.79
+
+HierarchyFileInputFormat = 1    // HDF5
+HierarchyFileOutputFormat = 2   // both HDF5 and ASCII
+
+#
+#  set some global parameters
+#
+GreensFunctionMaxNumber   = 30   // # of greens function at any one time
+

File run/Cosmology/Hydro/AMRCosmology/AMRCosmology.enzotest

+name = 'AMRCosmology'
+answer_testing_script = None
+nprocs = 1
+runtime = 'unknown'
+critical = True
+cadence = 'nightly'
+hydro = True
+gravity = True
+AMR = True
+cosmology = True
+dimensionality = 3
+fullsuite = False
+pushsuite = False
+quicksuite = False

File run/Cosmology/Hydro/AMRCosmologySimulation/AMRCosmologySimulation.enzo

-#
-# AMR PROBLEM DEFINITION FILE: Cosmology Simulation (amr version)
-#
-#  define problem
-#
-ProblemType                = 30      // cosmology simulation
-TopGridRank                = 3
-TopGridDimensions          = 16 16 16
-SelfGravity                = 1       // gravity on
-TopGridGravityBoundary     = 0       // Periodic BC for gravity
-LeftFaceBoundaryCondition  = 3 3 3   // same for fluid
-RightFaceBoundaryCondition = 3 3 3
-#
-#  problem parameters
-#
-CosmologySimulationOmegaBaryonNow       = 0.04
-CosmologySimulationOmegaCDMNow          = 0.26
-CosmologySimulationDensityName          = GridDensity
-CosmologySimulationVelocity1Name        = GridVelocities
-CosmologySimulationVelocity2Name        = GridVelocities
-CosmologySimulationVelocity3Name        = GridVelocities
-CosmologySimulationParticlePositionName = ParticlePositions
-CosmologySimulationParticleVelocityName = ParticleVelocities
-#
-#  define cosmology parameters
-#
-ComovingCoordinates        = 1       // Expansion ON
-CosmologyOmegaMatterNow    = 0.3
-CosmologyOmegaLambdaNow    = 0.7
-CosmologyHubbleConstantNow = 0.5     // in 100 km/s/Mpc
-CosmologyComovingBoxSize   = 16.0    // in Mpc/h
-CosmologyMaxExpansionRate  = 0.015   // maximum allowed delta(a)/a
-CosmologyInitialRedshift   = 30      // 
-CosmologyFinalRedshift     = 0       //
-GravitationalConstant      = 1       // this must be true for cosmology
-#
-#  set I/O and stop/start parameters
-#
-StopCycle              = 1000          // stop after this many cycles
-dtDataDump             = 10.0         // dump at beginning and end
-#CycleSkipDataDump      = 20
-DataDumpName           = output_
-ParallelRootGridIO     = 1
-CosmologyOutputRedshift[0] = 10
-CosmologyOutputRedshift[1] = 8
-CosmologyOutputRedshift[2] = 6
-CosmologyOutputRedshift[3] = 4
-CosmologyOutputRedshift[4] = 3
-CosmologyOutputRedshift[5] = 2
-CosmologyOutputRedshift[6] = 1
-CosmologyOutputRedshift[7] = 0
-#
-#  set hydro parameters
-#
-Gamma                  = 1.6667
-PPMDiffusionParameter  = 0       // diffusion off
-DualEnergyFormalism    = 1       // use total & internal energy
-InterpolationMethod    = 1     // SecondOrderA
-CourantSafetyNumber    = 0.5
-ParticleCourantSafetyNumber = 0.8
-RadiativeCooling            = 0
-MultiSpecies                = 0
-#
-#  set grid refinement parameters
-#
-StaticHierarchy           = 0    // dynamic hierarchy
-MaximumRefinementLevel    = 2    // use up to 2 levels
-RefineBy                  = 2    // refinement factor
-CellFlaggingMethod        = 2 4    // use baryon mass for refinement 
-MinimumEfficiency         = 0.4  // fraction efficiency
-MinimumOverDensityForRefinement = 4.0 // times the initial density
-MinimumMassForRefinementLevelExponent = -0.3
-MinimumEnergyRatioForRefinement = 0.4 // min Egas/Etot for shock refinement
-#RefineRegionLeftEdge            = 0.15 0.20 0.41
-#RefineRegionRightEdge           = 0.35 0.45 0.79
-#
-#  set some global parameters
-#
-GreensFunctionMaxNumber   = 30   // # of greens function at any one time
-

File run/Cosmology/Hydro/AMRCosmologySimulation/AMRCosmologySimulation.enzotest

-name = 'AMRCosmologySimulation'
-answer_testing_script = None
-nprocs = 1
-runtime = 'unknown'
-critical = True
-cadence = 'nightly'
-hydro = True
-gravity = True
-AMR = True
-cosmology = True
-dimensionality = 3

File run/Cosmology/Hydro/AMRNestedCosmology/AMRNestedCosmology.enzo

+#
+# nested cosmology simulation
+#
+ProblemType                    = 30
+TopGridRank                    = 3
+SelfGravity                    = 1
+TopGridGravityBoundary         = 0
+LeftFaceBoundaryCondition      = 3 3 3
+RightFaceBoundaryCondition     = 3 3 3
+BaryonSelfGravityApproximation = 1
+TopGridDimensions              = 32 32 32
+
+#
+#  problem parameters
+#
+CosmologySimulationOmegaBaryonNow       = 0.04
+CosmologySimulationOmegaCDMNow          = 0.26
+CosmologySimulationInitialTemperature   = 140.0  // DEBATABLE
+CosmologySimulationDensityName          = GridDensity
+CosmologySimulationVelocity1Name        = GridVelocities
+CosmologySimulationVelocity2Name        = GridVelocities
+CosmologySimulationVelocity3Name        = GridVelocities
+CosmologySimulationParticlePositionName = ParticlePositions
+CosmologySimulationParticleVelocityName = ParticleVelocities
+CosmologySimulationNumberOfInitialGrids = 3
+CosmologySimulationGridDimension[1]     = 32 32 32
+CosmologySimulationGridLeftEdge[1]      = 0.25 0.25 0.25
+CosmologySimulationGridRightEdge[1]     = 0.75 0.75 0.75
+CosmologySimulationGridLevel[1]         = 1
+CosmologySimulationGridDimension[2]     = 32 32 32
+CosmologySimulationGridLeftEdge[2]      = 0.375 0.375 0.375 
+CosmologySimulationGridRightEdge[2]     = 0.625 0.625 0.625
+CosmologySimulationGridLevel[2]         = 2