Commits

Sam Skillman committed 53bb4f3 Merge

Attempt to merge to tip.

  • Parent commits 796f065, 3014135


Files changed (224)

File .hgignore

File contents unchanged.
 bbf0a2ffbd22c4fbecf946c9c96e6c4fac5cbdae woc_pre_fld_merge
 48b4e9d9d6b90f703e48e621b488136be2a0e9cf woc_fld_merge
 b86d8ba026d6a0ec30f15d8134add1e55fae2958 Wise10_GalaxyBirth
+2d90aa38e06f00a531db45a43225cde1faf093f2 enzo-2.2
    * Greg Bryan             gbryan@astro.columbia.edu
    * Renyue Cen             cen@astro.princeton.edu
    * Dave Collins           dcollins@physics.ucsd.edu
-   * Nathan Goldbaum        nathan12343@gmail.com
+   * Nathan Goldbaum        goldbaum@ucolick.org
    * Robert Harkness        harkness@sdsc.edu
    * Elizabeth Harper-Clark h-clark@astro.utoronto.ca
    * Cameron Hummels        chummels@gmail.com

File doc/manual/source/developer_guide/FloatIsDouble.rst

 and C/C++ code, the variable precision agrees between the two
 languages. Compilers do not attempt to ensure that calls from C/C++
 to Fortran make any sense, so the user is manifestly on their own.
-To this end, when writing Fortran code, the data type ``real``
-corresponds to ``float``, and ``REALSUB`` corresponds to ``FLOAT``. Mismatching
-these data types can cause misalignment in the data that is being
-passed back and forth between C/C++ and Fortran code (if the
-precision of ``float`` and ``FLOAT`` are not the same), and will often
+To this end, when writing Fortran code you must ensure that your
+variables are declared with the correct type.  Unlike Enzo's C/C++
+routines that overwrite the default ``float`` and ``int``
+types with their single/double precision equivalents, Enzo's Fortran
+routines do not overwrite the basic data types.  Hence, we have
+created unique type identifiers for the Fortran routines that map to
+Enzo's ``float``, ``FLOAT`` and ``int`` types, as specified below:
+
+==================  ==============
+**Enzo C/C++**      **Enzo F/F90**
+``float``           ``R_PREC``
+``int``             ``INTG_PREC``
+``FLOAT``           ``P_PREC``
+==================  ==============
+
+In addition, Fortran provides data types for both ``logical``
+and ``complex`` variables.  In Enzo, the precision of these variables
+may be chosen to match Enzo's ``int`` and ``float`` values from C/C++
+using the F/F90 types ``LOGIC_PREC`` and ``CMPLX_PREC`` respectively.
+
+Moreover, unlike C/C++, hard-coded constants in Fortran routines
+default to single-precision values.  This can be especially
+troublesome when calling a Fortran subroutine or function with
+constants as their inputs, or when writing complicated formulas using
+constants that must be of higher precision.  To this end, we have
+defined four type-modifier Fortran suffixes that can be used to
+declare constants of differing precision:
+
+===================  ==========
+**Variable Type**    **Suffix**
+``R_PREC``           ``RKIND``
+``INTG_PREC``        ``IKIND``
+``P_PREC``           ``PKIND``
+``LOGIC_PREC``       ``LKIND``
+===================  ==========
+
+Note: since a complex number in Fortran is defined through a pair of
+real numbers, to create a complex constant of type ``CMPLX_PREC`` you
+would use the ``RKIND`` suffix on both components.
+
+For example, the type specifiers and constant suffixes could be used
+in the following ways: 
+
+.. code-block:: fortran
+
+    c     Declarations
+          R_PREC     third
+          INTG_PREC  one
+          P_PREC     fifth
+          CMPLX_PREC two_i
+          LOGIC_PREC test
+
+    c     Calculations
+          third = 1._RKIND / 3._RKIND
+          one   = 1_IKIND
+          fifth = real(1, PKIND) / 5._PKIND
+          two_i = (0._RKIND, 2._RKIND)
+          test  = .true._LKIND
+
+
+All of these type definitions are supplied in the file
+``fortran_types.def``, which should be included within the scope of
+each Fortran routine, after any ``implicit none`` declaration and
+before any variable declarations, e.g.
+
+.. code-block:: fortran
+
+          subroutine foo(a)
+             implicit none
+    #include "fortran_types.def"
+             R_PREC a
+
+The Enzo build system will preprocess each such routine to insert the
+contents of ``fortran_types.def`` at the specified location, prior to
+compilation.  Moreover, the spacing in ``fortran_types.def`` is
+compatible with both fixed-source-form and free-source-form Fortran
+files.
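The include step above can be pictured with a tiny Python sketch.  This is purely illustrative: the real build invokes the C preprocessor, the function name ``expand_includes`` is made up, and the ``#define`` shown as the contents of ``fortran_types.def`` is a stand-in, not the file's actual contents.

```python
# Illustrative sketch of include expansion: replace '#include "name"'
# lines with the named file's contents, as a preprocessor would.
def expand_includes(source_lines, includes):
    """Expand '#include "name"' lines using the `includes` mapping."""
    out = []
    for line in source_lines:
        stripped = line.strip()
        if stripped.startswith('#include'):
            name = stripped.split('"')[1]   # text between the quotes
            out.extend(includes[name])
        else:
            out.append(line)
    return out

routine = ['      subroutine foo(a)',
           '         implicit none',
           '#include "fortran_types.def"',
           '         R_PREC a']
# Hypothetical stand-in for the real file's contents:
defs = {"fortran_types.def": ['#define R_PREC real*8']}

expanded = expand_includes(routine, defs)
assert '#define R_PREC real*8' in expanded
assert '#include "fortran_types.def"' not in expanded
```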
+
+**A word of warning:** mismatching the data types between C/C++ and
+Fortran codes can cause misalignment in the data, and will often
 result in nonsense values that will break Enzo elsewhere in the
 code. This can be particularly tricky to debug if the values are
 not used immediately after they are modified!
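The kind of corruption this warning describes can be demonstrated with a minimal, self-contained Python sketch (not Enzo code): bytes written as 64-bit doubles and re-read as 32-bit floats, mimicking a C routine and a Fortran routine that disagree about precision.

```python
# Illustrative sketch: the same buffer read with matched vs. mismatched
# precision.  Mismatched reads produce misaligned nonsense values.
import struct

values = [1.0, 2.0, 3.0]
buf = struct.pack("3d", *values)          # written as three 64-bit doubles

# Matched read: same type on both sides round-trips exactly.
assert struct.unpack("3d", buf) == (1.0, 2.0, 3.0)

# Mismatched read: the same 24 bytes interpreted as six 32-bit floats.
garbled = struct.unpack("6f", buf)
assert garbled != (1.0, 2.0, 3.0, 0.0, 0.0, 0.0)   # nonsense values
```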

File doc/manual/source/developer_guide/ProgrammingGuide.rst

 to be re-defined to higher precision types. This is outlined
 in :ref:`FloatIsDouble`.
 
+Fortran types
+-------------
+
+Unlike Enzo's C and C++ routines, Fortran files (.F and .F90) do not
+re-define the built-in 'integer' and 'real' types, so all variables
+and constants must be defined with the appropriate precision.  There
+are pre-defined type specifiers that will match Enzo's C and C++
+precision re-definitions, which should be used for all variables that
+pass through the C/Fortran interface.  This is discussed in detail in 
+:ref:`FloatIsDouble`.
+
 Header Files
 ------------
 

File doc/manual/source/reference/EnzoPrimaryReferences.rst

 The Enzo method paper is not yet complete. However, there are several papers
 that describe the numerical methods used in Enzo, and this documentation
 contains a brief outline of the essential physics in Enzo, in
-:ref:`EnzoAlgorithms`.  Two general references (that should be considered to
-stand in for the method paper) are:
-
-
-*  `Introducing Enzo, an AMR Cosmology Application <http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/0403044>`_
-   by **O'Shea et al.** In "Adaptive Mesh Refinement - Theory and
-   Applications," Eds. T. Plewa, T. Linde & V. G. Weirs, Springer
-   Lecture Notes in Computational Science and Engineering, 2004.
-   `Bibtex entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2004astro.ph..3044O&data_type=BIBTEX&db_key=PRE&nocookieset=1>`_
-*  `Simulating Cosmological Evolution with Enzo <http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:0705.1556>`_
-   by **Norman et al.** In "Petascale Computing: Algorithms and
-   Applications," Ed. D. Bader, CRC Press LLC, 2007.
-   `Bibtex entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2007arXiv0705.1556N&data_type=BIBTEX&db_key=PRE&nocookieset=1>`_
-
-Three somewhat older conferences proceedings are also relevant:
-
+:ref:`EnzoAlgorithms`.  These papers should be considered suitable
+citations for Enzo in general:
 
 *  `Simulating X-Ray Clusters with Adaptive Mesh Refinement <http://adsabs.harvard.edu/abs/1997ASPC..123..363B>`_
    by **Bryan and Norman.** In "Computational Astrophysics; 12th
    Tomisaka, and Tomoyuki Hanawa. Boston, Mass. : Kluwer Academic,
    1999. (Astrophysics and space science library ; v. 240), p.19
    `Bibtex entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1999ASSL..240...19N&data_type=BIBTEX&db_key=AST&nocookieset=1>`_
+*  `Introducing Enzo, an AMR Cosmology Application <http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/0403044>`_
+   by **O'Shea et al.** In "Adaptive Mesh Refinement - Theory and
+   Applications," Eds. T. Plewa, T. Linde & V. G. Weirs, Springer
+   Lecture Notes in Computational Science and Engineering, 2004.
+   `Bibtex entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2004astro.ph..3044O&data_type=BIBTEX&db_key=PRE&nocookieset=1>`_
+*  `Simulating Cosmological Evolution with Enzo <http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:0705.1556>`_
+   by **Norman et al.** In "Petascale Computing: Algorithms and
+   Applications," Ed. D. Bader, CRC Press LLC, 2007.
+   `Bibtex entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2007arXiv0705.1556N&data_type=BIBTEX&db_key=PRE&nocookieset=1>`_
 
 The primary hydrodynamics methods are PPM and ZEUS, as described in
 the following two papers:
   1989, p. 64-84.  `Bibtex Entry
   <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1989JCoPh..82...64B&data_type=BIBTEX&db_key=PHY&nocookieset=1>`_.
 
-The YT papers can be found here:
+The paper describing the Dedner MHD method can be found here:
 
-* M Turk, `Analysis and Visualization of Multi-Scale Astrophysical Simulations
-  Using Python and NumPy
-  <http://conference.scipy.org/proceedings/SciPy2008/paper_11/>`_ in Proceedings
-  of the 7th Python in Science conference (!SciPy 2008), G Varoquaux, T Vaught, J
-  Millman (Eds.), pp. 46-50 (`Bibtex entry <http://hg.yt-project.org/yt/wiki/Citation>`_)
+ * `Magnetohydrodynamic Simulations of Disk Galaxy Formation: The Magnetization of the Cold and Warm Medium <http://adsabs.harvard.edu/abs/2009ApJ...696...96W>`_,
+   by Wang, P.; Abel, T.  The Astrophysical Journal, Volume 696, Issue 1, pp. 96-109 (2009)
+   `Bibtex Entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2009ApJ...696...96W&data_type=BIBTEX&db_key=AST&nocookieset=1>`_.
+
+The paper describing the ray-tracing algorithm (MORAY) can be found here:
+
+ * `ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing <http://adsabs.harvard.edu/abs/2011MNRAS.414.3458W>`_,
+   by Wise, J.; Abel, T.  Monthly Notices of the Royal Astronomical Society, Volume 414, Issue 4, pp. 3458-3491.
+   `Bibtex Entry <http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2011MNRAS.414.3458W&data_type=BIBTEX&db_key=AST&nocookieset=1>`_.
+
+The YT paper can be found here:
 
 * `yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data
   <http://adsabs.harvard.edu/abs/2011ApJS..192....9T>`_, by Turk, M. J.;

File doc/manual/source/reference/MakeOptions.rst

 **MACH_LDFLAGS**   Machine-dependent flags for the linker
 ================== ============
 
-Precision flags:
+Machine-specific flags:
 
 ============================== ============
 **MACH_DEFINES**               Machine-specific defines, e.g. ``-DLINUX``, ``-DIBM``, ``-DIA64``, etc.
-**MACH_FFLAGS_INTEGER_32**     Fortran flags for specifying 32-bit integers
-**MACH_FFLAGS_INTEGER_64**     Fortran flags for specifying 64-bit integers
-**MACH_FFLAGS_REAL_32**        Fortran flags for specifying 32-bit reals
-**MACH_FFLAGS_REAL_64**        Fortran flags for specifying 64-bit reals
 ============================== ============
 
 Paths to include header files:

File doc/manual/source/user_guide/EnzoTestSuite.rst

 automatically and relatively frequently (multiple times a day) on 
 a remote server to ensure that bugs have not been introduced during the code 
 development process.  All runs in the quick suite use no more than 
-a single processor.  The total run time should be about 15 minutes.  
+a single processor.  The total run time should be about 15 minutes 
+at the default (lowest) level of optimization.  
 
 2.  The "push suite" (``--suite=push``).  This is a slightly 
 larger set of tests, encompassing all of the quick suite and 
 some additional larger simulations that test a wider variety of physics 
 modules.  The intent of this package is to provide a thorough validation 
 of the code prior to changes being pushed to the main repository.  The 
-total run time is roughly 30 minutes and all simulations use only a single 
-processor.  
+total run time is roughly 60 minutes at the default optimization level, and 
+all simulations use only a single processor.  
 
 3.  The "full suite" (``--suite=full``).  This encompasses essentially 
 all of test simulations contained within the run directory.  This suite 
 situations, and is intended to be run prior to major changes being pushed 
 to the stable branch of the code.  A small number of simulations in the full 
 suite are designed to be run on 2 processors and will take multiple hours to 
-complete.  The total run time is roughly 60 hours.  
+complete.  The total run time is roughly 60 hours at the default
+(lowest) level of optimization.
 
 .. _running:
 .. _`running the test suite against the gold standard`:
 into each test problem directory before tests are run.
 
 2.  **Get/update yt.**  The enzo tests are generated and compared using the 
-yt analysis suite.  If you do not yet have yt, visit 
+yt analysis suite.  You must be using yt 2.5 or later in order for the
+test suite to work.  If you do not yet have yt, visit 
 http://yt-project.org/#getyt for installation instructions.  
 If you already have yt and yt is in your path, make sure you're using
 the most up-to-date version by running the following command:
 
     $ yt update
 
-3.  **Generate the standard test files.**  The testing suite operates by 
-running a series of enzo test files throughout the ``run/`` subdirectory.
-Some unique test files are already generated for specific test problems, 
-but the standard generic tests to be run on each test problem need to be 
-created by you with the following command: 
+3.  **Run the test suite.** The testing suite operates by running a 
+series of enzo test files throughout the ``run`` subdirectory.  You can 
+initiate the quicksuite test simulations and their comparison against the 
+current gold standard by running the following commands:
 
 ::
 
     $ cd <enzo_root>/run
-    $ ./make_new_tests.py
+    $ ./test_runner.py -o <output_dir> 
 
-4.  **Run the test suite.** While remaining in the ``run/`` 
-subdirectory, you can initiate the quicksuite test simulations and 
-their comparison against the gold standard by running the following 
-commands:
-
-::
-
-    $ ./test_runner.py --suite=quick -o <output_dir> --answer-compare-name=enzogold000
-
-In this comand, ``--suite=quick`` instructs the test runner to
-use the quick suite. ``--output-dir=<output_dir>`` instructs the 
+In this command, ``--output-dir=<output_dir>`` instructs the 
 test runner to output its results to a user-specified directory 
 (preferably outside of the enzo file hierarchy).  Make sure this
 directory is created before you call test_runner.py, or it will 
-fail.  Lastly, it uses the ``quick`` gold standard to compare against.
-For a full description of the many flags associated with 
-test_runner.py, see the flags_ section.
+fail.  The default behavior is to use the quick suite, but you
+can specify any set of tests using the ``--suite`` or ``--name``
+flags_. Lastly, we compare against the current gold standard in 
+the cloud: ``enzogold2.2``.  For a full description of the many 
+flags associated with test_runner.py, see the flags_ section.
 
-5.  **Review the results.**  While the test_runner is executing, you should 
+4.  **Review the results.**  While the test_runner is executing, you should 
 see the results coming up at the terminal in real time, but you can review 
 these results in a file output at the end of the run.  The test_runner 
 creates a subdirectory in the output directory you provided it, as shown
     $ ls <output_dir>
     fe7d4e298cb2    
 
-
     $ ls <output_dir>/fe7d4e298cb2    
     Cooling        GravitySolver    MHD                    test_results.txt 
     Cosmology      Hydro            RadiationTransport     version.txt
 directory are all of the test problems that you ran along with their
 simulation outputs, organized based on test type (e.g.  ``Cooling``,
 ``AMR``, ``Hydro``, etc.)  Additionally, you should see a file called
-``test_results.txt``, which contains a summary of the test runs.
+``test_results.txt``, which contains a summary of the test runs,
+including which ones failed and why.  
 
-The testing suite does not expect bitwise agreement between the gold standard
-and your results, due to compiler, architecture and operating system
-differences between versions of enzo.  There must be a significant 
-difference between your result and the gold standard for you to fail 
-any tests, thus you should be passing all of the tests.  If you are not, 
-then examine more closely what modifications you made to the enzo source
-which caused the test failure.  If this is a fresh version of enzo that 
-you grabbed and compiled, then you should write the enzo-dev@googlegroups.com 
-email list with details of your test run (computer os, architecture, version 
-of enzo, version of yt, what test failed, what error message you received), 
-so that we can address this issue.
+By default, the testing suite does not expect bitwise agreement between 
+the gold standard and your results, due to compiler, architecture and 
+operating system differences between versions of enzo.  There must be 
+a significant difference between your result and the gold standard for 
+you to fail any tests, thus you should be passing all of the tests.  
+If you are not, then examine more closely what modifications you made 
+to the enzo source which caused the test failure.  If this is a fresh 
+version of enzo that you grabbed and compiled, then you should write 
+the enzo-dev@googlegroups.com email list with details of your test run 
+(computer os, architecture, version of enzo, version of yt, what test 
+failed, what error message you received), so that we can address this 
+issue.
 
-For more details about the results of an individual test, examine the
-``estd.out`` file in the test problem within this directory hierarchy,
-as it contains the stderr and stdout for each test simulation.
+
+My tests are failing and I don't know why
+-----------------------------------------
+
+A variety of things cause tests to fail: differences in compiler,
+optimization level, operating system, MPI submission method, 
+and of course, your modifications to the code.  Go through your 
+``test_results.txt`` file for more information about which tests 
+failed and why.  You could try playing with the relative tolerance 
+for error using the ``--tolerance`` flag as described in the flags_ 
+section.  For more information regarding the failures of a specific 
+test, examine the ``estd.out`` file in that test problem's subdirectory
+within the ``<output_dir>`` directory structure, as it contains the 
+``STDERR`` and ``STDOUT`` for that test simulation.
+
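What a relative tolerance "in powers of 10" means can be sketched in a few lines of Python.  This is illustrative only, not test-suite code; the helper name ``within_tolerance`` is made up, though the check mirrors the comparison described for the ``--tolerance`` flag.

```python
# Illustrative sketch of a relative-error tolerance check: a tolerance
# of N (in powers of 10) allows a relative error of at most 10**-N.
def within_tolerance(new, old, tolerance):
    """True if |new - old| / |old| is at most 10**-tolerance."""
    return abs(new - old) <= 10.0 ** (-tolerance) * abs(old)

# With a tolerance of 3 (relative error bound 1e-3), a result off by
# 5e-4 passes while a result off by 5e-3 fails:
assert within_tolerance(1.0005, 1.0, 3)
assert not within_tolerance(1.005, 1.0, 3)
```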
+If you are receiving ``EnzoTestOutputFileNonExistent`` errors, it
+means that your simulation is not completing.  This can happen if
+you are trying to run enzo with MPI on a system that does not allow
+mpirun to be started from the command line (e.g. one that expects
+mpirun jobs to be submitted to the queue).  You can solve this
+problem by recompiling your enzo executable with MPI turned off
+(i.e. ``make use-mpi-no``) and then passing the local_nompi machine
+flag (i.e. ``-m local_nompi``) to your test_runner.py call to run
+the executable directly without MPI support.  
+Currently, only a few tests use multiple cores, so this is not a 
+problem in the quick or push suites.
+
+If you see a lot of ``YTNoOldAnswer`` errors, it may mean that your
+simulation is running to a different output than the gold standard
+does, and the test suite is trying to compare your last output file
+against a non-existent file in the gold standard.  Look carefully
+at the results of your simulation for this test problem using the 
+provided python file to determine what is happening.  Or it may
+simply mean that you specified the wrong gold standard.
 
 .. _generating_standard:
 
 the gold standard, or you want to test one of your forks against another.
 Regardless of the reason, you want to generate your own reference
 standard for comparison.  To do this, follow the instructions for
-`running the test suite against the gold standard`_, but replace step #4 with:
+`running the test suite against the gold standard`_, but replace step #3 with:
 
-4. **Run the test suite.** Run the suite with these flags within
+3. **Run the test suite.** Run the suite with these flags within
 the ``run/`` subdirectory in the enzo source hierarchy:
 
 ::
 
-    $ ./test_runner.py --suite=quick -o <output_dir> --local-store --answer-store-name=<test_name>
+    $ cd <enzo_root>/run
+    $ ./test_runner.py --suite=quick -o <output_dir> --answer-store --answer-name=<test_name> 
+                       --local 
 
 N.B. We're creating a reference set in this example with the quick 
 suite, but we could just as well create a reference from any number 
 
 Here, we are storing the results from our tests locally in a file 
 called <test_name> which will now reside inside of the ``<output_dir>``.
+If you want to, you can leave off ``--answer-name`` and get a sensible
+default.
 
 .. _directory layout:
 
 ``<output_dir>`` from previous tests), so that it looks something 
 like this `directory layout`_.  From here, you must follow the 
 instructions for `running the test suite against the gold 
-standard`_, but replace step #4 with:
+standard`_, but replace step #3 with:
 
-4.  **Run the test suite.**  Run the suite with these flags inside
+3.  **Run the test suite.**  Run the suite with these flags inside
 the ``run/`` subdirectory in the enzo source hierarchy:
 
 ::
 
-    $ ./test_runner.py --suite=quick -o <output_dir> --local-store --answer-compare-name=<test_name> 
-                       --clobber
+    $ cd <enzo_root>/run
+    $ ./test_runner.py --suite=quick -o <output_dir> --answer-name=<test_name> 
+                       --local --clobber
 
 Here, we're running the quick suite and outputting our results to
 ``<output_dir>``.  We are comparing the simulation results against a 
-local (``--local-store``) reference standard which is named ``<test_name>``
+local (``--local``) reference standard which is named ``<test_name>``
 also located in the ``<output_dir>`` directory.  Note, we included the 
 ``--clobber`` flag to rerun any simulations that may have been present
 in the ``<output_dir>`` under the existing enzo version's files, since 
 the default behavior is to not rerun simulations if their output files 
-are already present.
+are already present.  Because we didn't set the ``--answer-store`` flag,
+the default behavior is to compare against ``<test_name>``.
 
 .. _flags:
 
     it might load qsub or mpirun in order to start the enzo executable
     for the individual test simulations.  You can only use machine
     names of machines which have a corresponding machine file in the 
-    ``run/run_templates`` subdirectory (e.g. nics-kraken). N.B.
+    ``run/run_templates`` subdirectory (e.g. nics-kraken). *N.B.*
     the default, ``local``, will attempt to run the test simulations using
     mpirun, so if you are required to queue on a machine to execute 
     mpirun, ``test_runner.py`` will silently fail before finishing your
     Rerun enzo on test problems which already have 
     results in the destination directory
 
+``--tolerance=int`` default: see ``--strict``
+    Sets the tolerance of the relative error in the 
+    comparison tests in powers of 10.  
+
+    Ex: Setting ``--tolerance=3`` means that test results
+    are compared against the standard and fail if
+    they are off by more than 1e-3 in relative error.
+    
+``--bitwise`` default: see ``--strict``
+    Declares whether or not bitwise comparison tests
+    are included to assure that the values in output
+    fields exactly match those in the reference standard.
+
+``--strict=[high, medium, low]`` default: low
+    This flag automatically sets the ``--tolerance``
+    and ``--bitwise`` flags to some arbitrary level of
+    strictness for the tests.  If one sets ``--bitwise``
+    or ``--tolerance`` explicitly, they trump the value
+    set by ``--strict``.  When testing enzo general 
+    functionality after an installation, ``--strict=low``
+    is recommended, whereas ``--strict=high`` is suggested
+    when testing modified code against a local reference 
+    standard.
+
+    ``high``: tolerance = 13, bitwise = True
+    ``medium``: tolerance = 6, bitwise = False
+    ``low``: tolerance = 3, bitwise = False
+
 ``--sim-only`` default: False
     Only run simulations, do not store the tests or compare them against a 
     standard.
     When a test fails a pdb session is triggered.  Allows interactive inspection
     of failed test data.
 
-**Flags for tests against local reference standards**
+**Flags for storing, comparing against different standards**
 
-``--answer-compare-name=str`` default: latest 
-    The name of the test against which we will compare
+``--answer-store`` default: False
+    Should we store the results as a reference or just compare
+    against an existing reference?
 
-``--answer-store-name=str`` default: None
-    The name we'll call this set of tests. Also turns on functionality
-    for storing the results instead of comparing the results.
+``--answer-name=str`` default: latest gold standard
+    The name of the file where we will store our reference results,
+    or if ``--answer-store`` is false, the name of the reference against 
+    which we will compare our results. 
 
-``--local-store`` default: False
-    Store/Load local results?
+``--local`` default: False
+    Store/Compare the reference standard locally (i.e. not on the cloud)
 
 **Bisection flags**
 

File run/Cooling/CoolingTest_Cloudy/CoolingTest_Cloudy__test_cooling.py

     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield FieldValuesTest(pf, field, decimals=13)
+    tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+    pf = sim[-1]
+    for field in _fields:
+        yield FieldValuesTest(pf, field, decimals=tolerance)
 

File run/Cooling/CoolingTest_JHW/CoolingTest_JHW__test_cooling.py

     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield FieldValuesTest(pf, field, decimals=13)
+    tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+    pf = sim[-1]
+    for field in _fields:
+        yield FieldValuesTest(pf, field, decimals=tolerance)
 

File run/Cooling/OneZoneFreefallTest/OneZoneFreefallTest__test_freefall.py

      sim_dir_load
 from yt.frontends.enzo.answer_testing_support import \
      requires_outputlog
+from yt.config import ytcfg
 
 _fields = ("Temperature", "Dust_Temperature")
 _pf_name = os.path.basename(os.path.dirname(__file__)) + ".enzo"
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield FieldValuesTest(pf, field, decimals=13)
+    tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+    pf = sim[-1]
+    for field in _fields:
+        yield FieldValuesTest(pf, field, decimals=tolerance)

File run/Cosmology/AMRZeldovichPancake/AMRZeldovichPancake__test_almost_standard.py

+###
+### This is a testing template
+###
+
+import os
+from yt.mods import *
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    VerifySimulationSameTest, \
+    sim_dir_load
+from yt.frontends.enzo.answer_testing_support import \
+    requires_outputlog, \
+    standard_small_simulation
+
+_base_fields = ("Density",
+                "y-velocity",
+                "z-velocity",
+                "Gas_Energy",
+                "particle_position_x",
+                "particle_position_y",
+                "particle_position_z")
+
+@requires_outputlog(os.path.dirname(__file__), 
+                    "AMRZeldovichPancake.enzo") # Verifies that OutputLog exists
+def test_standard():
+    sim = sim_dir_load("AMRZeldovichPancake.enzo",
+                       path="./Cosmology/AMRZeldovichPancake",
+                       find_outputs=True)
+    sim.get_time_series()
+    yield VerifySimulationSameTest(sim)
+    base_pf = sim[0]
+    fields = [f for f in _base_fields if f in base_pf.h.field_list]
+    # Only test the last output.
+    pf = sim[-1]
+    for test in standard_small_simulation(pf, fields): yield test
+
+# Tests that OutputLog exists and fails otherwise
+def test_exist():
+    filename = os.path.dirname(__file__) + "/OutputLog"
+    if not os.path.exists(filename):
+        raise EnzoTestOutputFileNonExistent(filename)

File run/Cosmology/ZeldovichPancake/ZeldovichPancake__test_almost_standard.py

+###
+### This is a testing template
+###
+
+import os
+from yt.mods import *
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    VerifySimulationSameTest, \
+    sim_dir_load
+from yt.frontends.enzo.answer_testing_support import \
+    requires_outputlog, \
+    standard_small_simulation
+
+_base_fields = ("Density",
+                "y-velocity",
+                "z-velocity",
+                "Gas_Energy",
+                "particle_position_x",
+                "particle_position_y",
+                "particle_position_z")
+
+@requires_outputlog(os.path.dirname(__file__), 
+                    "ZeldovichPancake.enzo") # Verifies that OutputLog exists
+def test_almost_standard():
+    sim = sim_dir_load("ZeldovichPancake.enzo",
+                       path="./Cosmology/ZeldovichPancake",
+                       find_outputs=True)
+    sim.get_time_series()
+    yield VerifySimulationSameTest(sim)
+    base_pf = sim[0]
+    fields = [f for f in _base_fields if f in base_pf.h.field_list]
+    # Only test the last output.
+    pf = sim[-1]
+    for test in standard_small_simulation(pf, fields): yield test
+
+# Tests that OutputLog exists and fails otherwise
+def test_exist():
+    filename = os.path.dirname(__file__) + "/OutputLog"
+    if not os.path.exists(filename):
+        raise EnzoTestOutputFileNonExistent(filename)

File run/GravitySolver/TestOrbit/TestOrbit__test_orbit.py

     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield AllFieldValuesTest(pf, field, decimals=13)
+    tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+    pf = sim[-1]
+    for field in _fields:
+        yield AllFieldValuesTest(pf, field, decimals=tolerance)

File run/Hydro/Hydro-1D/FreeExpansion/FreeExpansion__test_free_expansion.py

         return ray_length * ray['t'][ipos]
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, 1.0e-2, 0.0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, 10**-tolerance, 0.0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_collapse_max_value():
     sim = sim_dir_load(_pf_name, path=_dir_name, 
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestFreeExpansionDistance(pf)
+    pf = sim[-1]
+    yield TestFreeExpansionDistance(pf)

File run/Hydro/Hydro-1D/PressurelessCollapse/PressurelessCollapse.enzo

 TopGridDimensions      = 100
 SelfGravity            = 1       // gravity on
 TopGridGravityBoundary = 1       // Isolated BCs
-LeftFaceBoundaryCondition  = 1    // outflow ?
-RightFaceBoundaryCondition = 1    // outflow ?
+LeftFaceBoundaryCondition  = 1   // outflow
+RightFaceBoundaryCondition = 1   // outflow
 PressureFree           = 1       // turn off pressure
 #
 #  set I/O and stop/start parameters
 #  set hydro parameters
 #
 Gamma                  = 1.4
-CourantSafetyNumber    = 0.05    // needs to be lower for pressurefree
-PPMDiffusionParameter  = 0       // diffusion off
+CourantSafetyNumber    = 0.4
+PPMDiffusionParameter  = 0      // diffusion off
 #
 #  set grid refinement parameters
 #
-StaticHierarchy           = 1    // dynamic hierarchy
-MaximumRefinementLevel    = 1    // use up to 2 levels
-RefineBy                  = 4    // refinement factor
-MinimumSlopeForRefinement = 0.2  // set this to <= 0.2 to refine CD
+StaticHierarchy           = 1    // static hierarchy
 #
 #  set some global parameters
 #
-SubcycleSafetyFactor   = 2       // 
-tiny_number            = 1.0e-10 // fixes velocity slope problem
-MinimumEfficiency      = 0.4     // better value for 1d than 0.2
-Initialdt = 1e-6
+Initialdt              = 1.0e-6
+

File run/Hydro/Hydro-1D/PressurelessCollapse/make_plots.py

+from yt.mods import *
+import os
+import sys
+import pylab
+
+def make_plot(pfname):
+    pf = load(pfname)
+    ### extract an ortho_ray (1D solution vector)
+    ray = pf.h.ortho_ray(0, [0.5, 0.5])
+
+    ### define fields vector
+    fields = ('Density', 'x-velocity', 'TotalEnergy', 'Pressure' )
+
+    ### make plot
+
+    pylab.figure(1, figsize=(8,7))
+
+    # Density Plot
+    a = pylab.axes([0.09, 0.57, 0.38, 0.38])
+    pylab.axhline(0,color='k',linestyle='dotted')
+    pylab.plot(ray['x'],ray['Density'], 'ro', ms=4)
+
+    pylab.xlabel('Position')
+    pylab.ylabel('Density')
+
+    # Velocity Plot
+    a = pylab.axes([0.59, 0.57, 0.38, 0.38])
+    pylab.axhline(0,color='k',linestyle='dotted')
+    pylab.plot(ray['x'],ray['x-velocity'], 'ro', ms=4)
+
+    pylab.xlabel('Position')
+    pylab.ylabel('Velocity')
+
+    # TotalEnergy Plot
+    a = pylab.axes([0.59, 0.07, 0.38, 0.38])
+    pylab.axhline(0,color='k',linestyle='dotted')
+    pylab.plot(ray['x'],ray['TotalEnergy'], 'ro', ms=4)
+
+    pylab.xlabel('Position')
+    pylab.ylabel('Total Energy')
+
+    ### Save plot
+    pylab.savefig('%s.png' % pf)
+    pylab.clf()
+    
+if __name__ == '__main__':
+    for i in range(11):
+        try: 
+            make_plot('DD%04i/data%04i'% (i,i))
+        except:
+            break
+
+    # To make a movie using avconv, uncomment the following 2 lines
+    # os.system('avconv -r 10 -i data%04d.png -threads 8 -pass 1 -an -f webm -b 2000k movie.webm')
+    # os.system('avconv -r 10 -i data%04d.png -threads 8 -pass 2 -an -f webm -b 2000k movie.webm')
+

File run/Hydro/Hydro-1D/Toro-6-ShockTube/Toro-6-ShockTube__test_toro6.py

+import os
 from yt.mods import *
-from yt.funcs import *
 from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    VerifySimulationSameTest, \
+    sim_dir_load
 from yt.frontends.enzo.answer_testing_support import \
     requires_outputlog, \
-    ShockTubeTest
-import os
+    ShockTubeTest, \
+    standard_small_simulation
+
 
 _data_file = 'DD0001/data0001'
 _solution_file = 'Toro-6-ShockTube_t=2.0_exact.txt'
 _rtol = 1.0e-6
 _atol = 1.0e-7
 
-# Verifies that OutputLog exists
+
+_base_fields = ('Density', 'Gas_Energy')
+
 @requires_outputlog(os.path.dirname(__file__), "Toro-6-ShockTube.enzo")
+def test_almost_standard():
+    sim = sim_dir_load("Toro-6-ShockTube.enzo",
+                       path="./Hydro/Hydro-1D/Toro-6-ShockTube",
+                       find_outputs=True)
+    sim.get_time_series()
+    yield VerifySimulationSameTest(sim)
+    base_pf = sim[0]
+    fields = [f for f in _base_fields if f in base_pf.h.field_list]
+    # Only test the last output.
+    pf = sim[-1]
+    for test in standard_small_simulation(pf, fields): yield test
+
+# Verifies that OutputLog exists; fails the test otherwise
+def test_exist():
+    filename = os.path.dirname(__file__) + "/OutputLog"
+    if not os.path.exists(filename):
+        raise EnzoTestOutputFileNonExistent(filename)
+
 def test_toro6():
     test = ShockTubeTest(_data_file, _solution_file, _fields, 
                          _les, _res, _rtol, _atol)

File run/Hydro/Hydro-2D/FreeExpansionAMR/FreeExpansionAMR__test_free_expansion.py

         return ray_length * ray['t'][ipos]
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, 1.0e-2, 0.0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, 10**-tolerance, 0.0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_collapse_max_value():
     sim = sim_dir_load(_pf_name, path=_dir_name, 
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestFreeExpansionDistance(pf)
+    pf = sim[-1]
+    yield TestFreeExpansionDistance(pf)

File run/Hydro/Hydro-2D/NohProblem2D/NohProblem2D__test_noh2d.py

         return np.array([dens.mean(), dens.std(), dens.min(), dens.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 class TestRadialDensity(AnswerTestingTest):
     _type_name = "noh2d_radial"
         return na.array(diag_den)
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-3, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
     def plot(self):
         dd = self.pf.h.all_data()
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestShockImage(pf)
-        yield TestRadialDensity(pf)
+    pf = sim[-1]
+    yield TestShockImage(pf)
+    yield TestRadialDensity(pf)

File run/Hydro/Hydro-2D/NohProblem2DAMR/NohProblem2DAMR__test_noh2damr.py

         return np.array([dens.mean(), dens.std(), dens.min(), dens.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 class TestRadialDensity(AnswerTestingTest):
     _type_name = "noh2damr_radial"
         return na.array(diag_den)
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-3, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
     def plot(self):
         dd = self.pf.h.all_data()
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestShockImage(pf)
-        yield TestRadialDensity(pf)
+    pf = sim[-1]
+    yield TestShockImage(pf)
+    yield TestRadialDensity(pf)

File run/Hydro/Hydro-2D/RampedKelvinHelmholtz2D/RampedKelvinHelmholtz2D.enzo

 Gamma                       = 1.6667
 Mu                          = 1
 HydroMethod                 = 0
-CourantSafetyNumber         = 0.8
+CourantSafetyNumber         = 0.4
 Theta_Limiter               = 1.9
 RiemannSolver               = 3
 #ReconstructionMethod        = 0

File run/Hydro/Hydro-3D/NohProblem3D/NohProblem3D__test_noh3d.py

         return np.array([dens.mean(), dens.std(), dens.min(), dens.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
     
 class TestRadialDensity(AnswerTestingTest):
     _type_name = "noh3d_radial"
         return na.array(diag_den)
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-3, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
     def plot(self):
         dd = self.pf.h.all_data()
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestShockImage(pf)
-        yield TestRadialDensity(pf)
+    pf = sim[-1]
+    yield TestShockImage(pf)
+    yield TestRadialDensity(pf)

File run/Hydro/Hydro-3D/NohProblem3DAMR/NohProblem3DAMR__test_noh3damr.py

         return np.array([dens.mean(), dens.std(), dens.min(), dens.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 class TestRadialDensity(AnswerTestingTest):
     _type_name = "noh3damr_radial"
         return na.array(diag_den)
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-3, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
     def plot(self):
         dd = self.pf.h.all_data()
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestShockImage(pf)
-        yield TestRadialDensity(pf)
+    pf = sim[-1]
+    yield TestShockImage(pf)
+    yield TestRadialDensity(pf)

File run/Hydro/Hydro-3D/ProtostellarCollapse_Std/ProtostellarCollapse_Std.enzotest

 gravity = True
 dimensionality = 3
 max_time_minutes = 3
-fullsuite = True
-pushsuite = True
-quicksuite = True
+fullsuite = False
+pushsuite = False
+quicksuite = False

File run/Hydro/Hydro-3D/RotatingCylinder/RotatingCylinder__test_rotating_cylinder.py

                                                         "AngularMomentumZ"]))
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-3, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_rotating_cylinder():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        yield TestLVariation(pf)
+    pf = sim[-1]
+    yield TestLVariation(pf)

File run/MHD/1D/BrioWu-MHD-1D/BrioWu-MHD-1D__test_briowu.py

     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield AllFieldValuesTest(pf, field, decimals=13)
+    tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+    pf = sim[-1]
+    for field in _fields:
+        yield AllFieldValuesTest(pf, field, decimals=tolerance)

File run/MHD/2D/MHD2DRotorTest/MHD2DRotorTest.enzotest

 gravity = False
 AMR = True
 dimensionality = 2
-max_time_minutes = 2
+max_time_minutes = 2.5
 fullsuite = True
 pushsuite = True
 quicksuite = True

File run/MHD/2D/MHD2DRotorTest/MHD2DRotorTest__test_almost_standard.py

+import os
+from yt.mods import *
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    VerifySimulationSameTest, \
+    sim_dir_load
+from yt.frontends.enzo.answer_testing_support import \
+    requires_outputlog, \
+    standard_small_simulation
+
+_base_fields = ("Density",
+                "z-velocity",
+                "Gas_Energy",
+                "particle_position_x",
+                "particle_position_y",
+                "particle_position_z")
+
+@requires_outputlog(os.path.dirname(__file__), 
+                    "MHD2DRotorTest.enzo") # Verifies that OutputLog exists
+def test_standard():
+    sim = sim_dir_load("MHD2DRotorTest.enzo",
+                       path="./MHD/2D/MHD2DRotorTest",
+                       find_outputs=True)
+    sim.get_time_series()
+    yield VerifySimulationSameTest(sim)
+    base_pf = sim[0]
+    fields = [f for f in _base_fields if f in base_pf.h.field_list]
+    # Only test the last output.
+    pf = sim[-1]
+    for test in standard_small_simulation(pf, fields): yield test
+
+# Verifies that OutputLog exists; fails the test otherwise
+def test_exist():
+    filename = os.path.dirname(__file__) + "/OutputLog"
+    if not os.path.exists(filename):
+        raise EnzoTestOutputFileNonExistent(filename)

File run/MHD/2D/MHD2DRotorTest/MHD2DRotorTest__test_rotor.py

 
 _pf_name = os.path.basename(os.path.dirname(__file__)) + ".enzo"
 _dir_name = os.path.dirname(__file__)
-_fields = ('Density', 'Bx','Pressure','MachNumber')
+_fields = ('Density', 'Bx','Pressure')
 
 class TestRotorImage(AnswerTestingTest):
     _type_name = "mhd_rotor_image"
         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_rotor():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestRotorImage(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestRotorImage(pf, field)

File run/MHD/2D/SedovBlast-MHD-2D-Fryxell/SedovBlast-MHD-2D-Fryxell__test_fryxell.py

         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_fryxell():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestFryxellImage(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestFryxellImage(pf, field)

File run/MHD/2D/SedovBlast-MHD-2D-Gardiner/SedovBlast-MHD-2D-Gardiner.enzo

 TopGridGravityBoundary     = 0
 LeftFaceBoundaryCondition  = 3 3 
 RightFaceBoundaryCondition = 3 3
-#DomainLeftEdge = -0.5 -0.75 0 
-#DomainRightEdge = 0.5 0.75 0 
+DomainLeftEdge = -0.5 -0.5 0 
+DomainRightEdge = 0.5 0.5 0 
 
 #
 #  set I/O and stop/start parameters

File run/MHD/2D/SedovBlast-MHD-2D-Gardiner/SedovBlast-MHD-2D-Gardiner__test_gardiner.py

         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_gardiner():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestGardinerImage(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestGardinerImage(pf, field)

File run/MHD/2D/SedovBlast-MHD-2D-Gardiner/scripts.py

 
 ### define fields vector
 fields = ('Density', 'Pressure', 'Bx', 'By')
-pc = PlotCollection(pf, center=[0.5,0.5,0.5])
+pc = PlotCollection(pf, center=[0.0,0.0,0.0])
 
 for f in fields:
     pc.add_slice(f, 2)

File run/RadiationTransport/PhotonShadowing/PhotonShadowing__test_photonshadowing.py

         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_photon_shadowing():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestPhotonShadowing(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestPhotonShadowing(pf, field)

File run/RadiationTransport/PhotonShadowing/scripts.py

 pf = load("DD%4.4d/data%4.4d" % (last,last))
 
 pc = PlotCollection(pf, center=[0.5,0.5,0.5])
-pc.add_slice('kph',2)
+pc.add_slice('HI_kph',2)
 pc.add_slice('Neutral_Fraction',2)
 pc.add_slice('Temperature',2)
 pc.save()

File run/RadiationTransport/PhotonTest/PhotonTest__test_photontest.py

         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_photon_test():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestPhotonTest(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestPhotonTest(pf, field)

File run/RadiationTransport/PhotonTestAMR/PhotonTestAMR__test_amrphotontest.py

         return np.array([dd.mean(), dd.std(), dd.min(), dd.max()])
 
     def compare(self, new_result, old_result):
-        assert_allclose(new_result, old_result, rtol=1e-13, atol=0)
+        tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
+        assert_allclose(new_result, old_result, rtol=10**-tolerance, atol=0)
 
 @requires_outputlog(_dir_name, _pf_name)
 def test_amr_photon_test():
     sim = sim_dir_load(_pf_name, path=_dir_name,
                        find_outputs=True)
     sim.get_time_series()
-    for pf in sim:
-        for field in _fields:
-            yield TestAMRPhotonTest(pf, field)
+    pf = sim[-1]
+    for field in _fields:
+        yield TestAMRPhotonTest(pf, field)

File run/make_new_tests.py

-#!/usr/bin/env python
-import shutil, os
-
-# Do not run the standard tests on these test problems.
-ignore_list = ('GravityTest',)
-
-template = open("test_type.py.template").read()
-
-for root, dirs, files in os.walk("."):
-    for fn in files:
-        if fn.endswith(".enzotest") and \
-          os.path.basename(fn)[:-9] not in ignore_list:
-            simname = os.path.splitext(fn)[0]
-            simpath = root
-            testname = os.path.basename(fn)[:-9]
-            oname = os.path.join(root, testname + "__test_standard.py")
-            output = template % dict(filename = fn[:-4], simpath = simpath)
-            open(oname, "w").write(output)

File run/test_runner.py

 from yt.utilities.logger import \
     disable_stream_logging, ufstring
 disable_stream_logging()
+
+# Set the filename for the latest version of the gold standard
+# and for the default local standard output
+ytcfg["yt", "gold_standard_filename"] = str("enzogold2.2")
+ytcfg["yt", "local_standard_filename"] = str("enzolocal2.2")
 from yt.utilities.answer_testing.framework import \
     AnswerTesting
 
     def addSuccess(self, test):
         self.successes.append("%s: PASS" % (test))
 
-    def finalize(self, result, outfile=None):
+    def finalize(self, result, outfile=None, sims_not_finished=[], sim_only=False):
         print 'Testing complete.'
+        print 'Sims not finishing: %i' % len(sims_not_finished)
         print 'Number of errors: %i' % len(self.errors)
         print 'Number of failures: %i' % len(self.failures)
         print 'Number of successes: %i' % len(self.successes)
         if outfile is not None:
             outfile.write('Test Summary\n')
+            outfile.write('Sims Not Finishing: %i\n' % len(sims_not_finished))
             outfile.write('Tests Passed: %i\n' % len(self.successes))
             outfile.write('Tests Failed: %i\n' % len(self.failures))
-            outfile.write('Tests Errored: %i\n' % len(self.errors))
+            outfile.write('Tests Errored: %i\n\n' % len(self.errors))
+            outfile.write('Relative error tolerance: 1e-%i\n' % self.tolerance)
+            if self.bitwise:
+                outfile.write('Bitwise tests included\n')
+            else:
+                outfile.write('Bitwise tests not included\n')
+            if sim_only:
+                outfile.write('\n')
+                outfile.write('Simulations run, but not tests (--sim-only)\n')
+                return
             outfile.write('\n\n')
 
+            if sims_not_finished:
+                print 'Simulations which did not finish in allocated time:'
+                print '(Try rerunning each/all with --time-multiplier=2)'
+                outfile.write('Simulations which did not finish in allocated time:\n')
+                outfile.write('(Try rerunning each/all with --time-multiplier=2)\n')
+                for notfin in sims_not_finished: 
+                    print notfin
+                    outfile.write(notfin + '\n')
+                outfile.write('\n')
+
             outfile.write('Tests that passed: \n')
             for suc in self.successes: 
                 outfile.write(suc)
 
             outfile.write('Tests that failed:\n')
             for fail in self.failures: 
-                outfile.write(fail)
-                outfile.write('\n')
+                for li, line in enumerate(fail.split('\\n')):
+                    if li > 0: outfile.write('    ')
+                    outfile.write(line)
+                    outfile.write('\n')
             outfile.write('\n')
 
             outfile.write('Tests that errored:\n')
             for err in self.errors: 
-                outfile.write(err)
-                outfile.write('\n')
+                for li, line in enumerate(err.split('\\n')):
+                    if li > 0: outfile.write('    ')
+                    outfile.write(line)
+                    outfile.write('\n')
             outfile.write('\n')
 
 class EnzoTestCollection(object):
         else:
             self.tests = tests
         self.test_container = []
+        self.sims_not_finished = []
 
     def go(self, output_dir, interleaved, machine, exe_path, sim_only=False,
            test_only=False):
+        self.sim_only = sim_only
         go_start_time = time.time()
         self.output_dir = output_dir
         total_tests = len(self.tests)
+
+        # copy executable to top of testing directory
+        shutil.copy(exe_path, output_dir)
+        exe_path = os.path.join(output_dir, os.path.basename(exe_path))
+        
         if interleaved:
             for i, my_test in enumerate(self.tests):
                 print "Preparing test: %s." % my_test['name']
                                                        plugins=self.plugins))
                 if not test_only:
                     print "Running simulation: %d of %d." % (i+1, total_tests)
-                    self.test_container[i].run_sim()
+                    if not self.test_container[i].run_sim():
+                        self.sims_not_finished.append(self.test_container[i].test_data['name'])
                 if not sim_only:
                     print "Running test: %d of %d." % (i+1, total_tests)
                     self.test_container[i].run_test()
             self.prepare_all_tests(output_dir, machine, exe_path)
             if not test_only: self.run_all_sims()
             if not sim_only: self.run_all_tests()
-        if not sim_only: self.save_test_summary()
+        self.save_test_summary()
         go_stop_time = time.time()
         print "\n\nComplete!"
         print "Total time: %f seconds." % (go_stop_time - go_start_time)
         print "Running all simulations."
         for i, my_test in enumerate(self.test_container):
             print "Running simulation: %d of %d." % (i+1, total_tests)
-            my_test.run_sim()
+            # Did the simulation finish?
+            if not my_test.run_sim():
+                self.sims_not_finished.append(my_test.test_data['name'])
 
     def run_all_tests(self):
         total_tests = len(self.test_container)
         run_passes = run_failures = 0
         dnfs = default_test = 0
         f = open(os.path.join(self.output_dir, results_filename), 'w')
-        self.plugins[1].finalize(None, outfile=f)
-        # for my_test in self.test_container:
-        #     default_only = False
-        #     if my_test.run_finished:
-        #         if my_test.test_data['answer_testing_script'] == 'None' or \
-        #                 my_test.test_data['answer_testing_script'] is None:
-        #             default_only = True
-        #             default_test += 1
-        #         t_passes = 0
-        #         t_failures = 0
-        #         for t_result in my_test.results.values():
-        #             t_passes += int(t_result)
-        #             t_failures += int(not t_result)
-        #         f.write("%-70sPassed: %4d, Failed: %4d" % (my_test.test_data['fulldir'], 
-        #                                                    t_passes, t_failures))
-        #         if default_only:
-        #             f.write(" (default tests).\n")
-        #         else:
-        #             f.write(".\n")
-        #         all_passes += t_passes
-        #         all_failures += t_failures
-        #         run_passes += int(not (t_failures > 0))
-        #         run_failures += int(t_failures > 0)
-        #     else:
-        #         dnfs += 1
-        #         f.write("%-70sDID NOT FINISH\n" % my_test.test_data['fulldir'])
-
-        # f.write("\n")
-        # f.write("%-70sPassed: %4d, Failed: %4d.\n" % ("Total", 
-        #                                               all_passes, all_failures))
-        # f.write("Runs finished with all tests passed: %d.\n" % run_passes)
-        # f.write("Runs finished with at least one failure: %d.\n" % run_failures)
-        # f.write("Runs failed to complete: %d.\n" % dnfs)
-        # f.write("Runs finished with only default tests available: %d.\n" % default_test)
+        self.plugins[1].finalize(None, outfile=f, sims_not_finished=self.sims_not_finished, 
+                                 sim_only=self.sim_only)
         f.close()
         if all_failures > 0 or dnfs > 0:
             self.any_failures = True
         # Check for existence
         if os.path.exists(os.path.join(self.run_dir, 'RunFinished')):
             print "%s run already completed, continuing..." % self.test_data['name']
-            return
+            return True
         
         os.chdir(self.run_dir)
         command = "%s %s" % (machines[self.machine]['command'], 
                           options.time_multiplier):
                 print "Simulation exceeded maximum run time."
                 os.killpg(proc.pid, signal.SIGUSR1)
+                self.finished = False
             running += 1
             time.sleep(1)
         
             f.close()
             print "Simulation completed in %f seconds." % \
                 (sim_stop_time - sim_start_time)
+            self.finished = True
         os.chdir(cur_dir)
+        return self.finished
 
     def run_test(self):
         rf = os.path.join(self.run_dir, 'RunFinished')
                       help="Changeset to use in simulation repo.  If supplied, make clean && make is also run")
     parser.add_option("--run-suffix", dest="run_suffix", default=None, metavar='str',
                       help="An optional suffix to append to the test run directory. Useful to distinguish multiple runs of a given changeset.")
+    parser.add_option("", "--bitwise",
+                      dest="bitwise", default=None, action="store_true", 
+                      help="run bitwise comparison of fields? (trumps strict)")
+    parser.add_option("", "--tolerance",
+                      dest="tolerance", default=None, metavar='int',
+                      help="tolerance for relative precision in comparison (trumps strict)")
+
+    all_strict = ['high', 'medium', 'low']
+    parser.add_option("", "--strict",
+                      dest="strict", default='low', metavar='str',
+                      help="strictness for testing precision: [%s]" % " ,".join(all_strict))
 
     answer_plugin = AnswerTesting()
     answer_plugin.enabled = True
     testproblem_group = optparse.OptionGroup(parser, "Test problem selection options")
     testproblem_group.add_option("", "--suite",
                                  dest="test_suite", default=unknown,
-                                 help="quick: 37 tests in ~25 minutes, push: 48 tests in ~90 minutes, full: 96 tests in ~18 hours.",
+                                 help="quick: 37 tests in ~15 minutes, push: 48 tests in ~60 minutes, full: 96 tests in ~60 hours.",
                                  choices=all_suites, metavar=all_suites)
 
     for var, caster in sorted(known_variables.items()):
         pdb_plugin.enabled = True
         pdb_plugin.enabled_for_failures = True
 
     # Get information about the current repository, set it as the version in
     # the answer testing plugin.
     options.repository = os.path.expanduser(options.repository)
     answer_plugin.configure(options, None)
     reporting_plugin.configure(options, None)
 
+    # Break out if no valid strict level was given
+    if options.strict not in all_strict:
+        sys.exit("Error: %s is not a valid strict level, try --strict=[%s]" % (options.strict, ", ".join(all_strict)))
+
     # Break out if output directory not specified.
     if options.output_dir is None:
         print 'Please enter an output directory with -o option'
             if val == 'None': val = None
             if val == "False": val = False
             construct_selection[var] = caster(val)
+    # if no selection criteria given, run the quick suite
+    if not construct_selection:
+        construct_selection['quicksuite'] = True
     print
     print "Selecting with:"
     for k, v in sorted(construct_selection.items()):
     # the path to the executable we're testing
     exe_path = os.path.join(options.repository, "src/enzo/enzo.exe")
 
-    # Make it happen
+    # If strict is set, then use it to set tolerance and bitwise 
+    # values for later use when the nosetests get called in 
+    # answer_testing_support.py
+    # N.B. Explicitly setting tolerance and/or bitwise trumps 
+    # the strict values
+
+    if options.strict == 'high':
+        if options.tolerance is None:
+            options.tolerance = 13
+        if options.bitwise is None:
+            options.bitwise = True
+    elif options.strict == 'medium':
+        if options.tolerance is None:
+            options.tolerance = 6
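The precedence described in the comment above (explicit `--tolerance`/`--bitwise` flags trump the `--strict` preset) can be sketched as a small pure function. The `high` and `medium` values come from this hunk; the `low` preset, and `medium`'s bitwise default, fall outside the diff, so they are left unresolved (`None`) here rather than guessed:

```python
def resolve_precision(strict, tolerance=None, bitwise=None):
    # Explicitly supplied tolerance/bitwise values trump the preset.
    # Preset values for 'high' and 'medium' follow the hunk above;
    # values not shown in this diff resolve to None.
    presets = {'high': (13, True), 'medium': (6, None)}
    preset_tol, preset_bit = presets.get(strict, (None, None))
    if tolerance is None:
        tolerance = preset_tol
    if bitwise is None:
        bitwise = preset_bit
    return tolerance, bitwise
```

For example, `--strict=high` alone yields a tolerance of 13 with bitwise comparison enabled, while `--strict=high --tolerance=3` keeps the explicit tolerance of 3.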