Britton Smith committed d1665e2 Merge

Merged.


Files changed (2)

doc/manual/source/user_guide/EnzoTestSuite.rst

 criteria, including (but not limited to) the physics included, the
 estimated runtime of the test, and the dimensionality.  For
 convenience, three pre-created, overlapping sets of tests are
-provided:
+provided.  For each set of tests, the test suite can automatically
+pull the "gold standard" results from a remote server, or you can
+generate your own standard locally against which to compare.
 
-1.  The "quick suite" (``--quicksuite=True``).  This is composed of
+1.  The "quick suite" (``--suite=quick``).  This is composed of
 small calculations that test critical physics packages both
 alone and in combination.  The intent of this package is to be run
-relatively frequently (multiple times a day) to ensure that bugs have
-not been introduced during the code development process.  All runs 
-in the quick suite use no more than a single processor.  The total 
-run time should be about 25 minutes.  The gold standard results for 
-the quick suite alone can be downloaded 
-`here <http://enzo-project.org/tests/gold_standard_quick.tar.gz>`_.
+automatically and relatively frequently (multiple times a day) on 
+a remote server to ensure that bugs have not been introduced during the code 
+development process.  All runs in the quick suite use no more than 
+a single processor.  The total run time should be about 25 minutes.  
 
 2.  The "push suite" (``--suite=push``).  This is a slightly 
 larger set of tests, encompassing all of the quick suite as well as 
 additional module tests.  The intent of this package is to provide a thorough validation 
 of the code prior to changes being pushed to the main repository.  The 
 total run time is roughly 90 minutes and all simulations use only a single 
-processor.  The gold standard results for the push suite can be downloaded 
-`here <http://enzo-project.org/tests/gold_standard_push.tar.gz>`_.
+processor.  
 
 3.  The "full suite" (``--suite=full``).  This encompasses essentially 
 all of the test simulations contained within the run directory.  This suite 
 provides a rigorous validation of the code in a wide variety of 
 situations, and is intended to be run prior to major changes being pushed 
 to the stable branch of the code.  A small number of simulations in the full 
 suite are designed to be run on 2 processors and will take multiple hours to 
-complete.  The total run time is roughly 36 hours.  The gold standard results
-for the full suite can be downloaded 
-`here <http://enzo-project.org/tests/gold_standard_full.tar.gz>`_.
+complete.  The total run time is roughly 36 hours.  
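+
+For each suite, the test runner is invoked with the corresponding
+``--suite`` flag (a quick preview of the walkthrough below; the
+output directory is a placeholder of your choosing):
+
+::
+
+    python test_runner.py --suite=quick -o <output_directory>
+    python test_runner.py --suite=push  -o <output_directory>
+    python test_runner.py --suite=full  -o <output_directory>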
 
 How to run the test suite
 -------------------------
 
-The Enzo test suite is run within the ``run/`` subdirectory of the
-Enzo source distribution, using the ``test_runner.py`` file.  To
-run the test suite, follow these instructions:
-
-1.  Before running the test suite, you should download the "gold
-standard" results for the 
-`quick <http://enzo-project.org/tests/gold_standard_quick.tar.gz>`_, 
-`push <http://enzo-project.org/tests/gold_standard_push.tar.gz>`_, or 
-`full <http://enzo-project.org/tests/gold_standard_full.tar.gz>`_ 
-suites and untar that file into a convenient directory.
-
-2.  Compile Enzo.  The gold standard calculations use the default 
+1.  Compile Enzo.  The gold standard calculations use the default 
 compiler settings that can be restored with ``make default``.  
 If you use significantly different compilation options
-(higher-level optimization in particular) you may see somewhat
-different outputs that will result in failed tests.
-
-3.  Go into the ``run/`` subdirectory in the Enzo repository and
-type the following command:
+(higher levels of optimization in particular) you may see somewhat
+different outputs that will result in failed tests.  To compile 
+Enzo with the standard settings, run the following commands:
 
 ::
 
-    ./test_runner.py --quicksuite=True  --compare-dir=/path/to/gold_standard \
-            --output-dir=/enzo/test/directory
+    cd <enzo_root>/src/enzo
+    make default
+    make clean
+    make
 
-In this comand, ``--quicksuite=True`` instructs the test runner to
+Make sure you move the resulting ``enzo.exe`` file to somewhere in 
+your executable search path.
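+
+For example (``~/bin`` is only an illustration; any directory already 
+in your path will do):
+
+::
+
+    mv <enzo_root>/src/enzo/enzo.exe ~/bin/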
+
+2.  Get yt.  The Enzo tests are generated and compared using the 
+yt analysis toolkit.  If you do not yet have yt, visit 
+http://yt-project.org/#getyt for installation instructions.  
+If you already have yt and it is in your path, make sure you are using
+the most up-to-date version by running the following command:
+
+::
+
+    yt update
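+
+If you are unsure which yt your shell picks up, a quick check (this 
+assumes yt is importable from your current Python) is:
+
+::
+
+    python -c "import yt; print yt.__version__"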
+
+3.  Generate the test files.  The testing suite operates on a number 
+of standard tests created for each test problem, so you must first
+generate the test files by executing the following
+commands:
+
+::
+
+    cd <enzo_root>/run
+    python make_new_tests.py
+
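+Each test problem is described by a file with the ``.enzotest`` 
+extension (see the section on adding new tests below).  To see which 
+test problems are available, you can list these files:
+
+::
+
+    find . -name "*.enzotest"
+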
+4.  Run the testing suite.  While remaining in the ``run/`` 
+subdirectory, you can initiate the generation of the quick suite test
+simulations and their comparison against the gold standard by 
+running the following command:
+
+::
+
+    python test_runner.py --suite=quick -o <external_directory_where_tests_will_reside> 
+
+In this command, ``--suite=quick`` instructs the test runner to
 use the quick suite (other possible keywords here are
-``--pushsuite=True`` and ``--fullsuite=True``).
+``--suite=push`` and ``--suite=full``).
+``-o`` (short for ``--output-dir``) instructs the test runner to
+write output to a user-specified directory (preferably outside of the
-``--compare-dir=/path/to/gold_standard`` instructs the test runner
-to use the set of data files in the listed directory as a gold
-standard for comparison.  It is also possible to choose sets of tests
-that are sorted by dimensionality, physics modules, runtime, number of
-processors required, and other criteria.  A single named test can be run 
-by giving ``--name=<name of test>``.  Type ``./test_runner.py
---help`` for a more complete listing.
+write output to a user-specified directory (preferably outside of the
+enzo root hierarchy).  For a full description of the many flags 
+associated with test_runner.py, see the section on running more tests below.
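+
+A complete listing of the available flags can also be obtained 
+directly from the script:
+
+::
+
+    python test_runner.py --help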
+
+5.  Review the results.  While the test_runner is executing, the results 
+appear at the terminal in real time, and they are also written to a 
+file at the end of the run.  The test_runner 
+generates a couple of subdirectories in the output directory you provided.  
+These will look something like this:
+
+::
+
+    ls <external_directory_where_tests_will_reside> 
+
+    gold_quick  fe7d4e298cb2
+
+The first subdirectory is where the gold standard results were
+downloaded onto your system.  The files in that directory are Python 
+``shelve`` objects.  The second subdirectory is named for the unique hash 
+of the version of Enzo that you used to run your tests.  Within this 
+directory, you should see the individual test problems that you ran, along 
+with a file called test_results.txt.  This file contains statistics on
+all of the tests that passed and failed, as well as the reasons why
+failures occurred.  If you get a test failure with a brand new version
+of the code (i.e. no modifications), you should report your results
+to the enzo-users mailing list.  However, if you have modified the source
+and you receive some failures, you can use test_results.txt to track down
+the source of the problems.
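+
+If you wish to inspect the gold standard data directly, the files can 
+be opened with the standard library ``shelve`` module.  A minimal 
+sketch (the file name is a placeholder; use one of the actual files 
+in ``gold_quick``):
+
+::
+
+    import shelve
+    # open one of the shelve files in the gold standard directory
+    results = shelve.open("gold_quick/<test_name>")
+    print results.keys()
+    results.close()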
 
 
 How to add a new test to the library
 (with extension ``.enzotest``).  The latter contains a set of
 parameters that specify the properties of the test.  Consider the test
 suite parameter file for InteractingBlastWaves, which can be found in the
-``run/Hydro/Hydro-1D/InteractingBlastWavest`` directory:
+``run/Hydro/Hydro-1D/InteractingBlastWaves`` directory:
 
 ::
 
     ./test_runner.py --repo=/path/to/desired/enzo/repo  \
          --compare-dir=/path/to/new/reference/directory \
          --output-dir=/path/to/output/directory
-
-
-
-

run/test_runner.py

 import imp
 import optparse
 import os.path
+import os
 import shutil
 import signal
 import subprocess
 
 import nose
 from nose.loader import TestLoader
+from nose.plugins import Plugin
 
 from yt.config import ytcfg
 ytcfg["yt","suppressStreamLogging"] = "True"
     commands.bisect(u,repo,rev=options.bad,**bisection_default_corrector("bad",True))
     commands.bisect(u,repo,**bisection_default_corrector("command",command))
 
+class ResultsSummary(Plugin):
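+    """Nose plugin that collects errors, failures, and successes
+    as tests run, then reports them when the run finalizes."""
+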
+    def configure(self, options, conf):
+        super(ResultsSummary, self).configure(options, conf)
+        if not self.enabled:
+            return
+        self.errors = []
+        self.failures = []
+        self.successes = []
+
+    def addError(self, test, err):
+        self.errors.append("%s: ERROR %s" % (test, err))
+
+    def addFailure(self, test, err):
+        self.failures.append("%s: FAILURE %s" % (test, err))
+
+    def addSuccess(self, test):
+        # note: only the test name is recorded on success
+        self.successes.append("%s: PASS" % test)
+
+    def finalize(self, result):
+        print self.errors, self.failures, self.successes
+
 class EnzoTestCollection(object):
     def __init__(self, tests = None, verbose=True, args = None,
                  plugins = None):
                 shutil.copy(os.path.join(self.test_dir, version_filename),
                             os.path.join(self.run_dir, version_filename))
                 if self.exe_path is not None:
-                    shutil.copy(self.exe_path, os.path.join(self.run_dir, self.local_exe))
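+                    # symlink the executable instead of copying it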
+                    os.symlink(self.exe_path, os.path.join(self.run_dir, self.local_exe))
             else:
                 print "%s already exists. Skipping directory." % self.test_data['name']
         else:
             shutil.copy(os.path.join(self.test_dir, version_filename),
                         os.path.join(self.run_dir, version_filename))
             if self.exe_path is not None:
-                shutil.copy(self.exe_path, os.path.join(self.run_dir, self.local_exe))
+                os.symlink(self.exe_path, os.path.join(self.run_dir, self.local_exe))
 
     def _create_run_script(self):
         template_path = os.path.join(os.path.dirname(__file__), 
     answer_plugin = AnswerTesting()
     answer_plugin.enabled = True
     answer_plugin.options(parser)
+    reporting_plugin = ResultsSummary()
+    reporting_plugin.enabled = True
 
     all_suites = ['quick', 'push', 'full']
     suite_vars = [suite+"suite" for suite in all_suites]
                                  dest="test_suite", default=unknown,
                                  help="quick: 37 tests in ~25 minutes, push: 48 tests in ~90 minutes, full: 96 tests in ~18 hours.",
                                  choices=all_suites, metavar=all_suites)
+
     for var, caster in sorted(known_variables.items()):
         if var not in suite_vars:
             print "adding ", var
     answer_plugin._my_version = rev_hash
 
     answer_plugin.configure(options, None)
+    reporting_plugin.configure(options, None)
 
     # Break out if output directory not specified.
     if options.output_dir is None:
         bisector(options,args)
         sys.exit(0)
     etc = EnzoTestCollection(verbose=options.verbose, args=args,
-                             plugins = [answer_plugin])
+                             plugins = [answer_plugin, reporting_plugin])
 
     construct_selection = {}
     if options.test_suite is not None:
 
     # Store the results locally or in the cloud.
     answer_plugin.finalize()
+    reporting_plugin.finalize(None)
 
     try:
         import json