+yt includes a testing suite that one can run on the codebase to assure that
+no major functional breaks have occurred. This testing suite is based on
+python nosetests_. It consists of unit tests, a basic level of testing in
+which we confirm that functions run without failure and that the units of
+their results make sense. The suite also includes more rigorous answer
+tests, which generate output from yt functions and compare those results
+against the output of the same code in previous versions of yt, checking
+for consistency in results.
+.. _nosetests: https://nose.readthedocs.org/en/latest/
+The testing suite should be run locally by developers to make sure they
+aren't checking in any code that breaks existing functionality. To further
+this goal, an automatic buildbot runs the test suite after each code commit
+to confirm that the latest changes have not broken anything.
+
+How to Run the Unit Tests
+-------------------------
+
+One can run the unit tests in a similar way to running the answer tests.
+First follow the setup instructions for `running answer testing`__ below,
+then simply execute this at the command line to run all unit tests:
+
+ $ cd $YT_DEST/src/yt-hg/yt
+ $ nosetests
+
+If you want to run a specific unit test (and not the entire
+suite), you can do so by specifying the path of the test relative to the
+``$YT_DEST/src/yt-hg/yt`` directory. For example, if you want to run the
+plot_window tests, you'd run:
+ $ nosetests visualization/tests/test_plotwindow.py
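+
+A unit test itself is just a function whose name begins with ``test_``,
+living somewhere nose can discover it (typically a ``tests`` directory like
+the one above). A minimal sketch, assuming the ``fake_random_pf`` helper and
+the NumPy-style assertions exported by ``yt.testing`` (check that module for
+the exact names available in your version):
+
+ from yt.testing import fake_random_pf, assert_equal
+
+ def test_find_max_density():
+     # Build a small in-memory dataset so the test needs no
+     # external data files.
+     pf = fake_random_pf(16)
+     v, c = pf.h.find_max("Density")
+     # fake_random_pf fills fields with positive random values,
+     # so the maximum must be positive.
+     assert_equal(v > 0.0, True)
+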
+Answer Testing
+--------------
+
+Answer tests test **actual data**, and many operations on that data, to make
+sure that answers don't drift over time. This is how we test frontends, as
+opposed to operations, in yt.
+The very first step is to make a directory and copy over the data against which
+you want to test. Currently, we test:
+ * ``DD0010/moving7_0010`` (available in ``tests/`` in the yt distribution)
+ * ``IsolatedGalaxy/galaxy0030/galaxy0030`` (available here: http://yt-project.org/data/ )
+Next, modify the file ``~/.yt/config`` to include a section ``[yt]``
+with the parameter ``test_data_dir``. Set this to point to the
+directory with the test data you want to compare against. Here is an
+example:
+
+ [yt]
+ test_data_dir = /Users/tomservo/src/yt-data
+
+More data will be added over time. To run a comparison, you must first run
+"develop" so that the new nose plugin becomes available:
+ $ cd $YT_DEST/src/yt-hg
+ $ python setup.py develop
+Then, in the same directory,
+ $ nosetests --with-answer-testing
+The current gold standard results will be downloaded from the Amazon cloud
+and compared to what is generated locally. The results from a nose testing
+session are pretty straightforward to understand: the result of each test
+is printed directly to STDOUT. Nose prints a period (``.``) if a test
+passes, an ``F`` if a test fails, and an ``E`` if the test encounters an
+exception or errors out for some reason. If you want to also run tests for
+the 'big' datasets, include the ``--answer-big-data`` flag:
+ $ nosetests --with-answer-testing --answer-big-data
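+
+For example, a session in which one test fails and another errors out might
+end with output along these lines (illustrative only; counts and timings
+will differ):
+
+ ..........F.......E......
+ [tracebacks for the failure and the error]
+ ----------------------------------------------------------------------
+ Ran 26 tests in 312.407s
+
+ FAILED (failures=1, errors=1)
+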
+How to Write Answer Tests
+-------------------------
+Tests can be added in the file ``yt/utilities/answer_testing/framework.py``.
+You can find examples there of how to write a test. Here is a trivial
+example:
+
+ class MaximumValue(AnswerTestingTest):
+     _type_name = "MaximumValue"
+     _attrs = ("field",)
+     def __init__(self, pf_fn, field):
+         super(MaximumValue, self).__init__(pf_fn)
+         self.field = field
+
+     def run(self):
+         v, c = self.pf.h.find_max(self.field)
+         result = np.empty(4, dtype="float64")
+         result[0] = v
+         result[1:] = c
+         return result
+
+     def compare(self, new_result, old_result):
+         assert_equal(new_result, old_result)
+
+This test calculates the location and value of the maximum of a field,
+packs them into the array ``result``, returns that array from ``run``, and
+in ``compare`` asserts that the new and old results are exactly equal. In
+general, to write a new answer test (a usage sketch follows this list):
+ * Subclass ``AnswerTestingTest``
+ * Add the attributes ``_type_name`` (a string) and ``_attrs``
+ (a tuple of strings, one for each attribute that defines the test --
+ see how this is done for projections, for instance)
+ * Implement the two routines ``run`` and ``compare``. The first
+   should return a result and the second should compare a new result to an
+   old result. Neither should yield, but instead actually return. If you
+   need additional arguments to the test, implement an ``__init__`` routine.
+ * Keep in mind that *everything* returned from ``run`` will be stored.
+ So if you are going to return a huge amount of data, please ensure that
+ the test only gets run for small data. If you want a fast way to
+ measure something as being similar or different, either an md5 hash
+ (see the grid values test) or a sum and std of an array act as good proxies.
+ * Typically for derived values, we compare to 10 or 12 decimal places.
+ For exact values, we compare exactly.
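+
+Once a test class like ``MaximumValue`` exists, it is not called directly;
+instead, instances of it are yielded from a test routine decorated with
+``@requires_pf`` (described in the next section). A minimal sketch; the
+dataset path and field names here are placeholders:
+
+ from yt.utilities.answer_testing.framework import requires_pf
+
+ m7 = "DD0010/moving7_0010"
+
+ @requires_pf(m7)
+ def test_maximum_value():
+     # One test instance per field; the answer-testing plugin runs
+     # each instance and compares it against the stored result.
+     for field in ("Density", "Temperature"):
+         yield MaximumValue(m7, field)
+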
+How to Add Data to the Testing Suite
+------------------------------------
+To add data to the testing suite, first write a new set of tests for the data.
+The Enzo example in ``yt/frontends/enzo/tests/test_outputs.py`` is
+considered canonical. Do these things:
+ * Create a new directory, ``tests``, inside the frontend's directory.
+ * Create a new file, ``test_outputs.py``, in the frontend's ``tests``
+   directory.
+ * Create a new routine that operates similarly to the routines in the
+   Enzo example.
+ * This routine should test a number of different fields and data objects.
+ * The test routine itself should be decorated with
+   ``@requires_pf(file_name)``. This decorator can accept the argument
+   ``big_data=True`` if this data is too big to run all the time.
+ * There are ``small_patch_amr`` and ``big_patch_amr`` routines that
+ you can yield from to execute a bunch of standard tests. This is where
+ you should start, and then yield additional tests that stress the
+ outputs in whatever ways are necessary to ensure functionality.
+ * **All tests should be yielded!** (A full sketch follows this list.)
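+
+Putting these steps together, a new ``test_outputs.py`` might look like the
+following sketch, modeled on the Enzo example; the field names and dataset
+path are examples rather than requirements:
+
+ from yt.testing import *
+ from yt.utilities.answer_testing.framework import \
+     requires_pf, small_patch_amr, data_dir_load
+
+ _fields = ("Temperature", "Density", "VelocityMagnitude")
+
+ m7 = "DD0010/moving7_0010"
+
+ @requires_pf(m7)
+ def test_moving7():
+     pf = data_dir_load(m7)
+     yield assert_equal, str(pf), "moving7_0010"
+     # small_patch_amr yields a standard battery of tests over
+     # the given fields, axes, and data objects.
+     for test in small_patch_amr(m7, _fields):
+         yield test
+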
+If you are adding to a frontend that has a few tests already, skip the first
+two steps and add your new routines to the existing ``test_outputs.py`` file.
+
+How to Upload Answers
+---------------------
+
+To upload a new set of answers, you can execute this command:
+ $ nosetests --with-answer-testing frontends/enzo/ --answer-store --answer-name=whatever
+The current version of the gold standard can be found in the variable
+``_latest`` inside ``yt/utilities/answer_testing/framework.py``. As of
+this writing, it is ``gold001``. Note that the name of the
+suite of results is now disconnected from the parameter file's name, so you
+can upload multiple outputs with the same name and not collide.
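+
+To compare against a set of answers stored under a custom name, it should
+suffice to pass the same ``--answer-name`` flag without ``--answer-store``
+(an inference from the options above; see ``framework.py`` for the
+authoritative flag handling):
+
+ $ nosetests --with-answer-testing frontends/enzo/ --answer-name=whatever
+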
+To upload answers, you **must** have the package boto installed, and you
+**must** have Amazon keys provided by Matt. Contact Matt for these keys.
+
+A number of tasks remain for the testing suite:
+
+ * Many of the old answer tests need to be converted. This includes tests
+   for halos, volume renderings, data object access, and profiles. These
+   will require taking the old tests and converting them over, but this
+   process should be straightforward.
+ * We need to have data for Orion, Nyx, FLASH, and any other codes that
+   we want to test.
+ * Tests need to be written for the Orion, Nyx, and FLASH frontends.