Matthew Turk committed 521bc31 Merge

Merged in chummels/yt-doc (pull request #71)

Adding Answer Testing Docs to Documentation

Files changed (3)

source/advanced/index.rst

    debugdrive
    external_analysis
    developing
+   testing
    reason_architecture

source/advanced/testing.rst

+.. _testing:
+
+=======
+Testing
+=======
+
+yt includes a testing suite that can be run on the codebase to ensure that
+no major functionality has been broken.  The suite is based on the Python
+nosetests_ framework.  It includes unit tests, a basic level of testing in
+which we confirm that functions run without failure and that the units of
+the quantities they work with make sense.  It also includes more rigorous
+answer tests, which generate output from yt functions and compare those
+results against the output of the same code in previous versions of yt to
+check for consistency.
+
+.. _nosetests: https://nose.readthedocs.org/en/latest/
+
+Developers should run the testing suite locally to make sure they aren't
+checking in code that breaks existing functionality.  To further this goal,
+an automated buildbot runs the test suite after each commit to confirm
+that nothing has recently broken.
+
+.. _unit_testing:
+
+Unit Testing
+------------
+
+What Unit Tests Do
+^^^^^^^^^^^^^^^^^^
+
+Unit tests confirm that isolated pieces of yt run without failure and that
+the units of the quantities they work with make sense.
+
+How to Run Unit Tests
+^^^^^^^^^^^^^^^^^^^^^
+
+Unit tests are run in much the same way as answer tests.  First follow the
+setup instructions for `running answer testing`__, then execute the
+following at the command line to run the entire suite of unit tests:
+
+__ run_answer_testing_
+
+.. code-block:: bash
+
+   $ nosetests
+
+If you want to run a specific unit test (rather than the entire suite),
+specify the path of the test relative to the ``$YT_DEST/src/yt-hg/yt``
+directory.  For example, to run the plot_window tests:
+
+.. code-block:: bash
+
+   $ nosetests visualization/tests/test_plotwindow.py
+
+How to Write Unit Tests
+^^^^^^^^^^^^^^^^^^^^^^^
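+
+Unit tests are written as functions whose names begin with ``test_`` so that
+nose can discover them.  As a minimal sketch (assuming the ``fake_random_pf``
+and ``assert_equal`` helpers in ``yt.testing``; the specific check is
+illustrative):
+
+.. code-block:: python
+
+   import numpy as np
+
+   from yt.testing import fake_random_pf, assert_equal
+
+   def test_max_location_in_domain():
+       # Build a small, randomly filled, in-memory dataset.
+       pf = fake_random_pf(16, fields=("Density",))
+       v, c = pf.h.find_max("Density")
+       c = np.asarray(c)
+       # The location of the maximum must lie inside the domain.
+       assert_equal((c >= pf.domain_left_edge).all(), True)
+       assert_equal((c <= pf.domain_right_edge).all(), True)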
+
+.. _answer_testing:
+
+Answer Testing
+--------------
+
+What Answer Tests Do
+^^^^^^^^^^^^^^^^^^^^
+
+Answer tests test **actual data**, and many operations on that data, to make 
+sure that answers don't drift over time.  This is how we will be testing 
+frontends, as opposed to operations, in yt.
+
+.. _run_answer_testing:
+
+How to Run Answer Tests
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The very first step is to make a directory and copy over the data against which
+you want to test.  Currently, we test:
+
+ * ``DD0010/moving7_0010`` (available in ``tests/`` in the yt distribution)
+ * ``IsolatedGalaxy/galaxy0030/galaxy0030`` (available at http://yt-project.org/data/)
+
+Next, modify the file ``~/.yt/config`` to include a section ``[yt]`` 
+with the parameter ``test_data_dir``.  Set this to point to the
+directory with the test data you want to compare.  Here is an example 
+config file:
+
+.. code-block:: none
+
+   [yt]
+   test_data_dir = /Users/tomservo/src/yt-data
+
+More data will be added over time.  To run a comparison, you must first run
+``python setup.py develop`` so that the new nose plugin becomes available:
+
+.. code-block:: bash
+
+   $ cd $YT_DEST/src/yt-hg
+   $ python setup.py develop
+
+Then, in the same directory,
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing
+
+The current gold standard results will be downloaded from the Amazon cloud
+and compared to what is generated locally.  The results of a nose testing
+session are straightforward to understand: the result of each test is
+printed directly to STDOUT.  If a test passes, nose prints a period; if it
+fails, an ``F``; and if it encounters an exception or errors out for some
+reason, an ``E``.  If you also want to run tests on the 'big' datasets,
+then in the yt directory,
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing --answer-big-data
+
+How to Write Answer Tests
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Tests can be added to the file ``yt/utilities/answer_testing/framework.py``.
+You can find examples there of how to write a test.  Here is a trivial example:
+
+.. code-block:: python
+
+   import numpy as np
+
+   from yt.testing import assert_equal
+   from yt.utilities.answer_testing.framework import AnswerTestingTest
+
+   class MaximumValue(AnswerTestingTest):
+       _type_name = "MaximumValue"
+       _attrs = ("field",)
+
+       def __init__(self, pf_fn, field):
+           super(MaximumValue, self).__init__(pf_fn)
+           self.field = field
+
+       def run(self):
+           # Find the maximum value of the field and the point where it occurs.
+           v, c = self.pf.h.find_max(self.field)
+           result = np.empty(4, dtype="float64")
+           result[0] = v
+           result[1:] = c
+           return result
+
+       def compare(self, new_result, old_result):
+           # The freshly computed result must match the stored one exactly.
+           assert_equal(new_result, old_result)
+
+This test calculates the location and value of the maximum of a field, packs
+them into the array ``result``, returns that array from ``run``, and then in
+``compare`` asserts that the new and old results are exactly equal.
+
+To write a new test:
+
+ * Subclass ``AnswerTestingTest``
+ * Add the attributes ``_type_name`` (a string) and ``_attrs`` 
+   (a tuple of strings, one for each attribute that defines the test -- 
+   see how this is done for projections, for instance)
+ * Implement the two routines ``run`` and ``compare``.  The first
+   should return a result and the second should compare a new result to an
+   old one.  Neither should yield; both should actually return.  If you need
+   additional arguments to the test, implement an ``__init__`` routine.
+ * Keep in mind that *everything* returned from ``run`` will be stored.
+   So if you are going to return a huge amount of data, please ensure that
+   the test only gets run for small data.  If you want a fast way to
+   measure something as being similar or different, either an md5 hash
+   (see the grid values test) or the sum and standard deviation of an array
+   act as good proxies, as in the sketch after this list.
+ * Typically for derived values, we compare to 10 or 12 decimal places.  
+   For exact values, we compare exactly.
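+
+As a hypothetical sketch of the hashing approach, inside an
+``AnswerTestingTest`` subclass like the one above (the data access via
+``all_data`` is illustrative, not a requirement):
+
+.. code-block:: python
+
+   import hashlib
+
+   def run(self):
+       # Hash the field values rather than storing the raw array.
+       data = self.pf.h.all_data()[self.field].astype("float64")
+       return hashlib.md5(data.tostring()).hexdigest()
+
+   def compare(self, new_result, old_result):
+       # Hex digests are compared exactly.
+       assert_equal(new_result, old_result)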
+
+How to add data to the testing suite
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add data to the testing suite, first write a new set of tests for the data.  
+The Enzo example in ``yt/frontends/enzo/tests/test_outputs.py`` is 
+considered canonical.  Do these things:
+
+ * Create a new directory, ``tests``, inside the frontend's directory.
+
+ * Create a new file, ``test_outputs.py``, in the frontend's ``tests``
+   directory.
+
+ * Create a new routine that operates similarly to the routines in Enzo's
+   ``test_outputs.py``.
+
+   * This routine should test a number of different fields and data objects.
+
+   * The test routine itself should be decorated with
+     ``@requires_pf(file_name)``.  This decorator can accept the argument
+     ``big_data`` if the data is too big to run all the time.
+
+   * There are ``small_patch_amr`` and ``big_patch_amr`` routines that 
+     you can yield from to execute a bunch of standard tests.  This is where 
+     you should start, and then yield additional tests that stress the 
+     outputs in whatever ways are necessary to ensure functionality.
+
+   * **All tests should be yielded!**
+
+If you are adding to a frontend that has a few tests already, skip the first 
+two steps.
+
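+Putting these steps together, a minimal ``test_outputs.py`` might look like
+the following sketch.  Here ``data_dir_load`` is assumed to be the framework
+helper that loads a dataset from ``test_data_dir``; check the Enzo example
+for the exact imports and a sensible field list.
+
+.. code-block:: python
+
+   from yt.testing import assert_equal
+   from yt.utilities.answer_testing.framework import \
+       requires_pf, small_patch_amr, data_dir_load
+
+   _fields = ("Temperature", "Density", "VelocityMagnitude")
+
+   m7 = "DD0010/moving7_0010"
+
+   @requires_pf(m7)
+   def test_moving7():
+       # Load the dataset, then yield the standard battery of patch tests.
+       pf = data_dir_load(m7)
+       yield assert_equal, str(pf), "moving7_0010"
+       for test in small_patch_amr(m7, _fields):
+           yield test
+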
+How to Upload Answers
+^^^^^^^^^^^^^^^^^^^^^
+
+To upload answers, you can execute this command:
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing frontends/enzo/ --answer-store --answer-name=whatever
+
+The current version of the gold standard can be found in the variable
+``_latest`` inside ``yt/utilities/answer_testing/framework.py``.  As of
+this writing, it is ``gold001``.  Note that the name of the suite of
+results is now disconnected from the parameter file's name, so you can
+upload multiple outputs with the same name without collisions.
+
+To upload answers, you **must** have the ``boto`` package installed, and you
+**must** have an Amazon key provided by Matt.  Contact Matt for these keys.
+
+What Needs to be Done
+^^^^^^^^^^^^^^^^^^^^^
+
+ * Many of the old answer tests need to be converted.  This includes tests 
+   for halos, volume renderings, data object access, and profiles.  These 
+   will require taking the old tests and converting them over, but this 
+   process should be straightforward.
+ * We need data for Orion, Nyx, FLASH, and any other codes that want to be
+   tested.
+ * Tests need to be written for Orion, Nyx, and FLASH.

source/cookbook/complex_plots.rst

 
 .. yt_cookbook:: amrkdtree_downsampling.py
 
+Volume Rendering with Bounding Box and Overlaid Grids
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This recipe demonstrates how to overplot a bounding box on a volume rendering,
+as well as how to overplot grids showing the level of refinement achieved
+in different regions of the simulation.
+
+.. yt_cookbook:: rendering_with_box_and_grids.py
+
 Plotting Streamlines
 ~~~~~~~~~~~~~~~~~~~~
 