Commits

holger krekel committed ccd5794

improve release announcement, shift and fix examples a bit. Bump version to 2.2.0


Files changed (27)

-Changes between 2.1.3 and XXX 2.2.0
+Changes between 2.1.3 and 2.2.0
 ----------------------------------------
 
 - fix issue90: introduce eager tearing down of test items so that

_pytest/__init__.py

 #
-__version__ = '2.2.0.dev11'
+__version__ = '2.2.0'
 
     group._addoption("-m",
         action="store", dest="markexpr", default="", metavar="MARKEXPR",
-        help="only run tests which match given mark expression.  "
-             "An expression is a python expression which can use "
-             "marker names.")
+        help="only run tests matching given mark expression.  "
+             "example: -m 'mark1 and not mark2'."
+             )
 
     group.addoption("--markers", action="store_true", help=
         "show markers (builtin, plugin and per-project ones).")

_pytest/python.py

     config.addinivalue_line("markers",
         "parametrize(argnames, argvalues): call a test function multiple "
         "times passing in multiple different argument value sets. Example: "
-        "@parametrize(arg1, [1,2]) would lead to two calls of the decorated "
+        "@parametrize('arg1', [1,2]) would lead to two calls of the decorated "
         "test function, one with arg1=1 and another with arg1=2."
     )
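
The corrected docstring example corresponds to this minimal usage sketch
(in actual test code the decorator is spelled ``pytest.mark.parametrize``)::

    # hypothetical test module illustrating the corrected example
    import pytest

    @pytest.mark.parametrize('arg1', [1, 2])
    def test_something(arg1):
        # runs twice: once with arg1=1 and once with arg1=2
        assert arg1 in (1, 2)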
 

doc/announce/release-2.2.0.txt

 ===========================================================================
 
 pytest-2.2.0 is a test-suite compatible release of the popular
-py.test testing tool.  There are a couple of new features and improvements:
+py.test testing tool.  Plugins might need upgrades. It comes 
+with these improvements:
 
-* "--duration=N" option showing the N slowest test execution 
-  or setup/teardown calls.
+* more powerful parametrization of tests:
 
-* @pytest.mark.parametrize decorator for runnin test functions
-  with multiple values and a new more powerful metafunc.parametrize()
-  helper to be used from pytest_generate_tests. Multiple parametrize
-  functions can now be invoked for the same test function.
+  - new @pytest.mark.parametrize decorator for running test functions
+    with multiple argument value sets
+  - new metafunc.parametrize() API for parametrizing arguments independently
+  - see examples at http://pytest.org/latest/example/parametrize.html
+  - NOTE that parametrize() related APIs are still a bit experimental
+    and might change in future releases.
 
-* "-m markexpr" option for selecting tests according to their mark and
-  a new "markers" ini-variable for registering test markers.  The new "--strict"
-  option will bail out with an error if you are using unregistered markers.
+* improved handling of test markers and refined marking mechanism:
 
-* teardown functions are now more eagerly called so that they appear
-  more directly connected to the last test item that needed a particular
-  fixture/setup.
+  - "-m markexpr" option for selecting tests according to their mark
+  - a new "markers" ini-variable for registering test markers for your project
+  - the new "--strict" bails out with an error if using unregistered markers.
+  - see examples at http://pytest.org/latest/example/markers.html
 
-Usage of improved parametrize is documented in examples at 
-http://pytest.org/latest/example/parametrize.html
+* duration profiling: new "--durations=N" option showing the N slowest test
+  execution or setup/teardown calls. This is most useful if you want to
+  find out where your slowest test code is.
 
-Usages of the improved marking mechanism is illustrated by a couple
-of initial examples, see http://pytest.org/latest/example/markers.html
+* also 2.2.0 performs more eager calling of teardown/finalizer functions,
+  resulting in better and more accurate reporting when they fail
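
A minimal sketch of the metafunc.parametrize() API mentioned above, using
hypothetical argument names (this example is not part of the commit)::

    # content of conftest.py -- hypothetical sketch
    def pytest_generate_tests(metafunc):
        # each parametrize() call handles one argument independently,
        # so independent plugins/hooks can parametrize the same test function
        if 'browser' in metafunc.funcargnames:
            metafunc.parametrize('browser', ['firefox', 'chrome'])
        if 'locale' in metafunc.funcargnames:
            metafunc.parametrize('locale', ['en', 'de'])

A test function ``def test_ui(browser, locale): ...`` would then run once
for each of the four argument combinations.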
 
 Besides there is the usual set of bug fixes along with a cleanup of
 pytest's own test suite allowing it to run on a wider range of environments.
     pip install -U pytest # or
     easy_install -U pytest
 
-Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, XXX for their
-help and feedback on various issues.
+Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri,
+Alfredo Doza and all who gave feedback or sent bug reports.
 
 best,
 holger krekel
 
 * Other plugins might need an upgrade if they implement
   the ``pytest_runtest_logreport`` hook which now is called unconditionally
-  for the setup/teardown fixture phases of a test. You can just choose to
-  ignore them by inserting "if rep.when != 'call': return". Note that
-  most code probably "just" works because the hook was already called
-  for failing setup/teardown phases of a test.
+  for the setup/teardown fixture phases of a test. You may choose to
+  ignore setup/teardown failures by inserting "if rep.when != 'call': return"
+  or something similar. Note that most code probably "just" works because 
+  the hook was already called for failing setup/teardown phases of a test,
+  so a plugin should already have been prepared to handle such reports.
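
A minimal sketch of a plugin hook using the suggested guard (hypothetical
plugin code, not part of this commit)::

    # content of conftest.py -- hypothetical plugin sketch
    def pytest_runtest_logreport(report):
        if report.when != 'call':
            return  # ignore reports for the setup/teardown phases
        # ... process the report for the actual test call phase ...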
 
+
+Changes between 2.1.3 and 2.2.0
+----------------------------------------
+
+- fix issue90: introduce eager tearing down of test items so that
+  teardown functions are called earlier.
+- add an all-powerful metafunc.parametrize function which allows you to
+  parametrize test function arguments in multiple steps and therefore
+  from independent plugins and places.
+- add a @pytest.mark.parametrize helper which allows you to easily
+  call a test function with different argument values
+- Add examples to the "parametrize" example page, including a quick port 
+  of Test scenarios and the new parametrize function and decorator.
+- introduce registration for "pytest.mark.*" helpers via ini-files
+  or through plugin hooks.  Also introduce a "--strict" option which 
+  will treat unregistered markers as errors,
+  allowing you to avoid typos and maintain a well-described set of markers
+  for your test suite.  See examples at http://pytest.org/latest/mark.html
+  and its links.
+- issue50: introduce "-m marker" option to select tests based on markers
+  (this is a stricter and more predictable version of '-k' in that "-m"
+  only matches complete markers and has more obvious rules for and/or
+  semantics).
+- new feature to help optimizing the speed of your tests: 
+  --durations=N option for displaying N slowest test calls 
+  and setup/teardown methods.
+- fix issue87: --pastebin now works with python3
+- fix issue89: --pdb with unexpected exceptions in doctest works more sensibly
+- fix and clean up pytest's own test suite so that it does not leak FDs
+- fix issue83: link to generated funcarg list
+- fix issue74: pyarg module names are now checked against imp.find_module false positives
+- fix compatibility with twisted/trial-11.1.0 use cases
 
     $ py.test test_assert1.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_assert1.py F
     E        +  where 3 = f()
     
     test_assert1.py:5: AssertionError
-    ========================= 1 failed in 0.03 seconds =========================
+    ========================= 1 failed in 0.02 seconds =========================
 
 py.test has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary
 
     $ py.test test_assert2.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_assert2.py F
 
     $ py.test --funcargs
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collected 0 items
     pytestconfig
         the pytest config object with access to command line opts.
         See http://docs.python.org/library/warnings.html for information
         on warning categories.
         
-    cov
-        A pytest funcarg that provides access to the underlying coverage object.
     
     =============================  in 0.00 seconds =============================
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     
     test_module.py .F
     
     test_module.py:9: AssertionError
     ----------------------------- Captured stdout ------------------------------
-    setting up <function test_func2 at 0x10130ccf8>
-    ==================== 1 failed, 1 passed in 0.03 seconds ====================
+    setting up <function test_func2 at 0x101353a28>
+    ==================== 1 failed, 1 passed in 0.02 seconds ====================
 
 Accessing captured output from a test function
 ---------------------------------------------------
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     mymodule.py .
     
-    ========================= 1 passed in 0.06 seconds =========================
-    [?1034h
+    ========================= 1 passed in 0.05 seconds =========================

doc/example/markers.txt

+
+.. _`mark examples`:
 
 Working with custom markers
 =================================================
 
+Here are some examples using the :ref:`mark` mechanism.
 
-Here are some example using the :ref:`mark` mechanism.
+marking test functions and selecting them for a run
+----------------------------------------------------
+
+You can "mark" a test function with custom metadata like this::
+
+    # content of test_server.py
+
+    import pytest
+    @pytest.mark.webtest
+    def test_send_http():
+        pass # perform some webtest test for your app
+    def test_something_quick():
+        pass
+
+.. versionadded:: 2.2
+
+You can then restrict a test run to only run tests marked with ``webtest``::
+
+    $ py.test -v -m webtest
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
+    collecting ... collected 2 items
+    
+    test_server.py:3: test_send_http PASSED
+    
+    =================== 1 tests deselected by "-m 'webtest'" ===================
+    ================== 1 passed, 1 deselected in 0.01 seconds ==================
+
+Or the inverse, running all tests except the webtest ones::
+    
+    $ py.test -v -m "not webtest"
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
+    collecting ... collected 2 items
+    
+    test_server.py:6: test_something_quick PASSED
+    
+    ================= 1 tests deselected by "-m 'not webtest'" =================
+    ================== 1 passed, 1 deselected in 0.01 seconds ==================
+
+Registering markers
+-------------------------------------
+
+.. versionadded:: 2.2
+
+.. ini-syntax for custom markers:
+
+Registering markers for your test suite is simple::
+
+    # content of pytest.ini
+    [pytest]
+    markers = 
+        webtest: mark a test as a webtest. 
+
+You can ask which markers exist for your test suite - the list includes our just-defined ``webtest`` marker::
+
+    $ py.test --markers
+    @pytest.mark.webtest: mark a test as a webtest.
+    
+    @pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value.  Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. 
+    
+    @pytest.mark.xfail(*conditions, reason=None, run=True): mark the the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
+    
+    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
+    
+    @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
+    
+    @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
+    
+
+For an example of how to add and work with markers from a plugin, see
+:ref:`adding a custom marker from a plugin`.
+
+.. note::
+
+    It is recommended to explicitly register markers so that:
+
+    * there is one place in your test suite defining your markers
+
+    * asking for existing markers via ``py.test --markers`` gives good output
+
+    * typos in function markers are treated as an error if you use
+      the ``--strict`` option. Later versions of py.test are probably
+      going to treat non-registered markers as an error.
+
+.. _`scoped-marking`:
+
+Marking whole classes or modules
+----------------------------------------------------
+
+If you are programming with Python2.6 or above you may use ``pytest.mark``
+decorators with classes to apply markers to all of a class's test methods::
+
+    # content of test_mark_classlevel.py
+    import pytest
+    @pytest.mark.webtest
+    class TestClass:
+        def test_startup(self):
+            pass
+        def test_startup_and_more(self):
+            pass
+
+This is equivalent to directly applying the decorator to the
+two test functions.
+
+To remain backward-compatible with Python2.4 you can also set a
+``pytestmark`` attribute on a TestClass like this::
+
+    import pytest
+
+    class TestClass:
+        pytestmark = pytest.mark.webtest
+
+or if you need to use multiple markers you can use a list::
+
+    import pytest
+
+    class TestClass:
+        pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
+
+You can also set a module level marker::
+
+    import pytest
+    pytestmark = pytest.mark.webtest
+
+in which case it will be applied to all functions and
+methods defined in the module.
+
+Using ``-k TEXT`` to select tests
+----------------------------------------------------
+
+You can use the ``-k`` command line option to only run tests with names that match the given argument::
+
+    $ py.test -k send_http  # running with the above defined examples
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+    
+    test_server.py .
+    
+    =================== 3 tests deselected by '-ksend_http' ====================
+    ================== 1 passed, 3 deselected in 0.02 seconds ==================
+
+And you can also run all tests except the ones that match the keyword::
+
+    $ py.test -k-send_http
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+    
+    test_mark_classlevel.py ..
+    test_server.py .
+    
+    =================== 1 tests deselected by '-k-send_http' ===================
+    ================== 3 passed, 1 deselected in 0.03 seconds ==================
+
+Or to only select the class::
+
+    $ py.test -kTestClass
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 4 items
+    
+    test_mark_classlevel.py ..
+    
+    =================== 2 tests deselected by '-kTestClass' ====================
+    ================== 2 passed, 2 deselected in 0.02 seconds ==================
 
 .. _`adding a custom marker from a plugin`:
 
 the test needs::
 
     $ py.test -E stage2
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 1 items
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 5 items
     
+    test_mark_classlevel.py ..
+    test_server.py ..
     test_someenv.py s
     
-    ========================== 1 skipped in 0.02 seconds ===========================
+    =================== 4 passed, 1 skipped in 0.04 seconds ====================
   
 and here is one that specifies exactly the environment needed::
 
     $ py.test -E stage1
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 1 items
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 5 items
     
+    test_mark_classlevel.py ..
+    test_server.py ..
     test_someenv.py .
     
-    =========================== 1 passed in 0.02 seconds ===========================
+    ========================= 5 passed in 0.04 seconds =========================
 
 The ``--markers`` option always gives you a list of available markers::
 
     $ py.test --markers
+    @pytest.mark.webtest: mark a test as a webtest.
+    
     @pytest.mark.env(name): mark test to run only on named environment
     
     @pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value.  Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. 
     
     @pytest.mark.xfail(*conditions, reason=None, run=True): mark the the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
     
+    @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
+    
     @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
     
     @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

doc/example/multipython.py

 module containing parametrized tests testing cross-python
 serialization via the pickle module.
 """
-import py
+import py, pytest
 
 pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']
 
 def pytest_generate_tests(metafunc):
+    # we parametrize all "python1" and "python2" arguments to iterate
+    # over the python interpreters of our list above - the actual
+    # setup and lookup of interpreters happens in the python1/python2
+    # factories respectively.
     for arg in metafunc.funcargnames:
-        if arg.startswith("python"):
+        if arg in ("python1", "python2"):
             metafunc.parametrize(arg, pythonlist, indirect=True)
-        elif arg == "obj":
-            metafunc.parametrize("obj", metafunc.function.multiarg.kwargs['obj'])
 
-@py.test.mark.multiarg(obj=[42, {}, {1:3},])
+@pytest.mark.parametrize("obj", [42, {}, {1:3},])
 def test_basic_objects(python1, python2, obj):
     python1.dumps(obj)
     python2.load_and_is_true("obj == %s" % obj)

doc/example/mysetup.txt

 
     $ py.test test_sample.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_sample.py F
     ================================= FAILURES =================================
     _______________________________ test_answer ________________________________
     
-    mysetup = <conftest.MySetup instance at 0x1013145a8>
+    mysetup = <conftest.MySetup instance at 0x1012b2bd8>
     
         def test_answer(mysetup):
             app = mysetup.myapp()
 
     $ py.test test_ssh.py -rs
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_ssh.py s
     ========================= short test summary info ==========================
-    SKIP [1] /Users/hpk/tmp/doc-exec-167/conftest.py:22: specify ssh host with --ssh
+    SKIP [1] /Users/hpk/tmp/doc-exec-625/conftest.py:22: specify ssh host with --ssh
     
     ======================== 1 skipped in 0.02 seconds =========================
 

doc/example/nonpython.txt

 
     nonpython $ py.test test_simple.yml
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     
     test_simple.yml .F
     usecase execution failed
        spec failed: 'some': 'other'
        no further details known at this point.
-    ==================== 1 failed, 1 passed in 0.09 seconds ====================
+    ==================== 1 failed, 1 passed in 0.10 seconds ====================
 
 You get one dot for the passing ``sub1: sub1`` check and one failure.
 Obviously in the above ``conftest.py`` you'll want to implement a more
 
     nonpython $ py.test -v
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
     collecting ... collected 2 items
     
     test_simple.yml:1: usecase: ok PASSED
 
     nonpython $ py.test --collectonly
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <YamlFile 'test_simple.yml'>
       <YamlItem 'ok'>

doc/example/parametrize.txt

 
 .. versionadded:: 2.2
 
-The builtin ``parametrize`` marker allows you to easily write generic
-test functions that will be invoked with multiple input/output values::
+The builtin ``pytest.mark.parametrize`` decorator directly enables 
+parametrization of arguments for a test function.  Here is an example
+of a test function that checks that processing a given input
+leads to the expected output::
 
     # content of test_expectation.py
     import pytest
     def test_eval(input, expected):
         assert eval(input) == expected
 
-Here we parametrize two arguments of the test function so that the test 
 We parametrize two arguments of the test function so that the test
 function is called three times.  Let's run it::
 
     $ py.test -q 
     collecting ... collected 3 items
     ..F
-    =================================== FAILURES ===================================
-    ______________________________ test_eval[6*9-42] _______________________________
+    ================================= FAILURES =================================
+    ____________________________ test_eval[6*9-42] _____________________________
     
     input = '6*9', expected = 42
     
     E       assert 54 == 42
     E        +  where 54 = eval('6*9')
     
-    test_expectation.py:9: AssertionError
+    test_expectation.py:8: AssertionError
     1 failed, 2 passed in 0.03 seconds
 
 As expected only one pair of input/output values fails the simple test function.
     $ py.test -q --all
     collecting ... collected 5 items
     ....F
-    =================================== FAILURES ===================================
-    _______________________________ test_compute[4] ________________________________
+    ================================= FAILURES =================================
+    _____________________________ test_compute[4] ______________________________
     
     param1 = 4
     
 this is a fully self-contained example which you can run with::
 
     $ py.test test_scenarios.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     
     test_scenarios.py ..
     
-    =========================== 2 passed in 0.02 seconds ===========================
+    ========================= 2 passed in 0.02 seconds =========================
 
 If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
 
 
     $ py.test --collectonly test_scenarios.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'test_scenarios.py'>
       <Class 'TestSampleWithScenarios'>
           <Function 'test_demo[basic]'>
           <Function 'test_demo[advanced]'>
     
-    ===============================  in 0.01 seconds ===============================
+    =============================  in 0.01 seconds =============================
 
 Deferring the setup of parametrized resources
 ---------------------------------------------------
 Let's first see what it looks like at collection time::
 
     $ py.test test_backends.py --collectonly
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'test_backends.py'>
       <Function 'test_db_initialized[d1]'>
       <Function 'test_db_initialized[d2]'>
     
-    ===============================  in 0.01 seconds ===============================
+    =============================  in 0.01 seconds =============================
 
 And then when we run the test::
 
     $ py.test -q test_backends.py
     collecting ... collected 2 items
     .F
-    =================================== FAILURES ===================================
-    ___________________________ test_db_initialized[d2] ____________________________
+    ================================= FAILURES =================================
+    _________________________ test_db_initialized[d2] __________________________
     
-    db = <conftest.DB2 instance at 0x1013195f0>
+    db = <conftest.DB2 instance at 0x10150ab90>
     
         def test_db_initialized(db):
             # a dummy test
     $ py.test -q
     collecting ... collected 3 items
     F..
-    =================================== FAILURES ===================================
-    __________________________ TestClass.test_equals[1-2] __________________________
+    ================================= FAILURES =================================
+    ________________________ TestClass.test_equals[1-2] ________________________
     
-    self = <test_parametrize.TestClass instance at 0x1013158c0>, a = 1, b = 2
+    self = <test_parametrize.TestClass instance at 0x101509638>, a = 1, b = 2
     
         def test_equals(self, a, b):
     >       assert a == b
     test_parametrize.py:18: AssertionError
     1 failed, 2 passed in 0.03 seconds
 
-Checking serialization between Python interpreters
+Indirect parametrization with multiple resources
 --------------------------------------------------------------
 
 Here is a stripped down real-life example of using parametrized
 testing for testing serialization, invoking different python interpreters.
 We define a ``test_basic_objects`` function which is to be run
-with different sets of arguments for its three arguments::
+with different sets of arguments for its three arguments:
 
 * ``python1``: first python interpreter, run to pickle-dump an object to a file
 * ``python2``: second interpreter, run to pickle-load an object from a file 
 
 .. literalinclude:: multipython.py
 
-Running it (with Python-2.4 through to Python2.7 installed)::
+Running it results in some skips if we don't have all the python interpreters installed, and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
 
-   . $ py.test -q multipython.py
+   . $ py.test -rs -q multipython.py
    collecting ... collected 75 items
    ssssssssssssssssss.........ssssss.........ssssss.........ssssssssssssssssss
-   27 passed, 48 skipped in 4.87 seconds
+   ========================= short test summary info ==========================
+   SKIP [24] /Users/hpk/p/pytest/doc/example/multipython.py:36: 'python2.8' not found
+   SKIP [24] /Users/hpk/p/pytest/doc/example/multipython.py:36: 'python2.4' not found
+   27 passed, 48 skipped in 3.03 seconds

doc/example/pythoncollection.txt

 
     $ py.test --collectonly
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     <Module 'check_myapp.py'>
       <Class 'CheckMyApp'>
 
     . $ py.test --collectonly pythoncollection.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 3 items
     <Module 'pythoncollection.py'>
       <Function 'test_function'>

doc/example/reportingdemo.txt

 
     assertion $ py.test failure_demo.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 39 items
     
     failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
     failure_demo.py:15: AssertionError
     _________________________ TestFailing.test_simple __________________________
     
-    self = <failure_demo.TestFailing object at 0x10134bc50>
+    self = <failure_demo.TestFailing object at 0x1013552d0>
     
         def test_simple(self):
             def f():
         
     >       assert f() == g()
     E       assert 42 == 43
-    E        +  where 42 = <function f at 0x101322320>()
-    E        +  and   43 = <function g at 0x101322398>()
+    E        +  where 42 = <function f at 0x101514f50>()
+    E        +  and   43 = <function g at 0x101516050>()
     
     failure_demo.py:28: AssertionError
     ____________________ TestFailing.test_simple_multiline _____________________
     
-    self = <failure_demo.TestFailing object at 0x10134b150>
+    self = <failure_demo.TestFailing object at 0x101355950>
     
         def test_simple_multiline(self):
             otherfunc_multi(
     failure_demo.py:11: AssertionError
     ___________________________ TestFailing.test_not ___________________________
     
-    self = <failure_demo.TestFailing object at 0x10134b710>
+    self = <failure_demo.TestFailing object at 0x101355ad0>
     
         def test_not(self):
             def f():
                 return 42
     >       assert not f()
     E       assert not 42
-    E        +  where 42 = <function f at 0x101322398>()
+    E        +  where 42 = <function f at 0x101514f50>()
     
     failure_demo.py:38: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_text _________________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10134be10>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x1013559d0>
     
         def test_eq_text(self):
     >       assert 'spam' == 'eggs'
     failure_demo.py:42: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_similar_text _____________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101347110>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350dd0>
     
         def test_eq_similar_text(self):
     >       assert 'foo 1 bar' == 'foo 2 bar'
     failure_demo.py:45: AssertionError
     ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343d50>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350d10>
     
         def test_eq_multiline_text(self):
     >       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
     failure_demo.py:48: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_long_text _______________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10134b210>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350cd0>
     
         def test_eq_long_text(self):
             a = '1'*100 + 'a' + '2'*100
     failure_demo.py:53: AssertionError
     _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343d90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x101350f50>
     
         def test_eq_long_text_multiline(self):
             a = '1\n'*100 + 'a' + '2\n'*100
     failure_demo.py:58: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_list _________________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343dd0>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f350>
     
         def test_eq_list(self):
     >       assert [0, 1, 2] == [0, 1, 3]
     failure_demo.py:61: AssertionError
     ______________ TestSpecialisedExplanations.test_eq_list_long _______________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343b90>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134fc10>
     
         def test_eq_list_long(self):
             a = [0]*100 + [1] + [3]*100
     failure_demo.py:66: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_dict _________________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343210>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f2d0>
     
         def test_eq_dict(self):
     >       assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
     failure_demo.py:69: AssertionError
     _________________ TestSpecialisedExplanations.test_eq_set __________________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343990>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f110>
     
         def test_eq_set(self):
     >       assert set([0, 10, 11, 12]) == set([0, 20, 21])
     failure_demo.py:72: AssertionError
     _____________ TestSpecialisedExplanations.test_eq_longer_list ______________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343590>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f510>
     
         def test_eq_longer_list(self):
     >       assert [1,2] == [1,2,3]
     failure_demo.py:75: AssertionError
     _________________ TestSpecialisedExplanations.test_in_list _________________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343e50>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10134f6d0>
     
         def test_in_list(self):
     >       assert 1 in [0, 2, 3, 4, 5]
     failure_demo.py:78: AssertionError
     __________ TestSpecialisedExplanations.test_not_in_text_multiline __________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133bb10>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152c490>
     
         def test_not_in_text_multiline(self):
             text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
     failure_demo.py:82: AssertionError
     ___________ TestSpecialisedExplanations.test_not_in_text_single ____________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133b990>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152cfd0>
     
         def test_not_in_text_single(self):
             text = 'single foo line'
     failure_demo.py:86: AssertionError
     _________ TestSpecialisedExplanations.test_not_in_text_single_long _________
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x10133bbd0>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152c090>
     
         def test_not_in_text_single_long(self):
             text = 'head ' * 50 + 'foo ' + 'tail ' * 20
     failure_demo.py:90: AssertionError
     ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
     
-    self = <failure_demo.TestSpecialisedExplanations object at 0x101343510>
+    self = <failure_demo.TestSpecialisedExplanations object at 0x10152cb90>
     
         def test_not_in_text_single_long_term(self):
             text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
             i = Foo()
     >       assert i.b == 2
     E       assert 1 == 2
-    E        +  where 1 = <failure_demo.Foo object at 0x10133b390>.b
+    E        +  where 1 = <failure_demo.Foo object at 0x10152c350>.b
     
     failure_demo.py:101: AssertionError
     _________________________ test_attribute_instance __________________________
                 b = 1
     >       assert Foo().b == 2
     E       assert 1 == 2
-    E        +  where 1 = <failure_demo.Foo object at 0x10133b250>.b
-    E        +    where <failure_demo.Foo object at 0x10133b250> = <class 'failure_demo.Foo'>()
+    E        +  where 1 = <failure_demo.Foo object at 0x10134fe90>.b
+    E        +    where <failure_demo.Foo object at 0x10134fe90> = <class 'failure_demo.Foo'>()
     
     failure_demo.py:107: AssertionError
     __________________________ test_attribute_failure __________________________
     failure_demo.py:116: 
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
     
-    self = <failure_demo.Foo object at 0x10133bd50>
+    self = <failure_demo.Foo object at 0x10152c610>
     
         def _get_b(self):
     >       raise Exception('Failed to get attrib')
                 b = 2
     >       assert Foo().b == Bar().b
     E       assert 1 == 2
-    E        +  where 1 = <failure_demo.Foo object at 0x10133bb50>.b
-    E        +    where <failure_demo.Foo object at 0x10133bb50> = <class 'failure_demo.Foo'>()
-    E        +  and   2 = <failure_demo.Bar object at 0x10133b1d0>.b
-    E        +    where <failure_demo.Bar object at 0x10133b1d0> = <class 'failure_demo.Bar'>()
+    E        +  where 1 = <failure_demo.Foo object at 0x10152c950>.b
+    E        +    where <failure_demo.Foo object at 0x10152c950> = <class 'failure_demo.Foo'>()
+    E        +  and   2 = <failure_demo.Bar object at 0x10152c250>.b
+    E        +    where <failure_demo.Bar object at 0x10152c250> = <class 'failure_demo.Bar'>()
     
     failure_demo.py:124: AssertionError
     __________________________ TestRaises.test_raises __________________________
     
-    self = <failure_demo.TestRaises instance at 0x1013697e8>
+    self = <failure_demo.TestRaises instance at 0x1015219e0>
     
         def test_raises(self):
             s = 'qwe'
     >   int(s)
     E   ValueError: invalid literal for int() with base 10: 'qwe'
     
-    <0-codegen /Users/hpk/p/pytest/_pytest/python.py:833>:1: ValueError
+    <0-codegen /Users/hpk/p/pytest/_pytest/python.py:957>:1: ValueError
     ______________________ TestRaises.test_raises_doesnt _______________________
     
-    self = <failure_demo.TestRaises instance at 0x101372a70>
+    self = <failure_demo.TestRaises instance at 0x1013794d0>
     
         def test_raises_doesnt(self):
     >       raises(IOError, "int('3')")
     failure_demo.py:136: Failed
     __________________________ TestRaises.test_raise ___________________________
     
-    self = <failure_demo.TestRaises instance at 0x10136a908>
+    self = <failure_demo.TestRaises instance at 0x10151f6c8>
     
         def test_raise(self):
     >       raise ValueError("demo error")
     failure_demo.py:139: ValueError
     ________________________ TestRaises.test_tupleerror ________________________
     
-    self = <failure_demo.TestRaises instance at 0x10136c710>
+    self = <failure_demo.TestRaises instance at 0x1013733f8>
     
         def test_tupleerror(self):
     >       a,b = [1]
     failure_demo.py:142: ValueError
     ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
     
-    self = <failure_demo.TestRaises instance at 0x101365488>
+    self = <failure_demo.TestRaises instance at 0x10136e170>
     
         def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
             l = [1,2,3]
     l is [1, 2, 3]
     ________________________ TestRaises.test_some_error ________________________
     
-    self = <failure_demo.TestRaises instance at 0x101367248>
+    self = <failure_demo.TestRaises instance at 0x10136ef38>
     
         def test_some_error(self):
     >       if namenotexi:
     <2-codegen 'abc-123' /Users/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
     ____________________ TestMoreErrors.test_complex_error _____________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101380f38>
+    self = <failure_demo.TestMoreErrors instance at 0x101520638>
     
         def test_complex_error(self):
             def f():
     failure_demo.py:5: AssertionError
     ___________________ TestMoreErrors.test_z1_unpack_error ____________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101367f80>
+    self = <failure_demo.TestMoreErrors instance at 0x10136bcb0>
     
         def test_z1_unpack_error(self):
             l = []
     failure_demo.py:179: ValueError
     ____________________ TestMoreErrors.test_z2_type_error _____________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101363dd0>
+    self = <failure_demo.TestMoreErrors instance at 0x10136a440>
     
         def test_z2_type_error(self):
             l = 3
     failure_demo.py:183: TypeError
     ______________________ TestMoreErrors.test_startswith ______________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101364bd8>
+    self = <failure_demo.TestMoreErrors instance at 0x101368290>
     
         def test_startswith(self):
             s = "123"
             g = "456"
     >       assert s.startswith(g)
-    E       assert <built-in method startswith of str object at 0x1013524e0>('456')
-    E        +  where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
+    E       assert <built-in method startswith of str object at 0x101354030>('456')
+    E        +  where <built-in method startswith of str object at 0x101354030> = '123'.startswith
     
     failure_demo.py:188: AssertionError
     __________________ TestMoreErrors.test_startswith_nested ___________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101363fc8>
+    self = <failure_demo.TestMoreErrors instance at 0x101368f38>
     
         def test_startswith_nested(self):
             def f():
             def g():
                 return "456"
     >       assert f().startswith(g())
-    E       assert <built-in method startswith of str object at 0x1013524e0>('456')
-    E        +  where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
-    E        +    where '123' = <function f at 0x10132c500>()
-    E        +  and   '456' = <function g at 0x10132c8c0>()
+    E       assert <built-in method startswith of str object at 0x101354030>('456')
+    E        +  where <built-in method startswith of str object at 0x101354030> = '123'.startswith
+    E        +    where '123' = <function f at 0x10136c578>()
+    E        +  and   '456' = <function g at 0x10136c5f0>()
     
     failure_demo.py:195: AssertionError
     _____________________ TestMoreErrors.test_global_func ______________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x1013696c8>
+    self = <failure_demo.TestMoreErrors instance at 0x10136aef0>
     
         def test_global_func(self):
     >       assert isinstance(globf(42), float)
     failure_demo.py:198: AssertionError
     _______________________ TestMoreErrors.test_instance _______________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x1013671b8>
+    self = <failure_demo.TestMoreErrors instance at 0x10151c440>
     
         def test_instance(self):
             self.x = 6*7
     >       assert self.x != 42
     E       assert 42 != 42
-    E        +  where 42 = <failure_demo.TestMoreErrors instance at 0x1013671b8>.x
+    E        +  where 42 = <failure_demo.TestMoreErrors instance at 0x10151c440>.x
     
     failure_demo.py:202: AssertionError
     _______________________ TestMoreErrors.test_compare ________________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x101366560>
+    self = <failure_demo.TestMoreErrors instance at 0x101373a70>
     
         def test_compare(self):
     >       assert globf(10) < 5
     failure_demo.py:205: AssertionError
     _____________________ TestMoreErrors.test_try_finally ______________________
     
-    self = <failure_demo.TestMoreErrors instance at 0x1013613b0>
+    self = <failure_demo.TestMoreErrors instance at 0x101363c68>
     
         def test_try_finally(self):
             x = 1
     E           assert 1 == 0
     
     failure_demo.py:210: AssertionError
-    ======================== 39 failed in 0.39 seconds =========================
+    ======================== 39 failed in 0.41 seconds =========================

doc/example/simple.txt

 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     gw0 I
     gw0 [0]
     
     scheduling tests via LoadScheduling
     
-    =============================  in 0.48 seconds =============================
+    =============================  in 0.71 seconds =============================
 
 .. _`excontrolskip`:
 
 
     $ py.test -rs    # "-rs" means report details on the little 's'
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     
     test_module.py .s
     ========================= short test summary info ==========================
-    SKIP [1] /Users/hpk/tmp/doc-exec-172/conftest.py:9: need --runslow option to run
+    SKIP [1] /Users/hpk/tmp/doc-exec-630/conftest.py:9: need --runslow option to run
     
     =================== 1 passed, 1 skipped in 0.02 seconds ====================
 
 
     $ py.test --runslow
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 2 items
     
     test_module.py ..
     
-    ========================= 2 passed in 0.02 seconds =========================
+    ========================= 2 passed in 0.62 seconds =========================
 
 Writing well integrated assertion helpers
 --------------------------------------------------
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     project deps: mylib-1.1
     collecting ... collected 0 items
     
 
     $ py.test -v
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
     info1: did you know that ...
     did you?
     collecting ... collected 0 items
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 0 items
     
     =============================  in 0.00 seconds =============================
 Now we can profile which test functions execute slowest::
 
     $ py.test --durations=3
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
+    collecting ... collected 3 items
+    
+    test_some_are_slow.py ...
+    
+    ========================= slowest 3 test durations =========================
+    0.20s call     test_some_are_slow.py::test_funcslow2
+    0.10s call     test_some_are_slow.py::test_funcslow1
+    0.00s setup    test_some_are_slow.py::test_funcfast
+    ========================= 3 passed in 0.32 seconds =========================
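
The ``test_some_are_slow.py`` module itself is elided from this diff; a
hypothetical version producing durations like the above could be::

    # content of test_some_are_slow.py -- hypothetical sketch
    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)
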
 Running the test looks like this::
 
     $ py.test test_simplefactory.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_simplefactory.py F
     
-    =================================== FAILURES ===================================
-    ________________________________ test_function _________________________________
+    ================================= FAILURES =================================
+    ______________________________ test_function _______________________________
     
     myfuncarg = 42
     
     E       assert 42 == 17
     
     test_simplefactory.py:5: AssertionError
-    =========================== 1 failed in 0.02 seconds ===========================
+    ========================= 1 failed in 0.03 seconds =========================
 
 This means that indeed the test function was called with a ``myfuncarg``
 argument value of ``42`` and the assert fails.  Here is how py.test
 Running this will generate ten invocations of ``test_func`` passing in each of the items in the list of ``range(10)``::
 
     $ py.test test_example.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 10 items
     
     test_example.py .........F
     
-    =================================== FAILURES ===================================
-    _________________________________ test_func[9] _________________________________
+    ================================= FAILURES =================================
+    _______________________________ test_func[9] _______________________________
     
     numiter = 9
     
     E       assert 9 < 9
     
     test_example.py:6: AssertionError
-    ====================== 1 failed, 9 passed in 0.07 seconds ======================
+    ==================== 1 failed, 9 passed in 0.05 seconds ====================
 
 Obviously, only when ``numiter`` has the value of ``9`` does the test fail.  Note that the ``pytest_generate_tests(metafunc)`` hook is called during
 the test collection phase which is separate from the actual test running.
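
The ``test_example.py`` module is elided from this diff; judging from the
failure output above, a hypothetical version could be::

    # content of test_example.py -- hypothetical sketch
    def pytest_generate_tests(metafunc):
        if 'numiter' in metafunc.funcargnames:
            metafunc.parametrize('numiter', range(10))

    def test_func(numiter):
        assert numiter < 9
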
 Let's just look at what is collected::
 
     $ py.test --collectonly test_example.py
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 10 items
     <Module 'test_example.py'>
       <Function 'test_func[0]'>
       <Function 'test_func[8]'>
       <Function 'test_func[9]'>
     
-    ===============================  in 0.01 seconds ===============================
+    =============================  in 0.01 seconds =============================
 
 If you want to select only the run with the value ``7`` you could do::
 
     $ py.test -v -k 7 test_example.py  # or -k test_func[7]
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev8 -- /Users/hpk/venv/1/bin/python
+    =========================== test session starts ============================
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0 -- /Users/hpk/venv/1/bin/python
     collecting ... collected 10 items
     
     test_example.py:5: test_func[7] PASSED
     
-    ========================= 9 tests deselected by '-k7' ==========================
-    ==================== 1 passed, 9 deselected in 0.01 seconds ====================
+    ======================= 9 tests deselected by '-k7' ========================
+    ================== 1 passed, 9 deselected in 0.02 seconds ==================
 
 You might want to look at :ref:`more parametrization examples <paramexamples>`.
 

doc/getting-started.txt

 To check your installation has installed the correct version::
 
     $ py.test --version
-    This is py.test version 2.1.3, imported from /Users/hpk/p/pytest/pytest.pyc
+    This is py.test version 2.2.0, imported from /Users/hpk/p/pytest/pytest.pyc
     setuptools registered plugins:
-      pytest-cov-1.4 at /Users/hpk/venv/0/lib/python2.7/site-packages/pytest_cov.pyc
-      pytest-xdist-1.6 at /Users/hpk/venv/0/lib/python2.7/site-packages/xdist/plugin.pyc
+      pytest-xdist-1.7.dev1 at /Users/hpk/p/pytest-xdist/xdist/plugin.pyc
 
 If you get an error checkout :ref:`installation issues`.
 
 
     $ py.test
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_sample.py F
     E        +  where 4 = func(3)
     
     test_sample.py:5: AssertionError
-    ========================= 1 failed in 0.02 seconds =========================
+    ========================= 1 failed in 0.04 seconds =========================
 
 py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes.  We got a failure report because our little ``func(3)`` call did not return ``5``.
 
     ================================= FAILURES =================================
     ____________________________ TestClass.test_two ____________________________
     
-    self = <test_class.TestClass instance at 0x1013167a0>
+    self = <test_class.TestClass instance at 0x10150a170>
     
         def test_two(self):
             x = "hello"
     ================================= FAILURES =================================
     _____________________________ test_needsfiles ______________________________
     
-    tmpdir = local('/Users/hpk/tmp/pytest-93/test_needsfiles0')
+    tmpdir = local('/Users/hpk/tmp/pytest-1595/test_needsfiles0')
     
         def test_needsfiles(tmpdir):
             print tmpdir
     
     test_tmpdir.py:3: AssertionError
     ----------------------------- Captured stdout ------------------------------
-    /Users/hpk/tmp/pytest-93/test_needsfiles0
-    1 failed in 0.04 seconds
+    /Users/hpk/tmp/pytest-1595/test_needsfiles0
+    1 failed in 0.15 seconds
 
 Before the test runs, a unique-per-test-invocation temporary directory
 is created.  More info at :ref:`tmpdir handling`.
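
 Only fragments of ``test_tmpdir.py`` appear above; a sketch matching
 the captured output would be::

     # content of test_tmpdir.py -- hypothetical sketch
     def test_needsfiles(tmpdir):
         print tmpdir    # the py.path.local object, shown as captured stdout
         assert 0        # deliberate failure so the path shows up in the report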
 
  - (new in 2.2) :ref:`durations`
  - (much improved in 2.2) :ref:`marking and test selection <mark>`
+ - (improved in 2.2) :ref:`parametrized test functions <parametrized test functions>`
  - advanced :ref:`skip and xfail`
+ - unique :ref:`dependency injection through funcargs <funcargs>`
  - can :ref:`distribute tests to multiple CPUs <xdistcpu>` through :ref:`xdist plugin <xdist>`
  - can :ref:`continuously re-run failing tests <looponfailing>`
  - many :ref:`builtin helpers <pytest helpers>`
  - flexible :ref:`Python test discovery`
- - unique :ref:`dependency injection through funcargs <funcargs>`
- - :ref:`parametrized test functions <parametrized test functions>`
 
 - **integrates many common testing methods**
 
 .. currentmodule:: _pytest.mark
 
 By using the ``pytest.mark`` helper you can easily set
-metadata on your test functions. To begin with, there are
+metadata on your test functions. There are
 some builtin markers, for example:
 
 * :ref:`skipif <skipif>` - skip a test function if a certain condition is met
 * :ref:`parametrize <parametrizemark>` - perform multiple calls
   to the same test function.
 
-It's also easy to create custom markers or to apply markers
-to whole test classes or modules.
+It's easy to create custom markers or to apply markers
+to whole test classes or modules. See :ref:`mark examples` for examples
+which also serve as documentation.
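
 For instance (a minimal sketch; ``webtest`` is just an illustrative
 marker name)::

     import pytest

     @pytest.mark.webtest        # attach custom metadata to this test
     def test_send_http():
         pass

 Tests marked this way can be selected with ``py.test -m webtest`` and
 deselected with ``py.test -m "not webtest"``.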
 
-marking test functions and selecting them for a run
-----------------------------------------------------
-
-You can "mark" a test function with custom metadata like this::
-
-    # content of test_server.py
-
-    import pytest
-    @pytest.mark.webtest
-    def test_send_http():
-        pass # perform some webtest test for your app
-    def test_something_quick():
-        pass
-
-.. versionadded:: 2.2
-
-You can then restrict a test run to only run tests marked with ``webtest``::
-
-    $ py.test -v -m webtest
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6 -- /Users/hpk/venv/0/bin/python
-    collecting ... collected 2 items
-    
-    test_server.py:3: test_send_http PASSED
-    
-    ===================== 1 tests deselected by "-m 'webtest'" =====================
-    ==================== 1 passed, 1 deselected in 0.01 seconds ====================
-
-Or the inverse, running all tests except the webtest ones::
-    
-    $ py.test -v -m "not webtest"
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6 -- /Users/hpk/venv/0/bin/python
-    collecting ... collected 2 items
-    
-    test_server.py:6: test_something_quick PASSED
-    
-    =================== 1 tests deselected by "-m 'not webtest'" ===================
-    ==================== 1 passed, 1 deselected in 0.01 seconds ====================
-
-Registering markers
--------------------------------------
-
-.. versionadded:: 2.2
-
-.. ini-syntax for custom markers:
-
-Registering markers for your test suite is simple::
-
-    # content of pytest.ini
-    [pytest]
-    markers = 
-        webtest: mark a test as a webtest. 
-
-You can ask which markers exist for your test suite - the list includes our just defined ``webtest`` markers::
-
-    $ py.test --markers
-    @pytest.mark.webtest: mark a test as a webtest.
-    
-    @pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value.  Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. 
-    
-    @pytest.mark.xfail(*conditions, reason=None, run=True): mark the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
-    
-    @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
-    
-    @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
-    
-
-For an example of how to add and work with markers from a plugin, see
-:ref:`adding a custom marker from a plugin`.
-
-.. note::
-
-    It is recommended to explicitly register markers so that:
-
-    * there is one place in your test suite defining your markers
-
-    * asking for existing markers via ``py.test --markers`` gives good output
-
-    * typos in function markers are treated as an error if you use
-      the ``--strict`` option. Later versions of py.test are probably
-      going to treat non-registered markers as an error.
-
-.. _`scoped-marking`:
-
-Marking whole classes or modules
-----------------------------------------------------
-
-If you are programming with Python2.6 you may use ``pytest.mark`` decorators
-with classes to apply markers to all of its test methods::
-
-    # content of test_mark_classlevel.py
-    import pytest
-    @pytest.mark.webtest
-    class TestClass:
-        def test_startup(self):
-            pass
-        def test_startup_and_more(self):
-            pass
-
-This is equivalent to directly applying the decorator to the
-two test functions.
-
-To remain backward-compatible with Python2.4 you can also set a
-``pytestmark`` attribute on a TestClass like this::
-
-    import pytest
-
-    class TestClass:
-        pytestmark = pytest.mark.webtest
-
-or if you need to use multiple markers you can use a list::
-
-    import pytest
-
-    class TestClass:
-        pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
-
-You can also set a module level marker::
-
-    import pytest
-    pytestmark = pytest.mark.webtest
-
-in which case it will be applied to all functions and
-methods defined in the module.
-
-Using ``-k TEXT`` to select tests
-----------------------------------------------------
-
-You can use the ``-k`` command line option to only run tests with names that match the given argument::
-
-    $ py.test -k send_http  # running with the above defined examples
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 4 items
-    
-    test_server.py .
-    
-    ===================== 3 tests deselected by '-ksend_http' ======================
-    ==================== 1 passed, 3 deselected in 0.02 seconds ====================
-
-And you can also run all tests except the ones that match the keyword::
-
-    $ py.test -k-send_http
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 4 items
-    
-    test_mark_classlevel.py ..
-    test_server.py .
-    
-    ===================== 1 tests deselected by '-k-send_http' =====================
-    ==================== 3 passed, 1 deselected in 0.03 seconds ====================
-
-Or to only select the class::
-
-    $ py.test -kTestClass
-    ============================= test session starts ==============================
-    platform darwin -- Python 2.7.1 -- pytest-2.2.0.dev6
-    collecting ... collected 4 items
-    
-    test_mark_classlevel.py ..
-    
-    ===================== 2 tests deselected by '-kTestClass' ======================
-    ==================== 2 passed, 2 deselected in 0.02 seconds ====================
 
 API reference for mark related objects
 ------------------------------------------------

doc/monkeypatch.txt

 .. background check:
    $ py.test
    =========================== test session starts ============================
-   platform darwin -- Python 2.7.1 -- pytest-2.1.3
+   platform darwin -- Python 2.7.1 -- pytest-2.2.0
    collecting ... collected 0 items
    
-   =============================  in 0.00 seconds =============================
+   =============================  in 0.20 seconds =============================
 
 Method reference of the monkeypatch function argument
 -----------------------------------------------------
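
 As a quick orientation, a typical use of the ``monkeypatch`` funcarg
 looks like this (a minimal sketch; the variable name and value are
 arbitrary)::

     import os

     def test_fake_home(monkeypatch):
         # the modification is automatically undone after the test finishes
         monkeypatch.setenv("HOME", "/tmp/fakehome")
         assert os.environ["HOME"] == "/tmp/fakehome"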
 
     example $ py.test -rx xfail_demo.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 6 items
     
     xfail_demo.py xxxxxx
     XFAIL xfail_demo.py::test_hello6
       reason: reason
     
-    ======================== 6 xfailed in 0.11 seconds =========================
+    ======================== 6 xfailed in 0.08 seconds =========================
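
 The ``xfail_demo.py`` contents are not shown in this hunk; judging from
 the report above, its last case might look like::

     import pytest

     @pytest.mark.xfail(reason="reason")
     def test_hello6():
         assert 0    # expected to fail, reported as XFAIL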
 
 .. _`evaluation of skipif/xfail conditions`:
 
 
 Test parametrization:
 
-- `generating parametrized tests with funcargs`_ (uses deprecated
- ``addcall()`` API.
+- `generating parametrized tests with funcargs`_ (uses deprecated ``addcall()`` API).
 - `test generators and cached setup`_
 - `parametrizing tests, generalized`_ (blog post)
 - `putting test-hooks into local or global plugins`_ (blog post)
 
     $ py.test test_tmpdir.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_tmpdir.py F
     ================================= FAILURES =================================
     _____________________________ test_create_file _____________________________
     
-    tmpdir = local('/Users/hpk/tmp/pytest-94/test_create_file0')
+    tmpdir = local('/Users/hpk/tmp/pytest-1596/test_create_file0')
     
         def test_create_file(tmpdir):
             p = tmpdir.mkdir("sub").join("hello.txt")
     E       assert 0
     
     test_tmpdir.py:7: AssertionError
-    ========================= 1 failed in 0.05 seconds =========================
+    ========================= 1 failed in 0.20 seconds =========================
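
 A reconstruction of ``test_create_file`` from the fragments above (the
 ``p.write``/``p.read`` lines are assumptions based on the py.path API)::

     # content of test_tmpdir.py -- sketch
     def test_create_file(tmpdir):
         p = tmpdir.mkdir("sub").join("hello.txt")
         p.write("content")              # create a file in the fresh tmpdir
         assert p.read() == "content"
         assert 0                        # deliberate failure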
 
 .. _`base temporary directory`:
 
 
     $ py.test test_unittest.py
     =========================== test session starts ============================
-    platform darwin -- Python 2.7.1 -- pytest-2.1.3
+    platform darwin -- Python 2.7.1 -- pytest-2.2.0
     collecting ... collected 1 items
     
     test_unittest.py F
     test_unittest.py:8: AssertionError
     ----------------------------- Captured stdout ------------------------------
     hello
-    ========================= 1 failed in 0.04 seconds =========================
+    ========================= 1 failed in 0.23 seconds =========================
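
 A hypothetical ``test_unittest.py`` consistent with this output::

     import unittest

     class MyTest(unittest.TestCase):
         def test_method(self):
             print "hello"           # captured, shown only on failure
             self.assertEqual(1, 2)  # fails with an AssertionError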
 
 .. _`unittest.py style`: http://docs.python.org/library/unittest.html
 
         name='pytest',
         description='py.test: simple powerful testing with Python',
         long_description = long_description,
-        version='2.2.0.dev11',
+        version='2.2.0',
         url='http://pytest.org',
         license='MIT license',
         platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],