
Commits

holger krekel committed c544577

fix/enhance example

  • Parent commit 410d803
  • Branch default


Files changed (2)

File doc/en/example/parametrize.txt

 
     $ py.test test_scenarios.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.3.4
+    platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev3
+    plugins: xdist, oejskit, pep8, cache, couchdbkit, quickcheck
     collected 4 items
     
     test_scenarios.py ....
     
-    ========================= 4 passed in 0.01 seconds =========================
+    ========================= 4 passed in 0.04 seconds =========================
 
 If you just collect tests, you'll also see 'advanced' and 'basic' as variants of the test function::
 
 
     $ py.test --collectonly test_scenarios.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.3.4
+    platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev3
+    plugins: xdist, oejskit, pep8, cache, couchdbkit, quickcheck
     collected 4 items
     <Module 'test_scenarios.py'>
       <Class 'TestSampleWithScenarios'>
           <Function 'test_demo1[advanced]'>
           <Function 'test_demo2[advanced]'>
     
-    =============================  in 0.01 seconds =============================
+    =============================  in 0.03 seconds =============================
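 
 For reference, the ``test_scenarios.py`` file driving this output is not part
 of the diff; a sketch reconstructed from the surrounding example (details may
 differ from the repository)::
 
     # content of test_scenarios.py (sketch, not part of this diff)
 
     def pytest_generate_tests(metafunc):
         idlist = []
         argvalues = []
         for scenario in metafunc.cls.scenarios:
             idlist.append(scenario[0])
             items = scenario[1].items()
             argnames = [x[0] for x in items]
             argvalues.append([x[1] for x in items])
         metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
 
     scenario1 = ('basic', {'attribute': 'value'})
     scenario2 = ('advanced', {'attribute': 'value2'})
 
     class TestSampleWithScenarios:
         scenarios = [scenario1, scenario2]
 
         def test_demo1(self, attribute):
             assert isinstance(attribute, str)
 
         def test_demo2(self, attribute):
             assert isinstance(attribute, str)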
 
 Note that we told ``metafunc.parametrize()`` that your scenario values
 should be considered class-scoped.  With pytest-2.3 this leads to a
 
     $ py.test test_backends.py --collectonly
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.3.4
+    platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev3
+    plugins: xdist, oejskit, pep8, cache, couchdbkit, quickcheck
     collected 2 items
     <Module 'test_backends.py'>
       <Function 'test_db_initialized[d1]'>
       <Function 'test_db_initialized[d2]'>
     
-    =============================  in 0.00 seconds =============================
+    =============================  in 0.03 seconds =============================
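 
 The ``db`` fixture producing the ``d1``/``d2`` ids is likewise elided here; a
 minimal sketch, assuming two dummy backend classes as in the surrounding
 example::
 
     # content of conftest.py (sketch, not part of this diff)
     import pytest
 
     class DB1:
         "one database object"
 
     class DB2:
         "alternative database object"
 
     @pytest.fixture(scope="module", params=["d1", "d2"])
     def db(request):
         # instantiate the backend selected by the current param
         if request.param == "d1":
             return DB1()
         elif request.param == "d2":
             return DB2()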
 
 And then when we run the test::
 
     ================================= FAILURES =================================
     _________________________ test_db_initialized[d2] __________________________
     
-    db = <conftest.DB2 instance at 0x13cbcb0>
+    db = <conftest.DB2 instance at 0x19ba7e8>
     
         def test_db_initialized(db):
             # a dummy test
     ================================= FAILURES =================================
     ________________________ TestClass.test_equals[1-2] ________________________
     
-    self = <test_parametrize.TestClass instance at 0x24e6d88>, a = 1, b = 2
+    self = <test_parametrize.TestClass instance at 0x2489b00>, a = 1, b = 2
     
         def test_equals(self, a, b):
     >       assert a == b
    ............sss............sss............sss............ssssssssssssssssss
    ========================= short test summary info ==========================
    SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
+
+Indirect parametrization of optional implementations/imports
+--------------------------------------------------------------------
+
+If you want to compare the outcomes of several implementations of a given
+API, you can write test functions that receive the already imported
+implementations and get skipped if an implementation is not importable or
+available.  Let's say we have a "base" implementation and the others
+(possibly optimized) need to provide similar results::
+
+    # content of conftest.py
+
+    import pytest
+
+    @pytest.fixture(scope="session")
+    def basemod(request):
+        return pytest.importorskip("base")
+
+    @pytest.fixture(scope="session", params=["opt1", "opt2"])
+    def optmod(request):
+        return pytest.importorskip(request.param)
+
+And then a base implementation of a simple function::
+
+    # content of base.py
+    def func1():
+        return 1
+
+And an optimized version::
+
+    # content of opt1.py
+    def func1():
+        return 1.0001
+
+And finally a little test module::
+
+    # content of test_module.py
+
+    def test_func1(basemod, optmod):
+        assert round(basemod.func1(), 3) == round(optmod.func1(), 3)
+
+
+If you run this with reporting for skips enabled::
+
+    $ py.test -rs test_module.py
+    =========================== test session starts ============================
+    platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev3
+    plugins: xdist, oejskit, pep8, cache, couchdbkit, quickcheck
+    collected 2 items
+    
+    test_module.py .s
+    ========================= short test summary info ==========================
+    SKIP [1] /tmp/doc-exec-11/conftest.py:10: could not import 'opt2'
+    
+    =================== 1 passed, 1 skipped in 0.04 seconds ====================
+
+You'll see that we don't have an ``opt2`` module and thus the second test run
+of our ``test_func1`` was skipped.  A few notes:
+
+- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
+  only need to import each implementation once
+
+- if you have multiple test functions and a skipped import, you will see
+  the ``[1]`` count increasing in the report
+
+- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
+  parametrization on the test functions to parametrize input/output
+  values as well, as sketched below.
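+
+A sketch of that last point, combining the implementation fixtures with a
+parametrized precision argument (``ndigits`` and ``test_func1_precision``
+are hypothetical names, not part of the example above)::
+
+    # hypothetical addition to test_module.py
+    import pytest
+
+    @pytest.mark.parametrize("ndigits", [1, 2, 3])
+    def test_func1_precision(basemod, optmod, ndigits):
+        # both implementations must agree up to the given precision
+        assert round(basemod.func1(), ndigits) == round(optmod.func1(), ndigits)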
+
+
+

File doc/en/example/special.txt

         seen = set([None])
         session = request.node
         for item in session.items:
-            instance = item.getparent(pytest.Instance)
-            if instance not in seen:
-                if hasattr(instance.obj, "callme"):
-                   instance.obj.callme()
-                seen.add(instance)
+            cls = item.getparent(pytest.Class)
+            if cls not in seen:
+                if hasattr(cls.obj, "callme"):
+                   cls.obj.callme()
+                seen.add(cls)
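 
 For reference, the loop above lives inside a session-scoped autouse fixture;
 a sketch of the full hook reconstructed around the elided context (the
 fixture name matches the output further below)::
 
     # content of conftest.py (sketch, not part of this diff)
     import pytest
 
     @pytest.fixture(scope="session", autouse=True)
     def callattr_ahead_of_alltests(request):
         print "callattr_ahead_of_alltests called"
         seen = set([None])
         session = request.node
         for item in session.items:
             cls = item.getparent(pytest.Class)
             if cls not in seen:
                 if hasattr(cls.obj, "callme"):
                     cls.obj.callme()
                 seen.add(cls)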
 
 Test classes may now define a ``callme`` method which
 will be called ahead of running any tests::
     # content of test_module.py
 
     class TestHello:
-        def callme(self):
+        @classmethod
+        def callme(cls):
             print "callme called!"
 
         def test_method1(self):
             print "test_method1 called"
 
     class TestOther:
-        def callme(self):
+        @classmethod
+        def callme(cls):
             print "callme other called"
         def test_other(self):
             print "test other"
 
+    # works with unittest as well ...
+    import unittest
+    
+    class SomeTest(unittest.TestCase):
+        @classmethod
+        def callme(cls):
+            print "SomeTest callme called"
+
+        def test_unit1(self):
+            print "test_unit1 method called"
+
 If you run this without output capturing::
 
     $ py.test -q -s test_module.py 
-    ...
+    ....
     callattr_ahead_of_alltests called
     callme called!
     callme other called
+    SomeTest callme called
     test_method1 called
     test_method1 called
     test other
+    test_unit1 method called