Commits

Brianna Laugher committed 0b9d82e

issue #308
first attempt, mark individual parametrize test instances with other marks (like xfail)

  • Parent commits 7c468f8
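
For orientation, a minimal sketch of the usage this commit is aiming for (a hypothetical test module, mirroring the tests added in the diff below): a mark such as xfail is applied to a single parametrize instance by wrapping that instance's argument tuple.

    import pytest

    @pytest.mark.parametrize(("input", "expected"), [
        (1, 2),
        pytest.mark.xfail((1, 3)),  # only this instance is expected to fail
        (2, 3),
    ])
    def test_increment(input, expected):
        assert input + 1 == expected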

Comments (8)

  1. holger krekel

    forget about point 5 :) But for the record, it is about being able to parametrize a test with marker instances to test pytest machinery. There is no such test currently, but if one tried to parametrize with [pytest.mark.mymark1(...), pytest.mark.mymark2(...)] , expecting those marks to be passed as the fixture value, it would not work. One would have to "double-mark" it to receive those markers as fixtures in a test. In that sense, the changes are not fully backward compatible but i don't suppose it's going to be a problem in real life. If it does show up, we can introduce a flag "marktransfer" in a subsequent release, defaulting to "True". If False, the transfer you are just introducing would not be performed.
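
    To illustrate the backward-compatibility point, a hedged sketch (the mark names come from the comment above; the test body is hypothetical): before this change, the MarkDecorator objects below would have been handed to the test as the fixture values; with the transfer, each mark is stripped, its last positional argument becomes the parameter set, and the mark name ends up as a keyword on that test instance.

        import pytest

        @pytest.mark.parametrize(("left", "right"), [
            pytest.mark.mymark1((1, 1)),  # test receives (1, 1), marked "mymark1"
            pytest.mark.mymark2((2, 2)),  # test receives (2, 2), marked "mymark2"
        ])
        def test_transfer(left, right):
            assert left == right

    Receiving the MarkDecorator itself as the value would now require the "double-mark" wrapping described above.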

    all looks good now. Looking forward to the doc bits, then i'll do some final testing and merge!

    1. Brianna Laugher author

      oh, I understand - if someone is deliberately trying to pass in a MarkDecorator. Yeah, that will be weird. :) Actually it will work fine as long as there are multiple args!

  2. Brianna Laugher author

    hm, so regendoc is essentially running the python snippets in the documentation, is that it? it seems to produce a lot of spurious differences (different object addresses, different line length, 0.01 seconds vs 0.02 seconds) - should I add those diffs or try to leave them out?

  3. holger krekel

    Brianna, this looks good. A few notes/questions/suggestions:

    • Could you put the tests into their own TestMarkersWithParametrization class (and mark it with issue308 if you like)?

    • have at least one test showing that it also works with pytest_generate_tests, not only with the decorator

    • fix the single-argname case: if you use something like parametrize("onearg", [pytest.mark.xfail(1), 2]), your current code will not detect it, due to the "tuple" normalization at the beginning of Metafunc.parametrize() (see the sketch after this list)

    • instead of the two loops over argvalues (which you added to parametrize()) you can do it in one, i think.

    • for completeness, it would be good to test that one can parametrize with markers like so: @parametrize("marker", [pytest.mark.somename(pytest.mark.xfail()), ...]), which would have the test function receiving the xfail() marker. Maybe more of a theoretical consideration, and i think it will just pass.

    • could you also imagine adding one or two examples to doc/en/example/parametrize.txt? We use this tool https://bitbucket.org/RonnyPfannschmidt/regendoc/ and issue "regendoc --update parametrize.txt" to generate "real life" output. Ideally, you could also add a section to doc/en/parametrize.txt, but i can take care of that myself if you prefer.
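
    Regarding the single-argname case in point 3, a hedged sketch of what such a test could look like (the function and parameter names here are illustrative, not from the commit): with a single argname the argvalues are bare values rather than tuples, so the mark has to be detected before, or survive, the tuple normalization at the start of Metafunc.parametrize().

        import pytest

        @pytest.mark.parametrize("n", [
            1,
            pytest.mark.xfail(2),  # the mark wraps a bare value, not a tuple
            3,
        ])
        def test_is_odd(n):
            assert n % 2 == 1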

    thanks for improving parametrization!

    holger

    1. Brianna Laugher author

      I have addressed the first 4 points in c383449. I assume you do not think it is important to add support for metafunc.addcall, since that is deprecated.

      I'm not sure what you mean by point 5. Is it applying multiple marks/nested marks? I made a test for that case, but it doesn't pass; only one of the markers gets applied to the test. I haven't dug into it because I was not sure whether it was important to support or not.

      I will add some docs later today :)

      thanks for your quick comments.

Files changed (2)

File _pytest/python.py

 import sys
 import pytest
 from _pytest.main import getfslineno
+from _pytest.mark import MarkDecorator, MarkInfo
 from _pytest.monkeypatch import monkeypatch
 from py._code.code import TerminalRepr
 
         self._globalid_args = set()
         self._globalparam = _notexists
         self._arg2scopenum = {}  # used for sorting parametrized resources
+        self.keywords = {}
 
     def copy(self, metafunc):
         cs = CallSpec2(self.metafunc)
         cs.funcargs.update(self.funcargs)
         cs.params.update(self.params)
+        cs.keywords.update(self.keywords)
         cs._arg2scopenum.update(self._arg2scopenum)
         cs._idlist = list(self._idlist)
         cs._globalid = self._globalid
     def id(self):
         return "-".join(map(str, filter(None, self._idlist)))
 
-    def setmulti(self, valtype, argnames, valset, id, scopenum=0):
+    def setmulti(self, valtype, argnames, valset, id, keywords, scopenum=0):
         for arg,val in zip(argnames, valset):
             self._checkargnotcontained(arg)
             getattr(self, valtype)[arg] = val
             if val is _notexists:
                 self._emptyparamspecified = True
         self._idlist.append(id)
+        self.keywords.update(keywords)
 
     def setall(self, funcargs, id, param):
         for x in funcargs:
         if not argvalues:
             argvalues = [(_notexists,) * len(argnames)]
 
+        # these marks/keywords will be applied in Function init
+        newkeywords = {}
+        for i, argval in enumerate(argvalues):
+            newkeywords[i] = {}
+            if isinstance(argval, MarkDecorator):
+                # convert into a mark without the test content mixed in
+                newmark = MarkDecorator(argval.markname, argval.args[:-1], argval.kwargs)
+                newkeywords[i] = {newmark.markname: newmark}
+
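+        # a marked argvalue carries the real parameter set as its last
+        # positional argument; unwrap it so only that value is parametrized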
+        argvalues = [av.args[-1] if isinstance(av, MarkDecorator) else av
+                     for av in argvalues]
+
         if scope is None:
             scope = "subfunction"
         scopenum = scopes.index(scope)
                 assert len(valset) == len(argnames)
                 newcallspec = callspec.copy(self)
                 newcallspec.setmulti(valtype, argnames, valset, ids[i],
-                                     scopenum)
+                                     newkeywords[i], scopenum)
                 newcalls.append(newcallspec)
         self._calls = newcalls
 
 
         for name, val in (py.builtin._getfuncdict(self.obj) or {}).items():
             self.keywords[name] = val
+        if callspec:
+            for name, val in callspec.keywords.items():
+                self.keywords[name] = val
         if keywords:
             for name, val in keywords.items():
                 self.keywords[name] = val

File testing/python/metafunc.py

             "*3 passed*"
         ])
 
+    @pytest.mark.issue308
+    def test_mark_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.foo
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.bar((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        items = testdir.getitems(s)
+        assert len(items) == 3
+        for item in items:
+            assert 'foo' in item.keywords
+        assert 'bar' not in items[0].keywords
+        assert 'bar' in items[1].keywords
+        assert 'bar' not in items[2].keywords
+
+    @pytest.mark.issue308
+    def test_select_individual_parametrize_instance_based_on_mark(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.foo((2, 3)),
+                (3, 4),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        rec = testdir.inline_run("-m", 'foo')
+        passed, skipped, fail = rec.listoutcomes()
+        assert len(passed) == 1
+        assert len(skipped) == 0
+        assert len(fail) == 0
+
+    @pytest.mark.xfail(reason="is this important to support??")
+    @pytest.mark.issue308
+    def test_nested_marks_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.foo(pytest.mark.bar((1, 3))),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        items = testdir.getitems(s)
+        assert len(items) == 3
+        for mark in ['foo', 'bar']:
+            assert mark not in items[0].keywords
+            assert mark in items[1].keywords
+            assert mark not in items[2].keywords
+
+    @pytest.mark.xfail(reason="is this important to support??")
+    @pytest.mark.issue308
+    def test_nested_marks_via_variable_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+            mastermark = pytest.mark.foo(pytest.mark.bar)
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                mastermark((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        items = testdir.getitems(s)
+        assert len(items) == 3
+        for mark in ['foo', 'bar']:
+            assert mark not in items[0].keywords
+            assert mark in items[1].keywords
+            assert mark not in items[2].keywords
+
+    @pytest.mark.issue308
+    def test_simple_xfail_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.xfail((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        reprec = testdir.inline_run()
+        # an xfailed test is counted as skipped by assertoutcome
+        reprec.assertoutcome(passed=2, skipped=1)
+
+    @pytest.mark.issue308
+    def test_xfail_with_arg_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.xfail("sys.version > 0")((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        reprec = testdir.inline_run()
+        reprec.assertoutcome(passed=2, skipped=1)
+
+    @pytest.mark.issue308
+    def test_xfail_with_kwarg_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.xfail(reason="some bug")((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        reprec = testdir.inline_run()
+        reprec.assertoutcome(passed=2, skipped=1)
+
+    @pytest.mark.issue308
+    def test_xfail_with_arg_and_kwarg_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.xfail("sys.version > 0", reason="some bug")((1, 3)),
+                (2, 3),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        reprec = testdir.inline_run()
+        reprec.assertoutcome(passed=2, skipped=1)
+
+    @pytest.mark.issue308
+    def test_xfail_is_xpass_on_individual_parametrize_instance(self, testdir):
+        s = """
+            import pytest
+
+            @pytest.mark.parametrize(("input", "expected"), [
+                (1, 2),
+                pytest.mark.xfail("sys.version > 0", reason="some bug")((2, 3)),
+                (3, 4),
+            ])
+            def test_increment(input, expected):
+                assert input + 1 == expected
+        """
+        testdir.makepyfile(s)
+        reprec = testdir.inline_run()
+        # an unexpected pass (xpass) is counted as a failure
+        reprec.assertoutcome(passed=2, failed=1)
+