
Commits

lac  committed 4195a49

Some grammar fixes, but a few problems remain

  • Parent commits 4d1be5f
  • Branches default


Files changed (1)

File doc/skipping.txt

 =====================================================================
 
 You can skip or "xfail" test functions, either by marking functions
-through a decorator or by calling the ``pytest.skip|xfail`` helpers.
-A *skip* means that you expect your test to pass unless a certain configuration or condition (e.g. wrong Python interpreter, missing dependency) prevents it to run.  And *xfail* means that you expect your test to fail because there is an
+with a decorator or by calling the ``pytest.skip|xfail`` helpers.
+A *skip* means that you expect your test to pass unless a certain configuration or condition (e.g. wrong Python interpreter, missing dependency) prevents it from being run.  An *xfail* means that you expect your test to fail because there is an
 implementation problem.  py.test counts and lists *xfailing* tests separately
-and you can provide info such as a bug number or a URL to provide a
-human readable problem context.
+and it is possible to give additional information, such as a bug number
+or a URL, to provide a problem context for your human readers.
 
-Usually detailed information about skipped/xfailed tests is not shown
+XYZZY explain how you do this with an example?
+
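+For example, a bug reference might be attached like this (a sketch; it
+assumes the ``xfail`` marker and helper accept a reason string, and the
+issue number and URL are invented)::
+
+    @pytest.mark.xfail(reason="wrong rounding, see issue #1042")
+    def test_rounding():
+        ...
+
+    def test_protocol():
+        pytest.xfail("see http://bugs.example.org/1042")
+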
+Detailed information about skipped/xfailed tests is usually not shown
 to avoid cluttering the output.  You can use the ``-r`` option to
 see details corresponding to the "short" letters shown in the
 test progress::
 
     py.test -rxs  # show extra info on skips and xfail tests
 
+XYZZY this is confusing.  I think that it should be
+    py.test -rxs # show extra info on xfails and skips
+
+so the -r is to be verbose, and the x is for xfails and the s is for
+skips. But if this isn't what you mean, then I am even more confused.
+
+
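+For instance, the report characters can also be requested separately (a
+sketch, assuming ``-r`` accepts each character on its own)::
+
+    py.test -rx   # extra info on xfail tests only
+    py.test -rs   # extra info on skipped tests only
+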
 (See :ref:`how to change command line options defaults`)
 
 .. _skipif:
 Skipping a single function
 -------------------------------------------
 
-Here is an example for marking a test function to be skipped
+Here is an example of marking a test function to be skipped
 when run on a Python3 interpreter::
 
     @pytest.mark.skipif("sys.version_info >= (3,0)")
     def test_function(...):
         ...
 
-Create a shortcut for your conditional skip decorator
+You can create a shortcut for your conditional skip decorator
 at module level like this::
 
     win32only = pytest.mark.skipif("sys.platform != 'win32'")
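+
+and then apply it to any test function (a sketch; the function name below
+is made up)::
+
+    @win32only
+    def test_registry_access():
+        ...
+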
 skip test functions of a class
 --------------------------------------
 
-As with all function :ref:`marking` you can do it at
+As with all function :ref:`marking` you can skip test functions at the
 `whole class- or module level`_.  Here is an example
-for skipping all methods of a test class based on platform::
+for skipping all methods of a test class based on the platform::
 
     class TestPosixCalls:
         pytestmark = pytest.mark.skipif("sys.platform == 'win32'")
         def test_function(self):
             "will not be setup or run under 'win32' platform"
 
-It is fine in general to apply multiple "skipif" decorators
-on a single function - this means that if any of the conditions
+Using multiple "skipif" decorators with a single function is generally
+fine - this means that if any of the conditions
 apply the function will be skipped.
 
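+A sketch of what stacking them looks like (both conditions here are only
+for illustration)::
+
+    @pytest.mark.skipif("sys.platform == 'win32'")
+    @pytest.mark.skipif("sys.version_info < (2,6)")
+    def test_posix_feature():
+        ...
+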
 .. _`whole class- or module level`: mark.html#scoped-marking
 you can force the running and reporting of an ``xfail`` marked test
 as if it weren't marked at all.
 
-Same as with skipif_ you can also selectively expect a failure
-depending on platform::
+As with skipif_, you can also selectively expect a failure
+depending on the platform::
 
     @pytest.mark.xfail("sys.version_info >= (3,0)")
     def test_function():
         ...
 
-You can also avoid running an "xfail" test at all or
-specify a reason such as a bug ID or similar.  Here is
+You can also avoid running an "xfail" test at all, or
+specify a reason why the test fails, such as a bug ID.  Here is
 a simple test file with usages:
 
 .. literalinclude:: example/xfail_demo.py
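+
+For instance, skipping execution of an expected failure entirely might
+look like this (a sketch; it assumes the marker accepts ``run`` and
+``reason`` keyword arguments, and the bug ID is invented)::
+
+    @pytest.mark.xfail(run=False, reason="#1059: crashes the interpreter")
+    def test_crasher():
+        ...
+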
 you can also imperatively produce an XFail-outcome from
 within test or setup code.  Example::
 
+XYZZY 'imperatively' means in an urgent, commanding manner.  Dictators
+issue commands imperatively.  I don't think that code, or anything 
+non-human can.  And really the only thing you can do imperatively is
+issue orders.  You cannot 'imperatively produce' anything.
+I think what you mean is 'you can also force an XFail-outcome'.  
+But if you really want the 'produce' you have to drop the 'imperatively'.
+
     def test_function():
         if not valid_config():
             pytest.xfail("unsupported configuration")
     docutils = pytest.importorskip("docutils")
 
 If ``docutils`` cannot be imported here, this will lead to a
-skip outcome of the test.  You can also skip depending if
-if a library does not come with a high enough version::
+skip outcome of the test.  You can also skip based on the
+version number of the library.  This test will be skipped if
+the docutils version available is earlier than "0.3"::
 
     docutils = pytest.importorskip("docutils", minversion="0.3")
 
 The version will be read from the specified module's ``__version__`` attribute.
 
+XYZZY Add a note here about how various things like 0.3rc and 0.3beta and
+      such are handled.  It's a mess
+
 imperative skip from within a test or setup function
 ------------------------------------------------------
 
 If for some reason you cannot declare skip-conditions
-you can also imperatively produce a Skip-outcome from
+you can also imperatively produce a skip-outcome from
 within test or setup code.  Example::
 
     def test_function():
         if not valid_config():
             pytest.skip("unsupported configuration")
 
+XYZZY see my note about XFail.  You cannot 'imperatively produce' here
+either.