Ned Batchelder / coverage.py
Issue #185 (duplicate)

I wish coverage.py would tell me what stack traces exercised this line of code.

Zooko O'Whielacronx created an issue

I have more than a thousand unit tests here, and I'm wondering which one(s) are exercising a certain bit of code.

(This is because, even though all of the lines of code that I've looked at are covered, and even all of the branches, there's still some data-dependent functionality that isn't being exercised. I know because there's a bug that shows up in the wild but not in the unit tests, and the diagnostics produced when it shows up (i.e. stack traces and values emitted) seem to point to a bug in this code.)

Anyway, I think it would be pretty sweet if coverage.py would note down all the stack traces that resulted in each line of code being executed. (Or at least each function.)

Then I could browse them to figure out which unit tests are exercising this code.

I realize this could make running tests under coverage a lot slower...
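
For illustration, here is a rough sketch of the idea outside coverage.py itself: a trace function that records which test was running when each line executed. The current_test variable and run_test wrapper are hypothetical names that a test runner would have to provide.

    import sys
    from collections import defaultdict

    # (filename, lineno) -> names of the tests that executed that line
    lines_to_tests = defaultdict(set)
    current_test = None  # hypothetical: the runner sets this before each test

    def tracer(frame, event, arg):
        if event == "line" and current_test is not None:
            code = frame.f_code
            lines_to_tests[(code.co_filename, frame.f_lineno)].add(current_test)
        return tracer  # keep receiving line events for this frame

    def run_test(name, func):
        # hypothetical wrapper: name the test, trace it, then clean up
        global current_test
        current_test = name
        sys.settrace(tracer)
        try:
            func()
        finally:
            sys.settrace(None)
            current_test = None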

Comments (4)

  1. Zooko O'Whielacronx reporter

    Brian Warner pointed out:

    <warner> sounds like you actually want trial to set some variable at the
             beginning of each test case, and keep separate coverage files for
             each test  [02:57]
    <zooko> Ooh, that would be a good way to do it!  [02:58]
    <zooko> If I still had our "trialcoverage" hack running, instead of just
            invoking a "coverage" process and telling it to run a "trial" process,
            then that would be a very easy extension to add. :-/
    <warner> yeah, basically just restart the coverage at the start of each test
             case
    <warner> and emit a merged file for overall numbers, but still be able to do
             separate analysis of each test
    <zooko> So the most recent version of trialcoverage does exactly that -- stop
            coverage after each test, start it again just before the next test.
    <zooko> So it would be a fairly simple extension to save the .coverage data
            into a file/dir named by the name of this test.
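
    For illustration, a minimal sketch of that stop/start-per-test approach using coverage.py's Python API; the run_one_test helper and the data-file naming here are made up, and exact API names can differ between coverage.py versions:

        import coverage

        def run_one_test(test_name, test_func):
            # One data file per test, e.g. ".coverage.test_upload";
            # "coverage combine" can still merge them for the overall numbers.
            cov = coverage.Coverage(data_file=".coverage." + test_name)
            cov.start()
            try:
                test_func()
            finally:
                cov.stop()
                cov.save()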
    
  2. Ned Batchelder repo owner

    This seems like the next big frontier for coverage.py, and a few people have talked about doing something.

    The two big issues for coverage.py are the data explosion that would result from trying to capture all of this data, and how to report on it usefully. Even just noting the name of the test would require some thinking about how to show the results. Do you have an idea how you would present the data?

    Also, note that figleaf has a feature for this, called "sections".

  3. eduardo schettino

    Sometimes I need something like this. I just put a breakpoint using pdb, run the tests, and check when it stops. :)

    Not exactly the same, but the other day I was thinking that I wish I knew which lines were covered by my unit tests and which ones were covered by my system tests, in a single report.

    My idea was that instead of just saying covered "yes" / "no" / "excluded", we could replace "yes" with one (or more) "cover-names" like "unit-test" and "func-test". I guess from there, saving the name of the unit test as a cover-name wouldn't be hard.

    Do you have an idea how you would present the data?

    I guess for such details it would only make sense to visualize it with HTML, using JavaScript to select which set of "cover-names" you would like to visualize.
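
    As a sketch of how such a report might look even in plain text, assuming the per-line cover-names have already been collected somehow; the coverage_by_name input and "example.py" below are made-up placeholders:

        # cover-name -> line numbers covered in some source file (toy input)
        coverage_by_name = {
            "unit-test": {1, 2, 3, 5},
            "func-test": {1, 2, 7},
        }

        with open("example.py") as f:
            for lineno, line in enumerate(f, start=1):
                names = [n for n, lines in coverage_by_name.items() if lineno in lines]
                marker = ",".join(names) if names else "-"
                print(f"{lineno:4d} {marker:20s} {line.rstrip()}")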
