Question: What's a recommended way to store additional metadata during a coverage run?

Issue #652 closed
Ardelean Vlad George
created an issue

Hi! I'm looking for guidance on an idea that I have. I'll present my goal, the steps I took to research a direction to follow, and at the end I'll state my question.

High level goal: I want to store and report the number of asserts that were executed. I want to display the assertion count together with the covered/not-covered status of each line. This would allow one to also know "how much" certain code was covered.

Steps that I took, or plan to take:

- I looked at the internals of coverage.py, to find a place where I could plug in some code and store extra metadata. I noticed that the internal format used for storing data wouldn't trivially allow storing more than line numbers.
- I noticed that there's a C tracer and a Python tracer. I don't know how I'd modify the C one - probably I'd need to compile my own tracer based on it - and I haven't done such things before, so I'm postponing that approach.
- I took a look at the Python tracer, and as far as I could tell it's made to deal with line numbers and not much else. The interfaces of the Python and C tracers are probably the same though, so it was interesting to see what it does (a minimal sketch of that interface is shown below). I noticed, however, that the tracer didn't get called for the functions I was testing, so I probably missed something.
- There's then the obvious problem of counting the number of assert statements that run between executions of different portions of the app code - I will deal with that on my own. I have a vague idea how to do that, but it's not in the scope of this question.
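
For context, as far as I can tell both tracers implement the trace-function interface that CPython exposes through sys.settrace (the C tracer via the C-level equivalent). Here is a minimal sketch of that interface - not coverage.py's actual code - showing where line numbers are observed and, in principle, where extra metadata could be attached:

```python
import sys

# Line numbers seen so far, keyed by filename - roughly what a line
# tracer collects; any extra metadata would have to be attached here.
executed_lines = {}

def trace(frame, event, arg):
    # The interpreter reports "call", "line", "return" and "exception"
    # events; a coverage-style line tracer mostly cares about "line".
    if event == "line":
        filename = frame.f_code.co_filename
        executed_lines.setdefault(filename, set()).add(frame.f_lineno)
    return trace  # keep tracing inside this frame

def demo():
    x = 1 + 1
    return x

sys.settrace(trace)
demo()
sys.settrace(None)
print(executed_lines)
```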

So my question is: How would you recommend I integrate my code with coverage.py?

Current idea: have a pytest plugin that counts asserts and monkey-patches coverage.py, so that it can store assert-statement data and correlate it with the line numbers that coverage.py reports.
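
To make the idea concrete, here is a minimal sketch of the assert-counting half, assuming pytest's pytest_assertion_pass hook (it only fires when enable_assertion_pass_hook = true is set in the ini file, and it only sees asserts that pass). Correlating these tallies with coverage.py's line data would be a separate step; the output file name is just a placeholder:

```python
# conftest.py - rough sketch of counting asserts per test file and line.
from collections import Counter

# (test file, line number of the assert) -> number of passing asserts seen
assert_counts = Counter()

def pytest_assertion_pass(item, lineno, orig, expl):
    # Called by pytest for every assert that passes (when the hook is enabled).
    assert_counts[(str(item.fspath), lineno)] += 1

def pytest_sessionfinish(session, exitstatus):
    # Dump the tallies somewhere a later reporting/merging step can read them.
    with open("assert-counts.txt", "w") as f:
        for (path, lineno), n in sorted(assert_counts.items()):
            f.write(f"{path}:{lineno} {n}\n")
```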

I know this question might be heavy. If you feel like what I'm trying to do exceeds the original plan for coverage, please let me know :) This is more of an experiment at the moment, and this question is part of the research that I'm doing.

Thanks a lot! Awesome library btw :P

Comments (4)

  1. Ned Batchelder repo owner

    Hi, this is an interesting question. We don't have a simple way to integrate extra data like this at the moment. There's been a long-standing request to support "who tests what" (issue #170), which would probably start with supporting plugins that can record more information than just line numbers.

    I'm wondering though, how will you use the assert count? You want to report for an entire test suite, how many assertions were run? And then what will you do with the number? Do you have a target number you are trying to reach?

    Another approach is to use a test-runner plugin that can collect the assert numbers, and then report them somewhere, so that you don't have to fiddle with coverage.py at all.
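
    Roughly, something like the sketch below could read the normal coverage data file after a run and merge in assert counts collected by the test-runner plugin. This is only an illustration: the assert_counts mapping is hypothetical, and it assumes the CoverageData API (read, measured_files, lines) found in recent versions of coverage.py.

    ```python
    import coverage

    # Read the standard data file (.coverage) written by a normal run.
    data = coverage.CoverageData()
    data.read()

    # This would come from the test-runner plugin, e.g.
    # {("src/app.py", 12): 34, ...}; it is only a placeholder here.
    assert_counts = {}

    for filename in sorted(data.measured_files()):
        for lineno in sorted(data.lines(filename) or []):
            n = assert_counts.get((filename, lineno), 0)
            print(f"{filename}:{lineno} covered, {n} asserts")
    ```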

  2. Ardelean Vlad George reporter

    Hi, and thanks for the reply.

    I'd use the assertion count per line of code. I'd report not only whether the line was covered, but also how many asserts were run over all tests that covered that one line.

    As such, one could easily see areas of code where only broad-scope "integration"-ish tests exist. Such large-scope tests would greatly increase coverage, but wouldn't assert too many things.

    To illustrate a problem with coverage only, think about this case:

    1. I have 70% coverage in my project.
    2. I delete all the assertions from all the tests.
    3. My coverage stays exactly the same.

    If I also had a way to preserve the number of assertions that were run during a test, and to report that number, I'd be able to distinguish between a test suite that has 0 assertions and one that has tens of thousands.
