Repeated CoverageData.update calls cause problems

Issue #415 resolved
Former user created an issue

I am attempting to use the API to get the following behavior:

  • collect coverage during each "test run" which is a single path of API calls
  • clear the coverage at the end of each such "run" so that the system can tell what was covered during each run
  • keep a CoverageData with all information across "test runs" so that at the end of testing a report over all test runs can be produced
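The accumulation pattern described above can be sketched with a dict-based stand-in (the class and method names below are illustrative models of the workflow, not the real coverage.py API):

```python
class FakeCoverageData:
    """Minimal stand-in for CoverageData: maps filename -> set of covered lines."""

    def __init__(self):
        self.lines = {}

    def add_lines(self, line_data):
        for filename, lines in line_data.items():
            self.lines.setdefault(filename, set()).update(lines)

    def update(self, other):
        # Merge another data object's measurements into this one.
        self.add_lines(other.lines)


# One accumulator kept across all test runs, plus a fresh per-run object
# that is effectively "cleared" by being recreated each run.
total = FakeCoverageData()
runs = [{"avl.py": {1, 2, 3}}, {"avl.py": {2, 5}}]
for run_measurements in runs:
    per_run = FakeCoverageData()         # fresh data for this run only
    per_run.add_lines(run_measurements)  # what this run covered
    total.update(per_run)                # fold into the cross-run report

print(sorted(total.lines["avl.py"]))     # union of all runs: [1, 2, 3, 5]
```

The key point is that each run's data is inspected in isolation, while the long-lived object only ever grows via merges.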

The code (linked on GitHub) has the critical part. Its basic structure is just:

if self.oldCovData is None:
    self.oldCovData = newCov
else:
    self.oldCovData.update(newCov)

(the write_file is just for bug reporting)

When done without write_file, it hangs and consumes most of the CPU and large amounts of RAM. With write_file, the timeout stops testing before that can happen, but the coverage file is huge and claims a large number of processes. I am only testing one program, a simple AVL tree.

The coverage data file is attached.

Note that I also see behavior where it appears that:

  • when calling .get_data twice, the second call returns None; but if new coverage is added by executing the code being measured, get_data returns an object that has not cleared out the old data (as the API docs indicate it should). I am not sure if this is related, but it explains the call to erase in the code linked on GitHub.
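The semantics the API docs seem to describe can be modeled with a hypothetical collector stand-in (this is an illustration of the expected snapshot-then-clear behavior, not coverage.py's internals):

```python
class FakeCollector:
    """Hypothetical model of documented get_data() semantics:
    return the data measured since the last call, then clear it."""

    def __init__(self):
        self._pending = set()

    def record(self, line):
        self._pending.add(line)

    def get_data(self):
        snapshot = set(self._pending)
        self._pending.clear()  # old data is cleared, not carried forward
        return snapshot


c = FakeCollector()
c.record(1)
c.record(2)
first = c.get_data()   # {1, 2}
second = c.get_data()  # empty set: not None, and no stale data
print(sorted(first), sorted(second))
```

Under this model a second call with no new measurement yields an empty data object, which is the behavior the report says was not observed.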

Comments (4)

  1. Ned Batchelder repo owner

    I've already changed the .get_data() to not return None the second time. Does that help?

  2. Ned Batchelder repo owner

    I believe this is due to an extreme proliferation of brief_sys data in the .coverage file. I've removed that in 088704cc33ec, let me know how it seems now.
