Repeated CoverageData.updates cause problems

Issue #415 resolved
Anonymous created an issue

I'm attempting to use the API to get the following behavior:

  • collect coverage during each "test run" which is a single path of API calls
  • clear the coverage at the end of each such "run" so that the system can tell what was covered during each run
  • keep a CoverageData with all information across "test runs" so that at the end of testing a report over all test runs can be produced
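To make the intended bookkeeping concrete, here is a minimal self-contained sketch of the workflow above, using plain Python sets as a stand-in (the `RunCoverage` class and its method names are hypothetical illustrations, not the coverage.py API):

```python
# Sketch of the desired workflow: collect coverage per test run,
# clear it at the end of each run, and keep a cumulative record
# across all runs for a final report.
# Lines are modeled as (filename, lineno) pairs; this is NOT
# coverage.py itself, just the bookkeeping it is being asked to do.

class RunCoverage:
    def __init__(self):
        self.current_run = set()   # coverage for the run in progress
        self.all_runs = set()      # union over every completed run

    def record_line(self, filename, lineno):
        self.current_run.add((filename, lineno))

    def finish_run(self):
        """Fold this run into the cumulative record, then clear it."""
        covered = self.current_run
        self.all_runs |= covered
        self.current_run = set()   # fresh slate for the next run
        return covered

cov = RunCoverage()
cov.record_line("avl.py", 10)
cov.record_line("avl.py", 12)
first = cov.finish_run()           # what this run covered

cov.record_line("avl.py", 12)      # covered before, but new to run 2
second = cov.finish_run()
```

The point of the sketch is that each run's data is isolated, while `all_runs` only ever grows by set union, so merging the same lines repeatedly is idempotent.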

https://github.com/agroce/tstl/blob/master/src/static/boilerplate_cov.py

has the critical code. Basic structure is just:

# oldCovData accumulates coverage across all test runs;
# newCov is the CoverageData collected during the run that just finished
if self.oldCovData is None:
    self.oldCovData = newCov
else:
    self.oldCovData.write_file("bug_report.coverage")
    self.oldCovData.update(newCov)

(the write_file is just for bug reporting)

When run without the write_file call, it hangs, consuming most of the CPU and a large amount of RAM. With write_file, the timeout stops testing before that can happen, but the coverage file is huge and claims a large number of processes. This is while testing only one program, a simple AVL tree.

Coverage data file is attached.

Note that I also see behavior where it appears that:

  • after calling .get_data twice, the second call returns None; but if new coverage is added by executing the code being measured, get_data returns an object that has not (as the API docs indicate it should) cleared out the old data. Not sure if this is related, but it explains the call to erase in the code linked on GitHub.
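For reference, here is a small sketch of the read-and-clear contract I expected from get_data, again as a plain-Python stand-in (the `Collector` class is a hypothetical illustration of the documented semantics, not coverage.py itself):

```python
# Stand-in illustrating the contract the API docs describe for
# get_data(): return the data collected since the last call, then
# clear it, so an immediate second call yields nothing new.
# NOT the coverage.py implementation; names here are hypothetical.

class Collector:
    def __init__(self):
        self._lines = set()

    def record(self, line):
        self._lines.add(line)

    def get_data(self):
        """Return accumulated data and reset the collector."""
        data, self._lines = self._lines, set()
        return data

c = Collector()
c.record(("avl.py", 1))
first = c.get_data()    # everything recorded so far
second = c.get_data()   # empty: the previous call cleared the data
c.record(("avl.py", 2))
third = c.get_data()    # only the newly recorded data
```

Under that contract, a second get_data with no intervening execution should come back empty, and data recorded afterward should not include the old lines; the behavior described above deviates from this, hence the workaround call to erase.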

Comments (4)