"coverage combine" consumes a lot of memory

Issue #282 invalid
Andres Riancho created an issue

When trying to use "coverage combine" in my circleci.com CI build [0] I get a message saying: "Warning: The build VMs have a memory limit of 4GB. Your build hit this limit on one or more containers, and your build results are likely invalid."

Removing the "coverage combine" command from the build fixes the build failure, so I'm pretty confident that this is a bug in coverage.py and not in circleci.com or anything else.

The software I'm trying to measure code coverage on is w3af [1], which is rather large (lots of Python files, lots of lines).

Coverage information is gathered using nosetests with "--with-cov --cov-report=xml".

Any ideas on how to fix this?

[0] https://circleci.com/gh/andresriancho/w3af/130
[1] http://w3af.org
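For background on what the failing step does (a sketch, not taken from the report): when a test suite runs in multiple processes, coverage.py's parallel mode writes one data file per process (named .coverage.<machine>.<pid>.<random>), and "coverage combine" merges them into a single .coverage file. Parallel mode is enabled in .coveragerc like so:

    [run]
    parallel = True

With that in place, the usual workflow is "coverage run -p <tests>" in each process, then "coverage combine", and finally "coverage report" or "coverage xml". The memory cost of the combine step grows with the number and size of the per-process data files, which is why Ned asks about them below.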

Comments (5)

  1. Ned Batchelder repo owner

    Andres, I'm sorry to hear about the problem. Can you provide some more data? For example, before the "coverage combine" step, how many data files do you have, and how large are they? Can you get them to me somehow?

  2. Andres Riancho reporter

    Well... it seems that this issue is bigger than I thought. Here's more information:

    https://circleci.com/gh/andresriancho/w3af/164 : Failed (but that's fine, just my tests not passing)
    https://circleci.com/gh/andresriancho/w3af/161 : High memory usage due to coverage measurements

    If you check the changes for build 164 (https://github.com/andresriancho/w3af/commit/f85ee733a486bede6e95309df203142990c203a7) you'll notice that all I did was disable coverage. It's also important to note that in this case what seems to be failing is not the "coverage combine" step, since in the output of the 161 build we see that the test run times out, so no coverage output is created at all.

    Since the only change is the coverage, we can conclude that there is something really ugly there.

    I've used coverage.py in other projects and I understand that with coverage enabled the test suite runs slower, but it shouldn't be THAT slow!

  3. Ned Batchelder repo owner

    If you still have this problem, and can get more information, please feel free to re-open the issue.
