Issue #6 resolved

coverage not collected for multiprocessing.Process

created an issue

Coverage is not reported for anything executed within the multiprocessing.Process. There is a bug report for coveragepy (#117), but since cov-core tries to start coverage in subprocesses, it seems like this is the best place for the report. The following link is for the coveragepy report, which has a great minimal working example:

Using py.test and pytest-cov, I find that coverage doesn't start in multiprocessing.Process processes. I have tried manually doing "import cov_core_init; cov_core_init.init()" to rule out the possibility that init_cov_core.pth wasn't getting run, but this doesn't seem to help. Besides, the coverage tracer appears to be correctly installed when I look at sys.gettrace(). I also confirmed that the COV_CORE_* environment variables are properly set.

I feel like I've exhausted the really simple possibilities, and there must be something more subtle going on. I'm not too familiar with coveragepy or cov-core or pytest-cov, so I'm not really sure where to look next. Is there any other information I can provide that would be helpful?
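The failing scenario can be sketched like this (a minimal, hypothetical reproduction, not the example from the linked report): under py.test with pytest-cov, the lines inside worker() execute but show up as uncovered.

```python
import multiprocessing

def worker(q):
    # These lines run in the child process; in the failing scenario they
    # execute correctly but are missing from the coverage report.
    q.put(sum(range(10)))

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    p.join()
    print(q.get())  # prints 45, yet the worker lines report as uncovered
```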

Comments (12)

  1. amcnabb8 reporter

    After looking at coveragepy a bit more, I think that it doesn't really have support for running in multiple processes concurrently. Each process seems to clobber the .coverage file. Although it makes sense to add support for this in coveragepy itself, it would also be possible to make each process write to its own coverage data file. In any case, I don't fully understand what init_cov_core.pth helps with if coveragepy doesn't support concurrent access to data files.

  2. memedough repo owner


    Thanks for the report.

    Sub process coverage measurement relies on each sub process performing site initialisation and starting coverage. Multiprocessing is different: it uses fork, so site initialisation isn't done again, and the parent and children all write to the one file.
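    The difference can be demonstrated with the stdlib alone: a forked child inherits the parent's interpreter state, so site initialisation (and any .pth hook such as init_cov_core.pth) never runs again in the child. This is an illustrative sketch, not cov-core code.

```python
import multiprocessing

ctx = multiprocessing.get_context("fork")  # explicit fork semantics (Unix only)

MARKER = "set in parent before fork"

def child(q):
    # Visible only because the child is a fork of the parent,
    # not a freshly started interpreter that reruns site initialisation.
    q.put(MARKER)

if __name__ == "__main__":
    q = ctx.Queue()
    p = ctx.Process(target=child, args=(q,))
    p.start()
    p.join()
    print(q.get())  # the child sees the parent's module state
```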

    There is now support for coverage measurement of multiprocess children by using a hook to start measurement under a unique name.

    Please try the latest cov-core.


  3. amcnabb8 reporter
    • changed status to new

    As I mentioned in the report, I called cov_core_init.init() by hand to rule out the possibility that init_cov_core.pth wasn't getting run, so site initialization isn't the issue. After looking at the code in coveragepy, it seems clear to me that each process clobbers the .coverage file without paying attention to each other. Of course, I might have missed something important in the code, but it really looks like coveragepy doesn't support having multiple processes write to the same file.

  4. amcnabb8 reporter

    By the way, I don't see any documentation for cov-core online or in the repository. I would like to try to use the hook that you alluded to, and I'll probably be able to figure it out by crawling through the source, but it would be really nice if this were documented.

  5. amcnabb8 reporter

    So I think I see how to call multiprocessing_hook, but I don't see how to make the parent process pick the filename for the child process.

  6. memedough repo owner


    coveragepy has support for measuring multiple processes if you configure it correctly so that each one writes to a different filename. pytest-cov does that automatically for sub processes that start a python interpreter and perform site initialisation.
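    For illustration, the coveragepy side of that configuration is its parallel mode, which makes each process write to a distinct data file (a sketch of the relevant .coveragerc setting):

```ini
[run]
parallel = True
```

    After the runs finish, `coverage combine` merges the resulting `.coverage.*` files into a single `.coverage` file for reporting.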

    Your report was for multiprocessing, which uses fork: it does not start a new python interpreter or run site initialisation, so neither coveragepy nor pytest-cov supported multiprocessing. I saw that support could be added so that, after a multiprocessing fork, the child can start coverage under a different filename.

    You mentioned calling cov_core_init.init() by hand. Was that in the child process after the multiprocessing fork? If not, it would have no effect. Even if it was, it would still do nothing unless you also told coverage to stop and save the file at the end of the child process.

    coveragepy doesn't support multiple processes writing to the same file; each must write to its own file, and afterwards the files are combined.

    There is documentation for pytest-cov and nose-cov; cov-core is a common library and is not a public api, there are only comments in the code.

    You shouldn't need to call multiprocessing_hook; pytest-cov will do that. The parent process starts coverage, sets the env vars, and installs the multiprocessing hook. After a multiprocessing fork the hook is called and coverage is started for the child process under a different filename. At the end of the child process the multiprocessing finalizer must be called to save the coverage; pytest-cov will do that too.
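    The after-fork hook idea can be sketched with the stdlib alone. This is an illustration using multiprocessing.util.register_after_fork (a semi-private stdlib hook), not cov-core's actual code.

```python
import multiprocessing
import multiprocessing.util
import os

ctx = multiprocessing.get_context("fork")  # fork semantics (Unix only)
evidence = ctx.Queue()

def start_coverage_in_child(_obj):
    # In the real implementation this is where coverage would be started
    # for the child, under a data file name unique to this process.
    evidence.put(os.getpid())

# Register a callback that multiprocessing runs in every forked child.
multiprocessing.util.register_after_fork(start_coverage_in_child,
                                         start_coverage_in_child)

def worker():
    pass  # the hook fires during child bootstrap, before the target runs

if __name__ == "__main__":
    p = ctx.Process(target=worker)
    p.start()
    p.join()
    print(evidence.get() != os.getpid())  # True: the hook ran in the child
```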

    The parent process doesn't decide the child process filename; the child process decides its own filename using the hostname, the process pid and a random number.
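    That naming scheme can be sketched roughly like this (a simplification of the parallel-mode suffix; the exact format is an implementation detail):

```python
import os
import random
import socket

def parallel_suffix():
    # Hostname and pid keep files from different machines and processes
    # apart; the random number guards against pid reuse.
    return "%s.%s.%06d" % (socket.gethostname(), os.getpid(),
                           random.randint(0, 999999))

data_file = ".coverage." + parallel_suffix()
print(data_file)  # e.g. ".coverage.myhost.12345.004242"
```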

    I have a working test and have also tested manually showing coverage collection when using multiprocessing (thanks for the pointer to the minimal working example). If this is still a problem would you be able to describe in more detail with a minimal working example?


  7. Niklas Hambüchen


    This doesn't work for me. Code executed in the multiprocessing.Process I start is not counted for coverage.

    I put some logging-to-file debug code in, which shows me that multiprocessing_hook and multiprocessing_start are executed; multiprocessing_finish is never called though, so cov.stop() and the save that follows are not run.

    Should this be like that?

  8. Niklas Hambüchen

    I found the problem. The multiprocessing.Process was not join()ed, and when the main process terminated, the spawned process was simply killed without any chance to save its coverage data.

    Though it might be obvious to some, it would be nice to have a note somewhere explaining that these multiprocessing finalizing functions are not called when the main process exits.
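    In code form, the fix above amounts to this minimal sketch; the key point is the join() before the main process exits.

```python
import multiprocessing

def worker(q):
    q.put("done")  # on a clean exit the coverage finalizer can save its data

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    # Without this join() the main process may exit first, and the child is
    # killed before the multiprocessing finalizer can save its coverage data.
    p.join()
    print(q.get())  # prints "done"
```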

  9. amcnabb8 reporter

    I think I may have the same problem as Niklas: if the processes are not joined, then the coverage data gets lost. Is there a way to occasionally trigger the data to get saved to work around this problem?
