Pickling of MPI Comm
Issue #104
on hold
While a general MPI communicator is not serializable, the predefined constant communicators like COMM_SELF, COMM_NULL, and COMM_WORLD can probably be serialized and deserialized in a well-defined manner.
We are reusing an mpi4py-based Python module with distributed, which sends pickled objects over the network for distributed computing. In our setup the MPI module runs strictly single-rank with MPI.COMM_SELF to solve a small problem.
I understand this is not the intended way of using MPI, but allowing a picklable COMM_SELF seems to be the cleanest solution given these constraints.
Comments (2)
- changed status to on hold
I'm afraid I'm not ready to implement these features yet. While I acknowledge the case of MPI.Comm is rather trivial, other MPI classes would require much more care. For example, think of MPI datatypes: we would need to handle the predefined ones, but as MPI provides datatype decoding functionality, we should be able to serialize and reconstruct any MPI datatype. The case of MPI.Op is similar: we would need to support pickling for a bunch of predefined operations, but for user-defined operations I think we cannot do much (unless the routine implementing the operation is itself written in Python and is pickleable). At the risk of upsetting you, I have to reject your request at this time.

However, you can work around this easily on your side by using the copyreg module. Here is an example that allows you to pickle/unpickle the predefined communicators, and you can easily extend the support to other predefined handles. Just make sure to put this code in a module, and import it on both the client and server sides before attempting to transfer communicators over the network.