PETSc4py can't run in parallel, how do I fix it?

Issue #14 invalid
Former user created an issue

I want to run PETSc4py in parallel from Python, but it doesn't work.

MPI4py works in my Python installation, but the following code,

from petsc4py import PETSc

rank = PETSc.COMM_WORLD.Get_rank()
size = PETSc.COMM_WORLD.Get_size()

print('Hello World! From process {rank} out of {size} process(es).'.format(rank=rank, size=size))

called by mpiexec -n 4 python petsc_hello_world.py, only gives size=1 in this case. I guess it is a problem with the installation. I can't remember exactly what I did during the installation; probably I was not careful about choosing the right MPI configuration for PETSc4py. But I looked into the installation configuration and there is no pointer to MPI. Does PETSc4py have its own MPI? But in that case, why didn't it work?

Do you know how to fix the problem without reinstalling? Thanks.

Comments (6)

  1. Lisandro Dalcin

    This smells as if the mpiexec you are using does not correspond to the MPI you used to build PETSc/petsc4py. This usually happens when users have a broken build environment, and there is very little I can do to fix it. Run 'ldd /path/to/petsc4py/$PETSC_ARCH/PETSc.so' to discover which MPI libraries petsc4py is using, make sure they are the same ones you used to build PETSc, and check that your mpiexec corresponds to the same MPI implementation.
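
    For reference, a minimal diagnostic sketch (the script name is just an example, and it assumes both mpi4py and petsc4py are importable) is to compare the communicator sizes the two packages report under the same launcher:

    # check_mpi_match.py -- example name; run as "mpiexec -n 4 python check_mpi_match.py"
    from mpi4py import MPI
    from petsc4py import PETSc

    # If mpi4py reports 4 but petsc4py reports 1, the mpiexec being used and the
    # MPI that PETSc/petsc4py were built against are different implementations.
    print('mpi4py   size:', MPI.COMM_WORLD.Get_size())
    print('petsc4py size:', PETSc.COMM_WORLD.getSize())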

  2. Mengqi Zhang

    Thanks a lot, Lisandro; I just registered an account. I followed your suggestion and found the right mpiexec. Now the problem is solved.

    Can I ask for another favor? Do you know of, or have, any code example for PETSc4py in parallel? I can't find any tutorial on this, especially on the syntax for setting up the right matrix format for parallel computing.

    Thank you.

  3. Lisandro Dalcin

    Well, there are some examples in demo/ that do work in parallel. What kind of problems are you trying to solve?

  4. Mengqi Zhang

    Hi, Lisandro. Thanks for your reply.

    The examples in demo/ do not seem to suit my case. I want to solve a big generalized eigenvalue problem, and the matrices are sparse. My question lies in how to distribute the matrices among the processors. Do I have to do it myself, or is there a routine that does it? I notice you have pointed out somewhere else that one should do something like

    from petsc4py import PETSc

    M, N = 1000, 1000                   # global matrix dimensions (example values)
    diag_nz, offdiag_nz = 5, 2          # estimated nonzeros per row (example values)

    A = PETSc.Mat().create()            # created on PETSc.COMM_WORLD by default
    A.setType('aij')                    # sparse (compressed row) storage
    A.setSizes([M, N])
    A.setPreallocationNNZ([diag_nz, offdiag_nz])  # optional
    A.setUp()
    

    I have several questions regarding these lines.

    (1) I presume M and N are the dimensions of the matrix. Then how do the processors divide the matrix among themselves? I guess setPreallocationNNZ handles the distribution of the matrix among the processors. What does nz mean here, and why do diag and offdiag appear?

    (2) I actually saw somewhere else that people use A.setPreallocationNNZ(5), with a single parameter. What does the 5 mean there?

    (3) I want to make sure that the matrix generated this way is stored as sparse (since it uses aij). This feels tricky to me: if the matrix is stored in a sparse format, will distributing it across processors destroy the efficiency of the sparse storage?
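
    For reference on (1) and (2), here is a minimal sketch (sizes made up purely for illustration) of how an AIJ matrix is split into contiguous blocks of rows, and what the two preallocation numbers refer to:

    from petsc4py import PETSc

    M = N = 8                               # global size, made up for illustration
    A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
    A.setType('aij')
    A.setSizes([M, N])
    # d_nz: estimated nonzeros per row in the "diagonal" block (columns owned by
    # this process); o_nz: estimate for the remaining "off-diagonal" columns.
    # A single integer, e.g. setPreallocationNNZ(5), uses the same estimate for every row.
    A.setPreallocationNNZ([3, 2])
    A.setUp()

    rstart, rend = A.getOwnershipRange()    # this process stores rows rstart..rend-1
    print('rank', PETSc.COMM_WORLD.getRank(), 'owns rows', rstart, 'to', rend - 1)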

    After the matrix is set up, I would like to use SLEPc4py to solve the generalized eigenvalue problem. The example code I found online looks like this:

    from slepc4py import SLEPc

    E = SLEPc.EPS().create()
    E.setOperators(A)    # for a generalized problem, both matrices would be passed: setOperators(A, B)
    E.setProblemType(SLEPc.EPS.ProblemType.GNHEP)
    E.setFromOptions()
    E.solve()
    

    I'm afraid this script is not designed for parallel computation, since the options give no indication of parallelization. Do you know how to set it up?
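
    For reference, no extra option is needed for parallelism here: the EPS object is created on PETSc.COMM_WORLD by default, so when the script is launched with mpiexec and the matrices are distributed, the solve runs in parallel. A minimal sketch (the script name eigen.py is just an example, and it assumes A was assembled as a distributed matrix as above) of collecting the result afterwards:

    # run as: mpiexec -n 4 python eigen.py
    nconv = E.getConverged()                  # number of converged eigenpairs
    for i in range(nconv):
        eig = E.getEigenvalue(i)              # i-th converged eigenvalue
        if PETSc.COMM_WORLD.getRank() == 0:   # print once, from rank 0 only
            print(i, eig)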

    Thank you very much for your time; I appreciate it.

  5. Lisandro Dalcin

    Dear Meng, this is an issue tracker; we use it for discussions strictly related to bugs or development. The petsc-users mailing list is the appropriate place to post questions and ask for help; feel free to CC your questions to my personal email to get my attention faster. Sorry for any inconvenience, but please re-post your questions to petsc-users@mcs.anl.gov.
