Parallel matrix-vector multiplication and ordering

Issue #96 invalid
Robert Speck created an issue

When I use the matrix of this example and multiply it with a global vector, everything works well (i.e. I get the 2nd-order discretization of the negative Laplacian) until the ordering of the vector becomes important. Take e.g. m=n=4 and 4 MPI ranks: the results are then very different from those obtained with 1 or 2 ranks.

This does not happen when using a shell matrix as e.g. here, but then it seems I cannot use a preconditioner when I want to solve a system with this matrix.

So, how do I need to create my matrix to get parallel matrix-vector multiplication working with global vectors without losing choices for preconditioning?
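
A minimal sketch of the kind of setup in question (a hand-assembled 5-point Laplacian using PETSc's block-contiguous global row numbering); the details are an assumption, not the linked example verbatim:

```python
# Sketch only: 5-point Laplacian on an m x n grid, assembled by hand
# into a parallel AIJ matrix; a reconstruction, not the linked example.
from petsc4py import PETSc

m, n = 4, 4
A = PETSc.Mat()
A.create(comm=PETSc.COMM_WORLD)
A.setSizes([m * n, m * n])
A.setType('aij')
A.setPreallocationNNZ(5)

rstart, rend = A.getOwnershipRange()
for row in range(rstart, rend):
    i, j = divmod(row, n)            # grid coordinates implied by the numbering
    A.setValue(row, row, 4.0)
    if i > 0:
        A.setValue(row, row - n, -1.0)
    if i < m - 1:
        A.setValue(row, row + n, -1.0)
    if j > 0:
        A.setValue(row, row - 1, -1.0)
    if j < n - 1:
        A.setValue(row, row + 1, -1.0)
A.assemble()

x, b = A.createVecs()
xs, xe = x.getOwnershipRange()
for row in range(xs, xe):
    x.setValue(row, float(row))      # filled by global index; how this relates
x.assemble()                         # to the grid ordering is exactly the issue

A.mult(x, b)                         # b comes back in the same global ordering
b.view()
```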

Comments (6)

  1. Robert Speck reporter

    Mhm, I guess this is not a bug, since I keep seeing this in many other petsc4py scripts. Should I post this as a question on the mailing list instead?

  2. Lisandro Dalcin

    What do you mean by "the results are very different"? Please note that the first example uses a "hardwired" global ordering, while the second example uses DMDA, which handles the global <-> natural reordering automatically; you therefore end up seeing things in the natural ordering, which does not depend on the parallel partitioning.
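
    A minimal sketch of that distinction with a DMDA (the grid size and values below are assumptions, not either example's actual code):

    ```python
    # Sketch: a DMDA hides the parallel (global) ordering behind the
    # natural (grid) ordering; grid size and values are assumptions.
    from petsc4py import PETSc

    m, n = 4, 4
    da = PETSc.DMDA().create([m, n], stencil_width=1, comm=PETSc.COMM_WORLD)

    xg = da.createGlobalVec()             # laid out in the parallel ordering
    xa = da.getVecArray(xg)               # indexed by grid coordinates instead
    (xs, xe), (ys, ye) = da.getRanges()
    for i in range(xs, xe):
        for j in range(ys, ye):
            xa[i, j] = i * n + j          # value = lexicographic (natural) index
    del xa                                # release the array view

    xg.view()                             # entries permuted by the decomposition

    xn = da.createNaturalVec()            # natural ordering: rank-independent view
    da.globalToNatural(xg, xn)
    xn.view()
    ```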

  3. Robert Speck reporter

    Thanks for your reply. The resulting vector contains different values, and reordering does not help. I agree, though, that this depends on the ordering of the input vector. So, what would I need to do to get a matrix-vector multiplication that works independently of the spatial decomposition? And in what ordering is the result?

  4. Lisandro Dalcin

    Well, at this point I think you should direct this question to the mailing list. There is no single answer; what is most convenient depends on the details of your application. In the general case, you as a user have to handle reorderings yourself, for example using VecScatter, PetscSF, or maybe AO. But if you use some other PETSc data structure, say DMDA, then the parallel reordering is automatic.
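
    For the AO route, a rough sketch (the cyclic application ordering below is purely an illustrative assumption):

    ```python
    # Sketch of the AO (application ordering) route: translate between an
    # application's own numbering and PETSc's block-contiguous parallel
    # numbering.  The cyclic application ordering is an illustrative assumption.
    from petsc4py import PETSc
    import numpy as np

    comm = PETSc.COMM_WORLD
    rank, size = comm.getRank(), comm.getSize()

    N = 16                                   # global size (e.g. m*n), assumed
    v = PETSc.Vec().createMPI(N, comm=comm)  # PETSc ordering: contiguous blocks
    rstart, rend = v.getOwnershipRange()
    petsc_idx = np.arange(rstart, rend, dtype=PETSc.IntType)

    # Pretend the application numbers its unknowns cyclically across ranks.
    app_idx = np.arange(rank, N, size, dtype=PETSc.IntType)

    ao = PETSc.AO().createBasic(app_idx, petsc_idx, comm=comm)

    # Translate some application indices to PETSc (parallel) indices, e.g.
    # before calling Vec.setValues or building an index set for a VecScatter.
    query = np.array([0, 1, 2, 3], dtype=PETSc.IntType)
    print(rank, ao.app2petsc(query.copy()))
    ```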

    As this is not really an issue within petsc4py, I'm closing it.
