
Offline mesh partitioning

In this tutorial we show how to partition a mesh offline and how to subsequently load it for a parallel computation. In a previous tutorial we showed how to read a mesh and partition it within the same cpp file. Here we describe another procedure, called "offline mesh partitioning", which is better suited to simulations involving large meshes executed on parallel machines.

The main idea is to treat the mesh partitioning as a pre-processing step for the subsequent parallel simulation. This pre-processing step is run in serial (i.e. using 1 MPI process) and involves loading the mesh, partitioning it, and finally writing all the resulting partitions to file. Then, on the parallel machine, we launch the simulation using NP MPI processes: each process loads in parallel its own local piece of the mesh.

Based on the comments above, in this tutorial we write two different cpp files: one implementing the mesh partitioning and another one that reads the offline-partitioned mesh in parallel. Both applications use the same datafile, whose content is shown below.

[mesh]
    mesh_dir  = ./ 
    mesh_file = cube.mesh
    mesh_type = .mesh
[../]

[mesh_partitioning]
    partitioner_type = parmetis # here the user can choose between parmetis and zoltan
    num_parts = 4
    filename_offline_mesh = cube_offline.h5 # in the filename put always the extension .h5
[../]

In the datafile we have a section called mesh_partitioning, which contains the parameters used to carry out the partitioning. In particular:

  • partitioner_type: the graph partitioning library to be used, either parmetis or zoltan
  • num_parts: the number of subdomains into which we want to cut the mesh
  • filename_offline_mesh: the name of the file in which the partitioned mesh is stored, including the .h5 extension

The file containing the partitioned mesh has the .h5 extension because the partitioned mesh is stored in an HDF5 binary file.
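
To make the mapping between the datafile and the code explicit, the minimal sketch below (not part of the tutorial sources) reads the mesh_partitioning section with GetPot, using the same calls that appear in the full listings further down:

#include <iostream>
#include <string>

#include <lifev/core/filter/GetPot.hpp>

int main ()
{
    // Open the datafile shown above
    GetPot dataFile ( "dataOfflinePartitioning.in" );

    // Each entry is addressed as "section/parameter"; the second argument is the default value
    const std::string partitionerType = dataFile ( "mesh_partitioning/partitioner_type", "parmetis" );
    const int numParts = dataFile ( "mesh_partitioning/num_parts", 1 );
    const std::string partsFile = dataFile ( "mesh_partitioning/filename_offline_mesh", "default.h5" );

    std::cout << partitionerType << " " << numParts << " " << partsFile << std::endl;

    return 0;
}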

Partition a mesh offline

In this phase the main steps involved are:

  1. loading the mesh
  2. partitioning the mesh
  3. writing the local subdomains to file

The main file reported below implements the aforementioned steps. Please note that the code has to be run in serial, i.e. using 1 MPI process. At the beginning of mainPartitionOffline.cpp we check that the number of processes is equal to one. Then we read the mesh, partition it and finally write the local subdomains to file.

The source code reported here produces a file called cube_offline.h5, which will be used in the section Read a partitioned mesh below.

#include <Epetra_ConfigDefs.h>
#ifdef EPETRA_MPI
#include <mpi.h>
#include <Epetra_MpiComm.h>
#else
#include <Epetra_SerialComm.h>
#endif

#include <lifev/core/LifeV.hpp>
#include <lifev/core/filter/GetPot.hpp>
#include <lifev/core/mesh/MeshData.hpp>
#include <lifev/core/mesh/MeshPartitionTool.hpp>
#include <lifev/core/filter/PartitionIO.hpp>

using namespace LifeV;

int main ( int argc, char** argv )
{
    typedef RegionMesh<LinearTetra> mesh_Type;
    typedef std::shared_ptr<mesh_Type> meshPtr_Type;
    typedef MeshPartitionTool<mesh_Type> meshPartitioner_Type;
    typedef std::vector<meshPtr_Type> meshParts_Type;
    typedef std::shared_ptr<meshParts_Type> meshPartsPtr_Type;

#ifdef HAVE_MPI
    MPI_Init (&argc, &argv);
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_MpiComm (MPI_COMM_WORLD) );
#else
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_SerialComm () );
#endif

    if ( Comm->NumProc() != 1 )
    {
        std::cout << "This test needs to be run with a single process. Aborting." << std::endl;
        return EXIT_FAILURE;
    }

    // Reading datafile
    GetPot dataFile ( "dataOfflinePartitioning.in" );

    // Reading the mesh
    meshPtr_Type mesh ( new mesh_Type ( Comm ) );
    MeshData meshData;
    meshData.setup (dataFile, "mesh");
    readMesh ( *mesh, meshData);

    // Mesh partitioning
    const std::string partitionerType = dataFile ("mesh_partitioning/partitioner_type", "parmetis");
    const int numParts = dataFile ( "mesh_partitioning/num_parts", 1 );

    if ( Comm->MyPID() == 0 )
    {
        std::cout << "\nUsing " << partitionerType << " mesh partitioner\n\n";
    }

    // Using MeshPartitionTool class
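    // The parameter list passed to MeshPartitionTool controls the partitioning:
    //  - "num-parts":    number of subdomains to generate (num_parts in the datafile)
    //  - "offline-mode": build all the mesh parts on this single process, instead of one part per process
    //  - "graph-lib":    graph partitioning library to use, "parmetis" or "zoltan"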
    Teuchos::ParameterList meshParameters;
    meshParameters.set ("num-parts", numParts , "");
    meshParameters.set ("offline-mode", true, "");
    meshParameters.set ("graph-lib", partitionerType, "");
    meshPartitioner_Type meshCutter ( mesh, Comm, meshParameters );
    if (! meshCutter.success() )
    {
        std::cout << "Mesh partition failed.";
        return EXIT_FAILURE;
    }

    // Free the original, unpartitioned mesh, which is no longer needed
    mesh.reset();

    // Store into a vector all the local subdomains
    meshPartsPtr_Type meshPartPtr;
    meshPartPtr = meshCutter.allMeshParts();

    // Write mesh parts to HDF5 container
    std::string partsFile = dataFile ( "mesh_partitioning/filename_offline_mesh", "default.h5" );
    {
        std::shared_ptr<Epetra_MpiComm> mpiComm = std::dynamic_pointer_cast<Epetra_MpiComm> (Comm);
        PartitionIO<mesh_Type> partitionIO (partsFile, mpiComm);
        partitionIO.write (meshPartPtr);
    }

#ifdef HAVE_MPI
    std::cout << std::endl << "MPI Finalization" << std::endl;
    MPI_Finalize();
#endif

    return 0;
}

Read a partitioned mesh

In this section we show how to read in parallel the partitioned mesh produced above. The datafile we are going to use is the one shown at the top of this page. In the cpp file of our application we make use of the PartitionIO class to enable parallel mesh reading.

REMARK: it is mandatory to run this application with a number of MPI processes equal to the number of parts into which we previously cut the mesh.

#include <Epetra_ConfigDefs.h>
#ifdef EPETRA_MPI
#include <mpi.h>
#include <Epetra_MpiComm.h>
#else
#include <Epetra_SerialComm.h>
#endif

// LifeV
#include <lifev/core/LifeV.hpp>
#include <lifev/core/filter/GetPot.hpp>
#include <lifev/core/filter/PartitionIO.hpp>
#include <lifev/core/mesh/RegionMesh.hpp>

using namespace LifeV;

int main ( int argc, char** argv )
{
    typedef RegionMesh<LinearTetra> mesh_Type;
    typedef std::shared_ptr<mesh_Type> meshPtr_Type;

#ifdef HAVE_MPI
    MPI_Init (&argc, &argv);
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_MpiComm (MPI_COMM_WORLD) );
#else
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_SerialComm () );
#endif

    // Reading datafile
    GetPot dataFile ( "dataOfflinePartitioning.in" );
    const int numParts = dataFile ( "mesh_partitioning/num_parts", 1 );

    // Check number of MPI processes used
    if ( Comm->NumProc() != numParts )
    {
        std::cout << "This test needs to be run with the same " << std::flush;
        std::cout << "number of processes specified in the datafile. Aborting." << std::endl;
        return EXIT_FAILURE;
    }

    // Reading in the partitioned mesh in parallel
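    // (each process retrieves only its own subdomain from the HDF5 file written in the previous step)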
    std::string partsFile = dataFile ( "mesh_partitioning/filename_offline_mesh", "default.h5" );
    meshPtr_Type local_mesh;
    {
        std::shared_ptr<Epetra_MpiComm> mpiComm = std::dynamic_pointer_cast<Epetra_MpiComm> (Comm);
        PartitionIO<mesh_Type> partitionIO (partsFile, mpiComm);
        partitionIO.read (local_mesh);
    }

    // Each process prints the number of tetrahedra of its own local mesh subdomain
    std::cout << "\n\nProcess " << Comm->MyPID() << ", has " << local_mesh->numVolumes() << " tetrahedra\n\n";


#ifdef HAVE_MPI
    std::cout << std::endl << "MPI Finalization" << std::endl;
    MPI_Finalize();
#endif

    return 0;
}
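
Once each process holds its local mesh part, it can be used just like a mesh partitioned online, for instance to build a finite element space on it. The fragment below is a minimal sketch of such a next step, meant to be placed right after the call to partitionIO.read in the program above; it is not part of the tutorial sources and it assumes the FESpace constructor taking the mesh pointer, the name of the P1 Lagrange basis, the field dimension and the communicator.

// Additional includes needed at the top of the file
#include <lifev/core/fem/FESpace.hpp>
#include <lifev/core/array/MapEpetra.hpp>

// ... inside main(), after partitionIO.read (local_mesh):

    typedef FESpace<mesh_Type, MapEpetra> fespace_Type;

    // Build a scalar P1 finite element space on the local mesh part
    std::shared_ptr<fespace_Type> uFESpace ( new fespace_Type (local_mesh, "P1", 1, Comm) );

    // Each process reports the number of degrees of freedom of its local space
    std::cout << "Process " << Comm->MyPID() << ", local DOFs: "
              << uFESpace->dof().numTotalDof() << std::endl;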

Here you can download the dataOfflinePartitioning.in, mainPartitionOffline.cpp, mainReadPartitionedMesh.cpp and cube.mesh files used in this tutorial.
