Mesh partitioning
In this tutorial we show how to partition a mesh in LifeV. Mesh partitioning is needed when running simulations in parallel: if we want to run a simulation on NP MPI processes, the loaded mesh must be partitioned into NP subdomains. Mesh partitioning in LifeV can be performed using either the ParMETIS library or the Zoltan package of Trilinos. The class that provides an abstract interface to both the ParMETIS and Zoltan partitioners is called MeshPartitionTool. It lets the user select in the datafile the actual partitioner to be used, either parmetis or zoltan.
With respect to the datafile used for reading the mesh, here we add another section called, for instance, mesh_partitioning, in which the user only has to specify the type of partitioner to be used. Below we report an example of such a datafile, called partitionMesh.in:
[mesh]
    mesh_dir  = ./
    mesh_file = cube.mesh
    mesh_type = .mesh
[../]

[mesh_partitioning]
    partitioner_type = parmetis # here the user can choose between parmetis and zoltan
[../]
Once the datafile is ready, we can focus on the source file of the application that implements the mesh partitioning. The source code presented in this tutorial extends what we have written here: after loading the mesh, we simply create an instance of the MeshPartitionTool class. The constructor of such an object performs the mesh partitioning, and the local mesh partition can then be recovered by invoking the meshPart() method.
Once the partitioning is done, every MPI process prints some information about the number of tetrahedral elements it owns.
#include <Epetra_ConfigDefs.h>
#ifdef EPETRA_MPI
#include <mpi.h>
#include <Epetra_MpiComm.h>
#else
#include <Epetra_SerialComm.h>
#endif

// LifeV
#include <lifev/core/LifeV.hpp>
#include <lifev/core/filter/GetPot.hpp>
#include <lifev/core/mesh/MeshData.hpp>
#include <lifev/core/mesh/MeshPartitionTool.hpp>

using namespace LifeV;

int main ( int argc, char** argv )
{
    typedef RegionMesh<LinearTetra>      mesh_Type;
    typedef std::shared_ptr<mesh_Type>   meshPtr_Type;
    typedef MeshPartitionTool<mesh_Type> meshPartitioner_Type;

#ifdef HAVE_MPI
    MPI_Init (&argc, &argv);
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_MpiComm (MPI_COMM_WORLD) );
#else
    std::shared_ptr<Epetra_Comm> Comm ( new Epetra_SerialComm () );
#endif

    // Reading the datafile
    GetPot dataFile ( "partitionMesh.in" );

    // Reading the mesh
    meshPtr_Type mesh ( new mesh_Type ( Comm ) );
    MeshData meshData;
    meshData.setup ( dataFile, "mesh" );
    readMesh ( *mesh, meshData );

    // Mesh partitioning
    meshPtr_Type local_mesh ( new mesh_Type ( Comm ) );
    const std::string partitionerType = dataFile ( "mesh_partitioning/partitioner_type", "parmetis" );
    if ( Comm->MyPID() == 0 )
    {
        std::cout << "\nUsing " << partitionerType << " mesh partitioner\n\n";
    }

    // Using the MeshPartitionTool class
    Teuchos::ParameterList meshParameters;
    meshParameters.set ( "num-parts", Comm->NumProc(), "" );
    meshParameters.set ( "offline-mode", false, "" );
    meshParameters.set ( "graph-lib", partitionerType, "" );
    meshPartitioner_Type meshCutter ( mesh, Comm, meshParameters );
    if ( ! meshCutter.success() )
    {
        std::cout << "Mesh partition failed.";
        return EXIT_FAILURE;
    }

    // Each process gets its own local mesh
    local_mesh = meshCutter.meshPart();

    // Delete the non-partitioned mesh from memory
    mesh.reset();

    // Each process prints the number of elements of its local mesh
    std::cout << "\n\nProcess " << Comm->MyPID() << " has " << local_mesh->numVolumes() << " tetrahedra\n\n";

#ifdef HAVE_MPI
    std::cout << std::endl << "MPI Finalization" << std::endl;
    MPI_Finalize();
#endif

    return 0;
}
REMARK: when the mesh is loaded and then partitioned in the same main file of the application, the user is adopting the so-called "on-line" mesh partitioning strategy. As we have seen, in parallel applications all the MPI processes first load the whole mesh and then, after the partitioning, each of them receives its local subdomain. The command mesh.reset(); is then called to delete the non-partitioned mesh from memory.
We recommend using the "on-line" mesh partitioning strategy only when dealing with small meshes on local desktops, because this approach can be memory demanding and time consuming. When using large meshes it is more convenient to use an offline mesh partitioning, which proceeds in the following way:
- perform the mesh partitioning using only 1 MPI process and write the local subdomains to file (serial pre-processing);
- launch the simulation in parallel, wherein each MPI process loads its local mesh subdomain from file (parallel execution).
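The first of the two steps above can be sketched by adjusting the ParameterList of the on-line example. This is only a configuration fragment, assuming the same typedefs and variables (mesh, Comm, partitionerType) as in the source code reported earlier; the target number of parts in offline mode is independent of the number of running processes, and the exact API for writing the parts to file (LifeV provides an HDF5-based PartitionIO filter for this) should be checked against the library's own offline-partitioning examples.

```cpp
// Offline partitioning sketch (stage 1, run with a single MPI process).
// Parameter names match those of the on-line example above.
Teuchos::ParameterList meshParameters;
meshParameters.set ( "num-parts", 8, "" );       // partition for 8 future processes
meshParameters.set ( "offline-mode", true, "" ); // enable the offline mode
meshParameters.set ( "graph-lib", partitionerType, "" );
meshPartitioner_Type meshCutter ( mesh, Comm, meshParameters );
// The resulting parts can then be written to file (e.g. with LifeV's
// PartitionIO HDF5 filter) and loaded independently in stage 2.
```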
Here you can download the partitionMesh.in, mainMeshPartitioning.cpp and cube.mesh files used in this tutorial.