- changed milestone to 1.5
BoxMesh does not scale well in parallel
BoxMesh is useful for performing scaling benchmarks in parallel, but it does not scale well itself, because the Mesh is built on one process and distributed afterwards.

Cell and vertex data should be created in a distributed way, using LocalMeshData and MeshPartitioning::build_distributed_mesh().
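The distributed construction the description asks for can be illustrated outside of DOLFIN: each rank generates only the cells of its own slab of the global box, analogous to filling a LocalMeshData structure before handing it to MeshPartitioning::build_distributed_mesh(). The slab decomposition and index arithmetic below are a minimal sketch of the idea, not DOLFIN code; the function name and hex-cell layout are assumptions for illustration.

```python
def local_box_slab(rank, size, nx, ny, nz):
    """Cells (as tuples of global vertex indices) owned by one rank when
    an nx x ny x nz hexahedral box is sliced into contiguous x-slabs.
    Illustrative only; real code would also emit vertex coordinates."""
    # Contiguous range of cell layers in x for this rank
    x0 = rank * nx // size
    x1 = (rank + 1) * nx // size

    def vid(i, j, k):
        # Global vertex index on the (nx+1) x (ny+1) x (nz+1) grid
        return (i * (ny + 1) + j) * (nz + 1) + k

    cells = []
    for i in range(x0, x1):
        for j in range(ny):
            for k in range(nz):
                # The 8 corner vertices of the unit hex at (i, j, k)
                cells.append(tuple(vid(i + di, j + dj, k + dk)
                                   for di in (0, 1)
                                   for dj in (0, 1)
                                   for dk in (0, 1)))
    return cells
```

Each rank's memory then scales with its own slab rather than with the global mesh, which is the point of the issue.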
Comments (9)
-
reporter
I have made a branch which is memory scalable, i.e. each process constructs part of the Mesh and then feeds it to the partitioner. However, this is not very scalable in time, as the partitioner takes a very long time to run when the Mesh is very big. There are a few options:
1. Stick with the slow partitioning.
2. Make a "good" subdivision of the Mesh across processes, e.g. slice into cuboids in x/y/z, and don't bother with the partitioner.
3. Use the 'repartitioning' or 'refinement' mode of ParMETIS, which should be faster. (PT-SCOTCH does not have this feature yet.)
I'm heading towards (2), but I need a good algorithm to split an nx by ny by nz box into N = NX * NY * NZ cuboids, i.e. find integers NX, NY, NZ such that NX ~= (N * nx^2 / (ny * nz))^(1/3) and 0 < NX <= nx, and similarly for NY and NZ.
-
Is this worth the effort since one can just refine a coarse mesh in parallel?
-
reporter
Given that I'm 90% of the way there, I'll probably persevere. There are some advantages: refining converts edges into new vertices, and it can be difficult to calculate how many vertices that will produce, especially after multiple refinements. Also, this is faster.
-
reporter - changed milestone to 1.6
-
reporter - changed milestone to 1.7
I don't think this really affects anyone, and it may get rolled into the Mesh generation factory. There is a branch, "chris/boxmesh-parallel", which works quite well at large core counts.
-
- removed milestone
Removing milestone: 1.7 (automated comment)
-
Is this worth doing? @chris_richardson
-
reporter - edited description
- changed status to wontfix
Now using refine quite effectively to get big meshes in parallel.