MPI support?
Issue #8
new
Nathanael, do you envision adding MPI support? I'm just asking because I'd like to know whether it will ultimately be possible to scale my model (gfs-dycore.googlecode.com) beyond a single node, or whether I should look at using something like s2hat (http://www.apc.univ-paris7.fr/APC_CS/Recherche/Adamis/MIDAS09/software/s2hat/s2hat.html) if that becomes necessary.
Comments (2)

repo owner: No, I don't plan to add MPI support. I recently ran some benchmarks: on a Sandy Bridge node (with AVX support) with 16 cores, the full transform takes only 0.45 seconds for Lmax = 4095 and 9.81 milliseconds for Lmax = 1023. For comparison, my rough estimate of the time needed just to transfer the data between nodes is a few tenths of a second for Lmax = 4095...

However, if you do need MPI support, rather than s2hat (which seems slow), I would recommend libpsht:
http://sourceforge.net/projects/libpsht/
or libsharp (its successor):
http://sourceforge.net/projects/libsharp/
which has good overall performance, although not as good as SHTns yet ;-)

Account Deleted: Thanks, Nathanael. That's awfully darn fast!
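As a sanity check of the transfer-time estimate above, here is a back-of-the-envelope sketch. The grid dimensions (a Gauss-type grid with Lmax+1 latitudes and 2(Lmax+1) longitudes) and the ~1 GB/s effective inter-node bandwidth are my own assumptions for illustration, not figures stated in this thread:

```python
# Rough estimate of the time to move one Lmax = 4095 spatial grid
# between nodes. All sizes and bandwidths below are assumptions.

lmax = 4095
nlat = lmax + 1          # assumed latitude points (Gauss grid)
nphi = 2 * (lmax + 1)    # assumed longitude points
bytes_per_value = 8      # double precision

grid_bytes = nlat * nphi * bytes_per_value   # 4096 * 8192 * 8 bytes
bandwidth = 1e9          # assumed ~1 GB/s effective inter-node rate

transfer_time = grid_bytes / bandwidth
print(f"grid size: {grid_bytes / 1e6:.0f} MB, "
      f"transfer time: {transfer_time:.2f} s")
```

With these assumptions the grid is roughly 268 MB and takes a few tenths of a second to move, which is the same order as the quoted estimate and comparable to the 0.45 s single-node transform time, supporting the point that MPI distribution would not pay off here.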