[JOSS review] Example usage

Issue #13 resolved
Pi-Yueh Chuang created an issue

In a nutshell

This issue responds to the checklist item regarding example usage. I was able to run all demo cases described in this page, and I think the content in Running Ocellaris and Demos is good enough to serve this purpose. One thing that could be added to the documentation is the parallel use case; see below.

No explicit statement about how to run cases with MPI

Parallel simulation with MPI is a feature of this software, but I can't find anywhere in the documentation a clear description of how to run a case with MPI. End users may not have enough knowledge to figure out on their own how to launch an MPI program, so I believe it would be better to add a sentence or two showing an example command that launches Ocellaris with MPI.
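For illustration, a minimal sketch of the kind of command the documentation could show, assuming a local (non-container) install with the ocellaris executable on the PATH and a hypothetical input file demo.inp:

    # Run a case on 8 MPI ranks with a local Ocellaris install.
    # The rank-count flag may be -n or -np depending on the MPI implementation.
    mpirun -np 8 ocellaris demo.inp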

This is especially relevant because the recommended way of using Ocellaris is through Singularity, and running a Singularity application on a cluster with MPI is not straightforward (unlike running MPI on a single node). For example, users may have to make sure that the MPI version on the host matches the one inside the container.
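As a sketch of what that might look like, here is the usual "hybrid" launch pattern, where the host MPI starts one container instance per rank (the image name ocellaris.sif and input file demo.inp are hypothetical):

    # Host mpirun launches one container per rank; this generally only works
    # when the host MPI version matches the MPI built into the image.
    mpirun -np 8 singularity exec ocellaris.sif ocellaris demo.inp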

In fact, based on the orun.py page, the implied way to run Ocellaris on a cluster seems to be through a direct installation of Ocellaris rather than through Singularity. If this is indeed the design, it would be better to state it in the documentation; otherwise, users may go down the rabbit hole of trying to make Singularity work on clusters (because the installation instructions recommend Singularity from the start).

Comments

  1. Tormod Landet

    I added a section to the documentation https://www.ocellaris.org/user_guide/run.html#running-a-simulation-on-multiple-cpus-with-mpi

    As you might know, HPC installations can get quite hairy and very machine specific, but at least there is a place to start now. I used a local install myself, since the cluster I installed on did not have a working Singularity install at the time. I might try Singularity next time; I have some work coming up on a different cluster.
