- changed status to open
- removed comment
read EOS table on single process then broadcast
This patch limits I/O at startup by having a single process (selectable via a parameter) read the table and then MPI-broadcast the data to the others.
Keyword: EOS_Omni
Comments (7)
-
reporter -
- removed comment
I do not understand this patch.
- Where is the if statement that prevents the HDF5 calls if !do_IO?
- sizeof *(VAR) is probably 8 (for double), not the size of the whole table
- Ideally, the MPI_Bcast would use MPI_DOUBLE, not MPI_BYTE
- I would rename "read_table_on_this_process" to "reader_process", since the former sounds like a boolean flag
-
reporter - removed comment
The patch contains at least one bug.
- the statement that prevents HDF5 calls is in HDF5_ERROR, which wraps all HDF5 calls
- this is clearly a bug; I'll amend the patch
- that would require several macros, since right now the same macro is used for all HDF5 types and I did not want to build a mapping from MPI to HDF5 types
- the rename sounds fine; the current name is awkward
-
reporter - removed comment
I corrected the bug Erik pointed out. I left MPI_Bcast using MPI_BYTE, since the alternative is to have an MPI_INT and an MPI_DOUBLE version, as well as having to worry (even more) about MPI_DOUBLE != H5T_NATIVE_DOUBLE != CCTK_REAL. Transferring the data as an undifferentiated stream of bytes should be fine unless we are on a heterogeneous cluster (at which point I will gladly rewrite this routine once the rest of Cactus works :-) ).
-
- changed status to open
- removed comment
Please rename HDF5_ERROR and update its comment, since it no longer only checks for errors: it also contains the logic that decides whether to call HDF5 at all.
Please apply.
-
reporter - changed status to resolved
- removed comment
applied as r65 of EOS_Omni.
-
reporter - edited description
- changed status to closed