The current default for CarpetIOHDF5::compression_level is 0 (no compression). I have been using compression in most of my HDF5 files for years and have never run into any problems; CPUs are typically much faster than storage these days. I propose changing the default compression level to 9. This would affect both output and checkpoint files and could lead to large space savings. Apart from checkpoint files being written and read more quickly and taking less disk space, users should not notice any difference, since the HDF5 library handles compression transparently.
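For reference, users can already opt in today by setting the parameter in their parameter file. A minimal sketch (only CarpetIOHDF5::compression_level is taken from the discussion above; the value range annotation reflects the proposed 0-9 gzip scale):

```
# Enable gzip compression for CarpetIOHDF5 output and checkpoint files
# (0 = no compression, 9 = maximum compression)
CarpetIOHDF5::compression_level = 9
```

With the proposed change, this line would become unnecessary, and users who prefer uncompressed files would instead set the level back to 0 explicitly.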