ILIAD: InitiaLIzed-ensemble Analysis/Development framework

This framework is documented in the following manuscript:

O’Brien, T. A., W. D. Collins, K. Kashinath, O. Rübel, S. Byna, J. Gu, H. Krishnan, and P. A. Ullrich (2016), Resolution dependence of precipitation statistical fidelity in hindcast simulations, J. Adv. Model. Earth Syst., 8(2), 976–990, doi:10.1002/2016MS000671.

ILIAD Software Overview

This software package implements the hindcast creation, execution, and analysis capabilities of the ILIAD framework, which is the core analytical framework for the model evaluation work package of the Calibrated And Systematic Characterization, Attribution, and Detection of Extremes (CASCADE) scientific focus area at Lawrence Berkeley National Lab.

This software system is designed to create, execute, and analyze ensembles of hindcasts using the DOE/NSF Community Earth System Model (CESM) and the DOE Accelerated Climate Model for Energy (ACME). The hindcast members are generated by running short CESM/ACME simulations, each started from an initial state derived from output of the Climate Forecast System (CFS). The initial condition files are created by interpolating the CFS output onto the appropriate CESM/ACME grid; the ILIAD software package includes code to generate these initial condition files automatically for arbitrary CESM/ACME grids.
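The interpolation step can be illustrated in one dimension with a minimal sketch (conceptual only; the actual ILIAD code interpolates multi-dimensional CFS fields onto CESM/ACME model grids, and the grid values below are synthetic):

```python
def interp_linear(x_new, x_src, y_src):
    """Linearly interpolate (x_src, y_src) samples onto the points x_new.

    A conceptual 1-D sketch of the regridding step; the real ILIAD code
    interpolates 2-D/3-D CFS fields onto CESM/ACME grids.
    """
    out = []
    for x in x_new:
        # Find the bracketing source interval [x0, x1]
        for i in range(len(x_src) - 1):
            x0, x1 = x_src[i], x_src[i + 1]
            if x0 <= x <= x1:
                w = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                out.append((1 - w) * y_src[i] + w * y_src[i + 1])
                break
    return out

# Coarse "CFS-like" source grid and a synthetic linear field
src_x = [0.0, 2.5, 5.0, 7.5, 10.0]
src_y = [0.0, 5.0, 10.0, 15.0, 20.0]

# Finer "CESM-like" target grid
tgt_x = [0.0, 1.25, 2.5, 3.75, 5.0]
print(interp_linear(tgt_x, src_x, src_y))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```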

How do I get set up?

In the present code version, setup consists of (1) downloading the source, (2) building the Fortran sub-modules, and (3) setting the PYTHONPATH.

Download the source

The latest version of the code can be downloaded using Git; please contact Travis O'Brien if you require permission to access the repository.

Building Fortran sub-modules

Prior to use, some Fortran-based Python extensions must be built. Do the following from the command line (assuming that the current working directory is the base of the repository):

     cd interpolation


The current version of ILIAD is installed manually: PYTHONPATH must be set to the parent directory of the ILIAD repository. One way to achieve this is with a module file like the following:

## iliad modulefile
## modulefiles/modules.  Generated from by configure.

module-whatis   "loads the iliad python package"

# for Tcl script use only
set     packagename     iliad
set version     development
set prefix      /projects/cascade

setenv          ILIAD_BASE      ${prefix}/iliad
prepend-path    PYTHONPATH      ${prefix}
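If module files are not available on your system, the same effect can be achieved from within Python by prepending the parent directory of the ILIAD checkout to sys.path (a sketch; /projects/cascade is the example prefix from the module file above, so adjust it for your own checkout):

```python
import sys

# Parent directory of the ILIAD repository; /projects/cascade is the
# example prefix used in the module file above -- adjust as needed.
ILIAD_PARENT = "/projects/cascade"

if ILIAD_PARENT not in sys.path:
    sys.path.insert(0, ILIAD_PARENT)

# After this, `import iliad` resolves against ILIAD_PARENT/iliad
```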


The ILIAD software package relies on a number of external packages.

It will build and install with only NumPy present (NumPy's f2py is required to compile the Fortran sub-modules), but without the remaining dependencies it will fail at various stages of execution.
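A quick sanity check for which dependencies are importable can be written with importlib (the package names below are illustrative assumptions; consult the repository for the definitive dependency list):

```python
from importlib.util import find_spec

# Illustrative dependency names (an assumption, not the official list);
# NumPy is the one hard build requirement, since its f2py tool compiles
# the Fortran sub-modules.
for name in ("numpy", "netCDF4", "mpi4py"):
    status = "found" if find_spec(name) is not None else "MISSING"
    print(f"{name}: {status}")
```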


The runSystemTests.bash script in the testing subdirectory of the repository provides an example of how to test the initial condition portion of the code.


Execution of the ILIAD software occurs in several stages:

Grid configuration

In the grid configuration stage, the core/ script takes grid information from CESM/ACME initial condition files and from CFS files and creates mapping weights that are later used for interpolation. The locations of the CESM/ACME initial condition files must be specified manually, since they are system and grid dependent. These locations can be found by building a dummy CESM/ACME simulation with the desired compset and grid; the resulting namelist files contain the file locations (e.g., use the ncdata entry in atm_in for the -a flag).


python ${ILIAD_BASE}/core/ \
    -p ${CFSDIR}/pgbh06/pgbh06.cdas1.2012082612.grib2 \
    -f ${CFSDIR}/flxf06/flxf06.cdas1.2012082612.grib2 \
    -a ${CESM_INPUT}/atm/cam/chem/ic/  \
    -l ${CESM_INPUT}/lnd/clm2/initdata/ \
    -s ${CESM_INPUT}/atm/cam/sst/ \
    -t ${CESM_INPUT}/atm/cam/topo/ \
    -g fv1.9x2.5

Initial condition creation

In the initial condition creation stage, the core/ script (or, alternatively, the core/ script for parallel generation of an ensemble) uses the grid information from the grid configuration stage to interpolate CFS fields onto the given CESM/ACME grid. (The --forecast flag is used for CFS output after 2011, since the output format changed relative to the CFS reanalysis.)


  python ${ILIAD_BASE}/core/ \
      -m ./regrid-preproc-fv1.9x2.5/  \
      -d ${CFSDIR}  \
      -t 2013:12:27:06 \

or for parallel creation:

  mpirun -n 2 python ${ILIAD_BASE}/core/ \
      -m ./regrid-preproc-fv1.9x2.5/  \
      -d ${CFSDIR}  \
      -t 2013:01:01:00 \
      -e 2013:01:01:06 \
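The -t (start) and -e (end) arguments above use a YYYY:MM:DD:HH time stamp, which maps directly onto a strptime/strftime format string. A sketch of parsing such stamps and enumerating 6-hourly initialization times with Python's standard library:

```python
from datetime import datetime, timedelta

# Time stamps on the -t/-e flags use a YYYY:MM:DD:HH layout
FMT = "%Y:%m:%d:%H"

start = datetime.strptime("2013:01:01:00", FMT)
end = datetime.strptime("2013:01:01:06", FMT)

# Enumerate 6-hourly initialization times from start to end, inclusive
t = start
while t <= end:
    print(t.strftime(FMT))  # 2013:01:01:00, then 2013:01:01:06
    t += timedelta(hours=6)
```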

Model Execution

In the model execution stage, a helper script creates a CESM/ACME simulation using the multi-instance capability of the model. These scripts are somewhat system dependent. The repository contains helper scripts that function on Edison at NERSC; it should be simple to modify them for use on other systems.

Example for walker (an Ubuntu 12.04 workstation with custom modules installed):

module load iliad


if [ -d "${CASENAME}" ]; then
  rm -rf ${CASENAME}
fi

/projects/cascade/externalCode/cesm1_2_2/scripts/create_newcase \
  -case $CASENAME \
  -compset FC5 \
  -res f19_f19 \
  -mach userdefined

#Capture the exit status before it is clobbered by subsequent commands
STATUS=$?
if [ "$STATUS" -ne "0" ]; then
  echo "create_newcase failed with error $STATUS"
  exit $STATUS
fi
#Copy this build script to the case directory so we have a record of how the
#case was made
cp $0 $CASENAME/

#Copy ILIAD namelist template files to the case directory

#Set walker-specific parameters
./xmlchange -file env_build.xml -id OS -val ubuntu
./xmlchange -file env_build.xml -id MPILIB -val openmpi
./xmlchange -file env_build.xml -id COMPILER -val gnu
./xmlchange -file env_mach_pes.xml -id MAX_TASKS_PER_NODE -val 8
./xmlchange -file env_build.xml -id GMAKE_J -val 8
./xmlchange -file env_build.xml -id GMAKE -val "make"

#Run/build location
./xmlchange -file env_build.xml -id EXEROOT -val $PWD/bld
./xmlchange -file env_run.xml -id RUNDIR -val $PWD/run

#Build options
#./xmlchange -file env_build.xml -id COMPILER -val "gnu"
#Turn on debug mode
#./xmlchange -file env_build.xml -id DEBUG -val "TRUE"
./xmlchange -file env_build.xml -id CESMSCRATCHROOT -val $PWD/scratch

#Run length options
./xmlchange -file env_run.xml -id STOP_OPTION -val nhours
./xmlchange -file env_run.xml -id STOP_N -val 12
./xmlchange -file env_run.xml -id RESUBMIT -val 5

#PIO options
#./xmlchange -file env_run.xml -id ATM_PIO_STRIDE -val 6
#./xmlchange -file env_run.xml -id ATM_PIO_ROOT -val 0
#./xmlchange -file env_run.xml -id ATM_PIO_NUMTASKS -val ${APPROX_NUM_NODES}
#./xmlchange -file env_run.xml -id ATM_PIO_TYPENAME -val 'pnetcdf'

#Timer options
./xmlchange -file env_run.xml -id CHECK_TIMING -val TRUE
./xmlchange -file env_run.xml -id SAVE_TIMING -val TRUE
./xmlchange -file env_run.xml -id TIMER_LEVEL -val 4
./xmlchange -file env_run.xml -id COMP_RUN_BARRIERS -val TRUE

#Number of processors
./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_ATM -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_ATM -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_ATM_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_LND -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_LND -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_LND_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_ICE -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_ICE -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_ICE_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_OCN -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_OCN -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_OCN_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_GLC -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_GLC -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_GLC -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_GLC_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_ROF -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_ROF -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_ROF -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_ROF_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_WAV -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_WAV -val $NUM_THRDS
./xmlchange -file env_mach_pes.xml -id NINST_WAV -val $NUM_INST
./xmlchange -file env_mach_pes.xml -id NINST_WAV_LAYOUT -val "concurrent"

./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val $NUM_PES
./xmlchange -file env_mach_pes.xml -id NTHRDS_CPL -val $NUM_THRDS

#Inputdata directory
./xmlchange -file env_run.xml -id DIN_LOC_ROOT -val ${INPUTDATADIR}
./xmlchange -file env_run.xml -id DIN_LOC_ROOT_CLMFORC -val ${INPUTDATADIR}/atm/datm7

#Archive directory
./xmlchange -file env_run.xml -id DOUT_S_ROOT -val ${PWD}/archive

#Modify env_mach_specific for the GNU compiler to work
#sed -i "s:gcc/.*:gcc/4.8.0:" env_mach_specific

#***************** End of XML configuration *************************************

#***************** Hijack the preview_namelists command *************************

#! /bin/csh -f

source ./Tools/ccsm_getenv || exit -1

#Force the NINST* variables to be two so that a double-instance set of namelists is generated
setenv NINST_ATM  2
setenv NINST_LND  2
setenv NINST_ICE  2
setenv NINST_OCN  2
setenv NINST_GLC  2
setenv NINST_ROF  2
setenv NINST_WAV  2

grep -v "#! /bin/csh -f" preview_namelists | grep -v "ccsm_getenv" >> ${HIJACKED_PREVIEW_NAMELISTS}

cat > preview_namelists << EOF
#! /bin/csh -f

#Run the hijacked preview_namelists script to create a false double-instance set of CESM namelist files
# that are based on the user_*_0001 files with template flags

#Make a namelist template directory and a namelist file directory
mkdir -p namelistTemplates
mkdir -p namelistFiles

#Copy the just-created namelist files for instance 0001 into the namelistTemplates folder
cp CaseDocs/*_0001 namelistTemplates

#Allow these files to be written (non-writing permissions are inherited from CaseDocs)
chmod +w namelistTemplates/*

#Change any instance of 0001 in the namelist files to a mako template string
sed -i 's/0001/\$\{ensemblenumber}/g' namelistTemplates/*

#Run the mako template script
python ${ILIAD_SOURCE}/core/ \
  --icdirectory=${ILIAD_IC_DIR} \
  --domain=${ILIAD_DOMAIN}  \
  --gmtstartdate=${GMT_START} \
  --gmtenddate=${GMT_END} \
  --cesminputdir=${INPUTDATADIR} \
  --templatedir=./namelistTemplates \

#Copy the templated namelist files to CaseDocs
cp -f namelistFiles/* CaseDocs

#Copy the templated namelist files to run
cp -f namelistFiles/* run

#Print feedback to the screen
cat << EOF2

This is a hijacked version of preview_namelists that first calls
preview_namelists_double to generate a two-instance set of namelists, and then
uses ${ILIAD_SOURCE}/core/ 
to create the actual set of *_NNNN input files.  This is done because
preview_namelists is unacceptably slow when generating large numbers of
namelist files.
EOF2

#Run cesm_setup

#Add a flag to FFLAGS in the Macros file
#(this must happen after cesm_setup)
FLIBS=`${NETCDF_PATH}/bin/nf-config --flibs`
sed -i "s/\(FFLAGS:=.*\)/\1\ -fno-range-check\ -fcray-pointer/" Macros
sed -i "s:^SLIBS+=.*:SLIBS+=${FLIBS}:" Macros

#Change the PBS queue and add a line to send e-mails about job status
sed -i "s/^#PBS\ -q\ regular/#PBS\ -q\ ${PBSQUEUE}\n#PBS\ -m\ abe/" ${PBSFILE}
#Set the wall time
sed -i "s/^#PBS\ -l\ walltime=.*/#PBS\ -l\ walltime=${WALLTIMESTR}/" ${PBSFILE}


cat > ${CASENAME}.build.pbs << EOF
#!/bin/csh -f

#PBS -m abe
#PBS -l mppwidth=24
#PBS -l walltime=00:29:00
#PBS -j oe
#PBS -S /bin/csh -V




Who do I talk to?

This code is created and maintained by Travis A. O'Brien.