qunex / Overview / QuickStart

QuNex quick start using a Docker container

Quick start on deploying the QuNex suite starting from raw data to launching HCP pipelines in under 30 minutes using a Docker container.

Requirements

Software requirements:

  • Docker installed and running (on HPC systems a Singularity image can be used instead).
  • wget for downloading the qunex_container script and the example data.

Hardware requirements:

  • At least 8 GB RAM.
  • 20 GB storage space for imaging data (processed).
  • ~50 GB storage space for the container image.

Step 0: Getting access to the QuNex container registry

If you do not have access to the QuNex container registry on https://gitlab.qunex.yale.edu/ then you first need to register for it at https://qunex.yale.edu/qunex-registration/.

Step 1: Download and prepare the QuNex container and the qunex_container script

To start, open your console or terminal app. This quick start assumes that you will be working in the ${HOME}/qunex directory. Note that the ${HOME} path is user dependent; for example, if your username is JohnDoe then your home path will typically be /home/JohnDoe. If you are not already in your home directory, go there now and create a qunex subfolder where you will do your work:

# -- Go to your HOME FOLDER
cd $HOME

# -- Create the qunex subfolder
mkdir qunex

# -- Go into the newly created folder
cd qunex

# -- Log into the Docker repository for QuNex. Replace <username> with your username; you can view or change it at https://gitlab.qunex.yale.edu/-/profile/account.
docker login gitlab.qunex.yale.edu:5002 -u <username>

Next, you have to download the Docker container image from QuNex GitLab onto your machine. To do this execute:

# -- Pull the latest stable docker image
docker pull gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:<stable_container_tag>

We advise you to use the latest stable container tag. You can find it (along with older released tags) in the QuNex README file. For example:

# -- If the latest stable tag is 0.90.6 you would execute

docker login gitlab.qunex.yale.edu:5002 -u jdemsar
docker pull gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:0.90.6

Once the QuNex Docker container is downloaded, you should download the qunex_container script. This script allows executing and scheduling QuNex commands via the previously downloaded container in a user friendly fashion. With qunex_container you can execute QuNex commands the same way as you would if QuNex were installed from source. The only difference is that instead of qunex you use the qunex_container command and provide the --container parameter, which points to the container you want to use. To use the script, add it to the PATH variable (you can also copy it into a folder that is already in PATH, e.g. /usr/bin) and make it executable:

# -- Download the script
wget --no-check-certificate -r 'https://drive.google.com/uc?export=download&id=1wdWgKvr67yX5J8pVUa6tBGXNAg3fssWs' -O qunex_container

# -- Add to path
PATH=${HOME}/qunex:${PATH}

# -- Make executable
chmod a+x ${HOME}/qunex/qunex_container

To test if the script is working type qunex_container into the console. If everything is OK, the script's help will be printed out.
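Note that the PATH change above applies only to your current shell session. To make it permanent, you can append it to your shell startup file (a sketch assuming bash; adjust the file name for other shells):

```shell
# Append the qunex folder to PATH in the bash startup file
# so qunex_container is found in future sessions as well
echo 'export PATH="${HOME}/qunex:${PATH}"' >> "${HOME}/.bashrc"
```

Open a new terminal (or run source ~/.bashrc) for the change to take effect.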

Step 2: Download the example data

Now we can download the example data; we will put it into the data subfolder inside our ${HOME}/qunex folder. The data consists of three files:

  • The imaging data zip file (HCPA001.zip) contains the actual fMRI recordings.
  • The batch file (HCPA001_batch.txt) contains a number of parameters to be used in preprocessing and analysis commands. These parameters are typically stable and do not change between various commands. For details see Batch files Wiki page.
  • The mapping specification file (HCPA001_mapping.txt) is an essential element of running QuNex and ensures that the raw nii data is onboarded (mapped to the defined HCP naming convention) correctly. For details see Mapping specification files Wiki page.

# -- Create data dir
mkdir data

# -- Go into the data dir
cd data

# -- Download imaging data
wget --no-check-certificate -r 'https://drive.google.com/uc?id=1CbN9dtOQk3PwUeqnBdNeYmWizay2gSy7&export=download' -O HCPA001.zip

# -- Download the batch file
wget --no-check-certificate -r 'https://drive.google.com/uc?id=16FePg7JoQo2jqWTYoI8-sZPEmPaCzZNd&export=download' -O HCPA001_batch.txt

# -- Download the mapping specification file
wget --no-check-certificate -r 'https://drive.google.com/uc?id=1HtIm0IR7aQc8iJxf29JKW846VO_CnGUC&export=download' -O HCPA001_mapping.txt

If the data is properly prepared, the commands below should produce the output shown.

# -- Check our location
pwd

# -- Output should look like this:
# ${HOME}/qunex/data

# -- Inspect the folder structure
tree

# -- Output should look like this:
# .
# ├── HCPA001_batch.txt
# ├── HCPA001_mapping.txt
# └── HCPA001.zip

If you wish to use QuNex (and this quick start) on your own data, you can find all the required information on https://bitbucket.org/oriadev/qunex/wiki/Home.

Step 3: Prepare the parameters

The code below sets and exports the parameters required for processing the example data. In this example we will use the QuNex run_turnkey command, which runs a list of specified commands in sequence (when a command in the list finishes successfully, QuNex executes the next one).

# -- Set the name of the study
export STUDY_NAME="quickstart"

# -- Set your working directory
export WORK_DIR="${HOME}/qunex"

# -- Specify the container
# -- For Docker use the container name and tag:
export QUNEX_CONTAINER="gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:0.90.6"

# -- For Singularity define an absolute path to the image
# export QUNEX_CONTAINER=${WORK_DIR}/container/qunex_suite-0.90.6.sif

# -- Location of previously prepared data
export RAW_DATA="${WORK_DIR}/data"

# -- Batch parameters file          
export INPUT_BATCH_FILE="${RAW_DATA}/HCPA001_batch.txt"

# -- Mapping file   
export INPUT_MAPPING_FILE="${RAW_DATA}/HCPA001_mapping.txt"

# -- Sessions to run
export SESSIONS="HCPA001"

# -- You will run everything on the local file system as opposed to pulling data from a database (e.g. XNAT system)
export RUNTURNKEY_TYPE="local"

# -- List the processing steps (QuNex commands) you want to run
# -- The sequence below first prepares the data 
# -- and then executes the whole HCP minimal preprocessing pipeline
export RUNTURNKEY_STEPS="create_study,map_raw_data,import_dicom,create_session_info,setup_hcp,create_batch,hcp_pre_freesurfer,hcp_freesurfer,hcp_post_freesurfer,hcp_fmri_volume,hcp_fmri_surface"

Note that if your input data and files are not located in your home folder, the container might not be able to access them. To overcome this, please consult the Binding/mapping external folders or setting additional container parameters section of the Running commands against a container using qunex_container Wiki page.
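Before launching, you can print the exported parameters to double-check them (a simple sketch; it only echoes the variables set above):

```shell
# Print the prepared settings for a quick visual check;
# ${VAR:-unset} prints "unset" if a variable was not exported
echo "Study name:    ${STUDY_NAME:-unset}"
echo "Working dir:   ${WORK_DIR:-unset}"
echo "Container:     ${QUNEX_CONTAINER:-unset}"
echo "Raw data:      ${RAW_DATA:-unset}"
echo "Batch file:    ${INPUT_BATCH_FILE:-unset}"
echo "Mapping file:  ${INPUT_MAPPING_FILE:-unset}"
echo "Sessions:      ${SESSIONS:-unset}"
echo "Turnkey type:  ${RUNTURNKEY_TYPE:-unset}"
echo "Turnkey steps: ${RUNTURNKEY_STEPS:-unset}"
```

If any line prints "unset", re-run the corresponding export from the block above before continuing.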

Step 4: Run the specified set of QuNex commands

We are almost done; all we have to do now is execute what we prepared. We have two options: we can simply run the commands (Step 4a), or we can schedule their execution (Step 4b). If you are not sure what to do here, you should probably use Step 4a; scheduling is used in high performance computing environments, and if you need it here, you probably already know what scheduling is.

You can track the progress of processing inside logs in the study folder. If you used the parameter values provided in this quick start then the logs will be in the ${HOME}/qunex/quickstart/processing/logs folder. Details about which logs are created and what you can find in them are at https://bitbucket.org/oriadev/qunex/wiki/Overview/Logging.md. In principle each command (processing step) creates a runlog and a comlog. runlogs provide a more general overview of what is going on, while comlogs provide a detailed description of processing progress. If a comlog is prefixed with tmp_ then that command is still running, if it is prefixed with done_ the command finished successfully, and if it is prefixed with error_ there was an error during processing.
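The comlog prefixes described above can be checked quickly from the console (a sketch; the path assumes the parameter values used in this quick start):

```shell
# Look for comlogs by status prefix; adjust LOG_DIR if you
# used a different study name or working directory
LOG_DIR="${HOME}/qunex/quickstart/processing/logs"
if [ -d "${LOG_DIR}" ]; then
    echo "Running:";  find "${LOG_DIR}" -name 'tmp_*'
    echo "Finished:"; find "${LOG_DIR}" -name 'done_*'
    echo "Failed:";   find "${LOG_DIR}" -name 'error_*'
else
    echo "Log directory not found yet: ${LOG_DIR}"
fi
```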

For generated outputs, please consult the QuNex data hierarchy document (https://bitbucket.org/oriadev/qunex/wiki/Overview/DataHierarchy.md). For a detailed description of all used commands and their outputs you should consult the usage document of each command, you can find those at https://bitbucket.org/oriadev/qunex/wiki/Home under User guides.

Step 4a: Run the commands without a scheduler

Now that all the parameters are prepared we can execute the run_turnkey command.

qunex_container run_turnkey \
  --rawdatainput="${RAW_DATA}" \
  --batchfile="${INPUT_BATCH_FILE}" \
  --mappingfile="${INPUT_MAPPING_FILE}" \
  --workingdir="${WORK_DIR}" \
  --projectname="${STUDY_NAME}" \
  --path="${WORK_DIR}/${STUDY_NAME}" \
  --sessions="${SESSIONS}" \
  --sessionids="${SESSIONS}" \
  --sessionsfoldername="sessions" \
  --turnkeytype="${RUNTURNKEY_TYPE}" \
  --container="${QUNEX_CONTAINER}" \
  --turnkeysteps="${RUNTURNKEY_STEPS}"

Step 4b: Schedule the commands

Most HPC (high performance computing) systems do not allow running long lasting commands on the login node. Instead, commands should be scheduled for execution. The qunex_container script allows easy scheduling via SLURM, PBS and LSF systems. Below is an example of how you can schedule the command from this example using SLURM. The example reserves a compute node for a single task on a single CPU with 16 GB of memory for 4 days.

qunex_container run_turnkey \
  --rawdatainput="${RAW_DATA}" \
  --batchfile="${INPUT_BATCH_FILE}" \
  --mappingfile="${INPUT_MAPPING_FILE}" \
  --workingdir="${WORK_DIR}" \
  --projectname="${STUDY_NAME}" \
  --path="${WORK_DIR}/${STUDY_NAME}" \
  --sessions="${SESSIONS}" \
  --sessionids="${SESSIONS}" \
  --sessionsfoldername="sessions" \
  --turnkeytype="${RUNTURNKEY_TYPE}" \
  --container="${QUNEX_CONTAINER}" \
  --turnkeysteps="${RUNTURNKEY_STEPS}" \
  --scheduler="SLURM,time=04-00:00:00,ntasks=1,cpus-per-task=1,mem-per-cpu=16000,jobname=qx_quickstart"
