UPC++ Testing and Development Platforms
Note: this is a work-in-progress.
Please advise Paul Hargrove of any errors or omissions.
NERSC Cori (Haswell and KNL nodes)
Stable installs and nightly builds for both PrgEnv-intel and PrgEnv-gnu.
The proper install is selected at module load time based on the then-current
PrgEnv-{intel,gnu} and craype-{haswell,mic-knl} environment modules.
If you change either, you'll need to swap modules as described at load time.
Example:
```
{hargrove@cori12 ~}$ module swap craype-haswell craype-mic-knl
{hargrove@cori12 ~}$ module load upcxx
## Loaded 'upcxx/2019.3.2-6.0.4-intel-18.0.1.163' based on currently loaded modules.
## The selected build is for target CPU "mic-knl", and thus if
## you change craype-{haswell,mic-knl} modules then you should
## 'module unload upcxx' and reload to get the correct upcxx module.
{hargrove@cori12 ~}$ upcxx -V
UPC++ version 20190302 / gex-2019.3.2
Copyright (c) 2019, The Regents of the University of California,
through Lawrence Berkeley National Laboratory.
https://upcxx.lbl.gov
icpc (ICC) 18.0.1 20171018
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.
```
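Once the appropriate upcxx module is loaded, building and launching follow the usual Cori workflow. A minimal sketch, assuming the module state shown above (the source file, node counts, and allocation flags below are illustrative, not prescriptive):

```
# Hypothetical follow-on: compile with the upcxx wrapper, which invokes
# the C++ compiler of the currently loaded PrgEnv.
{hargrove@cori12 ~}$ upcxx -O2 -o hello hello.cpp
# Request an interactive allocation matching the craype target, then
# launch with srun as usual (flag values are examples only).
{hargrove@cori12 ~}$ salloc -C knl -N 2 -t 10:00 -q interactive
{hargrove@nid02345 ~}$ srun -n 4 ./hello
```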
NERSC Cori (GPU nodes)
Some (not all) NERSC users have access to a set of multi-GPU nodes operated within the Cori infrastructure. They are NOT Cray XC nodes, and use InfiniBand for communication. More info on the hardware and software is available here. Some of the usage instructions are applied in the example below.
There are stable installs and nightly builds for Intel and GNU compilers.
These are CUDA-enabled and therefore require the cuda environment module.
Note that one needs to load the upcxx-gpu modules (the upcxx ones won't work
on the GPU nodes). Also note that the choice between Intel or GNU is explicit,
not automatic based on a PrgEnv or similar.
Example: 1-node interactive job for building s/w (no GPUs allocated):
```
{hargrove@cori12 ~}$ module load esslurm
{hargrove@cori12 ~}$ salloc -C gpu -n1 --mem 2G -t 60:00 -A m1759
salloc: Granted job allocation 157722
salloc: Waiting for resource configuration
salloc: Nodes cgpu17 are ready for job
{hargrove@cgpu17 ~}$ module load gcc cuda
{hargrove@cgpu17 ~}$ module load upcxx-gpu/2019.3.2-gcc-7.3.0
{hargrove@cgpu17 ~}$ upcxx -V
UPC++ version 20190302 / gex-2019.3.2
Copyright (c) 2019, The Regents of the University of California,
through Lawrence Berkeley National Laboratory.
https://upcxx.lbl.gov
g++ (GCC) 7.3.0 20180125 (Cray Inc.)
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
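The allocation above is GPU-free and intended for building. A hedged sketch of the next steps, assuming the modules loaded above; the source file name and the GPU-allocation flags are illustrative (consult the NERSC GPU-node instructions referenced earlier for the current required flags):

```
# Hypothetical follow-on: compile against the CUDA-enabled install in the
# build allocation, then request GPUs for the actual run.
{hargrove@cgpu17 ~}$ upcxx -O2 -o hello hello.cpp
# Example only: a 2-process run with 2 GPUs allocated via Slurm's
# generic-resource flag (exact flags may differ on this system).
{hargrove@cori12 ~}$ salloc -C gpu -n2 --gres=gpu:2 -t 30:00 -A m1759
{hargrove@cgpu17 ~}$ srun -n 2 ./hello
```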
Summit and/or Summitdev (at OLCF)
GPU-enabled stable builds.
Details to appear.
Comet (at SDSC)
GPU-enabled stable and nightly builds.
Details to appear.
Dirac
Local InfiniBand cluster.
Stable and nightly builds for Intel, GNU and Clang.
Also access to numerous compiler versions for testing.
Details to appear.
Kotten
Local SMP.
Stable and nightly builds for Intel, GNU and Clang.
Details to appear.
old-high-sierra
Local (old) macOS laptop w/ NVIDIA GPU.
Stable and nightly builds for Intel, GNU and Clang.
Details to appear.
sierra, high-sierra and mojave
Local VMs for several macOS releases.
Stable and nightly builds for Apple Clang and GNU.
Details to appear.