
UPC++ Version 1.0

NEWS:

March 15, 2019: We are proud to announce a new v2019.3.0 release of UPC++.

Current Downloads:

Publications

  • Pagoda group publications, and citation information for the UPC++ documentation.

Overview

UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming, and is designed to interoperate smoothly and efficiently with MPI, OpenMP, CUDA and AMTs. It leverages GASNet-EX to deliver low-overhead, fine-grained communication, including Remote Memory Access (RMA) and Remote Procedure Call (RPC).

Design Philosophy

UPC++ exposes a PGAS memory model, including one-sided communication (RMA and RPC). However, there are departures from the approaches taken by some predecessors such as UPC. These changes reflect a design philosophy that encourages the UPC++ programmer to directly express what can be implemented efficiently (i.e., without a need for parallel compiler analysis).

  1. Most operations are non-blocking, and powerful synchronization mechanisms encourage applications to be designed for aggressive asynchrony.

  2. All communication is explicit - there is no implicit data motion.

  3. UPC++ encourages the use of scalable data-structures and avoids non-scalable library features.

What Features Comprise UPC++?

  • RMA. UPC++ provides asynchronous one-sided communication (Remote Memory Access, a.k.a. Put and Get) for movement of data among processes.

  • RPC. UPC++ provides asynchronous Remote Procedure Call for running code (including C++ lambdas) on other processes.

  • Futures, promises and continuations. Futures are central to handling asynchronous operation of RMA and RPC. UPC++ uses a continuation-based model to express task dependencies.

  • Progress guarantees. Because UPC++ has no internal service threads, the library makes progress only when a core enters an active UPC++ call. However, the "persona" concept makes writing progress threads simple.

  • Remote atomics use an abstraction that enables efficient offload where hardware support is available.

  • Distributed objects. UPC++ enables construction of a scalable distributed object from any C++ object type, with one instance on each rank of a team. RPC can be used to access remote instances.

  • View-based Serialization. UPC++ introduces a mechanism for efficiently passing large and/or complicated data arguments to RPCs.

  • Non-contiguous RMA. UPC++ provides functions for non-contiguous data transfers directly on shared memory, for example to efficiently copy or transpose sections of N-dimensional dense arrays.

  • Teams represent ordered sets of processes and play a role in collective communication. Initially we support barrier, broadcast and reductions, including abstractions to enable offload of reductions supported in hardware.

  • Memory kinds. UPC++ provides uniform interfaces for transfers between memory with different properties. Beginning in the 2019.3.0 release, UPC++ provides a prototype implementation for CUDA GPUs. Future releases will refine this capability, and may expand this to include other forms of non-host memory.

A comparison to the feature set of UPC++ v0.1 is also available.

Notable applications using UPC++:

  • HipMer: An Extreme-Scale De Novo Genome Assembler
  • symPACK: A sparse symmetric matrix direct linear solver
  • SWE-UPC++: Shallow Water Equations for tsunami simulation

Other related software:

  • upcxx-extras: UPC++ extra examples and optional extensions (available soon!)
  • Berkeley UPC: Now supports hybrid UPC/UPC++ applications!
  • GASNet-EX: The portable, high-performance communication runtime used by UPC++
  • MRG8: An efficient, high-period PRNG with skip-ahead, designed for exascale HPC

Previous Releases:

Contact Info
