OMPi @ ppg downloads

The source code of OMPi is available here; the compiler has been tested on a variety of Linux, Solaris, IRIX and Windows (WSL2/Cygwin) machines. The only real system requirements are POSIX compatibility and a native C compiler (e.g. gcc).
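
After building and installing OMPi, a plain OpenMP C program such as the following should compile with OMPi's ompicc driver; a minimal sanity-check sketch (the file name is illustrative):

    /* hello.c -- minimal OpenMP sanity check.
     * Build with OMPi's driver:  ompicc -o hello hello.c
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }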

The most recent stable version is 2.7.0.

Version Link Notes
2.7.0 Download
» new module for offloading to CUDA GPUs with compute capability ≥ 3.5 (see the sketch after this entry)
» full OpenMP support (CPU+GPU) for the Jetson Nano 2GB/4GB
» support for extra worksharing loop schedules
» restructured compiler transformations and code generation
» many improvements and fixes
» implementation-defined OpenMP behaviors for OMPi (HTML, PDF).
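
The GPU offloading above is driven by OpenMP's standard target constructs rather than an OMPi-specific API; a minimal sketch in plain OpenMP C (the array names and sizes are illustrative, and device/module selection is installation-dependent):

    /* saxpy.c -- offload a simple loop to the default device (e.g. a CUDA GPU).
     * Standard OpenMP 4.x device directives; nothing OMPi-specific here.
     */
    #include <stdio.h>
    #define N 1000

    int main(void)
    {
        float x[N], y[N], a = 2.0f;
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0f; }

        /* map() copies the arrays to the device and y back to the host */
        #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[1] = %f\n", y[1]);   /* expect 3.0 */
        return 0;
    }
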
2.5.0 Download
» task dependencies (see the sketch after this entry)
» doacross loops
» support for OpenMP 4.5 target-related device directives and runtime functions; partial support for OpenMP 5.0
» new 'mpinode' module treats cluster nodes as OpenMP devices
» adaptive, compiler-assisted runtime flavors for devices
» affinity control and places conforming to OpenMP 5.1
» improvements and fixes
» implementation-defined OpenMP behaviors for OMPi (HTML, PDF).
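
Task dependencies are expressed through the standard depend() clause; a minimal sketch (variable names are illustrative):

    /* deps.c -- two tasks ordered by a data dependence on x.
     * Standard OpenMP 4.0 tasking; nothing OMPi-specific here.
     */
    #include <stdio.h>

    int main(void)
    {
        int x = 0;

        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: x)   /* producer: runs first */
            x = 42;

            #pragma omp task depend(in: x)    /* consumer: waits for the producer */
            printf("x = %d\n", x);
        }
        return 0;
    }
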
2.0.0 Download
» initial support for OpenMP 4.0 (device, cancellation and taskgroup constructs; see the sketch after this entry)
» seamless support for the Parallella-16 board
» major reorganization of the compiler and runtime trees
» improvements everywhere
» bug fixes
» implementation-defined OpenMP behaviors for OMPi (HTML, PDF).
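
The cancellation and taskgroup constructs are standard OpenMP 4.0; a minimal sketch of a cancelled task search (the search predicate is made up, and cancellation typically also requires OMP_CANCELLATION=true in the environment to take effect):

    /* cancel.c -- taskgroup with cancellation (OpenMP 4.0).
     * Run with OMP_CANCELLATION=true for the cancel to be honored.
     */
    #include <stdio.h>

    int main(void)
    {
        int found = -1;

        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp taskgroup              /* waits for all tasks below */
            {
                for (int i = 0; i < 100; i++) {
                    #pragma omp task shared(found) firstprivate(i)
                    {
                        #pragma omp cancellation point taskgroup
                        if (i == 37) {         /* pretend this test is expensive */
                            found = i;
                            #pragma omp cancel taskgroup  /* skip remaining tasks */
                        }
                    }
                }
            }
        }
        printf("found = %d\n", found);
        return 0;
    }
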
1.2.3 Download
» OpenMP 3.1 support (see the tasking sketch after this entry)
» addition of a process library (with System V IPC)
» tasking improvements
» bug fixes
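
Among the tasking additions in OpenMP 3.1 is the final() clause, which stops creating deferred tasks below a cutoff; a minimal sketch (the cutoff and input are arbitrary):

    /* fib.c -- recursive tasks with the OpenMP 3.1 final() clause. */
    #include <stdio.h>

    long fib(int n)
    {
        long a, b;
        if (n < 2) return n;
        #pragma omp task shared(a) final(n < 10)  /* serialize small subtrees */
        a = fib(n - 1);
        #pragma omp task shared(b) final(n < 10)
        b = fib(n - 2);
        #pragma omp taskwait                      /* wait for both children */
        return a + b;
    }

    int main(void)
    {
        long r;
        #pragma omp parallel
        #pragma omp single
        r = fib(30);
        printf("fib(30) = %ld\n", r);   /* expect 832040 */
        return 0;
    }
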
1.2.2 Download
» support for most parts of OpenMP 3.1
» tasking improvements
» experimental tasking support in psthr (psthreads V1.0.4 required)
» bug fixes

1.2.0 Download
» full OpenMP 3.0 support
» runtime system enhancements
» bug fixes
» dropped a few thread libraries (available on request)

1.1.0 Download
» OpenMP 3.0 support (most parts)
» restructured runtime system


Older, special versions of the OMPi compiler for the Parallella board have been superseded by OMPi versions ≥ 2.0.0. You may contact us for further inquiries.