Summary

Source: https://github.com/alpaka-group/llama

License: GNU Lesser General Public License v3.0

Path: /software/llama/

Documentation: https://alpaka-group.github.io/llama/

Build script:


Low-Level Abstraction of Memory Access. LLAMA is a cross-platform C++17 header-only template library for the abstraction of memory access patterns. It distinguishes between the algorithm's view of the memory and the real data layout in the background. This enables performance portability across multicore, manycore and GPU applications with the very same code.
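To illustrate the separation between the algorithm's view and the data layout, here is a minimal sketch assuming the LLAMA 0.4 API (llama::Record/llama::Field for the record dimension, llama::mapping::SoA or llama::mapping::AoS for the layout, llama::allocView for allocation). The interface changes between LLAMA versions, so check the documentation linked above for the exact signatures of the installed release.

#include <llama/llama.hpp>

// tag types naming the fields of the record dimension
struct X{}; struct Y{}; struct Z{};

// the record dimension: what the algorithm sees, independent of the layout
using Vec = llama::Record<
    llama::Field<X, float>,
    llama::Field<Y, float>,
    llama::Field<Z, float>>;

int main()
{
    // one-dimensional array dimension with 1024 records
    const auto size = llama::ArrayDims{1024};

    // the mapping chooses the real layout in memory (here: struct of arrays);
    // exchanging it, e.g. for llama::mapping::AoS, does not change the loop below
    const auto mapping = llama::mapping::SoA<llama::ArrayDims<1>, Vec>{size};
    auto view = llama::allocView(mapping);

    // the algorithm always accesses records field by field, whatever the layout
    for (std::size_t i = 0; i < 1024; ++i)
    {
        view(i)(X{}) = 1.0f;
        view(i)(Y{}) = 2.0f;
        view(i)(Z{}) = 3.0f;
    }
}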

Using llama

There is no need for any initialization. You just need to know the location of the headers:

module load maxwell
module avail llama

--------------- /software/etc/modulefiles -------------
llama/0.2 llama/0.4

module load maxwell llama

     This is just a dummy module; it does not actually set anything up.

     CMAKE: /software/llama/0.4/lib64/cmake/llama/
     INCLUDES: /software/llama/0.4/include/
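
Since LLAMA is header-only, pointing the compiler at the include directory above is sufficient; CMake projects can instead point find_package(llama) at the CMake directory. A sketch using the 0.4 paths shown above (myapp.cpp and the source directory are placeholders):

g++ -std=c++17 -I/software/llama/0.4/include -o myapp myapp.cpp

cmake -Dllama_DIR=/software/llama/0.4/lib64/cmake/llama <path-to-source>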