Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN: Tasmanian v8.2
 
Classes and functions used for acceleration methods

Files

file  tsgAcceleratedDataStructures.hpp
 Data structures for interacting with CUDA and MAGMA environments.
 
file  tsgCacheLagrange.hpp
 Cache data structure for evaluations with Global grids.
 

Namespaces

namespace  TasGrid::AccHandle
 Type tags for the different handle types.
 
namespace  TasGrid::TasGpu
 Wrappers around custom CUDA kernels that handle domain transforms and basis evaluations; the kernels are instantiated in tsgCudaKernels.cu.
 
namespace  TasGrid::AccelerationMeta
 Common methods for manipulating acceleration options and reading CUDA environment properties.
 

Classes

struct  TasGrid::HandleDeleter< ehandle >
 Deleter template for the GPU handles, e.g., cuBlas and rocBlas. More...
 
class  TasGrid::GpuVector< T >
 Template class that wraps around a single GPU array, providing functionality that mimics std::vector. More...
 
struct  TasGrid::GpuEngine
 Wrapper class around calls to GPU accelerated linear algebra libraries. More...
 
class  TasGrid::AccelerationDomainTransform
 Implements the domain transform algorithms in case the user data is provided on the GPU. More...
 
struct  TasGrid::AccelerationContext
 Wrapper class around GPU device ID, acceleration type and GpuEngine. More...
 
class  TasGrid::CacheLagrange< T >
 Cache that holds the values of 1D Lagrange polynomials. More...
 
class  TasGrid::CacheLagrangeDerivative< T >
 Cache that holds the derivatives of 1D Lagrange polynomials. Uses the same interface as CacheLagrange. More...
 

Functions

template<typename >
void TasGrid::deleteHandle (int *)
 Deletes the handle, specialized for each TPL backend and tag in the TasGrid::AccHandle namespace.
 

Detailed Description

RAII Memory Management
CUDA uses C-style memory management with cudaMalloc(), cudaMemcpy(), and cudaFree(), but a templated C++ std::vector-style class is far more convenient and less error prone. The GpuVector template class guards against memory leaks and offers more seamless integration between CPU and GPU data structures. See the GpuVector documentation for details.
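The pattern can be illustrated with a minimal RAII sketch. The DeviceBuffer class below is a stand-alone example written for this description only; it is not the actual GpuVector implementation, which offers a richer interface (see tsgAcceleratedDataStructures.hpp).

#include <cuda_runtime.h>
#include <vector>
#include <cstddef>

template<typename T>
class DeviceBuffer {
public:
    DeviceBuffer() : gpu_data(nullptr), num(0) {}
    // allocate GPU memory and copy the CPU data to the device
    explicit DeviceBuffer(std::vector<T> const &cpu_data) : gpu_data(nullptr), num(cpu_data.size()) {
        cudaMalloc(reinterpret_cast<void**>(&gpu_data), num * sizeof(T));
        cudaMemcpy(gpu_data, cpu_data.data(), num * sizeof(T), cudaMemcpyHostToDevice);
    }
    ~DeviceBuffer() { if (gpu_data != nullptr) cudaFree(gpu_data); } // no leak on scope exit
    DeviceBuffer(DeviceBuffer const&) = delete;            // copying would cause a double free
    DeviceBuffer& operator=(DeviceBuffer const&) = delete;
    T* data() { return gpu_data; }                         // raw pointer for kernel calls
    size_t size() const { return num; }
    void unload(std::vector<T> &cpu_data) const {          // copy the data back to the CPU
        cpu_data.resize(num);
        cudaMemcpy(cpu_data.data(), gpu_data, num * sizeof(T), cudaMemcpyDeviceToHost);
    }
private:
    T *gpu_data;
    size_t num;
};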
Streams and Handles Encapsulation
CUDA linear algebra libraries (as well as MAGMA) use streams and handles for all their calls. The handles have to be allocated, deleted, and passed around, which causes unnecessary code clutter. Encapsulating the handles in a single GpuEngine class greatly simplifies the workflow. Furthermore, some (sparse) linear operations require multiple calls to the CUDA/MAGMA libraries, and it is easier to combine those into a single call to a GpuEngine method.
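A minimal sketch of the encapsulation idea, using cuBLAS as an example; the SimpleGpuEngine class and its lazy handle creation are illustrative assumptions and do not mirror the exact GpuEngine internals.

#include <cublas_v2.h>
#include <memory>
#include <type_traits>
#include <stdexcept>

// destroys the cuBLAS handle when the owning smart pointer goes out of scope
struct CublasHandleDeleter {
    void operator()(cublasHandle_t h) const { cublasDestroy(h); }
};

class SimpleGpuEngine {
public:
    // create the handle on first use, then reuse it for every subsequent call
    cublasHandle_t cublas() {
        if (!handle) {
            cublasHandle_t h;
            if (cublasCreate(&h) != CUBLAS_STATUS_SUCCESS)
                throw std::runtime_error("cublasCreate() failed");
            handle.reset(h);
        }
        return handle.get();
    }
private:
    std::unique_ptr<std::remove_pointer<cublasHandle_t>::type, CublasHandleDeleter> handle;
};

Holding the handle behind a unique_ptr with a custom deleter removes the need to call cublasDestroy() manually on every exit path.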
Acceleration Metadata
The AccelerationMeta namespace offers several methods used throughout the library and in testing:
  • Tasmanian specific acceleration fallback logic
  • Reading CUDA device properties, e.g., number of devices or total memory
  • Error handling for common CUDA/cuBlas/cuSparse calls
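For example, the error handling amounts to converting CUDA status codes into C++ exceptions; the checkCuda() and countCudaDevices() helpers below are hypothetical names used only to illustrate the pattern, not Tasmanian API.

#include <cuda_runtime.h>
#include <stdexcept>
#include <string>

// throw a descriptive exception if a CUDA runtime call did not succeed
inline void checkCuda(cudaError_t status, std::string const &info) {
    if (status != cudaSuccess)
        throw std::runtime_error(info + ": " + cudaGetErrorString(status));
}

// example of reading a device property, here the number of visible GPUs
inline int countCudaDevices() {
    int num_devices = 0;
    checkCuda(cudaGetDeviceCount(&num_devices), "cudaGetDeviceCount()");
    return num_devices;
}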
C++ Wrappers to Fortran BLAS API
The standard BLAS API follows Fortran calling conventions, e.g., arguments passed by reference and an underscore appended to the function names. A C++ wrapper is provided that handles the Tasmanian specific cases of dense matrix-matrix and matrix-vector multiplication using a C++ compatible API.
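The convention is sketched below with the standard dgemm_ symbol; the denseMultiply() wrapper is a hypothetical example written for this description and is not the actual Tasmanian wrapper (linking requires a BLAS implementation).

// Fortran BLAS symbol: trailing underscore, every argument passed by pointer
extern "C" void dgemm_(const char *transa, const char *transb, const int *m, const int *n,
                       const int *k, const double *alpha, const double *A, const int *lda,
                       const double *B, const int *ldb, const double *beta, double *C,
                       const int *ldc);

// C = alpha * A * B + beta * C, with column-major A (m-by-k), B (k-by-n), C (m-by-n)
inline void denseMultiply(int m, int n, int k, double alpha, const double A[],
                          const double B[], double beta, double C[]) {
    char charN = 'N'; // no transpose; scalars must be passed by address
    dgemm_(&charN, &charN, &m, &n, &k, &alpha, A, &m, B, &k, &beta, C, &m);
}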

Function Documentation

◆ deleteHandle()

template<typename >
void TasGrid::deleteHandle ( int *  )

Deletes the handle, specialized for each TPL backend and tag in the TasGrid::AccHandle namespace.
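The intended usage can be sketched as follows; the demo namespace, the Cublas tag, and the storage of the handle behind an int pointer are illustrative assumptions that only demonstrate the tag-specialization pattern, not the exact Tasmanian code.

#include <cublas_v2.h>

namespace demo {
namespace AccHandle { struct Cublas{}; } // type tag selecting the backend

// generic declaration, one explicit specialization per backend tag
template<typename tag> void deleteHandle(int*);

// cuBLAS specialization: recover the real handle type and destroy it
template<> void deleteHandle<AccHandle::Cublas>(int *p) {
    cublasDestroy(reinterpret_cast<cublasHandle_t>(p));
}
}

A deleter struct such as TasGrid::HandleDeleter can then invoke the appropriate specialization from a std::unique_ptr, so each handle is released automatically when it is no longer needed.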