Thin C++-flavored wrappers for the CUDA Runtime API
cuda::texture_view Class Reference

Use texture memory for optimized read only cache access. More...

#include <texture_view.hpp>

Public Member Functions

bool is_owning () const noexcept
raw_handle_type raw_handle () const noexcept
device_t associated_device () const noexcept
 texture_view (const texture_view &other)=delete
 texture_view (texture_view &&other) noexcept
template<typename T , dimensionality_t NumDimensions>
 texture_view (const cuda::array_t< T, NumDimensions > &arr, texture::descriptor_t descriptor=texture::descriptor_t())
texture_view & operator= (const texture_view &other)=delete
texture_view & operator= (texture_view &&other)=delete

Detailed Description

Use texture memory for optimized read only cache access.

This represents a view on the memory owned by a CUDA array. Thus you can first create a CUDA array (cuda::array_t) and subsequently create a texture_view from it. In CUDA kernels, elements of the array can be accessed with, e.g., float val = tex3D<float>(tex_obj, x, y, z);, where tex_obj can be obtained via the raw_handle() member function of this class.
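A minimal sketch of this flow follows. Only the texture_view constructor, raw_handle(), and the tex3D intrinsic are taken from this page and the CUDA toolkit; the device-array construction details and the kernel are illustrative assumptions about the surrounding wrapper API:

```cpp
#include <texture_view.hpp>

// Kernel reading one element through a texture object; tex3D is the
// standard CUDA intrinsic for fetching from a 3D texture.
__global__ void sample_kernel(cudaTextureObject_t tex_obj, float* out)
{
    // Fetch the element at coordinates (1, 2, 3) via the texture cache.
    out[0] = tex3D<float>(tex_obj, 1, 2, 3);
}

// arr and d_out are assumed to have been created elsewhere.
void launch_sample(const cuda::array_t<float, 3>& arr, float* d_out)
{
    // Create a texture view over the memory owned by the CUDA array;
    // the default texture::descriptor_t is used here.
    cuda::texture_view tex { arr };

    // raw_handle() yields the underlying handle (a cudaTextureObject_t)
    // which can be passed to a kernel.
    sample_kernel<<<1, 1>>>(tex.raw_handle(), d_out);
}
```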

See also the sections on texture memory and the texture object API in the CUDA Programming Guide.

texture_views are essentially owning: the view itself is a resource which the CUDA runtime creates for you, and which must eventually be freed.
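Consistent with this ownership, the member list above deletes copy construction and assignment while permitting move construction: the underlying runtime resource can be transferred, never duplicated. A brief sketch (the helper functions here are hypothetical):

```cpp
// Returning by value hands the view off via its move constructor;
// the runtime resource is transferred, not recreated.
cuda::texture_view make_view(const cuda::array_t<float, 3>& arr)
{
    cuda::texture_view tex { arr };
    return tex;
}

void use(const cuda::array_t<float, 3>& arr)
{
    auto tex = make_view(arr);        // ownership transferred by move
    // cuda::texture_view copy = tex; // ill-formed: copy constructor is deleted
}
```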
