cuda-api-wrappers
Thin C++-flavored wrappers for the CUDA Runtime API
Contains a proxy class for CUDA execution contexts. More...
#include "current_context.hpp"
#include "versions.hpp"
#include "error.hpp"
#include "constants.hpp"
#include "types.hpp"
#include <string>
#include <utility>
Classes

struct cuda::context::stream_priority_range_t
    A range of priorities supported by a CUDA context; it runs from the higher numeric value down to the lower.
class cuda::context_t
    Wrapper class for a CUDA context.
class cuda::context_t::global_memory_type
    A class creating a faux member in a context_t, in lieu of an in-class namespace (which C++ does not support); whenever you see a call my_context.memory::foo(), think of it as my_context::memory::foo().
Namespaces

cuda
    Definitions and functionality wrapping CUDA APIs.
cuda::link
    Definitions related to CUDA linking processes, captured by the link_t wrapper class.
Typedefs

using cuda::context::limit_t = CUlimit
    Features of contexts which can be configured individually during a context's lifetime.
using cuda::context::limit_value_t = size_t
    Type of the actual values of context limits (see limit_t for the possible kinds of limits whose value can be set).
using cuda::context::shared_memory_bank_size_t = CUsharedconfig
    Choice of the number of bytes in each bank of the shared memory.
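Since limit_t is simply CUlimit, the raw driver-API enumerators (e.g. CU_LIMIT_STACK_SIZE) can be used with it directly. A minimal sketch of reading and adjusting a context limit; note that the get_limit/set_limit accessors on context_t are assumed here (they are not declared in this file), as is the cuda/api.hpp umbrella header:

```cpp
#include <cuda/api.hpp>  // assumed umbrella header of cuda-api-wrappers

int main() {
    auto device  = cuda::device::get(0);           // first CUDA device on the system
    auto context = cuda::context::create(device);
    // limit_t = CUlimit, so driver-API enumerators apply directly:
    auto stack_size = context.get_limit(CU_LIMIT_STACK_SIZE);    // assumed accessor
    context.set_limit(CU_LIMIT_STACK_SIZE, stack_size * 2);      // assumed accessor
}
```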
Functions

context_t cuda::context::wrap (device::id_t device_id, context::handle_t context_id, bool take_ownership=false) noexcept
    Obtain a wrapper for an already-existing CUDA context.
void cuda::synchronize (const context_t &context)
    Waits for all previously-scheduled tasks on all streams (= queues) in a CUDA context to conclude before returning.
context_t cuda::context::create (const device_t &device, host_thread_sync_scheduling_policy_t sync_scheduling_policy=heuristic, bool keep_larger_local_mem_after_resize=false)
    Creates a new context on a given device.
context_t cuda::context::create_and_push (const device_t &device, host_thread_sync_scheduling_policy_t sync_scheduling_policy=heuristic, bool keep_larger_local_mem_after_resize=false)
    Creates a new CUDA context on a given device, as create() would, and pushes it onto the top of the context stack.
context_t cuda::context::current::get ()
    Obtain the current CUDA context, if one exists.
void cuda::context::current::set (const context_t &context)
    Set the context at the top of the stack to a specified context.
bool cuda::context::current::push_if_not_on_top (const context_t &context)
    Push a (reference to a) context onto the top of the context stack, unless that context is already at the top, in which case do nothing.
void cuda::context::current::push (const context_t &context)
    Push a (reference to a) context onto the top of the context stack.
context_t cuda::context::current::pop ()
    Pop the top off of the context stack.
bool cuda::context::is_primary (const context_t &context)
bool cuda::operator== (const context_t &lhs, const context_t &rhs) noexcept
bool cuda::operator!= (const context_t &lhs, const context_t &rhs) noexcept
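A sketch of how these free functions compose in typical use, assuming the library's umbrella header and at least one CUDA device being present (cuda::device::get belongs to the device-side part of the API, not this file):

```cpp
#include <cuda/api.hpp>  // assumed umbrella header of cuda-api-wrappers

int main() {
    auto device  = cuda::device::get(0);           // first CUDA device; not declared in this file
    auto context = cuda::context::create(device);  // new context on that device
    cuda::context::current::push(context);         // make it the current context
    // ... schedule kernels and copies on streams belonging to this context ...
    cuda::synchronize(context);                    // block until all its streams are done
    cuda::context::current::pop();                 // restore the previous stack top
}
```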
Detailed Description

Contains a proxy class for CUDA execution contexts.

Function Documentation

create()
(inline)
Creates a new context on a given device.

Parameters
    device: The device which the new context will regard.
    sync_scheduling_policy: Choice of how host threads are to perform synchronization with pending actions in streams within this context. See host_thread_sync_scheduling_policy_t for a description of these choices.
    keep_larger_local_mem_after_resize: If true, larger allocations of global device memory, used by kernels requiring a larger amount of local memory, will be kept (so that future kernels with such requirements will not trigger a re-allocation).
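The defaults can be overridden; for example, to have host threads block on synchronization rather than busy-wait, and to keep enlarged local-memory-backing allocations around. A sketch; the `block` enumerator name is an assumption about host_thread_sync_scheduling_policy_t, mirroring the driver's CU_CTX_SCHED_BLOCKING_SYNC flag:

```cpp
auto device  = cuda::device::get(0);
auto context = cuda::context::create(
    device,
    cuda::context::host_thread_sync_scheduling_policy_t::block,  // assumed enumerator
    true);  // keep larger local-memory allocations after kernels finish
```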
create_and_push()
(inline)
Creates a new CUDA context on a given device, as create() would, and pushes it onto the top of the context stack.
get()
(inline)
Obtain the current CUDA context, if one exists.

Exceptions
    ::std::runtime_error: in case there is no current context
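Since get() throws rather than returning a sentinel value when no context is current, a caller which merely probes for a current context might wrap the call (a sketch):

```cpp
try {
    auto current = cuda::context::current::get();
    // ... work with the current context ...
} catch (::std::runtime_error&) {
    // no context is current on this host thread
}
```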
pop()
(inline)
Pop the top off of the context stack.
push()
(inline)
Push a (reference to a) context onto the top of the context stack.
set()
(inline)
Set the context at the top of the stack to a specified context.
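The stack-manipulation functions might combine as follows (a sketch, assuming a device and a context created via the library's own API):

```cpp
auto device  = cuda::device::get(0);
auto context = cuda::context::create(device);

cuda::context::current::push(context);                // context is now the stack top
cuda::context::current::push_if_not_on_top(context);  // already on top, so this does nothing
auto popped = cuda::context::current::pop();          // remove it; popped refers to it
```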
wrap()
(inline, noexcept)
Obtain a wrapper for an already-existing CUDA context.

Wrap the fundamental information regarding a CUDA context into an instance of the context_t wrapper class.

Parameters
    device_id: Device with which the context is associated.
    context_id: ID of the context to wrap with a proxy.
    take_ownership: When true, the wrapper will have the CUDA driver destroy the context when the wrapper itself is destructed; otherwise, the context is assumed to be "owned" elsewhere in the code, and that location or entity is responsible for destroying it when relevant (possibly after this wrapper ceases to exist).
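wrap() is the interoperability point with code that uses the raw CUDA driver API directly. A sketch, with error checking elided and the cuda/api.hpp umbrella header assumed:

```cpp
#include <cuda.h>
#include <cuda/api.hpp>  // assumed umbrella header of cuda-api-wrappers

int main() {
    cuInit(0);
    CUdevice raw_device;
    cuDeviceGet(&raw_device, 0);
    CUcontext raw_context;
    cuCtxCreate(&raw_context, 0, raw_device);  // context created outside the wrappers

    // take_ownership is false: the code above remains responsible
    // for eventually calling cuCtxDestroy(raw_context).
    auto context = cuda::context::wrap(raw_device, raw_context, false);
    cuda::synchronize(context);

    cuCtxDestroy(raw_context);
}
```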