cuda-api-wrappers
Thin C++-flavored wrappers for the CUDA Runtime API

primary_context_t Class Reference

A class for holding the primary context of a CUDA device.

#include <primary_context.hpp>


Public Member Functions

stream_t default_stream () const noexcept
primary_context_t (const primary_context_t &other)
primary_context_t (primary_context_t &&other) noexcept=default
primary_context_t & operator= (const primary_context_t &other)=delete
primary_context_t & operator= (primary_context_t &&other)=default
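
As a point of orientation, here is a minimal, hypothetical sketch of obtaining a device's primary context and using its default stream. The <cuda/api.hpp> include path, cuda::device::get() and the device_t::primary_context() accessor are assumptions about the surrounding library (the device_t friendship listed further down hints at the latter); only default_stream() itself is documented on this page.

    #include <cuda/api.hpp>  // umbrella header; the exact path may differ between library versions
    #include <iostream>

    int main()
    {
        auto device = cuda::device::get(0);               // assumed device-wrapper accessor
        auto primary_context = device.primary_context();  // assumed accessor returning a primary_context_t

        // default_stream() is documented above; synchronize() is a stream_t member
        auto stream = primary_context.default_stream();
        stream.synchronize();

        std::cout << "Synchronized the default stream of the primary context of device "
                  << device.id() << '\n';
    }

Note that, per the listing above, a primary_context_t can be copy-constructed and move-assigned, but not copy-assigned.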
Public Member Functions inherited from cuda::context_t

context::handle_t handle () const noexcept
device::id_t device_id () const noexcept
device_t device () const
bool is_owning () const noexcept
size_t total_memory () const
    The amount of total global device memory available to this context, including memory already allocated.
size_t free_memory () const
    The amount of global device memory available to this context which has not yet been allocated.
stream_t default_stream () const
template<typename Kernel , typename ... KernelParameters>
void launch (Kernel kernel, launch_configuration_t launch_configuration, KernelParameters... parameters) const
multiprocessor_cache_preference_t cache_preference () const
    Determines the balance between L1 space and shared memory space set for kernels executing within this context.
size_t stack_size () const
context::limit_value_t printf_buffer_size () const
context::limit_value_t memory_allocation_heap_size () const
context::limit_value_t maximum_depth_of_child_grid_sync_calls () const
global_memory_type memory () const
    Get a wrapper object for this context's associated device-global memory.
context::limit_value_t maximum_outstanding_kernel_launches () const
context::shared_memory_bank_size_t shared_memory_bank_size () const
    Returns the shared memory bank size, as described in the relevant Parallel-for-all blog entry.
bool is_current () const
bool is_primary () const
context::stream_priority_range_t stream_priority_range () const
    Get the range of priority values one can set for streams in this context.
context::limit_value_t get_limit (context::limit_t limit_id) const
    Get one of the configurable limits for this context (and the streams, events, kernels etc. within it).
version_t api_version () const
    Returns a version number corresponding to the capabilities of this context, which can be used to direct work among contexts targeting a specific API version.
context::host_thread_sync_scheduling_policy_t sync_scheduling_policy () const
    Gets the synchronization policy to be used for threads synchronizing with this CUDA context.
bool keeping_larger_local_mem_after_resize () const
stream_t create_stream (bool will_synchronize_with_default_stream, stream::priority_t priority=cuda::stream::default_priority) const
    Create a new stream within this context; see cuda::stream::create() for details regarding the parameters.
event_t create_event (bool uses_blocking_sync=event::sync_by_busy_waiting, bool records_timing=event::do_record_timings, bool interprocess=event::not_interprocess) const
    Create a new event within this context; see cuda::event::create() for details regarding the parameters.
void enable_access_to (const context_t &peer) const
    Allow kernels and memory operations within this context to involve memory allocated in a peer context.
void disable_access_to (const context_t &peer) const
    Prevent kernels and memory operations within this context from involving memory allocated in a peer context.
void reset_persisting_l2_cache () const
    Clear the L2 cache memory which persists between invocations of kernels.
void set_shared_memory_bank_size (context::shared_memory_bank_size_t bank_size) const
    Sets the shared memory bank size, as described in the relevant Parallel-for-all blog entry.
void set_cache_preference (multiprocessor_cache_preference_t preference) const
    Controls the balance between L1 space and shared memory space for kernels executing within this context.
void set_limit (context::limit_t limit_id, context::limit_value_t new_value) const
    Set one of the configurable limits for this context (and the streams, events, kernels etc. within it).
void stack_size (context::limit_value_t new_value) const
    Set the limit on the size of the stack a kernel thread can use when running.
void printf_buffer_size (context::limit_value_t new_value) const
void memory_allocation_heap_size (context::limit_value_t new_value) const
void set_maximum_depth_of_child_grid_sync_calls (context::limit_value_t new_value) const
void set_maximum_outstanding_kernel_launches (context::limit_value_t new_value) const
void synchronize () const
    Block the calling thread from executing further instructions until all work on all streams in this context has concluded.
context_t (const context_t &other)
context_t (context_t &&other) noexcept
context_t & operator= (const context_t &)=delete
context_t & operator= (context_t &&other) noexcept
template<typename ContiguousContainer , cuda::detail_::enable_if_t< detail_::is_kinda_like_contiguous_container< ContiguousContainer >::value, bool > = true>
module_t create_module (ContiguousContainer module_data, const link::options_t &link_options) const
    Create a new module of kernels and global memory regions within this context; see also cuda::module::create().
template<typename ContiguousContainer , cuda::detail_::enable_if_t< detail_::is_kinda_like_contiguous_container< ContiguousContainer >::value, bool > = true>
module_t create_module (ContiguousContainer module_data) const
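
All of the inherited context_t members above can be called on a primary_context_t as well. The sketch below exercises a few of them (memory queries, stream and event creation, context synchronization); it is illustrative only. The <cuda/api.hpp> include path, cuda::device::get(), device_t::primary_context(), event_t::record() and event_t::synchronize() are assumptions about the broader library, not anything documented on this page.

    #include <cuda/api.hpp>  // umbrella header; the exact path may differ between library versions

    void inherited_api_sketch()
    {
        // Assumption: the device wrapper exposes its primary context
        auto pc = cuda::device::get(0).primary_context();

        // Memory figures for the context's device (documented above)
        auto total = pc.total_memory();  // includes already-allocated memory
        auto free  = pc.free_memory();   // not yet allocated
        (void) total; (void) free;

        // A stream which does not synchronize with the default stream,
        // and an event created with the default parameters
        auto stream = pc.create_stream(false);
        auto event  = pc.create_event();

        // record() and synchronize() are event-wrapper members, not listed on this page
        event.record(stream);
        event.synchronize();

        // Wait for all work on all streams in this context to conclude
        pc.synchronize();
    }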
Friends

class device_t
Detailed Description

A class for holding the primary context of a CUDA device.
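
Finally, a sketch of adjusting per-context limits through the named accessors listed above (stack_size() and printf_buffer_size(), each of which has a same-named setter). As in the earlier sketches, the <cuda/api.hpp> include path and obtaining the primary context via cuda::device::get(0).primary_context() are assumptions, and the specific limit values are arbitrary.

    #include <cuda/api.hpp>
    #include <iostream>

    void limits_sketch()
    {
        auto pc = cuda::device::get(0).primary_context();  // assumed accessor

        std::cout
            << "Kernel thread stack size limit: " << pc.stack_size() << " bytes\n"
            << "In-kernel printf() buffer size:  " << pc.printf_buffer_size() << " bytes\n";

        // Each getter above has a same-named setter taking the new limit value
        pc.stack_size(2048);                     // 2 KiB of stack per kernel thread
        pc.printf_buffer_size(4 * 1024 * 1024);  // 4 MiB printf() buffer
    }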