Thin C++-flavored wrappers for the CUDA Runtime API
cuda::stream_t Class Reference

Proxy class for a CUDA stream. More...

#include <stream.hpp>


Classes

class  enqueue_t
 A gadget through which commands are enqueued on the stream. More...

Public Types

enum  : bool {
  doesnt_synchronizes_with_default_stream = false,
  does_synchronize_with_default_stream = true
}
using priority_t = stream::priority_t

Public Member Functions

stream::id_t id () const noexcept
device_t device () const noexcept
bool is_owning () const noexcept
bool synchronizes_with_default_stream () const
 When true, work running in the created stream may run concurrently with work in stream 0 (the NULL stream), and there is no implicit synchronization performed between it and stream 0.
priority_t priority () const
bool has_work_remaining () const
 Determines whether the stream still has pending work. More...
bool is_clear () const
 The opposite of has_work_remaining() More...
bool query () const
 An alias for is_clear() - to conform to how the CUDA runtime API names this functionality.
void synchronize () const
 Block or busy-wait until all previously-scheduled work on this stream has been completed.
 stream_t (const stream_t &other) noexcept
 stream_t (stream_t &&other) noexcept
stream_t & operator= (const stream_t &other)=delete
stream_t & operator= (stream_t &&other)=delete

Public Attributes

enqueue_t enqueue { *this }
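Commands are issued through the `enqueue` member rather than through free functions. A minimal sketch, assuming `enqueue_t` exposes a `copy()` method; only the `enqueue` member itself is documented on this page, so the method name is an assumption:

```cpp
#include <cstddef>
#include <cuda/api.hpp>  // assumed umbrella header for the wrapper library

// Sketch: asynchronously copy a host buffer to device memory on `stream`.
// enqueue_t's copy() is an assumed member name, not taken from this page.
void upload(cuda::stream_t& stream,
            void* device_buffer, const void* host_buffer, std::size_t num_bytes)
{
    stream.enqueue.copy(device_buffer, host_buffer, num_bytes);
    stream.synchronize();  // block until the enqueued copy has completed
}
```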


Friends

class enqueue_t
bool operator== (const stream_t &lhs, const stream_t &rhs) noexcept

Detailed Description

Proxy class for a CUDA stream.

Use this class - built around a stream ID - to perform almost all, if not all, operations related to CUDA streams.

This is one of the three main classes in the Runtime API wrapper library, together with cuda::device_t and cuda::event_t.
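Taken together with the companion classes, typical use might look like the following sketch. The umbrella header name, the `cuda::device::current::get()` accessor, and the `create_stream()` factory are assumptions drawn from the wider wrapper library, not from this page; only the enum value and `synchronize()` appear above:

```cpp
#include <cuda/api.hpp>  // assumed umbrella header for the wrapper library

int main()
{
    // Obtain a proxy for the current device (cuda::device_t, one of the
    // three main classes; the accessor name is an assumption)
    auto device = cuda::device::current::get();

    // Create a stream that does not synchronize with the default stream;
    // the enum value is documented above, the factory is an assumption
    auto stream = device.create_stream(
        cuda::stream_t::doesnt_synchronizes_with_default_stream);

    // ... enqueue copies and kernel launches via stream.enqueue ...

    stream.synchronize();  // block until all scheduled work has completed
    return 0;
}
```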

Member Function Documentation

◆ has_work_remaining()

bool cuda::stream_t::has_work_remaining ( ) const

Determines whether the stream still has pending work, i.e. whether some previously-scheduled work has not yet completed.

Note: having work is not the same as being busy executing that work! What if there are incomplete operations, but they're all waiting on something on another queue? Should the queue count as "busy" then?

Returns: true if there is still work pending, false otherwise
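The distinction between has_work_remaining() and synchronize() lets the host thread make progress while the device drains the stream. A sketch only; do_host_side_work() is a hypothetical placeholder, and in most cases a plain stream.synchronize() is preferable to polling:

```cpp
#include <cassert>
#include <cuda/api.hpp>  // assumed umbrella header for the wrapper library

void do_host_side_work();  // hypothetical host-side task

// Busy-poll the stream, overlapping host work with device work.
void drain_with_progress(const cuda::stream_t& stream)
{
    while (stream.has_work_remaining()) {  // true while work is still pending
        do_host_side_work();
    }
    assert(stream.is_clear());  // the equivalent check, from the other direction
}
```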

◆ is_clear()

bool cuda::stream_t::is_clear ( ) const

The opposite of has_work_remaining()

Returns: true if all previously-scheduled work on this stream has been completed, false if there is still work pending

The documentation for this class was generated from the following file: stream.hpp