Fleet  0.0.9
Inference in the LOT
ThreadedInferenceInterface< X, Args > Class Template Reference (abstract)

#include <ThreadedInferenceInterface.h>


Public Member Functions

virtual generator< X & > run_thread (Control &ctl, Args... args)=0
 
 ThreadedInferenceInterface ()
 
unsigned long next_index ()
 Return the next index to operate on (in a thread-safe way).
 
size_t nthreads ()
 How many threads are currently running in this interface?
 
void run_thread_generator_wrapper (size_t thr, Control &ctl, Args... args)
 We have to wrap run_thread in something that manages synchronization with main: this simply synchronizes the output of run_thread with run below. Note that this copies x into the local next_x, so that the thread can keep running without modifying the value we yield. We may in the future block the thread and return a reference instead, but it's not clear that would be faster.
 
generator< X & > run (Control ctl, Args... args)
 Set up the multiple threads and actually run, calling run_thread_generator_wrapper.
 
generator< X & > unthreaded_run (Control ctl, Args... args)
 

Public Attributes

std::atomic< size_t > index
 
size_t __nthreads
 
std::atomic< size_t > __nrunning
 
ConcurrentQueue< X > to_yield
 

Detailed Description

template<typename X, typename... Args>
class ThreadedInferenceInterface< X, Args >

An interface for running inference across multiple threads: subclasses implement run_thread, and run() launches it on several threads, synchronizing their yielded samples through the ConcurrentQueue to_yield.

Author
piantado
Date
07/06/20

Constructor & Destructor Documentation

◆ ThreadedInferenceInterface()

template<typename X, typename... Args>
ThreadedInferenceInterface< X, Args >::ThreadedInferenceInterface ( )
inline

Member Function Documentation

◆ next_index()

template<typename X, typename... Args>
unsigned long ThreadedInferenceInterface< X, Args >::next_index ( )
inline

Return the next index to operate on (in a thread-safe way).

Returns
the next index (each call returns a distinct value, via an atomic increment)

◆ nthreads()

template<typename X, typename... Args>
size_t ThreadedInferenceInterface< X, Args >::nthreads ( )
inline

How many threads are currently running in this interface?

Returns
the number of threads this interface runs

◆ run()

template<typename X, typename... Args>
generator<X&> ThreadedInferenceInterface< X, Args >::run ( Control ctl, Args... args )
inline

Set up the multiple threads and actually run, calling run_thread_generator_wrapper.

Parameters
ctl	the Control object handed to each thread
Returns
a generator over the samples yielded by all threads

◆ run_thread()

template<typename X, typename... Args>
virtual generator<X&> ThreadedInferenceInterface< X, Args >::run_thread ( Control & ctl, Args... args )
pure virtual

◆ run_thread_generator_wrapper()

template<typename X, typename... Args>
void ThreadedInferenceInterface< X, Args >::run_thread_generator_wrapper ( size_t thr, Control & ctl, Args... args )
inline

We have to wrap run_thread in something that manages synchronization with main: this simply synchronizes the output of run_thread with run below. Note that this copies x into the local next_x, so that the thread can keep running without modifying the value we yield. We may in the future block the thread and return a reference instead, but it's not clear that would be faster.

Parameters
ctl	the Control object for this thread's run

◆ unthreaded_run()

template<typename X, typename... Args>
generator<X&> ThreadedInferenceInterface< X, Args >::unthreaded_run ( Control ctl, Args... args )
inline

Member Data Documentation

◆ __nrunning

template<typename X, typename... Args>
std::atomic<size_t> ThreadedInferenceInterface< X, Args >::__nrunning

◆ __nthreads

template<typename X, typename... Args>
size_t ThreadedInferenceInterface< X, Args >::__nthreads

◆ index

template<typename X, typename... Args>
std::atomic<size_t> ThreadedInferenceInterface< X, Args >::index

◆ to_yield

template<typename X, typename... Args>
ConcurrentQueue<X> ThreadedInferenceInterface< X, Args >::to_yield

The documentation for this class was generated from the following file:
ThreadedInferenceInterface.h