mlpack
mlpack::rl::RandomReplay< EnvironmentType > Class Template Reference

Implementation of random experience replay.

#include <random_replay.hpp>

Classes

struct  Transition
 

Public Types

using ActionType = typename EnvironmentType::Action
 Convenient typedef for action.
 
using StateType = typename EnvironmentType::State
 Convenient typedef for state.
 

Public Member Functions

 RandomReplay (const size_t batchSize, const size_t capacity, const size_t nSteps=1, const size_t dimension=StateType::dimension)
 Construct an instance of the random experience replay class.
 
void Store (StateType state, ActionType action, double reward, StateType nextState, bool isEnd, const double &discount)
 Store the given experience.
 
void GetNStepInfo (double &reward, StateType &nextState, bool &isEnd, const double &discount)
 Get the reward, next state, and terminal flag for the n-th step.
 
void Sample (arma::mat &sampledStates, std::vector< ActionType > &sampledActions, arma::rowvec &sampledRewards, arma::mat &sampledNextStates, arma::irowvec &isTerminal)
 Sample some experiences.
 
const size_t & Size ()
 Get the number of transitions in the memory.
 
void Update (arma::mat, std::vector< ActionType >, arma::mat, arma::mat &)
 Update the priorities of transitions and update the gradients.
 
const size_t & NSteps () const
 Get the number of steps for the n-step agent.
 

Detailed Description

template<typename EnvironmentType>
class mlpack::rl::RandomReplay< EnvironmentType >

Implementation of random experience replay.

At each time step, the interaction between the agent and the environment is saved to a memory buffer. When needed, previously stored experiences can be sampled from the buffer to train the agent. Typically the sample is drawn uniformly at random, and the memory is a first-in-first-out (FIFO) buffer.

For more information, see the following.

@phdthesis{lin1993reinforcement,
  title  = {Reinforcement learning for robots using neural networks},
  author = {Lin, Long-Ji},
  year   = {1993},
  school = {Carnegie Mellon University}
}
Template Parameters
    EnvironmentType    The environment (task) the agent interacts with.
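
As a quick illustration of the workflow described above, here is a minimal usage sketch. It assumes mlpack's bundled CartPole environment and the mlpack 3.x header paths; treat it as a sketch under those assumptions rather than canonical usage.

    #include <mlpack/methods/reinforcement_learning/replay/random_replay.hpp>
    #include <mlpack/methods/reinforcement_learning/environment/cart_pole.hpp>

    using namespace mlpack::rl;

    int main()
    {
      // A buffer that holds up to 10000 transitions and returns 10 per sample.
      RandomReplay<CartPole> replay(10, 10000);

      CartPole env;
      CartPole::State state = env.InitialSample();

      // Interact with the environment and store each transition.
      for (size_t t = 0; t < 200; ++t)
      {
        CartPole::Action action;
        action.action = CartPole::Action::backward; // Arbitrary fixed policy.

        CartPole::State nextState;
        const double reward = env.Sample(state, action, nextState);
        const bool isEnd = env.IsTerminal(nextState);

        replay.Store(state, action, reward, nextState, isEnd, 0.99);
        state = isEnd ? env.InitialSample() : nextState;
      }
    }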

Constructor & Destructor Documentation

◆ RandomReplay()

template<typename EnvironmentType >
inline mlpack::rl::RandomReplay<EnvironmentType>::RandomReplay(
    const size_t batchSize,
    const size_t capacity,
    const size_t nSteps = 1,
    const size_t dimension = StateType::dimension)

Construct an instance of the random experience replay class.

Parameters
    batchSize    Number of examples returned at each sample.
    capacity     Total memory size, in number of examples.
    nSteps       Number of steps to look into the future.
    dimension    The dimension of an encoded state.
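
For example, the following (hypothetical values, reusing the CartPole environment from the sketch above) creates a 3-step buffer; dimension is left at its default of StateType::dimension:

    // batchSize = 32, capacity = 50000, nSteps = 3 (n-step returns).
    RandomReplay<CartPole> nStepReplay(32, 50000, 3);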

Member Function Documentation

◆ GetNStepInfo()

template<typename EnvironmentType >
inline void mlpack::rl::RandomReplay<EnvironmentType>::GetNStepInfo(
    double& reward,
    StateType& nextState,
    bool& isEnd,
    const double& discount)

Get the reward, next state, and terminal flag for the n-th step.

Parameters
    reward       The reward, set to the accumulated n-step reward.
    nextState    The next state, set to the state at the end of the n-step window.
    isEnd        Whether the next state is a terminal state.
    discount     The discount parameter.
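
To make the accumulation concrete, here is a self-contained sketch of the n-step return idea (an assumed reading of the behavior, not the library's exact code): the reward reported for the oldest pending step is r_0 + d*r_1 + ... + d^(n-1)*r_(n-1), with d the discount, truncated if a terminal transition occurs inside the window.

    #include <vector>

    struct Step { double reward; bool isEnd; };

    // Fold the pending window backwards: each earlier reward absorbs the
    // discounted rewards that follow it, unless the episode ended there.
    double NStepReturn(const std::vector<Step>& window, const double discount)
    {
      double ret = window.back().reward;
      for (int i = static_cast<int>(window.size()) - 2; i >= 0; --i)
      {
        if (window[i].isEnd)
          ret = window[i].reward;  // Terminal here: discard later rewards.
        else
          ret = window[i].reward + discount * ret;
      }
      return ret;
    }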

◆ Sample()

template<typename EnvironmentType >
inline void mlpack::rl::RandomReplay<EnvironmentType>::Sample(
    arma::mat& sampledStates,
    std::vector<ActionType>& sampledActions,
    arma::rowvec& sampledRewards,
    arma::mat& sampledNextStates,
    arma::irowvec& isTerminal)

Sample some experiences.

Parameters
    sampledStates        Sampled encoded states.
    sampledActions       Sampled actions.
    sampledRewards       Sampled rewards.
    sampledNextStates    Sampled encoded next states.
    isTerminal           Indicates whether the corresponding next state is terminal.
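
Continuing the earlier sketch, a minibatch can be drawn as follows. The output objects are resized by Sample(); our reading of the layout (given that encoded states are column vectors) is that column i of sampledStates corresponds to sampledActions[i], sampledRewards(i), column i of sampledNextStates, and isTerminal(i).

    arma::mat states, nextStates;
    std::vector<CartPole::Action> actions;
    arma::rowvec rewards;
    arma::irowvec isTerminal;

    // Draws batchSize transitions uniformly at random from the buffer.
    replay.Sample(states, actions, rewards, nextStates, isTerminal);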

◆ Size()

template<typename EnvironmentType >
inline const size_t& mlpack::rl::RandomReplay<EnvironmentType>::Size()

Get the number of transitions in the memory.

Returns
    The number of transitions currently stored in the memory (at most capacity).
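
A common pattern is to delay learning until the buffer holds enough experience; a sketch, with batchSize standing for the hypothetical value passed to the constructor:

    if (replay.Size() >= batchSize)
      replay.Sample(states, actions, rewards, nextStates, isTerminal);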

◆ Store()

template<typename EnvironmentType >
inline void mlpack::rl::RandomReplay<EnvironmentType>::Store(
    StateType state,
    ActionType action,
    double reward,
    StateType nextState,
    bool isEnd,
    const double& discount)

Store the given experience.

Parameters
    state        Given state.
    action       Given action.
    reward       Given reward.
    nextState    Given next state.
    isEnd        Whether the next state is a terminal state.
    discount     The discount parameter.
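
Continuing the earlier sketch, a single call looks like this; as far as we can tell, the discount argument only matters for the n-step accumulation described under GetNStepInfo() (i.e., when nSteps > 1).

    // Record one transition; 0.99 is the discount used for n-step rewards.
    replay.Store(state, action, reward, nextState, isEnd, 0.99);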

◆ Update()

template<typename EnvironmentType >
inline void mlpack::rl::RandomReplay<EnvironmentType>::Update(
    arma::mat /* target */,
    std::vector<ActionType> /* sampledActions */,
    arma::mat /* nextActionValues */,
    arma::mat& /* gradients */)

Update the priorities of transitions and update the gradients. For random experience replay this is effectively a no-op: the parameters are unused (hence unnamed in the signature) and exist so the interface matches replay types that reprioritize, such as prioritized experience replay.

Parameters
    (target)              The learned value.
    (sampledActions)      The agent's sampled actions.
    (nextActionValues)    The agent's next action values.
    (gradients)           The model's gradients.

The documentation for this class was generated from the following file:

random_replay.hpp