mlpack::ann::BatchNorm< InputDataType, OutputDataType > Class Template Reference

Declaration of the Batch Normalization layer class.

#include <batch_norm.hpp>
Public Member Functions

BatchNorm()
    Create the BatchNorm object.

BatchNorm(const size_t size, const double eps = 1e-8, const bool average = true, const double momentum = 0.1)
    Create the BatchNorm layer object for a specified number of input units.

void Reset()
    Reset the layer parameters.

template<typename eT>
void Forward(const arma::Mat<eT>& input, arma::Mat<eT>& output)
    Forward pass of the Batch Normalization layer.

template<typename eT>
void Backward(const arma::Mat<eT>& input, const arma::Mat<eT>& gy, arma::Mat<eT>& g)
    Backward pass through the layer.

template<typename eT>
void Gradient(const arma::Mat<eT>& input, const arma::Mat<eT>& error, arma::Mat<eT>& gradient)
    Calculate the gradient using the output delta and the input activations.

OutputDataType const& Parameters() const
    Get the parameters.

OutputDataType& Parameters()
    Modify the parameters.

OutputDataType const& OutputParameter() const
    Get the output parameter.

OutputDataType& OutputParameter()
    Modify the output parameter.

OutputDataType const& Delta() const
    Get the delta.

OutputDataType& Delta()
    Modify the delta.

OutputDataType const& Gradient() const
    Get the gradient.

OutputDataType& Gradient()
    Modify the gradient.

bool Deterministic() const
    Get the value of the deterministic parameter.

bool& Deterministic()
    Modify the value of the deterministic parameter.

OutputDataType const& TrainingMean() const
    Get the mean over the training data.

OutputDataType& TrainingMean()
    Modify the mean over the training data.

OutputDataType const& TrainingVariance() const
    Get the variance over the training data.

OutputDataType& TrainingVariance()
    Modify the variance over the training data.

size_t InputSize() const
    Get the number of input units / channels.

double Epsilon() const
    Get the epsilon value.

double Momentum() const
    Get the momentum value.

bool Average() const
    Get the average parameter.

size_t WeightSize() const
    Get the size of the weights.

template<typename Archive>
void serialize(Archive& ar, const uint32_t)
    Serialize the layer.
Detailed Description

Declaration of the Batch Normalization layer class.

The layer transforms the input data to zero mean and unit variance, and then scales and shifts the data by the parameters gamma and beta, respectively. These parameters are learnt by the network.

If deterministic is false (training), the mean and variance over the batch are calculated and the data is normalized with them. If it is set to true (testing), the mean and variance accrued over the training set are used instead.

For more information, refer to the following paper:

Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.
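To make the training-mode transformation concrete, here is a minimal, self-contained sketch of what the layer computes for a single feature across a batch. This is an illustration of the math described above, not mlpack's implementation; the function and parameter names are chosen for the example.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize one feature over a batch to zero mean and unit variance,
// then scale by gamma and shift by beta (illustrative sketch, not mlpack code).
std::vector<double> BatchNormForward(const std::vector<double>& x,
                                     double gamma, double beta,
                                     double eps = 1e-8)
{
  const std::size_t n = x.size();

  double mean = 0.0;
  for (double v : x) mean += v;
  mean /= n;

  double var = 0.0;
  for (double v : x) var += (v - mean) * (v - mean);
  var /= n;  // biased (population) variance over the batch

  std::vector<double> y(n);
  for (std::size_t i = 0; i < n; ++i)
    y[i] = gamma * (x[i] - mean) / std::sqrt(var + eps) + beta;
  return y;
}
```

In deterministic (testing) mode the same formula applies, but with the running mean and variance accrued during training substituted for the batch statistics.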
Template Parameters

    InputDataType   Type of the input data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).
    OutputDataType  Type of the output data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).
mlpack::ann::BatchNorm< InputDataType, OutputDataType >::BatchNorm()

Create the BatchNorm object.
mlpack::ann::BatchNorm< InputDataType, OutputDataType >::BatchNorm(
    const size_t size,
    const double eps = 1e-8,
    const bool average = true,
    const double momentum = 0.1)
Create the BatchNorm layer object for a specified number of input units.

Parameters
    size      The number of input units / channels.
    eps       The epsilon added to the variance to ensure numerical stability.
    average   If true, a cumulative average is used to update the running parameters; otherwise momentum is used.
    momentum  Parameter used to update the running mean and variance.
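The difference between the two update modes selected by average can be sketched as follows. RunningStat and its Update method are hypothetical names introduced for this example, and mlpack's exact update code may differ; the sketch only illustrates the cumulative-average versus momentum behavior the parameters describe.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Illustrative sketch (not mlpack's code) of the two running-statistic
// update rules: a cumulative average over all batches seen so far, or an
// exponential moving average driven by `momentum`.
struct RunningStat
{
  double value = 0.0;     // running mean (or variance)
  std::size_t count = 0;  // number of batches folded in so far

  void Update(double batchStat, bool average, double momentum)
  {
    ++count;
    // Cumulative average: every batch gets equal weight 1 / count.
    // Momentum: recent batches are weighted by `momentum`.
    const double factor = average ? 1.0 / count : momentum;
    value = (1.0 - factor) * value + factor * batchStat;
  }
};
```

With average = true every batch contributes equally regardless of when it was seen; with average = false, older batches decay geometrically at rate (1 - momentum).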
void mlpack::ann::BatchNorm< InputDataType, OutputDataType >::Backward(
    const arma::Mat< eT >& input,
    const arma::Mat< eT >& gy,
    arma::Mat< eT >& g)

Backward pass through the layer.

Parameters
    input  The input activations.
    gy     The backpropagated error.
    g      The calculated gradient.
void mlpack::ann::BatchNorm< InputDataType, OutputDataType >::Forward(
    const arma::Mat< eT >& input,
    arma::Mat< eT >& output)

Forward pass of the Batch Normalization layer.

Transforms the input data to zero mean and unit variance, scales the data by a factor gamma and shifts it by beta.

Parameters
    input   Input data for the layer.
    output  Resulting output activations.
void mlpack::ann::BatchNorm< InputDataType, OutputDataType >::Gradient(
    const arma::Mat< eT >& input,
    const arma::Mat< eT >& error,
    arma::Mat< eT >& gradient)

Calculate the gradient using the output delta and the input activations.

Parameters
    input     The input activations.
    error     The calculated error.
    gradient  The calculated gradient.
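Assuming the usual parameterization y = gamma * xhat + beta described in the class overview, the gradients with respect to gamma and beta take the standard form sketched below for a single feature. BatchNormGradient is an illustrative name; this is a sketch of the standard math, not mlpack's implementation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Standard batch-norm parameter gradients for one feature (illustrative
// sketch, not mlpack code): d(gamma) accumulates error * normalized input,
// d(beta) accumulates the raw error, matching y = gamma * xhat + beta.
std::pair<double, double> BatchNormGradient(const std::vector<double>& input,
                                            const std::vector<double>& error,
                                            double eps = 1e-8)
{
  const std::size_t n = input.size();

  double mean = 0.0;
  for (double v : input) mean += v;
  mean /= n;

  double var = 0.0;
  for (double v : input) var += (v - mean) * (v - mean);
  var /= n;

  double dGamma = 0.0, dBeta = 0.0;
  for (std::size_t i = 0; i < n; ++i)
  {
    const double xhat = (input[i] - mean) / std::sqrt(var + eps);
    dGamma += error[i] * xhat;  // gradient w.r.t. the scale gamma
    dBeta += error[i];          // gradient w.r.t. the shift beta
  }
  return {dGamma, dBeta};
}
```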