mlpack
mlpack::sparse_coding::SparseCoding Class Reference

An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net).

#include <sparse_coding.hpp>

Public Member Functions

template<typename DictionaryInitializer = DataDependentRandomInitializer>
 SparseCoding (const arma::mat &data, const size_t atoms, const double lambda1, const double lambda2=0, const size_t maxIterations=0, const double objTolerance=0.01, const double newtonTolerance=1e-6, const DictionaryInitializer &initializer=DictionaryInitializer())
 Set the parameters to SparseCoding.
 
 SparseCoding (const size_t atoms=0, const double lambda1=0, const double lambda2=0, const size_t maxIterations=0, const double objTolerance=0.01, const double newtonTolerance=1e-6)
 Set the parameters to SparseCoding.
 
template<typename DictionaryInitializer = DataDependentRandomInitializer>
double Train (const arma::mat &data, const DictionaryInitializer &initializer=DictionaryInitializer())
 Train the sparse coding model on the given dataset.
 
void Encode (const arma::mat &data, arma::mat &codes)
 Sparse code each point in the given dataset via LARS, using the current dictionary, and store the encoded data in the codes matrix.
 
double OptimizeDictionary (const arma::mat &data, const arma::mat &codes, const arma::uvec &adjacencies)
 Learn the dictionary via Newton's method based on the Lagrange dual.
 
void ProjectDictionary ()
 Project each atom of the dictionary back onto the unit ball, if necessary.
 
double Objective (const arma::mat &data, const arma::mat &codes) const
 Compute the objective function.
 
const arma::mat & Dictionary () const
 Access the dictionary.
 
arma::mat & Dictionary ()
 Modify the dictionary.
 
size_t Atoms () const
 Access the number of atoms.
 
size_t & Atoms ()
 Modify the number of atoms.
 
double Lambda1 () const
 Access the L1 regularization term.
 
double & Lambda1 ()
 Modify the L1 regularization term.
 
double Lambda2 () const
 Access the L2 regularization term.
 
double & Lambda2 ()
 Modify the L2 regularization term.
 
size_t MaxIterations () const
 Get the maximum number of iterations.
 
size_t & MaxIterations ()
 Modify the maximum number of iterations.
 
double ObjTolerance () const
 Get the objective tolerance.
 
double & ObjTolerance ()
 Modify the objective tolerance.
 
double NewtonTolerance () const
 Get the tolerance for Newton's method (dictionary optimization step).
 
double & NewtonTolerance ()
 Modify the tolerance for Newton's method (dictionary optimization step).
 
template<typename Archive >
void serialize (Archive &ar, const uint32_t)
 Serialize the sparse coding model.
 

Detailed Description

An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net).

Let d be the number of dimensions in the original space, m the number of training points, and k the number of atoms in the dictionary (the dimension of the learned feature space). The training data X is a d-by-m matrix where each column is a point and each row is a dimension. The dictionary D is a d-by-k matrix, and the sparse codes matrix Z is a k-by-m matrix. This class seeks to minimize the objective:

\[ \min_{D,Z} 0.5 \|X - D Z\|_{F}^2 + \lambda_1 \sum_{i=1}^m \|Z_i\|_1 + 0.5 \lambda_2 \sum_{i=1}^m \|Z_i\|_2^2 \]

subject to \( \|D_j\|_2 \leq 1 \) for \( 1 \leq j \leq k \), where typically \( \lambda_1 > 0 \) and \( \lambda_2 = 0 \).

This problem is solved by an algorithm that alternates between a dictionary learning step and a sparse coding step. The dictionary learning step updates the dictionary D using a Newton method based on the Lagrange dual (see the paper below for details). The sparse coding step involves solving a large number of sparse linear regression problems; this can be done efficiently using LARS, an algorithm that can solve the LASSO or the Elastic Net (papers below).

Here are those papers:

@incollection{lee2007efficient,
title = {Efficient sparse coding algorithms},
author = {Honglak Lee and Alexis Battle and Rajat Raina and Andrew Y. Ng},
booktitle = {Advances in Neural Information Processing Systems 19},
editor = {B. Sch\"{o}lkopf and J. Platt and T. Hoffman},
publisher = {MIT Press},
address = {Cambridge, MA},
pages = {801--808},
year = {2007}
}
@article{efron2004least,
title={Least angle regression},
author={Efron, B. and Hastie, T. and Johnstone, I. and Tibshirani, R.},
journal={The Annals of statistics},
volume={32},
number={2},
pages={407--499},
year={2004},
publisher={Institute of Mathematical Statistics}
}
@article{zou2005regularization,
title={Regularization and variable selection via the elastic net},
author={Zou, H. and Hastie, T.},
journal={Journal of the Royal Statistical Society Series B},
volume={67},
number={2},
pages={301--320},
year={2005},
publisher={Royal Statistical Society}
}

Note that the implementation here does not use the feature-sign search algorithm from Honglak Lee's paper, but instead uses the LARS algorithm suggested in that paper.

When Train() is called, the dictionary is initialized using the DictionaryInitializationPolicy class. Possible choices include the RandomInitializer, which provides an entirely random dictionary; the DataDependentRandomInitializer, which provides a random dictionary based loosely on characteristics of the dataset; and the NothingInitializer, which does not initialize the dictionary at all. With NothingInitializer, the user should set the dictionary via the Dictionary() mutator method instead.

Once a dictionary is trained with Train(), another matrix may be encoded with the Encode() function.

Template Parameters
DictionaryInitializationPolicy: The class used to initialize the dictionary; it must provide a 'void Initialize(const arma::mat& data, arma::mat& dictionary)' function.

Constructor & Destructor Documentation

◆ SparseCoding() [1/2]

template<typename DictionaryInitializer >
mlpack::sparse_coding::SparseCoding::SparseCoding(
    const arma::mat& data,
    const size_t atoms,
    const double lambda1,
    const double lambda2 = 0,
    const size_t maxIterations = 0,
    const double objTolerance = 0.01,
    const double newtonTolerance = 1e-6,
    const DictionaryInitializer& initializer = DictionaryInitializer())

Set the parameters to SparseCoding.

lambda2 defaults to 0. This constructor will train the model. If that is not desired, call the other constructor that does not take a data matrix. This constructor will also initialize the dictionary using the given DictionaryInitializer before training.

If you want to initialize the dictionary to a custom matrix, consider either writing your own DictionaryInitializer class (with void Initialize(const arma::mat& data, arma::mat& dictionary) function), or call the constructor that does not take a data matrix, then call Dictionary() to set the dictionary matrix to a matrix of your choosing, and then call Train() with NothingInitializer (i.e. Train<NothingInitializer>(data)).

Parameters
 data: Data matrix.
 atoms: Number of atoms in the dictionary.
 lambda1: Regularization parameter for the l1-norm penalty.
 lambda2: Regularization parameter for the l2-norm penalty.
 maxIterations: Maximum number of iterations to run the algorithm. If 0, the algorithm runs until convergence (or forever).
 objTolerance: Tolerance for the objective function. When an iteration of the algorithm produces an improvement smaller than this, the algorithm terminates.
 newtonTolerance: Tolerance for the Newton's method dictionary optimization step.
 initializer: The initializer to use.

◆ SparseCoding() [2/2]

mlpack::sparse_coding::SparseCoding::SparseCoding(
    const size_t atoms = 0,
    const double lambda1 = 0,
    const double lambda2 = 0,
    const size_t maxIterations = 0,
    const double objTolerance = 0.01,
    const double newtonTolerance = 1e-6)

Set the parameters to SparseCoding.

lambda2 defaults to 0. This constructor will not train the model, and a subsequent call to Train() will be required before the model can encode points with Encode().

Parameters
 atoms: Number of atoms in the dictionary.
 lambda1: Regularization parameter for the l1-norm penalty.
 lambda2: Regularization parameter for the l2-norm penalty.
 maxIterations: Maximum number of iterations to run the algorithm. If 0, the algorithm runs until convergence (or forever).
 objTolerance: Tolerance for the objective function. When an iteration of the algorithm produces an improvement smaller than this, the algorithm terminates.
 newtonTolerance: Tolerance for the Newton's method dictionary optimization step.

Member Function Documentation

◆ Encode()

void mlpack::sparse_coding::SparseCoding::Encode(
    const arma::mat& data,
    arma::mat& codes)

Sparse code each point in the given dataset via LARS, using the current dictionary, and store the encoded data in the codes matrix.

Parameters
 data: Input data matrix to be encoded.
 codes: Output codes matrix.

◆ OptimizeDictionary()

double mlpack::sparse_coding::SparseCoding::OptimizeDictionary(
    const arma::mat& data,
    const arma::mat& codes,
    const arma::uvec& adjacencies)

Learn the dictionary via Newton's method based on the Lagrange dual.

Parameters
 data: Data matrix.
 codes: Matrix of codes.
 adjacencies: Indices of the entries (unrolled column by column) of the codes matrix Z that are nonzero (the adjacency matrix for the bipartite graph of points and atoms).

Returns: The norm of the gradient of the Lagrange dual with respect to the dual variables.

◆ Train()

template<typename DictionaryInitializer >
double mlpack::sparse_coding::SparseCoding::Train(
    const arma::mat& data,
    const DictionaryInitializer& initializer = DictionaryInitializer())

Train the sparse coding model on the given dataset.

Parameters
 data: Data matrix on which to train.
 initializer: The initializer to use to set the initial dictionary.

Returns: The final objective value.
