mlpack
The SSE (Sum of Squared Errors) loss is a loss function that measures the quality of the predicted response values in the node of each XGBoost tree.
#include <sse_loss.hpp>
Public Member Functions
SSELoss (const double alpha, const double lambda)

template<typename VecType>
VecType::elem_type InitialPrediction (const VecType &values)
    Returns the initial prediction for gradient boosting.

template<typename MatType, typename WeightVecType>
double OutputLeafValue (const MatType &, const WeightVecType &)
    Returns the output value for the leaf in the tree.

double Evaluate (const size_t begin, const size_t end)
    Calculates the gain from begin to end.

template<bool UseWeights, typename MatType, typename WeightVecType>
double Evaluate (const MatType &input, const WeightVecType &)
    Calculates the gain of the node before splitting.
The SSE (Sum of Squared Errors) loss is a loss function that measures the quality of the predicted response values in the node of each XGBoost tree.
It is also a useful measure for comparing the spread of two distributions. Training minimizes this value.
Loss = (1/2) * (Observed - Predicted)^2
Evaluate (const size_t begin, const size_t end) [inline]

Calculates the gain from begin to end.

Parameters:
    begin    The begin index of the range over which to calculate the gain.
    end      The end index of the range over which to calculate the gain.
Evaluate (const MatType &input, const WeightVecType &) [inline]

Calculates the gain of the node before splitting.

It also initializes the gradients and hessians that are used later for finding the split. UseWeights and the weights argument are ignored here; they exist only to keep the API consistent.

Parameters:
    input    A 2D matrix whose first row stores the true observed values and whose second row stores the predictions at the current boosting step.