template<class Func, class Deriv>
ConjugateGradient(const Vector<Size>& start, const Func& func, const Deriv& deriv)
    Initialize the ConjugateGradient class with sensible values.

template<class Func>
ConjugateGradient(const Vector<Size>& start, const Func& func, const Vector<Size>& deriv)
    Initialize the ConjugateGradient class with sensible values.

void init(const Vector<Size>& start, const Precision& func, const Vector<Size>& deriv)
    Initialize the ConjugateGradient class with sensible values.

template<class Func>
void find_next_point(const Func& func)
    Perform a linesearch from the current point (x) along the current conjugate vector (h).

bool finished()
    Check to see if iteration should stop.

void update_vectors_PR(const Vector<Size>& grad)
    After an iteration, update the gradient and conjugate vectors using the Polak-Ribiere equations.

template<class Func, class Deriv>
bool iterate(const Func& func, const Deriv& deriv)
    Use this function to iterate over the optimization.
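The update_vectors_PR step uses the Polak-Ribiere equations. A standalone sketch of that update (illustrative only, not TooN's code; TooN's internal sign conventions may differ) is:

```cpp
#include <array>
#include <cstddef>

// Polak-Ribiere conjugate-direction update (hypothetical standalone sketch).
// Given the new gradient grad, the previous gradient old_g and the previous
// search direction old_h, returns the new descent direction
//   h = -grad + beta * old_h,  beta = grad.(grad - old_g) / (old_g.old_g).
template <std::size_t N>
std::array<double, N> polak_ribiere(const std::array<double, N>& grad,
                                    const std::array<double, N>& old_g,
                                    const std::array<double, N>& old_h)
{
    double num = 0, den = 0;
    for (std::size_t i = 0; i < N; ++i) {
        num += grad[i] * (grad[i] - old_g[i]);
        den += old_g[i] * old_g[i];
    }
    double beta = (den > 0) ? num / den : 0;
    if (beta < 0) beta = 0;  // common safeguard: restart with steepest descent

    std::array<double, N> h;
    for (std::size_t i = 0; i < N; ++i)
        h[i] = -grad[i] + beta * old_h[i];
    return h;
}
```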
const int size
    Dimensionality of the space.

Vector<Size> g
    Gradient vector used by the next call to iterate().

Vector<Size> h
    Conjugate vector to be searched along in the next call to iterate().

Vector<Size> minus_h
    Negative of h; this must be passed into a function that takes a reference, so it cannot be a temporary.

Vector<Size> old_g
    Gradient vector used to compute \(h\) in the last call to iterate().

Vector<Size> old_h
    Conjugate vector searched along in the last call to iterate().

Vector<Size> x
    Current position (best known point).

Vector<Size> old_x
    Previous best known point (not set at construction).

Precision y
    Function value at \(x\).

Precision old_y
    Function value at old_x.

Precision tolerance
    Tolerance used to determine if the optimization is complete. Defaults to the square root of machine precision.

Precision epsilon
    Additive term in the tolerance to prevent excessive iterations if \(x_\mathrm{optimal} = 0\). Known as ZEPS in Numerical Recipes. Defaults to 1e-20.

int max_iterations
    Maximum number of iterations. Defaults to size*100.

Precision bracket_initial_lambda
    Initial step size used in bracketing the minimum for the line search. Defaults to 1.

Precision linesearch_tolerance
    Tolerance used to determine if the linesearch is complete. Defaults to the square root of machine precision.

Precision linesearch_epsilon
    Additive term in the tolerance to prevent excessive iterations if \(x_\mathrm{optimal} = 0\). Known as ZEPS in Numerical Recipes. Defaults to 1e-20.

int linesearch_max_iterations
    Maximum number of iterations in the linesearch. Defaults to 100.

Precision bracket_epsilon
    Minimum size for the initial minimum bracketing. Below this, the system is assumed to have converged. Defaults to 1e-20.

int iterations
    Number of iterations performed.
template<int Size = Dynamic, class Precision = double>
struct TooN::ConjugateGradient< Size, Precision >
This class provides a nonlinear conjugate-gradient optimizer.
The following code snippet will perform an optimization on the Rosenbrock banana function in two dimensions:
double Rosenbrock(const Vector<2>& v)
{
    return sq(1 - v[0]) + 100 * sq(v[1] - sq(v[0]));
}

Vector<2> RosenbrockDerivatives(const Vector<2>& v)
{
    double x = v[0];
    double y = v[1];
    Vector<2> ret;
    ret[0] = -2 + 2*x - 400*(y - sq(x))*x;
    ret[1] = 200*y - 200*sq(x);
    return ret;
}

int main()
{
    ConjugateGradient<2> cg(makeVector(0,0), Rosenbrock, RosenbrockDerivatives);
    while(cg.iterate(Rosenbrock, RosenbrockDerivatives))
        cout << "y_" << cg.iterations << " = " << cg.y << endl;
    cout << "Optimal value: " << cg.y << endl;
}
The chances are that you will want to read the documentation for ConjugateGradient::ConjugateGradient and ConjugateGradient::iterate.
Linesearch is currently performed using golden-section search, and conjugate vector updates are performed using the Polak-Ribiere equations. There are many tunable parameters, and the internals are readily accessible, so alternative termination conditions etc. can easily be substituted. However, these will not usually be necessary.
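As an illustration of the golden-section step (a standalone sketch, not TooN's linesearch, which adds its own bracketing stage and tolerance handling):

```cpp
#include <cmath>

// Golden-section search: shrinks the bracket [a, b] by the inverse golden
// ratio each step until it is narrower than tol, then returns the midpoint
// as the approximate minimizer of f. Assumes f is unimodal on [a, b].
template <class Func>
double golden_section(const Func& f, double a, double b, double tol)
{
    const double r = (std::sqrt(5.0) - 1.0) / 2.0;  // inverse golden ratio, ~0.618
    double c = b - r * (b - a), d = a + r * (b - a);
    double fc = f(c), fd = f(d);
    while (b - a > tol) {
        if (fc < fd) {  // minimum lies in [a, d]
            b = d; d = c; fd = fc;
            c = b - r * (b - a); fc = f(c);
        } else {        // minimum lies in [c, b]
            a = c; c = d; fc = fd;
            d = a + r * (b - a); fd = f(d);
        }
    }
    return (a + b) / 2;
}
```

Each iteration reuses one of the two interior function evaluations, so only one new evaluation of f is needed per step.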
template<int Size = Dynamic, class Precision = double>
template<class Func, class Deriv>
bool iterate(const Func& func, const Deriv& deriv)

Use this function to iterate over the optimization.
Note that after iterate returns false, g, h, old_g and old_h will not have been updated. This function updates:
- x
- old_x
- y
- old_y
- iterations
- g*
- old_g*
- h*
- old_h*

*'d variables are not updated on the last iteration.

Parameters
    func     Functor returning the function value at a given point.
    deriv    Functor to compute derivatives at the specified point.

Returns
    Whether to continue.
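The per-call behaviour of iterate (a line search along h, then a conjugate update and a convergence test) can be sketched as a self-contained loop. The function below is an illustration of the algorithm only, not TooN's implementation; the fixed [0, 1] line-search bracket, the names and the tolerances are assumptions for this example.

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

static double dot(const Vec2& a, const Vec2& b) { return a[0]*b[0] + a[1]*b[1]; }

// One iterate-style loop: golden-section line search along the current
// conjugate direction h, then a Polak-Ribiere update of h (sketch only).
template <class Func, class Deriv>
Vec2 minimize_cg(const Func& f, const Deriv& df, Vec2 x,
                 int max_iterations = 100, double tolerance = 1e-10)
{
    Vec2 g = df(x);
    Vec2 h = { -g[0], -g[1] };  // initial direction: steepest descent
    for (int i = 0; i < max_iterations; ++i) {
        // Golden-section line search for f(x + lambda*h) over lambda in [0, 1].
        const double r = (std::sqrt(5.0) - 1.0) / 2.0;
        double a = 0, b = 1;
        auto phi = [&](double l) { return f(Vec2{ x[0] + l*h[0], x[1] + l*h[1] }); };
        double c = b - r*(b - a), d = a + r*(b - a), fc = phi(c), fd = phi(d);
        while (b - a > 1e-10) {
            if (fc < fd) { b = d; d = c; fd = fc; c = b - r*(b - a); fc = phi(c); }
            else         { a = c; c = d; fc = fd; d = a + r*(b - a); fd = phi(d); }
        }
        double lambda = (a + b) / 2;
        x = { x[0] + lambda*h[0], x[1] + lambda*h[1] };

        Vec2 old_g = g;
        g = df(x);
        if (dot(g, g) < tolerance) break;  // crude convergence test
        // Polak-Ribiere: beta = g.(g - old_g) / (old_g.old_g)
        double beta = (dot(g, g) - dot(g, old_g)) / dot(old_g, old_g);
        if (beta < 0) beta = 0;            // restart safeguard
        h = { -g[0] + beta*h[0], -g[1] + beta*h[1] };
    }
    return x;
}
```

In TooN, the same roles are played by find_next_point(), update_vectors_PR() and finished(), with the tunable members above controlling the brackets and tolerances.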