This is an implementation of the Downhill Simplex (Nelder & Mead, 1965) algorithm.
Public member functions:

template<class Function >
DownhillSimplex (const Function &func, const Vector< N > &c, Precision spread=1)
    Initialize the DownhillSimplex class.

template<class Function >
void restart (const Function &func, const Vector< N > &c, Precision spread)
    Set up the simplex with one point at c; the remaining points are made by moving a distance of spread from c along each axis-aligned unit vector.

bool finished ()
    Check to see if iteration should stop.

template<class Function >
void restart (const Function &func, Precision spread)
    Reset the simplex around the best current point.

const Simplex & get_simplex () const
    Return the simplex.

const Values & get_values () const
    Return the score at the vertices.

int get_best () const
    Get the index of the best vertex.

int get_worst () const
    Get the index of the worst vertex.

template<class Function >
void find_next_point (const Function &func)
    Perform one iteration of the downhill simplex algorithm.

template<class Function >
bool iterate (const Function &func)
    Perform one iteration of the downhill simplex algorithm, then return the result of not DownhillSimplex::finished.
template<int N = -1, typename Precision = double>
class TooN::DownhillSimplex< N, Precision >
This is an implementation of the Downhill Simplex (Nelder & Mead, 1965) algorithm.
This particular instance will minimize a given function.
The class maintains \(N+1\) points for an \(N\)-dimensional function, \(f\).
At each iteration, the following algorithm is performed:
- Find the worst (largest) point, \(x_w\).
- Find the centroid of the remaining points, \(x_0\).
- Let \(v = x_0 - x_w\)
- Compute a reflected point, \( x_r = x_0 + \alpha v\)
- If \(f(x_r)\) is better than the best point
- Expand the simplex by extending the reflection to \(x_e = x_0 + \rho \alpha v \)
- Replace \(x_w\) with the better of \(x_e\) and \(x_r\).
- Else, if \(f(x_r)\) is between the best and second-worst point
- Replace \(x_w\) with \(x_r\).
- Else, if \(f(x_r)\) is better than \(x_w\)
- Contract the simplex by computing \(x_c = x_0 + \gamma v\)
- If \(f(x_c) < f(x_r)\)
- Replace \(x_w\) with \(x_c\).
- If \(x_w\) has not been replaced, then shrink the simplex by a factor of \(\sigma\) around the best point.
This implementation uses:
- \(\alpha = 1\)
- \(\rho = 2\)
- \(\gamma = 1/2\)
- \(\sigma = 1/2\)
Example usage:
#include <TooN/optimization/downhill_simplex.h>
#include <iostream>
using namespace std;
using namespace TooN;
double sq(double x)
{
    return x*x;
}
double Rosenbrock(const Vector<2>& v)
{
    return sq(1 - v[0]) + 100 * sq(v[1] - sq(v[0]));
}
int main()
{
    Vector<2> starting_point = makeVector( -1, 1);
    DownhillSimplex<2> dh_fixed(Rosenbrock, starting_point);
    while(dh_fixed.iterate(Rosenbrock))
    {
        cout << dh_fixed.get_values()[dh_fixed.get_best()] << endl;
    }
    cout << dh_fixed.get_simplex()[dh_fixed.get_best()] << endl;
}
@param N The dimension of the function to optimize. As usual, the default value of <i>N</i> (-1) indicates
that the class is sized at run-time.