Sequential Quantum Gate Decomposer
v1.9.3
Powerful decomposition of general unitaries into one- and two-qubit gates
A class implementing the BFGS iterations. More...
#include <grad_descend.h>
Public Member Functions

Grad_Descend (void(*f_pointer)(Matrix_real, void *, double *, Matrix_real &), void *meta_data_in)
    Constructor of the class. More...
Grad_Descend (void(*f_pointer)(Matrix_real, void *, double *, Matrix_real &), void(*export_pointer)(double, Matrix_real &, void *), void *meta_data_in)
    Constructor of the class. More...
double Start_Optimization (Matrix_real &x, long maximal_iterations_in=5001)
    Call this method to start the optimization. More...
~Grad_Descend ()
    Destructor of the class. More...

Protected Member Functions

void get_f_ang_gradient (Matrix_real &x, double &f, Matrix_real &g)
    Call this method to obtain the cost function and its gradient at a given position x. More...
void get_Maximal_Line_Search_Step (Matrix_real &search_direction, double &maximal_step, double &search_direction__grad_overlap)
    Call this method to obtain the maximal step size during the line search. More...
virtual void get_search_direction (Matrix_real &g, Matrix_real &search_direction, double &search_direction__grad_overlap)
    Method to get the search direction in the next line search. More...
void line_search (Matrix_real &x, Matrix_real &g, Matrix_real &search_direction, Matrix_real &x0_search, Matrix_real &g0_search, double &maximal_step, double &d__dot__g, double &f)
    Call to perform an inexact line search terminated by the first and second Wolfe conditions. More...
virtual void Optimize (Matrix_real &x, double &f)
    Call this method to start the optimization process. More...

Protected Attributes

void(* costfnc__and__gradient )(Matrix_real x, void *params, double *f, Matrix_real &g)
    function pointer to evaluate the cost function and its gradient vector More...
void(* export_fnc )(double, Matrix_real &, void *)
    function pointer to export intermediate results of the optimization More...
long function_call_count
    number of function calls during the optimization process More...
long maximal_iterations
    maximal count of iterations during the optimization More...
void * meta_data
    additional data needed to evaluate the cost function More...
double num_precision
    numerical precision used in the calculations More...
enum solver_status status
    status of the solver More...
int variable_num
    number of independent variables in the problem More...
A class implementing the BFGS iterations.
Definition at line 31 of file grad_descend.h.
Grad_Descend::Grad_Descend (void(*)(Matrix_real, void *, double *, Matrix_real &) f_pointer, void *meta_data_in)

Constructor of the class.
f_pointer | A function pointer (x, meta_data, f, grad) to evaluate the cost function and its gradient. The cost function value and the gradient vector are returned via the last two arguments. |
meta_data_in | Void pointer to additional meta data needed to evaluate the cost function. |
Definition at line 31 of file common/grad_descend.cpp.
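The documented (x, meta_data, f, grad) callback pattern can be illustrated with a minimal, self-contained sketch. Here `Vec` is a hypothetical stand-in for the library's `Matrix_real` container, and `quadratic_cost` is an example cost function invented for illustration, not part of the library:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical stand-in for the library's Matrix_real container.
using Vec = std::vector<double>;

// Example cost function following the documented (x, meta_data, f, grad)
// pattern: f(x) = sum_i (x_i - 1)^2, with gradient g_i = 2*(x_i - 1).
// The cost value and the gradient are returned via the last two arguments.
void quadratic_cost(Vec x, void* /*meta_data*/, double* f, Vec& g) {
    *f = 0.0;
    g.assign(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double d = x[i] - 1.0;
        *f += d * d;
        g[i] = 2.0 * d;
    }
}
```

A pointer to such a function, together with a void pointer to any auxiliary data, is what the constructor expects.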
Grad_Descend::Grad_Descend (void(*)(Matrix_real, void *, double *, Matrix_real &) f_pointer, void(*)(double, Matrix_real &, void *) export_pointer, void *meta_data_in)

Constructor of the class.
f_pointer | A function pointer (x, meta_data, f, grad) to evaluate the cost function and its gradient. The cost function value and the gradient vector are returned via the last two arguments. |
export_pointer | A function pointer (f, x, meta_data) to export intermediate results during the optimization. |
meta_data_in | Void pointer to additional meta data needed to evaluate the cost function. |
Definition at line 57 of file common/grad_descend.cpp.
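A matching export callback can be sketched as follows. This is a hypothetical example: the library does not prescribe what the callback does with the data; here it simply records the cost history through the meta-data pointer (`Vec` and `ExportLog` are illustrative stand-ins):

```cpp
#include <vector>

// Hypothetical stand-in for the library's Matrix_real container.
using Vec = std::vector<double>;

// Hypothetical meta-data structure used to accumulate intermediate results.
struct ExportLog {
    std::vector<double> cost_history;
};

// Export callback following the documented (double, Matrix_real&, void*)
// shape: it receives the current cost value and the parameter vector.
void record_progress(double f, Vec& /*x*/, void* meta_data) {
    static_cast<ExportLog*>(meta_data)->cost_history.push_back(f);
}
```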
Grad_Descend::~Grad_Descend | ( | ) |
Destructor of the class.
Definition at line 488 of file common/grad_descend.cpp.
void Grad_Descend::get_f_ang_gradient (Matrix_real &x, double &f, Matrix_real &g)

protected

Call this method to obtain the cost function and its gradient at a given position x.
x | The array of the current coordinates |
f | The value of the cost function is returned via this argument. |
g | The gradient of the cost function at x. |
Definition at line 443 of file common/grad_descend.cpp.
void Grad_Descend::get_Maximal_Line_Search_Step (Matrix_real &search_direction, double &maximal_step, double &search_direction__grad_overlap)

protected

Call this method to obtain the maximal step size during the line search.
The maximal step provides at least a 2*PI periodicity, unless the search direction component along a specific coordinate is too small.
search_direction | The search direction. |
maximal_step | The maximal allowed step in the search direction, returned via this argument. |
search_direction__grad_overlap | The overlap of the gradient with the search direction to test the downhill direction. |
Definition at line 401 of file common/grad_descend.cpp.
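Since the variational parameters enter as rotation angles, a move of 2*PI along any single coordinate returns to an equivalent point. One way such a bound can be computed is sketched below, under the assumption that the cap is 2*PI divided by the largest search-direction component; this is an illustration of the idea, not necessarily the exact rule used in grad_descend.cpp:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

using Vec = std::vector<double>;  // stand-in for Matrix_real

// Sketch: bound the line-search step so that no coordinate moves farther
// than 2*PI, i.e. maximal_step * max_i |d_i| <= 2*PI. Directions whose
// largest component is below a numerical threshold yield no step.
double maximal_line_search_step(const Vec& search_direction,
                                double num_precision = 1e-10) {
    const double two_pi = 2.0 * std::acos(-1.0);
    double max_component = 0.0;
    for (double d : search_direction)
        max_component = std::max(max_component, std::abs(d));
    if (max_component < num_precision)
        return 0.0;  // numerically zero direction: no meaningful step
    return two_pi / max_component;
}
```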
virtual void Grad_Descend::get_search_direction (Matrix_real &g, Matrix_real &search_direction, double &search_direction__grad_overlap)

protected virtual

Method to get the search direction in the next line search.
g | The gradient at the current coordinates. |
search_direction | The search direction is returned via this argument (it is -g) |
search_direction__grad_overlap | The overlap of the gradient with the search direction to test downhill. |
Reimplemented in BFGS_Powell.
Definition at line 461 of file common/grad_descend.cpp.
void Grad_Descend::line_search (Matrix_real &x, Matrix_real &g, Matrix_real &search_direction, Matrix_real &x0_search, Matrix_real &g0_search, double &maximal_step, double &d__dot__g, double &f)

protected

Call to perform an inexact line search terminated by the first and second Wolfe conditions.
x | The guess for the starting point. The coordinates of the optimum are returned via x. |
g | The gradient at x. The updated gradient is returned via this argument. |
search_direction | The search direction. |
x0_search | Stores the starting point of the line search (automatically updated during the execution). |
g0_search | Stores the starting gradient of the line search (automatically updated during the execution). |
maximal_step | The maximal allowed step in the search direction |
d__dot__g | The overlap of the gradient with the search direction to test downhill. |
f | The value of the minimized cost function is returned via this argument |
Definition at line 127 of file common/grad_descend.cpp.
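The two Wolfe termination conditions can be written down explicitly. A sketch follows, using the strong form of the curvature condition; the constants c1 and c2 are typical textbook choices, not necessarily the values used in grad_descend.cpp:

```cpp
#include <cmath>

// Sufficient-decrease (1st Wolfe / Armijo) and strong curvature (2nd Wolfe)
// conditions for a step of size `alpha` along a search direction d, where
//   f0, d_dot_g0 : cost and directional derivative <d, grad f> at the start,
//   f1, d_dot_g1 : cost and directional derivative at the trial point.
bool wolfe_conditions_hold(double alpha, double f0, double d_dot_g0,
                           double f1, double d_dot_g1,
                           double c1 = 1e-4, double c2 = 0.9) {
    const bool sufficient_decrease = f1 <= f0 + c1 * alpha * d_dot_g0;
    const bool curvature = std::abs(d_dot_g1) <= c2 * std::abs(d_dot_g0);
    return sufficient_decrease && curvature;
}
```

For a downhill direction d_dot_g0 is negative; a step accepted by both conditions decreases the cost enough while flattening the directional derivative.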
virtual void Grad_Descend::Optimize (Matrix_real &x, double &f)

protected virtual

Call this method to start the optimization process.
x | The guess for the starting point. The coordinates of the optimum are returned via x. |
f | The value of the minimized cost function |
Reimplemented in BFGS_Powell.
Definition at line 284 of file common/grad_descend.cpp.
double Grad_Descend::Start_Optimization (Matrix_real &x, long maximal_iterations_in = 5001)
Call this method to start the optimization.
x | The initial solution guess. |
maximal_iterations_in | The maximal number of function+gradient evaluations. On reaching this threshold, the solver returns with the current solution. |
Definition at line 83 of file common/grad_descend.cpp.
void(* Grad_Descend::costfnc__and__gradient) (Matrix_real x, void *params, double *f, Matrix_real &g)

protected

function pointer to evaluate the cost function and its gradient vector
Definition at line 50 of file grad_descend.h.
void(* Grad_Descend::export_fnc) (double, Matrix_real &, void *)

protected

function pointer to export intermediate results of the optimization
Definition at line 53 of file grad_descend.h.
long Grad_Descend::function_call_count

protected

number of function calls during the optimization process
Definition at line 44 of file grad_descend.h.
long Grad_Descend::maximal_iterations

protected

maximal count of iterations during the optimization
Definition at line 41 of file grad_descend.h.
void* Grad_Descend::meta_data

protected

additional data needed to evaluate the cost function
Definition at line 56 of file grad_descend.h.
double Grad_Descend::num_precision

protected

numerical precision used in the calculations
Definition at line 47 of file grad_descend.h.
enum solver_status Grad_Descend::status

protected

status of the solver
Definition at line 59 of file grad_descend.h.
int Grad_Descend::variable_num

protected

number of independent variables in the problem
Definition at line 38 of file grad_descend.h.