Sequential Quantum Gate Decomposer  v1.9.3
Powerful decomposition of general unitaries into one- and two-qubit gates
Grad_Descend Class Reference

A class implementing gradient-descent iterations with inexact line search. More...

#include <grad_descend.h>


Public Member Functions

 Grad_Descend (void(*f_pointer)(Matrix_real, void *, double *, Matrix_real &), void *meta_data_in)
 Constructor of the class. More...
 
 Grad_Descend (void(*f_pointer)(Matrix_real, void *, double *, Matrix_real &), void(*export_pointer)(double, Matrix_real &, void *), void *meta_data_in)
 Constructor of the class. More...
 
double Start_Optimization (Matrix_real &x, long maximal_iterations_in=5001)
 Call this method to start the optimization. More...
 
 ~Grad_Descend ()
 Destructor of the class. More...
 

Protected Member Functions

void get_f_ang_gradient (Matrix_real &x, double &f, Matrix_real &g)
 Call this method to obtain the cost function and its gradient at the position given by x. More...
 
void get_Maximal_Line_Search_Step (Matrix_real &search_direction, double &maximal_step, double &search_direction__grad_overlap)
 Call this method to obtain the maximal step size during the line search. More...
 
virtual void get_search_direction (Matrix_real &g, Matrix_real &search_direction, double &search_direction__grad_overlap)
 Method to get the search direction in the next line search. More...
 
void line_search (Matrix_real &x, Matrix_real &g, Matrix_real &search_direction, Matrix_real &x0_search, Matrix_real &g0_search, double &maximal_step, double &d__dot__g, double &f)
 Call to perform inexact line search terminated with Wolfe 1st and 2nd conditions. More...
 
virtual void Optimize (Matrix_real &x, double &f)
 Call this method to start the optimization process. More...
 

Protected Attributes

void(* costfnc__and__gradient )(Matrix_real x, void *params, double *f, Matrix_real &g)
 function pointer to evaluate the cost function and its gradient vector More...
 
void(* export_fnc )(double, Matrix_real &, void *)
 function pointer to export intermediate results during the optimization process More...
 
long function_call_count
 number of function calls during the optimization process More...
 
long maximal_iterations
 maximal count of iterations during the optimization More...
 
void * meta_data
 additional data needed to evaluate the cost function More...
 
double num_precision
 numerical precision used in the calculations More...
 
enum solver_status status
 status of the solver More...
 
int variable_num
 number of independent variables in the problem More...
 

Detailed Description

A class implementing gradient-descent iterations with inexact line search.

Definition at line 31 of file grad_descend.h.

Constructor & Destructor Documentation

◆ Grad_Descend() [1/2]

Grad_Descend::Grad_Descend ( void(*)(Matrix_real, void *, double *, Matrix_real &)  f_pointer,
void *  meta_data_in 
)

Constructor of the class.

Parameters
f_pointer: A function pointer (x, meta_data, f, grad) to evaluate the cost function and its gradient. The cost function value and the gradient vector are returned by reference via the last two arguments.
meta_data_in: Void pointer to additional meta data needed to evaluate the cost function.
Returns
An instance of the class

Definition at line 31 of file common/grad_descend.cpp.

◆ Grad_Descend() [2/2]

Grad_Descend::Grad_Descend ( void(*)(Matrix_real, void *, double *, Matrix_real &)  f_pointer,
void(*)(double, Matrix_real &, void *)  export_pointer,
void *  meta_data_in 
)

Constructor of the class.

Parameters
f_pointer: A function pointer (x, meta_data, f, grad) to evaluate the cost function and its gradient. The cost function value and the gradient vector are returned by reference via the last two arguments.
export_pointer: A function pointer (f, x, meta_data) to export intermediate results during the optimization.
meta_data_in: Void pointer to additional meta data needed to evaluate the cost function.
Returns
An instance of the class

Definition at line 57 of file common/grad_descend.cpp.

◆ ~Grad_Descend()

Grad_Descend::~Grad_Descend ( )

Destructor of the class.

Definition at line 488 of file common/grad_descend.cpp.

Member Function Documentation

◆ get_f_ang_gradient()

void Grad_Descend::get_f_ang_gradient ( Matrix_real x,
double &  f,
Matrix_real g 
)
protected

Call this method to obtain the cost function and its gradient at the position given by x.

Parameters
x: The array of the current coordinates.
f: The value of the cost function, returned via this argument.
g: The gradient of the cost function at x.

Definition at line 443 of file common/grad_descend.cpp.


◆ get_Maximal_Line_Search_Step()

void Grad_Descend::get_Maximal_Line_Search_Step ( Matrix_real search_direction,
double &  maximal_step,
double &  search_direction__grad_overlap 
)
protected

Call this method to obtain the maximal step size during the line search.

providing at least 2*PI periodicity, unless the search direction component in a specific direction is too small.

Parameters
search_direction: The search direction.
maximal_step: The maximal allowed step in the search direction, returned via this argument.
search_direction__grad_overlap: The overlap of the gradient with the search direction to test the downhill direction.


Definition at line 401 of file common/grad_descend.cpp.


◆ get_search_direction()

void Grad_Descend::get_search_direction ( Matrix_real g,
Matrix_real search_direction,
double &  search_direction__grad_overlap 
)
protectedvirtual

Method to get the search direction in the next line search.

Parameters
g: The gradient at the current coordinates.
search_direction: The search direction, returned via this argument (it is -g).
search_direction__grad_overlap: The overlap of the gradient with the search direction to test the downhill direction.

Reimplemented in BFGS_Powell.

Definition at line 461 of file common/grad_descend.cpp.


◆ line_search()

void Grad_Descend::line_search ( Matrix_real x,
Matrix_real g,
Matrix_real search_direction,
Matrix_real x0_search,
Matrix_real g0_search,
double &  maximal_step,
double &  d__dot__g0,
double &  f 
)
protected

Call to perform inexact line search terminated with Wolfe 1st and 2nd conditions.

Parameters
x: The guess for the starting point. The coordinates of the optimum are returned via x.
g: The gradient at x. The updated gradient is returned via this argument.
search_direction: The search direction.
x0_search: Stores the starting point (automatically updated with the starting x during the execution).
g0_search: Stores the starting gradient (automatically updated during the execution).
maximal_step: The maximal allowed step in the search direction.
d__dot__g0: The overlap of the gradient with the search direction to test the downhill direction.
f: The value of the minimized cost function, returned via this argument.

Definition at line 127 of file common/grad_descend.cpp.


◆ Optimize()

void Grad_Descend::Optimize ( Matrix_real x,
double &  f 
)
protectedvirtual

Call this method to start the optimization process.

Parameters
x: The guess for the starting point. The coordinates of the optimum are returned via x.
f: The value of the minimized cost function.

Reimplemented in BFGS_Powell.

Definition at line 284 of file common/grad_descend.cpp.


◆ Start_Optimization()

double Grad_Descend::Start_Optimization ( Matrix_real x,
long  maximal_iterations_in = 5001 
)

Call this method to start the optimization.

Parameters
x: The initial solution guess.
maximal_iterations_in: The maximal number of function+gradient evaluations. On reaching this threshold, the solver returns with the current solution.

Definition at line 83 of file common/grad_descend.cpp.


Member Data Documentation

◆ costfnc__and__gradient

void(* Grad_Descend::costfnc__and__gradient) (Matrix_real x, void *params, double *f, Matrix_real &g)
protected

function pointer to evaluate the cost function and its gradient vector

Definition at line 50 of file grad_descend.h.

◆ export_fnc

void(* Grad_Descend::export_fnc) (double, Matrix_real &, void *)
protected

function pointer to export intermediate results during the optimization process

Definition at line 53 of file grad_descend.h.

◆ function_call_count

long Grad_Descend::function_call_count
protected

number of function calls during the optimization process

Definition at line 44 of file grad_descend.h.

◆ maximal_iterations

long Grad_Descend::maximal_iterations
protected

maximal count of iterations during the optimization

Definition at line 41 of file grad_descend.h.

◆ meta_data

void* Grad_Descend::meta_data
protected

additional data needed to evaluate the cost function

Definition at line 56 of file grad_descend.h.

◆ num_precision

double Grad_Descend::num_precision
protected

numerical precision used in the calculations

Definition at line 47 of file grad_descend.h.

◆ status

enum solver_status Grad_Descend::status
protected

status of the solver

Definition at line 59 of file grad_descend.h.

◆ variable_num

int Grad_Descend::variable_num
protected

number of independent variables in the problem

Definition at line 38 of file grad_descend.h.


The documentation for this class was generated from the following files:
grad_descend.h
common/grad_descend.cpp