Stores the information about a gradient descent run.
#include <tsgGradientDescent.hpp>
Public Member Functions
GradientDescentState ()=delete
The default constructor is NOT allowed.
GradientDescentState (const std::vector< double > &x0, const double initial_stepsize)
Constructor for a gradient descent state with the initial candidate x and stepsize lambda0.
GradientDescentState (const GradientDescentState &source)=default
Copy constructor.
GradientDescentState (GradientDescentState &&source)=default
Move constructor.
GradientDescentState & operator= (GradientDescentState &&source)=default
Move assignment.
GradientDescentState & operator= (const GradientDescentState &source)=default
Copy assignment.
operator std::vector< double > & ()
Implicit conversion to the current candidate x by reference.
size_t getNumDimensions () const
Return the number of dimensions.
double getAdaptiveStepsize () const
Return the stepsize.
void getX (double x_out[]) const
Return the current candidate point.
std::vector< double > getX () const
Overload for when the output is a vector.
void setAdaptiveStepsize (const double new_stepsize)
Set the stepsize.
void setX (const double x_new[])
Set the current candidate point.
void setX (const std::vector< double > &x_new)
Overload for when the input is a vector.
Friends
OptimizationStatus GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const ProjectionFunctionSingle &proj, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
Applies the adaptive gradient descent algorithm on a restricted domain.
OptimizationStatus GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
Applies the adaptive gradient descent algorithm on an unrestricted domain.
OptimizationStatus GradientDescent (const GradientFunctionSingle &grad, const double stepsize, const int max_iterations, const double tolerance, std::vector< double > &state)
Applies the constant step-size gradient descent algorithm for functions with unbounded domains.
Stores the information about a gradient descent run.
Nesterov, Y. (2013). Gradient methods for minimizing composite functions. Mathematical programming, 140(1), 125-161.
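The state object holds the current iterate and the adaptive step-size between calls. As an illustration of the documented interface, here is a simplified stand-in class (an assumption-based sketch mirroring the members listed above, not the Tasmanian implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in mirroring the documented GradientDescentState interface.
// Names and behavior are inferred from this page; this is NOT the Tasmanian code.
class ToyGradientDescentState {
public:
    ToyGradientDescentState() = delete; // default construction is not allowed
    ToyGradientDescentState(const std::vector<double> &x0, const double initial_stepsize)
        : x(x0), stepsize(initial_stepsize) {}

    operator std::vector<double>&() { return x; } // implicit conversion to the candidate
    std::size_t getNumDimensions() const { return x.size(); }
    double getAdaptiveStepsize() const { return stepsize; }
    std::vector<double> getX() const { return x; }
    void setAdaptiveStepsize(const double new_stepsize) { stepsize = new_stepsize; }
    void setX(const std::vector<double> &x_new) { x = x_new; }

private:
    std::vector<double> x; // current candidate point
    double stepsize;       // current adaptive step-size
};
```

The implicit conversion to `std::vector<double>&` is what lets a state be passed where the constant step-size overload expects a plain vector.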
Applies the adaptive gradient descent algorithm on a restricted domain.
Similar to the adaptive step-size algorithm on the unrestricted domain, but it uses a projection function to constrain each iterate to a user-defined domain.
The proj function computes the orthogonal projection of a point onto the domain, e.g., restricts the point to a hypercube.
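For a hypercube, the orthogonal projection simply clamps each coordinate independently. A minimal sketch (the `(x_in, x_out)` shape imitates the documented ProjectionFunctionSingle signature, but the function itself is illustrative, not part of the Tasmanian API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Orthogonal projection onto the hypercube [lower, upper]^d:
// each coordinate is clamped to the interval independently.
void projectToHypercube(const std::vector<double> &x_in, std::vector<double> &x_out,
                        const double lower, const double upper) {
    x_out.resize(x_in.size());
    for (std::size_t i = 0; i < x_in.size(); i++)
        x_out[i] = std::min(std::max(x_in[i], lower), upper);
}
```

A projection of this kind keeps every iterate feasible while changing only the coordinates that leave the box.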
Applies the adaptive gradient descent algorithm on an unrestricted domain.
Similar to the constant step-size algorithm GradientDescent(), but with adaptive stepping. This method is guaranteed to converge to a stationary point if the gradient of f is Lipschitz continuous on its domain. The algorithm is known as non-proximal, i.e., no restriction is applied to the domain, which implies either that the domain is unbounded, or that the starting point and the minimum are sufficiently far from the boundary so that no restriction is needed.
This variant requires the value of the functional that is to be minimized, in addition to the gradient. There are two control parameters, increase_coeff and decrease_coeff, that guide the rate at which the step-size is adjusted. The parameters can affect the convergence rate, but not the final result.
func | the objective function to be minimized
grad | the gradient of the objective function
increase_coeff | controls how quickly the step-size is increased; should be greater than 1
decrease_coeff | controls how quickly the step-size is decreased; should be greater than 1
max_iterations | the maximum number of iterations to perform
tolerance | same as in the constant step-size GradientDescent()
state | holds the state of the gradient descent algorithm, including the current iterate and the current adaptive step-size
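One plausible realization of such an adaptive scheme is a backtracking step-size search in the spirit of the Nesterov (2013) reference above: shrink the step by decrease_coeff until a sufficient-decrease test passes, and grow it by increase_coeff after each accepted step. The loop below is an assumption-labeled illustration of that idea, not Tasmanian's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative adaptive-stepsize gradient descent (NOT the Tasmanian code).
// A trial step is accepted only if it satisfies a sufficient-decrease test;
// otherwise the stepsize is divided by decrease_coeff and the step retried.
// Accepted steps multiply the stepsize by increase_coeff.
std::vector<double> adaptiveGradientDescent(
        const std::function<double(const std::vector<double>&)> &func,
        const std::function<std::vector<double>(const std::vector<double>&)> &grad,
        std::vector<double> x, double stepsize,
        const double increase_coeff, const double decrease_coeff,
        const int max_iterations, const double tolerance) {
    for (int it = 0; it < max_iterations; it++) {
        std::vector<double> g = grad(x);
        double gnorm2 = 0.0;
        for (double gi : g) gnorm2 += gi * gi;
        if (std::sqrt(gnorm2) <= tolerance) break; // stationarity reached
        const double fx = func(x);
        while (true) { // backtrack until sufficient decrease holds
            std::vector<double> trial(x.size());
            for (std::size_t i = 0; i < x.size(); i++) trial[i] = x[i] - stepsize * g[i];
            if (func(trial) <= fx - 0.5 * stepsize * gnorm2) {
                x = trial;
                stepsize *= increase_coeff; // accepted: try a larger step next time
                break;
            }
            stepsize /= decrease_coeff; // rejected: shrink the step and retry
        }
    }
    return x;
}
```

On a smooth objective the backtracking loop terminates once the step drops below the inverse Lipschitz constant of the gradient, which is what makes the two coefficients affect only the speed, not the answer.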
Applies the constant step-size gradient descent algorithm for functions with unbounded domains.
Minimizes a function with gradient grad over an unconstrained domain. Work is performed until the desired tolerance is reached (measured in the stationarity residual) or until max_iterations is exhausted. See also TasOptimization::computeStationarityResidual().
grad | the gradient of the objective functional
stepsize | the step-size of the algorithm
max_iterations | the maximum number of iterations to perform
tolerance | stationarity tolerance; the algorithm terminates when the stationarity residual computed by TasOptimization::computeStationarityResidual() is less than or equal to tolerance
state | contains the current iterate and returns the best iterate; this algorithm does not use the adaptive step-size, so the state can be just a vector, but the signature also accepts a GradientDescentState through the automatic conversion
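The fixed-step iteration x ← x − stepsize · grad(x) can be sketched in a few lines. The function below is an illustration of the documented behavior, using the gradient norm as a stand-in for the stationarity residual; it is not the Tasmanian implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative constant-stepsize gradient descent (NOT the Tasmanian code):
// iterate x <- x - stepsize * grad(x) until the gradient norm falls below
// tolerance or max_iterations is exhausted.
std::vector<double> constantGradientDescent(
        const std::function<std::vector<double>(const std::vector<double>&)> &grad,
        std::vector<double> x, const double stepsize,
        const int max_iterations, const double tolerance) {
    for (int it = 0; it < max_iterations; it++) {
        std::vector<double> g = grad(x);
        double gnorm2 = 0.0;
        for (double gi : g) gnorm2 += gi * gi;
        if (std::sqrt(gnorm2) <= tolerance) break; // stationary point found
        for (std::size_t i = 0; i < x.size(); i++) x[i] -= stepsize * g[i];
    }
    return x;
}
```

For a gradient that is L-Lipschitz, a stepsize below 1/L keeps the iteration stable; choosing it is the user's responsibility in this variant, which is what the adaptive overloads automate.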