Doxygen 1.9.1
Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN: Tasmanian v8.2 (development)
TasOptimization::GradientDescentState Class Reference

Stores the information about a gradient descent run. More...

#include <tsgGradientDescent.hpp>

Public Member Functions

 GradientDescentState ()=delete
 The default constructor is NOT allowed.
 
 GradientDescentState (const std::vector< double > &x0, const double initial_stepsize)
 Constructor for a gradient descent state with the initial candidate x and stepsize lambda0.
 
 GradientDescentState (const GradientDescentState &source)=default
 Copy constructor.
 
 GradientDescentState (GradientDescentState &&source)=default
 Move constructor.
 
GradientDescentState & operator= (GradientDescentState &&source)=default
 Move assignment.
 
GradientDescentState & operator= (const GradientDescentState &source)=default
 Copy assignment.
 
 operator std::vector< double > & ()
 Implicit conversion to the current candidate x by reference.
 
size_t getNumDimensions () const
 Return the number of dimensions.
 
double getAdaptiveStepsize () const
 Return the stepsize.
 
void getX (double x_out[]) const
 Return the current candidate point.
 
std::vector< double > getX () const
 Overload for when the output is a vector.
 
void setAdaptiveStepsize (const double new_stepsize)
 Set the stepsize.
 
void setX (const double x_new[])
 Set the current candidate point.
 
void setX (const std::vector< double > &x_new)
 Overload for when the input is a vector.
 

Friends

OptimizationStatus GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const ProjectionFunctionSingle &proj, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
 Applies the adaptive gradient descent algorithm on a restricted domain. More...
 
OptimizationStatus GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
 Applies the adaptive gradient descent algorithm on an unrestricted domain. More...
 
OptimizationStatus GradientDescent (const GradientFunctionSingle &grad, const double stepsize, const int max_iterations, const double tolerance, std::vector< double > &state)
 Applies the constant step-size gradient descent algorithm for functions with unbounded domains. More...
 

Detailed Description

Stores the information about a gradient descent run.

class GradientDescentState
A state class associated with the Gradient Descent algorithm.
Constructors and Copy/Move assignment
Constructors require information about an initial candidate point and initial step-size. The class is movable and copyable by constructor or operator=. Once set, the number of dimensions cannot be modified.
Set and Get Candidate and Stepsize
The Gradient Descent algorithm has one parameter, the step-size, and tracks the current (best) point. Both have set/get methods with overloads for std::vector and raw arrays. Following common math conventions, the current best point is designated by X.
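The interface described above can be illustrated with a minimal standalone sketch (this is not the Tasmanian implementation; the class name here is hypothetical): the point and step-size are paired together, the default constructor is deleted, and the number of dimensions is fixed at construction.

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Minimal sketch of a gradient-descent state: pairs the current best point x
// with the adaptive step-size. Once constructed, the dimension cannot change.
class SketchState {
public:
    SketchState() = delete; // no default construction, mirroring the documented class
    SketchState(std::vector<double> const &x0, double initial_stepsize)
        : x(x0), stepsize(initial_stepsize) {}

    size_t getNumDimensions() const { return x.size(); }
    double getAdaptiveStepsize() const { return stepsize; }
    std::vector<double> getX() const { return x; }

    void setAdaptiveStepsize(double new_stepsize) { stepsize = new_stepsize; }
    void setX(std::vector<double> const &x_new) {
        // the number of dimensions is fixed after construction
        if (x_new.size() != x.size()) throw std::runtime_error("dimension mismatch");
        x = x_new;
    }

private:
    std::vector<double> x;
    double stepsize;
};
```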
Gradient Descent Algorithm
See TasOptimization::GradientDescent()
More information about the gradient descent algorithm can be found in the following paper:

Nesterov, Y. (2013). Gradient methods for minimizing composite functions. Mathematical programming, 140(1), 125-161.

Friends And Related Function Documentation

◆ GradientDescent [1/3]

OptimizationStatus GradientDescent ( const ObjectiveFunctionSingle &func,
const GradientFunctionSingle &grad,
const ProjectionFunctionSingle &proj,
const double increase_coeff,
const double decrease_coeff,
const int max_iterations,
const double tolerance,
GradientDescentState &state
)
friend

Applies the adaptive gradient descent algorithm on a restricted domain.

Similar to the adaptive step-size algorithm on the unrestricted domain, but it uses a projection function to constrain each iterate to a user-defined domain.

The proj function computes the orthogonal projection of a point inside the domain, e.g., restricts the point to a hypercube.
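As a concrete illustration of such a projection, the closest point inside a hypercube is obtained by clamping each coordinate independently. This is a standalone sketch (the function name and bounds are illustrative, not part of the Tasmanian API), written in the read-input/write-output style the documentation describes:

```cpp
#include <algorithm>
#include <vector>

// Orthogonal projection onto the hypercube [lo, hi]^d: for a box constraint,
// the nearest feasible point is found by clamping each coordinate separately.
// Mirrors the documented projection style: reads x, writes the result to proj.
void projectToHypercube(std::vector<double> const &x, std::vector<double> &proj,
                        double lo, double hi) {
    proj.resize(x.size());
    for (size_t i = 0; i < x.size(); i++)
        proj[i] = std::min(hi, std::max(lo, x[i]));
}
```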

◆ GradientDescent [2/3]

OptimizationStatus GradientDescent ( const ObjectiveFunctionSingle &func,
const GradientFunctionSingle &grad,
const double increase_coeff,
const double decrease_coeff,
const int max_iterations,
const double tolerance,
GradientDescentState &state
)
friend

Applies the adaptive gradient descent algorithm on an unrestricted domain.

Similar to the constant step-size algorithm GradientDescent(), but with adaptive stepping. This method is guaranteed to converge to a stationary point if the gradient of f is Lipschitz continuous on its domain. The algorithm is non-proximal, i.e., no restriction is applied to the domain; this assumes either that the domain is unbounded, or that the starting point and the minimum are sufficiently far from the boundary that the restriction is not needed.

This variant requires the value of the functional that is to be minimized, in addition to the gradient. There are two control parameters increase_coeff and decrease_coeff that guide the rate at which the step-size is adjusted. The parameters can affect the convergence rate, but not the final result.

Parameters
func  is the objective function to be minimized
grad  is the gradient of the objective function
increase_coeff  controls how quickly the step-size is increased; should be greater than 1
decrease_coeff  controls how quickly the step-size is decreased; should be greater than 1
max_iterations  maximum number of iterations to perform
tolerance  same as in GradientDescent()
state  holds the state of the gradient descent algorithm, including the current iterate and the current adaptive step-size.
Returns
TasOptimization::OptimizationStatus struct that contains information about the last iterate.
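The role of increase_coeff and decrease_coeff can be illustrated with a simplified one-dimensional sketch. This is not Tasmanian's exact update rule, only an assumption-laden illustration of the adaptive idea: a trial step is accepted when it achieves sufficient decrease, after which the step-size grows by increase_coeff; otherwise the step-size shrinks by decrease_coeff and the step is retried.

```cpp
#include <cmath>
#include <functional>

// Simplified adaptive step-size descent in 1D (illustrative, not the exact
// Tasmanian algorithm). Accept a trial step when it satisfies a
// sufficient-decrease condition; grow the step on success, shrink on failure.
// Both coefficients should be greater than 1.
double adaptiveDescent1d(std::function<double(double)> f,
                         std::function<double(double)> g,
                         double x, double stepsize,
                         double increase_coeff, double decrease_coeff,
                         int max_iterations, double tolerance) {
    for (int it = 0; it < max_iterations; it++) {
        double grad = g(x);
        if (std::fabs(grad) <= tolerance) break; // stationarity residual small enough
        double trial = x - stepsize * grad;
        // sufficient decrease: f(trial) <= f(x) - (stepsize/2) * grad^2
        if (f(trial) <= f(x) - 0.5 * stepsize * grad * grad) {
            x = trial;
            stepsize *= increase_coeff; // accepted: try a more aggressive step next
        } else {
            stepsize /= decrease_coeff; // rejected: shrink the step and retry
        }
    }
    return x;
}
```

Consistent with the documentation, the coefficients affect how fast the iterates settle, but not the point they settle on.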

◆ GradientDescent [3/3]

OptimizationStatus GradientDescent ( const GradientFunctionSingle &grad,
const double stepsize,
const int max_iterations,
const double tolerance,
std::vector< double > &state
)
friend

Applies the constant step-size gradient descent algorithm for functions with unbounded domains.

Minimize a function with gradient g over an unconstrained domain. Perform work until reaching the desired tolerance (measured in the stationarity residual), or until max_iterations is reached. See also TasOptimization::computeStationarityResidual()

Parameters
grad  gradient of the objective functional
stepsize  is the step-size of the algorithm
max_iterations  is the maximum number of iterations to perform
tolerance  stationarity tolerance; the algorithm terminates when the stationarity residual computed by TasOptimization::computeStationarityResidual() is less than or equal to tolerance
state  contains the current iterate and returns the best iterate. This algorithm does not use the adaptive step-size, so the state can be just a vector, but the signature accepts a GradientDescentState with an automatic conversion.
Returns
TasOptimization::OptimizationStatus struct that contains information about the last iterate.
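The constant step-size loop can be sketched in standalone form as follows (the function name is illustrative; the Tasmanian version takes a GradientFunctionSingle and reports an OptimizationStatus, and here the stationarity residual is taken to be the max-norm of the gradient as a simplifying assumption):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Illustrative constant step-size gradient descent: iterate
// x <- x - stepsize * grad(x) until the stationarity residual (here, the
// max-norm of the gradient) drops to tolerance, or the budget runs out.
std::vector<double> constantDescent(
        std::function<std::vector<double>(std::vector<double> const&)> grad,
        std::vector<double> x, double stepsize,
        int max_iterations, double tolerance) {
    for (int it = 0; it < max_iterations; it++) {
        std::vector<double> g = grad(x);
        double residual = 0.0;
        for (double gi : g) residual = std::max(residual, std::fabs(gi));
        if (residual <= tolerance) break; // reached desired stationarity
        for (size_t i = 0; i < x.size(); i++) x[i] -= stepsize * g[i];
    }
    return x;
}
```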

The documentation for this class was generated from the following file: tsgGradientDescent.hpp