Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN: Tasmanian v8.2 (development)
Optimization Algorithms

Functions

void TasOptimization::ParticleSwarm (const ObjectiveFunction f, const TasDREAM::DreamDomain inside, const double inertia_weight, const double cognitive_coeff, const double social_coeff, const int num_iterations, ParticleSwarmState &state, const std::function< double(void)> get_random01=TasDREAM::tsgCoreUniform01)
 Applies the classic particle swarm algorithm to a particle swarm state.
 
OptimizationStatus TasOptimization::GradientDescent (const GradientFunctionSingle &grad, const double stepsize, const int max_iterations, const double tolerance, std::vector< double > &state)
 Applies the constant step-size gradient descent algorithm for functions with unbounded domains.
 
OptimizationStatus TasOptimization::GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
 Applies the adaptive gradient descent algorithm on an unrestricted domain.
 
OptimizationStatus TasOptimization::GradientDescent (const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const ProjectionFunctionSingle &proj, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
 Applies the adaptive gradient descent algorithm on a restricted domain.
 

Detailed Description

The optimization algorithms are written in a functional programming style and operate on combinations of objective functionals and optimization states. The state and the algorithm are deliberately split so that different functionals can be used with a single state in a multi-fidelity paradigm, e.g., using a sparse grid surrogate for the first few steps of the process and switching to the full model for the last few iterations.
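The sketch below illustrates the state/algorithm split: one GradientDescentState is passed through two successive calls, first driven by a surrogate and then by the full model. The surrogate_* and full_* callables are hypothetical stand-ins, and the GradientDescentState(initial_x, initial_stepsize) constructor and the coefficient values are assumptions to check against the headers.

    #include "Tasmanian.hpp" // assumed master header; adjust for your installation
    #include <vector>

    // Illustrative sketch of the multi-fidelity paradigm (assumed API).
    void multiFidelitySketch(TasOptimization::ObjectiveFunctionSingle surrogate_func, // hypothetical surrogate
                             TasOptimization::GradientFunctionSingle  surrogate_grad,
                             TasOptimization::ObjectiveFunctionSingle full_func,      // hypothetical full model
                             TasOptimization::GradientFunctionSingle  full_grad) {
        // One state; assumed constructor: (initial iterate, initial step-size).
        TasOptimization::GradientDescentState state(std::vector<double>{0.0, 0.0}, 0.1);
        // The cheap surrogate drives the bulk of the iterations ...
        TasOptimization::GradientDescent(surrogate_func, surrogate_grad, 1.25, 1.25, 100, 1.E-3, state);
        // ... then the same state is refined against the full model.
        TasOptimization::GradientDescent(full_func, full_grad, 1.25, 1.25, 10, 1.E-6, state);
    }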

Function Documentation

◆ ParticleSwarm()

void TasOptimization::ParticleSwarm ( const ObjectiveFunction  f,
const TasDREAM::DreamDomain  inside,
const double  inertia_weight,
const double  cognitive_coeff,
const double  social_coeff,
const int  num_iterations,
ParticleSwarmState &  state,
const std::function< double(void)>  get_random01 = TasDREAM::tsgCoreUniform01 
)

Applies the classic particle swarm algorithm to a particle swarm state.

Runs num_iterations of the particle swarm algorithm on a particle swarm state to minimize the function f over the domain inside. The parameters of the algorithm are inertia_weight, cognitive_coeff, and social_coeff. The uniform [0,1] random number generator used by the algorithm is get_random01.

Parameters
    f: objective function to be minimized
    inside: indicates whether a given point is inside or outside of the domain of interest
    inertia_weight: inertia weight for the particle swarm algorithm
    cognitive_coeff: cognitive coefficient for the particle swarm algorithm
    social_coeff: social coefficient for the particle swarm algorithm
    num_iterations: number of iterations to perform
    state: holds the state of the particles, e.g., positions and velocities, see TasOptimization::ParticleSwarmState
    get_random01: random number generator, defaults to TasDREAM::tsgCoreUniform01
Exceptions
    std::runtime_error: if either the positions or the velocities of the state have not been initialized
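A minimal usage sketch follows, minimizing the two-dimensional sphere function over [-1,1]^2. The batch layout of ObjectiveFunction (points packed contiguously, with the output vector pre-sized by the caller), the ParticleSwarmState constructor, and the initializeParticlesInsideBox() and getBestPosition() members are assumptions based on this documentation; verify them against the headers.

    #include "Tasmanian.hpp" // assumed master header; adjust for your installation
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        int const num_dimensions = 2, num_particles = 50;

        // Batch objective (assumed layout): x_batch packs the points contiguously,
        // fval_batch is assumed pre-sized and receives one value per point.
        auto sphere = [=](std::vector<double> const &x_batch, std::vector<double> &fval_batch)->void {
            int num_points = (int) x_batch.size() / num_dimensions;
            for(int i=0; i<num_points; i++)
                fval_batch[i] = x_batch[i*num_dimensions] * x_batch[i*num_dimensions]
                              + x_batch[i*num_dimensions+1] * x_batch[i*num_dimensions+1];
        };
        // Domain indicator for the hypercube [-1, 1]^2.
        auto inside = [](std::vector<double> const &x)->bool {
            return std::abs(x[0]) <= 1.0 && std::abs(x[1]) <= 1.0;
        };

        TasOptimization::ParticleSwarmState state(num_dimensions, num_particles);
        state.initializeParticlesInsideBox({-1.0, -1.0}, {1.0, 1.0}); // avoids the std::runtime_error

        // Common textbook coefficients: inertia 0.5, cognitive/social 2.0.
        TasOptimization::ParticleSwarm(sphere, inside, 0.5, 2.0, 2.0, 200, state);

        std::vector<double> best = state.getBestPosition(); // assumed accessor
        std::cout << "best point: " << best[0] << " " << best[1] << "\n";
        return 0;
    }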

◆ GradientDescent() [1/3]

OptimizationStatus TasOptimization::GradientDescent ( const GradientFunctionSingle &  grad,
const double  stepsize,
const int  max_iterations,
const double  tolerance,
std::vector< double > &  state 
)

Applies the constant step-size gradient descent algorithm for functions with unbounded domains.

Minimizes a function with gradient grad over an unconstrained domain. The algorithm iterates until the desired tolerance (measured in the stationarity residual) is reached or until max_iterations is exhausted. See also TasOptimization::computeStationarityResidual().

Parameters
    grad: gradient of the objective functional
    stepsize: step-size of the algorithm
    max_iterations: maximum number of iterations to perform
    tolerance: stationarity tolerance; the algorithm terminates when the stationarity residual computed by TasOptimization::computeStationarityResidual() is less than or equal to tolerance
    state: contains the current iterate and returns the best iterate; this algorithm does not use the adaptive step-size, so the state can be just a vector, but the signature also accepts a GradientDescentState through an automatic conversion
Returns
TasOptimization::OptimizationStatus struct that contains information about the last iterate.
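A minimal sketch on a one-dimensional quadratic, f(x) = (x-1)^2 with gradient 2(x-1) and minimum at x = 1; with Lipschitz constant 2, the constant step-size 0.1 converges. The OptimizationStatus field names performed_iterations and residual are assumptions to verify against the headers.

    #include "Tasmanian.hpp" // assumed master header; adjust for your installation
    #include <iostream>
    #include <vector>

    int main() {
        // Gradient of f(x) = (x - 1)^2, minimized at x = 1.
        auto grad = [](std::vector<double> const &x, std::vector<double> &g)->void {
            g[0] = 2.0 * (x[0] - 1.0);
        };

        std::vector<double> state = {0.0}; // starting iterate; updated in place
        TasOptimization::OptimizationStatus status =
            TasOptimization::GradientDescent(grad, 0.1, 1000, 1.E-8, state);

        // performed_iterations and residual are assumed field names.
        std::cout << "iterations: " << status.performed_iterations
                  << ", residual: " << status.residual
                  << ", x = " << state[0] << "\n";
        return 0;
    }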

◆ GradientDescent() [2/3]

OptimizationStatus TasOptimization::GradientDescent ( const ObjectiveFunctionSingle &  func,
const GradientFunctionSingle &  grad,
const double  increase_coeff,
const double  decrease_coeff,
const int  max_iterations,
const double  tolerance,
GradientDescentState &  state 
)

Applies the adaptive gradient descent algorithm on an unrestricted domain.

Similar to the constant step-size algorithm GradientDescent(), but with adaptive stepping. This method is guaranteed to converge to a stationary point if the gradient of f is Lipschitz continuous on its domain. The algorithm is non-proximal, i.e., no restriction is applied to the iterates, which presumes either an unbounded domain or a starting point and minimum sufficiently far from the boundary that no restriction is needed.

This variant requires the value of the functional that is to be minimized, in addition to the gradient. There are two control parameters increase_coeff and decrease_coeff that guide the rate at which the step-size is adjusted. The parameters can affect the convergence rate, but not the final result.

Parameters
    func: objective function to be minimized
    grad: gradient of the objective function
    increase_coeff: controls how quickly the step-size is increased; should be greater than 1
    decrease_coeff: controls how quickly the step-size is decreased; should be greater than 1
    max_iterations: maximum number of iterations to perform
    tolerance: same as in the constant step-size GradientDescent()
    state: holds the state of the gradient descent algorithm, including the current iterate and the current adaptive step-size
Returns
TasOptimization::OptimizationStatus struct that contains information about the last iterate.
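The adaptive variant on the same quadratic as above; the GradientDescentState(initial_x, initial_stepsize) constructor and the getX() accessor are assumptions based on this documentation.

    #include "Tasmanian.hpp" // assumed master header; adjust for your installation
    #include <iostream>
    #include <vector>

    int main() {
        auto func = [](std::vector<double> const &x)->double {
            return (x[0] - 1.0) * (x[0] - 1.0); // f(x) = (x - 1)^2
        };
        auto grad = [](std::vector<double> const &x, std::vector<double> &g)->void {
            g[0] = 2.0 * (x[0] - 1.0);
        };

        // Assumed constructor: (initial iterate, initial step-size); the step-size
        // is adjusted on the fly by increase_coeff and decrease_coeff (both 1.25 here).
        TasOptimization::GradientDescentState state(std::vector<double>{0.0}, 0.5);
        auto status = TasOptimization::GradientDescent(func, grad, 1.25, 1.25, 1000, 1.E-8, state);

        std::vector<double> x = state.getX(); // assumed accessor
        std::cout << "x = " << x[0] << " after " << status.performed_iterations << " iterations\n";
        return 0;
    }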

◆ GradientDescent() [3/3]

OptimizationStatus TasOptimization::GradientDescent ( const ObjectiveFunctionSingle &  func,
const GradientFunctionSingle &  grad,
const ProjectionFunctionSingle &  proj,
const double  increase_coeff,
const double  decrease_coeff,
const int  max_iterations,
const double  tolerance,
GradientDescentState &  state 
)

Applies the adaptive gradient descent algorithm on a restricted domain.

Similar to the adaptive step-size algorithm on the unrestricted domain, but it uses a projection function to constrain each iterate to a user-defined domain.

The proj function computes the orthogonal projection of a point onto the domain, e.g., restricting the point to a hypercube.
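For instance, projecting each iterate onto the interval [-0.5, 0.5] moves the minimizer of (x-1)^2 to the boundary point x = 0.5. The sketch below reuses the assumed constructor, accessor, and status fields of the previous examples.

    #include "Tasmanian.hpp" // assumed master header; adjust for your installation
    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        auto func = [](std::vector<double> const &x)->double {
            return (x[0] - 1.0) * (x[0] - 1.0); // unconstrained minimum at x = 1
        };
        auto grad = [](std::vector<double> const &x, std::vector<double> &g)->void {
            g[0] = 2.0 * (x[0] - 1.0);
        };
        // Orthogonal projection onto the interval [-0.5, 0.5] (clamping).
        auto proj = [](std::vector<double> const &x, std::vector<double> &p)->void {
            p[0] = std::min(0.5, std::max(-0.5, x[0]));
        };

        TasOptimization::GradientDescentState state(std::vector<double>{0.0}, 0.5);
        auto status = TasOptimization::GradientDescent(func, grad, proj, 1.25, 1.25, 1000, 1.E-8, state);

        // The constrained minimizer sits on the boundary at x = 0.5.
        std::cout << "x = " << state.getX()[0] << ", residual: " << status.residual << "\n";
        return 0;
    }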