Functions
void TasOptimization::ParticleSwarm(const ObjectiveFunction f, const TasDREAM::DreamDomain inside, const double inertia_weight, const double cognitive_coeff, const double social_coeff, const int num_iterations, ParticleSwarmState &state, const std::function<double(void)> get_random01 = TasDREAM::tsgCoreUniform01)
    Applies the classic particle swarm algorithm to a particle swarm state.

OptimizationStatus TasOptimization::GradientDescent(const GradientFunctionSingle &grad, const double stepsize, const int max_iterations, const double tolerance, std::vector<double> &state)
    Applies the constant step-size gradient descent algorithm for functions with unbounded domains.

OptimizationStatus TasOptimization::GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
    Applies the adaptive gradient descent algorithm on an unrestricted domain.

OptimizationStatus TasOptimization::GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad, const ProjectionFunctionSingle &proj, const double increase_coeff, const double decrease_coeff, const int max_iterations, const double tolerance, GradientDescentState &state)
    Applies the adaptive gradient descent algorithm on a restricted domain.
The optimization algorithms are written in a functional programming style: each algorithm is a free function applied to a combination of an objective functional and an optimization state. The state and the algorithm are split so that different functionals can be used with a single state in a multi-fidelity paradigm. An example would be the use of a sparse grid surrogate for the first few steps of the process and switching to the full model for the last few iterations.
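For instance, the pattern can be sketched as follows (illustrative only, not part of the generated documentation; the batch ObjectiveFunction signature, the ParticleSwarmState constructor, and initializeParticlesInsideBox() are assumptions based on the TasOptimization headers, and the surrogate/full-model pair is a cheap stand-in):

    #include "TasmanianOptimization.hpp" // assumed umbrella header for the module
    #include <cmath>
    #include <vector>

    int main() {
        // Stand-ins for a sparse grid surrogate and the full model of f(x) = (x - 1)^2;
        // assumed batch convention: x packs the points, fval receives one value per point.
        TasOptimization::ObjectiveFunction surrogate =
            [](std::vector<double> const &x, std::vector<double> &fval) {
                for (size_t i = 0; i < fval.size(); i++)
                    fval[i] = (x[i] - 1.0) * (x[i] - 1.0);
            };
        TasOptimization::ObjectiveFunction full_model =
            [](std::vector<double> const &x, std::vector<double> &fval) {
                for (size_t i = 0; i < fval.size(); i++)
                    fval[i] = (x[i] - 1.0) * (x[i] - 1.0) + 0.01 * std::sin(20.0 * x[i]);
            };
        auto inside = [](std::vector<double> const &x) { return std::abs(x[0]) <= 3.0; };

        TasOptimization::ParticleSwarmState state(1, 30); // 1 dimension, 30 particles
        state.initializeParticlesInsideBox({-3.0}, {3.0});

        // One state, two functionals: cheap surrogate first, full model for the final steps.
        TasOptimization::ParticleSwarm(surrogate, inside, 0.5, 2.0, 2.0, 100, state);
        TasOptimization::ParticleSwarm(full_model, inside, 0.5, 2.0, 2.0, 10, state);
        return 0;
    }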
void TasOptimization::ParticleSwarm(const ObjectiveFunction f,
                                    const TasDREAM::DreamDomain inside,
                                    const double inertia_weight,
                                    const double cognitive_coeff,
                                    const double social_coeff,
                                    const int num_iterations,
                                    ParticleSwarmState &state,
                                    const std::function<double(void)> get_random01 = TasDREAM::tsgCoreUniform01)
Applies the classic particle swarm algorithm to a particle swarm state.
Runs num_iterations iterations of the particle swarm algorithm on a particle swarm state to minimize the function f over the domain inside. The parameters of the algorithm are inertia_weight, cognitive_coeff, and social_coeff. The uniform [0,1] random number generator used by the algorithm is get_random01.
Parameters
    f                Objective function to be minimized.
    inside           Indicates whether a given point is inside or outside of the domain of interest.
    inertia_weight   Inertia weight for the particle swarm algorithm.
    cognitive_coeff  Cognitive coefficient for the particle swarm algorithm.
    social_coeff     Social coefficient for the particle swarm algorithm.
    num_iterations   Number of iterations to perform.
    state            Holds the state of the particles, e.g., positions and velocities; see TasOptimization::ParticleSwarmState.
    get_random01     Uniform [0,1] random number generator; defaults to TasDREAM::tsgCoreUniform01.

Exceptions
    std::runtime_error   Thrown if either the positions or the velocities of the state have not been initialized.
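A minimal usage sketch (not from the generated page; the state construction and the getBestPosition() accessor are assumptions based on the TasOptimization headers, and the sphere objective and coefficient values are purely illustrative):

    #include "TasmanianOptimization.hpp" // assumed umbrella header for the module
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        // Batch objective for the 2-D sphere function: x_batch packs the points
        // (two entries per point), fval_batch receives one value per point.
        TasOptimization::ObjectiveFunction f =
            [](std::vector<double> const &x_batch, std::vector<double> &fval_batch) {
                for (size_t i = 0; i < fval_batch.size(); i++)
                    fval_batch[i] = x_batch[2*i] * x_batch[2*i] + x_batch[2*i+1] * x_batch[2*i+1];
            };
        // Domain of interest: the box [-2, 2]^2.
        auto inside = [](std::vector<double> const &x) {
            return std::abs(x[0]) <= 2.0 && std::abs(x[1]) <= 2.0;
        };

        TasOptimization::ParticleSwarmState state(2, 50); // 2 dimensions, 50 particles
        // Initializing positions and velocities avoids the std::runtime_error above.
        state.initializeParticlesInsideBox({-2.0, -2.0}, {2.0, 2.0});

        TasOptimization::ParticleSwarm(f, inside, 0.5, 2.0, 2.0, 200, state);

        std::vector<double> best = state.getBestPosition();
        std::cout << "best = (" << best[0] << ", " << best[1] << ")\n";
        return 0;
    }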
OptimizationStatus TasOptimization::GradientDescent(const GradientFunctionSingle &grad,
                                                    const double stepsize,
                                                    const int max_iterations,
                                                    const double tolerance,
                                                    std::vector<double> &state)
Applies the constant step-size gradient descent algorithm for functions with unbounded domains.
Minimizes a function with gradient grad over an unconstrained domain. Work is performed until the stationarity residual reaches the desired tolerance, or until max_iterations is reached. See also TasOptimization::computeStationarityResidual().
Parameters
    grad            Gradient of the objective functional.
    stepsize        Step-size of the algorithm.
    max_iterations  Maximum number of iterations to perform.
    tolerance       Stationarity tolerance; the algorithm terminates when the stationarity residual computed by TasOptimization::computeStationarityResidual() is less than or equal to tolerance.
    state           Contains the current iterate and returns the best iterate. This algorithm does not use the adaptive step-size, so the state can be just a vector, but the signature also accepts a GradientDescentState with an automatic conversion.
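A short sketch of this variant on a smooth quadratic (illustrative; the GradientFunctionSingle signature and the OptimizationStatus members performed_iterations and residual are assumptions based on the TasOptimization headers):

    #include "TasmanianOptimization.hpp" // assumed umbrella header for the module
    #include <iostream>
    #include <vector>

    int main() {
        // Gradient of f(x, y) = (x - 1)^2 + (y + 2)^2.
        TasOptimization::GradientFunctionSingle grad =
            [](std::vector<double> const &x, std::vector<double> &g) {
                g[0] = 2.0 * (x[0] - 1.0);
                g[1] = 2.0 * (x[1] + 2.0);
            };

        std::vector<double> state = {0.0, 0.0}; // starting point, overwritten with the final iterate

        TasOptimization::OptimizationStatus status =
            TasOptimization::GradientDescent(grad, 0.1, 1000, 1.E-6, state);

        std::cout << "converged to (" << state[0] << ", " << state[1] << ") after "
                  << status.performed_iterations << " iterations, residual "
                  << status.residual << "\n";
        return 0;
    }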
OptimizationStatus TasOptimization::GradientDescent(const ObjectiveFunctionSingle &func,
                                                    const GradientFunctionSingle &grad,
                                                    const double increase_coeff,
                                                    const double decrease_coeff,
                                                    const int max_iterations,
                                                    const double tolerance,
                                                    GradientDescentState &state)
Applies the adaptive gradient descent algorithm on an unrestricted domain.
Similar to the constant step-size algorithm GradientDescent(), but with adaptive stepping. This method is guaranteed to converge to a stationary point if the gradient of func is Lipschitz continuous on its domain. The algorithm is known as non-proximal, i.e., no restriction is applied to the domain, which assumes either that the domain is unbounded or that the starting point and the minimum are far enough from the boundary that the restriction is not needed.
This variant requires the value of the functional that is to be minimized, in addition to the gradient. Two control parameters, increase_coeff and decrease_coeff, guide the rate at which the step-size is adjusted. The parameters can affect the convergence rate, but not the final result.
Parameters
    func            Objective function to be minimized.
    grad            Gradient of the objective function.
    increase_coeff  Controls how quickly the step-size is increased; should be greater than 1.
    decrease_coeff  Controls how quickly the step-size is decreased; should be greater than 1.
    max_iterations  Maximum number of iterations to perform.
    tolerance       Same as in the constant step-size GradientDescent().
    state           Holds the state of the gradient descent algorithm, including the current iterate and the current adaptive step-size.
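A sketch of the adaptive variant on the same quadratic (illustrative; the GradientDescentState constructor, which takes the initial iterate and an initial step-size, and its getX() accessor are assumptions based on the TasOptimization headers):

    #include "TasmanianOptimization.hpp" // assumed umbrella header for the module
    #include <iostream>
    #include <vector>

    int main() {
        // f(x, y) = (x - 1)^2 + (y + 2)^2 and its gradient.
        TasOptimization::ObjectiveFunctionSingle func =
            [](std::vector<double> const &x) {
                return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
            };
        TasOptimization::GradientFunctionSingle grad =
            [](std::vector<double> const &x, std::vector<double> &g) {
                g[0] = 2.0 * (x[0] - 1.0);
                g[1] = 2.0 * (x[1] + 2.0);
            };

        // Initial iterate (0, 0) with initial step-size 1.0 (assumed constructor).
        TasOptimization::GradientDescentState state({0.0, 0.0}, 1.0);

        // increase_coeff = decrease_coeff = 1.25; both affect speed, not the result.
        TasOptimization::GradientDescent(func, grad, 1.25, 1.25, 1000, 1.E-6, state);

        std::vector<double> x = state.getX(); // assumed accessor for the current iterate
        std::cout << "x = (" << x[0] << ", " << x[1] << ")\n";
        return 0;
    }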
OptimizationStatus TasOptimization::GradientDescent(const ObjectiveFunctionSingle &func,
                                                    const GradientFunctionSingle &grad,
                                                    const ProjectionFunctionSingle &proj,
                                                    const double increase_coeff,
                                                    const double decrease_coeff,
                                                    const int max_iterations,
                                                    const double tolerance,
                                                    GradientDescentState &state)
Applies the adaptive gradient descent algorithm on a restricted domain.
Similar to the adaptive step-size algorithm on the unrestricted domain, but uses a projection function to constrain each iterate to a user-defined domain.
The proj function computes the orthogonal projection of a point onto the domain, e.g., restricts the point to a hypercube.
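A sketch of the projected variant (illustrative; the ProjectionFunctionSingle signature is an assumption based on the TasOptimization headers). The same quadratic is constrained to the box [-1, 1]^2, where the orthogonal projection is a component-wise clamp:

    #include "TasmanianOptimization.hpp" // assumed umbrella header for the module
    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        TasOptimization::ObjectiveFunctionSingle func =
            [](std::vector<double> const &x) {
                return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
            };
        TasOptimization::GradientFunctionSingle grad =
            [](std::vector<double> const &x, std::vector<double> &g) {
                g[0] = 2.0 * (x[0] - 1.0);
                g[1] = 2.0 * (x[1] + 2.0);
            };
        // Orthogonal projection onto [-1, 1]^2: clamp each component.
        TasOptimization::ProjectionFunctionSingle proj =
            [](std::vector<double> const &x, std::vector<double> &p) {
                p[0] = std::min(1.0, std::max(-1.0, x[0]));
                p[1] = std::min(1.0, std::max(-1.0, x[1]));
            };

        TasOptimization::GradientDescentState state({0.0, 0.0}, 1.0);
        TasOptimization::GradientDescent(func, grad, proj, 1.25, 1.25, 1000, 1.E-6, state);

        // The unconstrained minimum (1, -2) lies outside the box; the
        // constrained minimum lands on the boundary at (1, -1).
        std::vector<double> x = state.getX();
        std::cout << "x = (" << x[0] << ", " << x[1] << ")\n";
        return 0;
    }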