Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN: Tasmanian v8.2
tsgGradientDescent.hpp
/*
 * Copyright (c) 2022, Miroslav Stoyanov & Weiwei Kong
 *
 * This file is part of
 * Toolkit for Adaptive Stochastic Modeling And Non-Intrusive ApproximatioN: TASMANIAN
 *
 * Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following
 * conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions
 * and the following disclaimer in the documentation and/or other materials provided with the distribution.
 *
 * 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse
 * or promote products derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
 * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
 * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 *
 * UT-BATTELLE, LLC AND THE UNITED STATES GOVERNMENT MAKE NO REPRESENTATIONS AND DISCLAIM ALL WARRANTIES, BOTH EXPRESSED AND
 * IMPLIED. THERE ARE NO EXPRESS OR IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, OR THAT THE USE OF
 * THE SOFTWARE WILL NOT INFRINGE ANY PATENT, COPYRIGHT, TRADEMARK, OR OTHER PROPRIETARY RIGHTS, OR THAT THE SOFTWARE WILL
 * ACCOMPLISH THE INTENDED RESULTS OR THAT THE SOFTWARE OR ITS USE WILL NOT RESULT IN INJURY OR DAMAGE. THE USER ASSUMES
 * RESPONSIBILITY FOR ALL LIABILITIES, PENALTIES, FINES, CLAIMS, CAUSES OF ACTION, AND COSTS AND EXPENSES, CAUSED BY, RESULTING
 * FROM OR ARISING OUT OF, IN WHOLE OR IN PART THE USE, STORAGE OR DISPOSAL OF THE SOFTWARE.
 */

#ifndef __TASMANIAN_GRADIENT_DESCENT_HPP
#define __TASMANIAN_GRADIENT_DESCENT_HPP

#include "tsgOptimizationUtils.hpp"

/*!
 * \file tsgGradientDescent.hpp
 * \brief Gradient descent state and algorithms for the Tasmanian Optimization module.
 * \author Miroslav Stoyanov & Weiwei Kong
 */

namespace TasOptimization {

/*!
 * \brief Stores the information about a gradient descent run.
 *
 * Holds the current candidate point and the stepsize, which the adaptive
 * variants of GradientDescent() update as the run progresses.
 */
class GradientDescentState {
public:
    //! \brief The default constructor is NOT allowed.
    GradientDescentState() = delete;
    //! \brief Constructor for a gradient descent state with initial candidate \b x0 and stepsize \b initial_stepsize.
    GradientDescentState(const std::vector<double> &x0, const double initial_stepsize) :
        adaptive_stepsize(initial_stepsize), x(x0) {}

    //! \brief Copy constructor.
    GradientDescentState(const GradientDescentState &source) = default;
    //! \brief Move constructor.
    GradientDescentState(GradientDescentState &&source) = default;

    //! \brief Move assignment.
    GradientDescentState &operator=(GradientDescentState &&source) = default;
    //! \brief Copy assignment.
    GradientDescentState &operator=(GradientDescentState &source) = default;

    //! \brief Implicit conversion to the current candidate point \b x by reference.
    inline operator std::vector<double>&() {return x;}

    //! \brief Return the number of dimensions.
    inline size_t getNumDimensions() const {return x.size();}
    //! \brief Return the stepsize.
    inline double getAdaptiveStepsize() const {return adaptive_stepsize;}
    //! \brief Return the current candidate point.
    inline void getX(double x_out[]) const {std::copy_n(x.begin(), x.size(), x_out);}
    //! \brief Overload for when the output is a vector.
    inline std::vector<double> getX() const {return x;}

    //! \brief Set the stepsize.
    inline void setAdaptiveStepsize(const double new_stepsize) {adaptive_stepsize = new_stepsize;}
    //! \brief Set the current candidate point.
    inline void setX(const double x_new[]) {std::copy_n(x_new, x.size(), x.begin());}
    //! \brief Overload for when the input is a vector.
    inline void setX(const std::vector<double> &x_new) {
        checkVarSize("GradientDescentState::setX", "candidate point", x_new.size(), x.size());
        x = x_new;
    }

    friend OptimizationStatus GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad,
                                              const ProjectionFunctionSingle &proj, const double increase_coeff,
                                              const double decrease_coeff, const int max_iterations, const double tolerance,
                                              GradientDescentState &state);
    friend OptimizationStatus GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad,
                                              const double increase_coeff, const double decrease_coeff, const int max_iterations,
                                              const double tolerance, GradientDescentState &state);
    friend OptimizationStatus GradientDescent(const GradientFunctionSingle &grad, const double stepsize, const int max_iterations,
                                              const double tolerance, std::vector<double> &state);

private:
    double adaptive_stepsize;
    std::vector<double> x;
};

//! \brief Applies the constant step-size gradient descent algorithm for functions with unbounded domains.
OptimizationStatus GradientDescent(const GradientFunctionSingle &grad, const double stepsize, const int max_iterations,
                                   const double tolerance, std::vector<double> &state);

//! \brief Applies the adaptive gradient descent algorithm on an unrestricted domain.
OptimizationStatus GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad,
                                   const double increase_coeff, const double decrease_coeff, const int max_iterations,
                                   const double tolerance, GradientDescentState &state);

//! \brief Applies the adaptive gradient descent algorithm on a restricted domain.
OptimizationStatus GradientDescent(const ObjectiveFunctionSingle &func, const GradientFunctionSingle &grad,
                                   const ProjectionFunctionSingle &proj, const double increase_coeff,
                                   const double decrease_coeff, const int max_iterations, const double tolerance,
                                   GradientDescentState &state);

}

#endif
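
Below is a minimal usage sketch for the declarations above, exercising both the constant-stepsize and the adaptive variants on a two-dimensional quadratic. The func and grad lambdas follow the ObjectiveFunctionSingle and GradientFunctionSingle signatures from tsgOptimizationUtils.hpp; the umbrella include Tasmanian.hpp, the coefficient value 1.25, and the OptimizationStatus field performed_iterations are assumptions about the surrounding library, not guarantees made by this header.

// Sketch: minimize f(x) = (x1 - 1)^2 + (x2 + 2)^2, first with the constant
// stepsize variant, then with the adaptive variant. The include name and the
// OptimizationStatus field "performed_iterations" are assumptions.
#include "Tasmanian.hpp"

#include <iostream>
#include <vector>

int main() {
    // Objective matching the ObjectiveFunctionSingle signature.
    auto func = [](const std::vector<double> &x)->double {
        return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
    };
    // Gradient matching the GradientFunctionSingle signature.
    auto grad = [](const std::vector<double> &x, std::vector<double> &g)->void {
        g.resize(x.size()); // defensive, in case the output vector arrives empty
        g[0] = 2.0 * (x[0] - 1.0);
        g[1] = 2.0 * (x[1] + 2.0);
    };

    // Constant stepsize: the state is just the candidate point itself.
    std::vector<double> x = {0.0, 0.0};
    TasOptimization::GradientDescent(grad, 0.25, 1000, 1.E-6, x);

    // Adaptive stepsize: the state carries the point and the current stepsize;
    // 1.25 for both coefficients is an illustrative choice (both must exceed 1).
    TasOptimization::GradientDescentState state(std::vector<double>{0.0, 0.0}, 1.0);
    auto status = TasOptimization::GradientDescent(func, grad, 1.25, 1.25, 1000, 1.E-6, state);

    std::vector<double> xopt = state.getX(); // expect roughly (1, -2)
    std::cout << "iterations = " << status.performed_iterations
              << ", x = (" << xopt[0] << ", " << xopt[1] << ")\n";
    return 0;
}

Note that the implicit conversion operator lets a GradientDescentState be handed directly to the constant-stepsize overload, which expects the candidate point as a plain std::vector<double>.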
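The restricted-domain variant differs only in the extra projection callback, which maps a candidate to the closest feasible point per the ProjectionFunctionSingle signature. A sketch with a simple box constraint, under the same assumptions as the previous example:

// Sketch: the same objective restricted to the box [0, 1] x [0, 1]; the
// projection returns the closest feasible point.
#include "Tasmanian.hpp"

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    auto func = [](const std::vector<double> &x)->double {
        return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] + 2.0) * (x[1] + 2.0);
    };
    auto grad = [](const std::vector<double> &x, std::vector<double> &g)->void {
        g.resize(x.size());
        g[0] = 2.0 * (x[0] - 1.0);
        g[1] = 2.0 * (x[1] + 2.0);
    };
    // Clamp every coordinate to [0, 1].
    auto proj = [](const std::vector<double> &x, std::vector<double> &p)->void {
        p.resize(x.size());
        for(size_t i = 0; i < x.size(); i++)
            p[i] = std::min(1.0, std::max(0.0, x[i]));
    };

    TasOptimization::GradientDescentState state(std::vector<double>{0.5, 0.5}, 1.0);
    TasOptimization::GradientDescent(func, grad, proj, 1.25, 1.25, 1000, 1.E-6, state);

    std::vector<double> xopt = state.getX(); // constrained minimum at (1, 0)
    std::cout << "x = (" << xopt[0] << ", " << xopt[1] << ")\n";
    return 0;
}

With the box active, the unconstrained minimizer (1, -2) is infeasible, so the iteration settles on the boundary point (1, 0).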