Abstract
In this paper, we analyse the worst-case complexity of nonlinear stepsize control (NSC) algorithms for solving convex smooth unconstrained optimization problems. We show that, to drive the norm of the gradient below a given positive ε, such methods take at most (Formula presented.) iterations, so their complexity bound is on par with that of gradient descent methods for the same class of problems. Since the NSC algorithm generalizes several methods, including trust-region and adaptive cubic regularization methods, this bound holds automatically for those methods as well.
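To give a concrete feel for the class of methods the abstract describes, below is a minimal sketch of a trust-region-style iteration in the spirit of nonlinear stepsize control: a trial step bounded by a stepsize parameter, an actual-versus-predicted decrease ratio, and a stepsize update driven by that ratio. The function names, update constants, and Cauchy-type step here are illustrative assumptions, not the paper's exact algorithm or its complexity analysis.

```python
import numpy as np

# Illustrative sketch only: a generic trust-region-style loop in the spirit
# of nonlinear stepsize control (NSC), applied to a simple convex quadratic.
# All names and update rules are assumptions for illustration.

def f(x):
    return 0.5 * x @ x      # convex smooth test function

def grad(x):
    return x                # its gradient

def nsc_minimize(x0, eps=1e-6, delta0=1.0, eta=0.1, max_iter=10_000):
    x, delta = x0.astype(float), delta0
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:        # stop once the gradient norm is below eps
            return x, k
        # Cauchy-type trial step: steepest descent clipped to the stepsize bound
        s = -min(delta / np.linalg.norm(g), 1.0) * g
        pred = -(g @ s + 0.5 * s @ s)       # predicted decrease of a quadratic model
        rho = (f(x) - f(x + s)) / pred      # actual-vs-predicted decrease ratio
        if rho >= eta:                      # successful step: accept and enlarge
            x, delta = x + s, 2.0 * delta
        else:                               # unsuccessful step: shrink the stepsize
            delta *= 0.5
    return x, max_iter

x_star, iters = nsc_minimize(np.array([3.0, -4.0]))
```

Counting the iterations performed before `np.linalg.norm(grad(x))` drops below ε is exactly the quantity the worst-case complexity bound in the paper controls.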
| Original language | English |
|---|---|
| Journal | Optimization |
| DOIs | |
| Publication status | Accepted/In press - Oct 2020 |
Keywords
- 49M37
- 65K05
- 90C25
- 90C30
- convex smooth unconstrained optimization
- nonlinear stepsize control algorithms
- worst-case complexity