A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we analyse the worst-case complexity of nonlinear stepsize control (NSC) algorithms for solving convex smooth unconstrained optimization problems. We show that, to drive the norm of the gradient below a given positive ε, such methods take at most (Formula presented.) iterations, so their complexity bound is on a par with that of gradient descent methods for the same class of problems. Since the NSC algorithm generalizes several methods, such as trust-region and adaptive cubic regularization methods, this bound holds automatically for those methods as well.
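To illustrate the family of methods the abstract refers to, the sketch below shows a basic trust-region iteration with a Cauchy (steepest-descent) step and an adaptive radius update, together with the stopping test on the gradient norm. This is only a minimal illustrative sketch of one member of that family under assumed parameter choices (the function names, the update factors 2.0 and 0.5, and the acceptance threshold eta are all illustrative assumptions); it is not the NSC algorithm analysed in the paper.

```python
import numpy as np

def trust_region_gradient(f, grad, x0, delta0=1.0, eta=0.1,
                          eps=1e-6, max_iter=1000):
    """Minimal trust-region iteration with a Cauchy (steepest-descent) step.

    Illustrative only: a simple member of the family of adaptive-stepsize
    methods (trust region, adaptive cubic regularization, ...) that the
    abstract groups under nonlinear stepsize control; not the paper's method.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for k in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:               # stopping test from the abstract: ||grad f(x)|| <= eps
            return x, k
        step = -(delta / gnorm) * g    # Cauchy step confined to the trust region
        pred = delta * gnorm           # predicted decrease of the linear model
        ared = f(x) - f(x + step)      # actual decrease of f
        rho = ared / pred              # agreement ratio drives the stepsize control
        if rho >= eta:                 # successful iteration: accept, enlarge region
            x, delta = x + step, 2.0 * delta
        else:                          # unsuccessful: shrink region, keep iterate
            delta *= 0.5
    return x, max_iter

# Usage on a smooth convex quadratic f(x) = 0.5 * x^T A x (illustrative data)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star, iters = trust_region_gradient(f, grad, x0=[5.0, -3.0])
print(iters, np.linalg.norm(grad(x_star)))
```

The point of the sketch is the acceptance/rejection mechanism: the ratio of actual to predicted decrease adapts the trust-region radius, which is the role the nonlinear stepsize control framework abstracts and generalizes.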

Original language: English
Journal: Optimization
DOIs
Publication status: Accepted/In press - Oct 2020

Keywords

  • 49M37
  • 65K05
  • 90C25
  • 90C30
  • convex smooth unconstrained optimization
  • Nonlinear stepsize control algorithms
  • worst-case complexity
