Properties of the delayed weighted gradient method

Roberto Andreani, Marcos Raydan

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)
31 Downloads (Pure)

Abstract

The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits a surprisingly, and perhaps unexpectedly, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the future use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
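To illustrate the finite-termination claim, below is a minimal NumPy sketch of the delayed weighted gradient iteration as described in the cited reference (Oviedo-Leon, 2019): a minimal-gradient step followed by a delayed correction against the iterate from two steps back. The function name `dwgm`, the tolerance, and the diagonal test problem are illustrative choices, not from the abstract; this is a sketch under those assumptions, not the authors' reference implementation.

```python
import numpy as np

def dwgm(A, b, x0, tol=1e-9, max_iter=100):
    """Sketch of the delayed weighted gradient method for
    min 0.5 x'Ax - b'x with A symmetric positive definite.
    Returns the final iterate and the number of iterations used."""
    x_prev, x = x0.copy(), x0.copy()          # x_{-1} = x_0
    g_prev = A @ x0 - b                        # g_{-1} = g_0
    g = g_prev.copy()
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            return x, k
        w = A @ g
        t = (g @ w) / (w @ w)                  # minimal-gradient step length
        y = x - t * g                          # auxiliary point
        r = g - t * w                          # gradient at y (exact for quadratics)
        d = g_prev - r
        beta = (g_prev @ d) / (d @ d)          # minimizes ||(1-beta) g_prev + beta r||
        x_new = x_prev + beta * (y - x_prev)   # delayed update from x_{k-1}
        g_new = g_prev + beta * (r - g_prev)   # matching gradient recurrence
        x_prev, x = x, x_new
        g_prev, g = g, g_new
    return x, max_iter
```

On, say, a 6 × 6 diagonal Hessian with the three distinct eigenvalues {1, 2, 3}, the gradient norm should drop to roundoff level after three iterations, in line with the finite-termination result stated above.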

Original language: English
Pages (from-to): 167-180
Number of pages: 14
Journal: Computational Optimization and Applications
Volume: 78
Issue number: 1
DOIs
Publication status: Published - Jan 2021

Keywords

  • Conjugate gradient methods
  • Finite termination
  • Gradient methods
  • Krylov subspace methods
  • Smoothing techniques
