TY - JOUR
T1 - Properties of the delayed weighted gradient method
AU - Andreani, Roberto
AU - Raydan, Marcos
N1 - Funding Information:
Roberto Andreani was financially supported by FAPESP (Projects 2013/05475-7 and 2017/18308-2) and CNPq (Project 301888/2017-5). Marcos Raydan was financially supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through Project UIDB/MAT/00297/2020 (Centro de Matemática e Aplicações). Roberto Andreani thanks the Operations Research Group at CMA (Centro de Matemática e Aplicações), FCT, NOVA University of Lisbon, Portugal, for their hospitality during a two-week visit in December 2019.
N1 - Acknowledgements:
We thank two anonymous referees for their comments and suggestions, which helped us improve the final version of this paper.
Publisher Copyright:
© 2020, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2021/1
Y1 - 2021/1
N2 - The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits a surprisingly, and perhaps unexpectedly, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that help explain the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the future use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
AB - The delayed weighted gradient method, recently introduced in Oviedo-Leon (Comput Optim Appl 74:729–746, 2019), is a low-cost gradient-type method that exhibits a surprisingly, and perhaps unexpectedly, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that help explain the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the future use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
KW - Conjugate gradient methods
KW - Finite termination
KW - Gradient methods
KW - Krylov subspace methods
KW - Smoothing techniques
UR - http://www.scopus.com/inward/record.url?scp=85091508626&partnerID=8YFLogxK
U2 - 10.1007/s10589-020-00232-9
DO - 10.1007/s10589-020-00232-9
M3 - Article
AN - SCOPUS:85091508626
SN - 0926-6003
VL - 78
SP - 167
EP - 180
JO - Computational Optimization and Applications
JF - Computational Optimization and Applications
IS - 1
ER -