An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

Author(s):
Andreani, R.; Oviedo, H.; Raydan, M.; Secchin, L. D.
Total Authors: 4
Document type: Journal article
Source: Journal of Computational and Applied Mathematics; v. 416, 19 pp., 2022-07-02.
Abstract

The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has some key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of minimizing the objective function on the entire explored subspace, DWGM minimizes the 2-norm of the gradient vector on that subspace. The main purpose of this study is to extend DWGM to strongly convex nonquadratic minimization problems while keeping a low computational cost per iteration. We incorporate the scheme into a tolerant line-search globalization strategy and show that it exhibits q-linear convergence to the unique global solution. We compare the proposed extended DWGM with state-of-the-art methods for large-scale unconstrained minimization problems, using some well-known strongly convex test problems as well as regularized logistic regression problems that arise in machine learning. Our numerical results indicate that the proposed scheme is promising and exhibits fast convergence. Moreover, we show through numerical experiments on CUTEst problems that the proposed extended DWGM can be very effective in accelerating the convergence of a well-established Barzilai-Borwein-type method when the iterates get close to minimizers of nonconvex functions. (AU)
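To make the gradient-norm-minimization idea concrete, here is a minimal sketch of the quadratic DWGM recurrence as it is usually presented in the DWGM literature: a short step whose length minimizes the 2-norm of the gradient along the gradient direction, followed by a "delayed" combination with the iterate from two steps back, again chosen to minimize the gradient norm. This is an illustrative reconstruction, not the authors' code; the function name dwgm_quadratic and the exact initialization are assumptions, and the nonquadratic extension with its tolerant line search proposed in the paper is not reproduced here.

import numpy as np

def dwgm_quadratic(A, b, x0, tol=1e-8, max_iter=1000):
    """Sketch of DWGM for f(x) = 0.5*x'Ax - b'x with A symmetric
    positive definite; one matrix-vector product per iteration."""
    x_prev, x = x0.copy(), x0.copy()
    g_prev = g = A @ x - b
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        w = A @ g                      # the only A-product this iteration
        t = (g @ w) / (w @ w)          # step length minimizing ||g - t*A g||_2
        y = x - t * g                  # short gradient step
        r = g - t * w                  # gradient at y, reusing w
        d = r - g_prev
        # delayed step: beta minimizes ||g_prev + beta*(r - g_prev)||_2
        beta = -(g_prev @ d) / (d @ d) if d @ d > 0 else 1.0
        x, x_prev = x_prev + beta * (y - x_prev), x
        g, g_prev = g_prev + beta * d, g
    return x, g, k

A quick sanity check on a random well-conditioned system, with hypothetical data:

rng = np.random.default_rng(0)
Q = rng.standard_normal((100, 100))
A = Q.T @ Q + 100 * np.eye(100)        # SPD test matrix
b = rng.standard_normal(100)
x, g, iters = dwgm_quadratic(A, b, np.zeros(100))

Note that both x and its gradient g are updated by the same two-term recurrence, so no second matrix-vector product is needed per iteration; this is the low per-iteration cost the abstract refers to preserving in the nonquadratic extension.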

FAPESP's process: 13/07375-0 - CeMEAI - Center for Mathematical Sciences Applied to Industry
Grantee: Francisco Louzada Neto
Support Opportunities: Research Grants - Research, Innovation and Dissemination Centers - RIDC
FAPESP's process: 17/18308-2 - Second-order optimality conditions and algorithms
Grantee: Gabriel Haeser
Support Opportunities: Regular Research Grants
FAPESP's process: 13/05475-7 - Computational methods in optimization
Grantee: Sandra Augusta Santos
Support Opportunities: Research Projects - Thematic Grants