Scholarship 22/02208-7 - Decomposition methods, Variational analysis - BV FAPESP

Contemporary variants of splitting methods for large-scale optimization

Grant number: 22/02208-7
Support Opportunities: Scholarships abroad - Research Internship - Doctorate
Start date: September 01, 2022
End date: February 28, 2023
Field of knowledge: Physical Sciences and Mathematics - Mathematics - Applied Mathematics
Principal Investigator: Paulo José da Silva e Silva
Grantee:Felipe Eduardo Atenas Maldonado
Supervisor: Jonathan Eckstein
Host Institution: Instituto de Matemática, Estatística e Computação Científica (IMECC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil
Institution abroad: Rutgers, The State University of New Jersey, New Brunswick, United States
Associated to the scholarship: 19/20023-1 - Stochastic variational analysis applied to the energy industry, BP.DR

Abstract

The project proposes to advance operator splitting methods along three related fronts, one theoretical and two practical. While not dependent on the theoretical work, the two practical subprojects are designed to take full advantage of any advances obtained in the theoretical portion of the effort. The research topics described below are challenging and at the cutting edge of mathematical optimization. As such, some parts of this research program may extend beyond the six-month visit, taking the form of a long-distance collaboration with the host researcher.

The principal theoretical idea of the proposed project is to extend the candidate's prior work on the convergence of descent algorithms to certain forms of operator splitting, with the intention of developing practical splitting algorithms with favorable asymptotic convergence properties. The candidate will study how to adapt various kinds of operator splitting iterations to the descent framework he recently developed with coauthors, and will extend that theory to obtain schemes that exhibit a superlinear rate of convergence. This goal may prove challenging because the analysis of operator splitting methods tends to be based on reducing the distance to the set of solution points rather than on reducing objective function values. Nevertheless, operator splitting methods use subgradient vectors in a manner similar to descent methods, and some of the techniques developed by the host researcher may prove helpful. Generalizations and adaptations of the developed descent framework will also be considered as necessary. The general interest in a unified theory for obtaining superlinear rates in nonsmooth convex optimization is supported by previous work developed separately for specific settings. Such is the case for the VU-bundle method, which considers a certain subsequence of "very serious steps". More recently, quadratic convergence of the prox-linear scheme under a sharpness condition was established in the literature. The ability of projective splitting methods to accommodate iteration-by-iteration changes to the proximal parameter may prove helpful in this effort, along with the theory of variable-metric quasi-Fejér sequences. A more recent tool is the superlinear directions approach, corresponding to a Newton-type algorithm for finding fixed points of nonexpansive operators. The proposed effort will draw on these and related works as appropriate.

There are two practical applications in mind: very-large-scale linear programming, and the acceleration of progressive hedging and related methods for stochastic programming. The former subproject will explore the possibility of using first-order methods and splitting techniques as an alternative to the simplex and Newton barrier methods for solving large-scale linear programs. The latter will investigate improvements to the implementation and convergence of progressive-hedging-like stochastic optimization algorithms. The study will include the related "projective hedging" algorithm developed by the host researcher and coauthors, and in particular how to dynamically adjust its parameters to enhance "tail" behavior without sacrificing theoretical convergence. (AU)
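For readers unfamiliar with the class of methods discussed above, a canonical example of an operator splitting iteration is the Douglas-Rachford scheme for minimizing a sum f + g of two convex functions. The display below is a standard textbook formulation with proximal parameter \(\lambda > 0\), given only for context; it is not the specific descent framework or projective splitting scheme proposed in the abstract.

\[
x^{k+1} = \operatorname{prox}_{\lambda f}(z^k), \qquad
y^{k+1} = \operatorname{prox}_{\lambda g}\bigl(2x^{k+1} - z^k\bigr), \qquad
z^{k+1} = z^k + y^{k+1} - x^{k+1},
\]
where
\[
\operatorname{prox}_{\lambda f}(z) = \operatorname*{arg\,min}_{x} \Bigl\{ f(x) + \tfrac{1}{2\lambda}\lVert x - z\rVert^2 \Bigr\}.
\]

The iteration handles f and g through separate proximal steps, which is what makes splitting attractive at large scale: each prox subproblem is typically far easier than the original sum.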
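Progressive hedging, mentioned in the closing paragraph, also admits a compact illustration. The Python sketch below is a generic version of the classical Rockafellar-Wets iteration on a hypothetical scalar toy problem (the data p, a, b and the penalty r are assumptions of this example); it is not the project's implementation, nor the host researcher's "projective hedging" variant.

import numpy as np

# Progressive hedging sketch on a toy problem: minimize the expected
# scenario cost f_s(x) = 0.5*a[s]*x**2 - b[s]*x over a scalar x.
# Hypothetical data, chosen so each subproblem has a closed form.
p = np.array([0.3, 0.5, 0.2])   # scenario probabilities
a = np.array([2.0, 1.0, 4.0])   # quadratic coefficients (all positive)
b = np.array([1.0, 3.0, 2.0])   # linear coefficients
r = 1.0                         # proximal (penalty) parameter

x = b / a                       # start from the scenario-wise minimizers
w = np.zeros_like(x)            # multipliers; sum_s p[s]*w[s] stays zero
x_bar = p @ x                   # implementable (consensus) point

for k in range(1000):
    # Scenario subproblem: argmin_x f_s(x) + w[s]*x + (r/2)*(x - x_bar)**2,
    # solvable in closed form because f_s is quadratic.
    x = (b - w + r * x_bar) / (a + r)
    x_bar = p @ x               # project onto the nonanticipative subspace
    w = w + r * (x - x_bar)     # dual update pushing scenarios to consensus
    if np.max(np.abs(x - x_bar)) < 1e-10:
        break

# Sanity check: the direct minimizer of the expected cost is
# x* = (sum_s p[s]*b[s]) / (sum_s p[s]*a[s]).
print(f"iterations: {k + 1}, x_bar = {x_bar:.6f}, exact = {p @ b / (p @ a):.6f}")

The penalty r here plays the role of the proximal parameter the abstract proposes to adjust dynamically, a choice closely tied to the "tail" behavior the proposed parameter tuning targets.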
