Development of efficient methods for conic linear and nonlinear optimization problems and their applications

Grant number: 20/04585-7
Support type: Research Grants - Visiting Researcher Grant - International
Duration: February 01, 2021 - December 31, 2021
Field of knowledge: Physical Sciences and Mathematics - Mathematics - Applied Mathematics
Principal researcher: Ernesto Julián Goldberg Birgin
Grantee: Ernesto Julián Goldberg Birgin
Visiting researcher: Mituhiro Fukuda
Visiting researcher institution: Tokyo Institute of Technology, Japan
Home Institution: Instituto de Matemática e Estatística (IME), Universidade de São Paulo (USP), São Paulo, SP, Brazil
Associated research grant: 18/24293-0 - Computational methods in optimization, AP.TEM

Abstract

Conic optimization problems are mathematical models in which an objective function is minimized (or maximized) over finite-dimensional vector variables subject to constraints given by functional inequalities and, simultaneously, by membership in cones. When the objective function is linear the problem is called linear; otherwise, nonlinear. These problems are called convex if the objective function is convex (in the case of minimization) and the feasible set is convex. Examples include linear optimization, second-order cone optimization, and semidefinite optimization. In practice, these models can be used to solve problems in very diverse areas, such as problems involving polynomials, sensor networks, optimal power flow, fermionic second-order reduced density matrices, systems and control, regression, and so on. Interest in them has grown in particular because of machine learning applications.

In this project, we will focus mainly on three themes related to the computational solution of conic optimization problems. (i) Computational analysis of several first-order methods for non-smooth convex optimization problems. The areas of continuous optimization and machine learning pursue these methods with different ultimate goals: the former is mainly interested in methods with guaranteed theoretical complexity bounds for obtaining an approximate solution, while the latter seeks the methods with the best practical performance on specific problems. The objective of this part of the project is therefore to create a new paradigm for comparing several existing methods, including some proposed by the visiting professor that were never implemented, and subsequently to test them on several practical problems. (ii) Extension of the approximate Karush-Kuhn-Tucker conditions to first-order methods applied to semidefinite optimization problems.
These conditions, proposed almost 10 years ago, have been increasingly cited for their practicality and for requiring weaker assumptions than traditional constraint qualifications. The purpose of the project on this topic is to analyze and extend these conditions to methods capable of solving large-scale semidefinite optimization problems. (iii) Study of proximal subgradient methods for phase retrieval problems. This last theme has two objectives: (a) to remove some hypotheses and extend the results of a recent work by the visiting professor, in which the problem is reformulated as a difference of convex functions and solved by a proximal subgradient method; and (b) to identify practical problems, frequent for example in machine learning, in which such a reformulation becomes essential for superior performance, as confirmed by the visiting professor's recent work on phase retrieval problems. (AU)
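To illustrate the kind of first-order method for non-smooth convex problems mentioned in theme (i), the sketch below applies the classical proximal gradient method (ISTA) to a hypothetical l1-regularized least-squares instance. This is a generic textbook example chosen for illustration only, not one of the specific methods studied in the project; the data (`A`, `b`, `lam`) are made up.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration takes a gradient step on the smooth term, then applies
    the prox of the non-smooth l1 term.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative synthetic instance (all values are made up).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.0])    # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz const. of the gradient
x_hat = ista(A, b, lam=0.1, step=step)           # recovers x_true up to a small l1 bias
```

The prox step is what distinguishes this from plain gradient descent: the non-smooth l1 term is handled exactly through its proximal operator rather than through subgradients, which is what gives methods of this family their good complexity guarantees on non-smooth convex problems.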