The Lattice Overparametrization Paradigm for the Machine Learning of Lattice Operators

Author(s):
Marcondes, Diego; Barrera, Junior
Total number of authors: 2
Document type: Scientific article
Source: DISCRETE GEOMETRY AND MATHEMATICAL MORPHOLOGY, DGMM 2024; v. 14605, 13 pp., 2024-01-01.
Abstract

The machine learning of lattice operators has three possible bottlenecks. From a statistical standpoint, it is necessary to design a constrained class of operators, based on prior information, with low bias and low complexity relative to the sample size. From a computational perspective, there should be an efficient algorithm to minimize an empirical error over the class. From an understanding point of view, the properties of the learned operator need to be derived, so that its behavior can be theoretically understood. The statistical bottleneck can be overcome thanks to the rich literature on the representation of lattice operators, but there is no general learning algorithm for them. In this paper, we discuss a learning paradigm in which, by overparametrizing a class via elements in a lattice, an algorithm for minimizing functions over a lattice can be applied for learning. We present the stochastic lattice descent algorithm as a general algorithm for learning on constrained classes of operators, provided a lattice overparametrization of the class is fixed, and we discuss previous works that serve as proofs of concept. Moreover, if there are algorithms to compute the basis of an operator from its overparametrization, then its properties can be deduced and the understanding bottleneck is also overcome. This learning paradigm has three properties that modern methods based on neural networks lack: control, transparency and interpretability. Nowadays, there is an increasing demand for methods with these characteristics, and we believe that mathematical morphology is in a unique position to supply them. The lattice overparametrization paradigm could be a missing piece for it to achieve its full potential within modern machine learning. (AU)
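The abstract only names the stochastic lattice descent algorithm; the following toy Python sketch illustrates the general idea of descending a lattice to minimize empirical error, under the simplifying assumption of the smallest interesting setting: a windowed operator on binary signals given by a Boolean function on a 3-pixel window, i.e. an element of the Boolean lattice {0,1}^8. All names, parameters, and the toy data are illustrative, not taken from the paper.

```python
import random

WINDOW = 3  # window size; the function table has 2**WINDOW entries


def apply_operator(table, signal):
    """Apply the windowed operator to a binary signal (zero-padded)."""
    padded = [0] + list(signal) + [0]
    out = []
    for i in range(len(signal)):
        # Read the 3-pixel window as a 3-bit index into the table.
        idx = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        out.append(table[idx])
    return out


def empirical_error(table, samples):
    """Fraction of output pixels the operator gets wrong on the samples."""
    errors = total = 0
    for x, y in samples:
        for a, b in zip(apply_operator(table, x), y):
            errors += a != b
            total += 1
    return errors / total


def stochastic_lattice_descent(samples, epochs=200, neighbors=4, seed=0):
    """Greedy descent on the Boolean lattice: at each step, sample a few
    one-bit neighbors of the current function table and keep the best."""
    rng = random.Random(seed)
    table = [rng.randint(0, 1) for _ in range(2 ** WINDOW)]
    best_err = empirical_error(table, samples)
    for _ in range(epochs):
        candidates = []
        for _ in range(neighbors):
            flip = rng.randrange(len(table))
            cand = table.copy()
            cand[flip] ^= 1  # move to a lattice neighbor
            candidates.append((empirical_error(cand, samples), cand))
        err, cand = min(candidates, key=lambda t: t[0])
        if err <= best_err:  # accept lateral moves to escape plateaus
            best_err, table = err, cand
    return table, best_err


# Toy target: the erosion-like operator "output 1 iff all window pixels are 1".
target = [1 if i == 7 else 0 for i in range(2 ** WINDOW)]
rng_data = random.Random(42)
xs = [[rng_data.randint(0, 1) for _ in range(12)] for _ in range(8)]
samples = [(x, apply_operator(target, x)) for x in xs]
table, err = stochastic_lattice_descent(samples)
```

Because the empirical error decomposes over the entries of the function table, each accepted one-bit move weakly decreases the error, so the descent settles on a table consistent with all observed window patterns; in the paper's paradigm the same scheme would run on the lattice of overparametrizations of a constrained class rather than on raw function tables.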

FAPESP Process: 22/06211-2 - Machine learning via learning spaces, from theory to practice: how the lack of data may be mitigated by high computational power
Grantee: Diego Ribeiro Marcondes
Support type: Scholarships in Brazil - Post-Doctoral
FAPESP Process: 23/00256-7 - A priori information in neural networks
Grantee: Diego Ribeiro Marcondes
Support type: Scholarships abroad - Research Internship - Post-Doctoral
FAPESP Process: 20/06950-4 - Research and Development Center on Live Knowledge
Grantee: João Eduardo Ferreira
Support type: Research Grants - Problem-Oriented Research Centers in São Paulo