
Prior information in neural networks

Grant number: 23/00256-7
Support Opportunities: Scholarships abroad - Research Internship - Post-doctor
Start date: August 01, 2023
End date: July 31, 2024
Field of knowledge: Physical Sciences and Mathematics - Computer Science
Principal Investigator: Junior Barrera
Grantee: Diego Ribeiro Marcondes
Supervisor: Ulisses Braga-Neto
Host Institution: Instituto de Matemática e Estatística (IME). Universidade de São Paulo (USP). São Paulo, SP, Brazil
Institution abroad: Texas A&M University, United States
Associated to the scholarship: 22/06211-2 - Machine learning via learning spaces, from theory to practice: how the lack of data may be mitigated by high computational power, BP.PD

Abstract

Although modern machine learning methods such as neural networks (NN) have brought countless benefits in recent years, the indiscriminate use of machine learning has raised issues often related to a lack of auditability and interpretability. Considerable research effort has been devoted to attaining interpretability, and a growing field that depends heavily on it is scientific machine learning (SciML), which brings together machine learning and scientific computation and, unlike traditional black-box machine learning methods, aims to deliver interpretable models compatible with scientific models. An important SciML method is the physics-informed neural network (PINN), an alternative to traditional numerical methods for solving partial differential equations that has had great success and remains an active area of research. The recent success of PINN raises the prospect of applying informed NN in other contexts by developing methods for inserting strong prior information into NN, so as to solve learning problems with interpretability and an understanding of the results. In this context, this project aims, through the development of informed NN, to study how prior information may be incorporated into NN during training, in order to better understand their behavior, to enhance their performance as learning models, and to apply NN as numerical solvers of scientific problems. In particular, we will study how optimization constraints may be inserted into the training of NN to encode prior information. We expect this research to produce methods of constraining NN that enable the construction of a Learning Space for NN. (AU)
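To make the idea of inserting prior information as an optimization constraint concrete, the following is a minimal illustrative sketch, not the project's actual method: a linear model is fit by gradient descent, and the prior knowledge that the true function satisfies f(0) = 1 is encoded as a soft penalty term added to the training loss, in the same spirit in which PINN add a physics-residual penalty. All names and values here are made up for the example.

```python
import numpy as np

# Illustrative sketch (assumption, not the project's method): insert the
# prior f(0) = 1 into training as a soft constraint on the intercept w0.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 1.0 + 2.0 * x + 0.05 * rng.standard_normal(50)  # data with f(0) = 1

w = np.zeros(2)   # model f(x) = w[0] + w[1] * x
lam = 10.0        # strength of the prior penalty lam * (w0 - 1)^2
lr = 0.01         # gradient-descent step size

for _ in range(2000):
    pred = w[0] + w[1] * x
    # gradient of the mean squared error term
    g0 = 2 * np.mean(pred - y)
    g1 = 2 * np.mean((pred - y) * x)
    # gradient of the constraint penalty, pulling w0 toward the prior value 1
    g0 += 2 * lam * (w[0] - 1.0)
    w -= lr * np.array([g0, g1])

print(w)  # intercept near 1 (consistent with the prior), slope near 2
```

The same pattern scales to NN: the penalty term is simply added to the training objective, so any constraint expressible as a differentiable function of the model can carry prior information into the optimization.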
