Interpretability of deep networks

Grant number: 17/06161-7
Support Opportunities: Scholarships in Brazil - Post-Doctoral
Effective date (Start): June 01, 2017
Effective date (End): December 31, 2021
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computing Methodologies and Techniques
Cooperation Agreement: Coordination of Improvement of Higher Education Personnel (CAPES)
Principal Investigator: André Carlos Ponce de Leon Ferreira de Carvalho
Grantee: Tiago Botari
Host Institution: Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo (USP), São Carlos, SP, Brazil
Associated research grant:13/07375-0 - CeMEAI - Center for Mathematical Sciences Applied to Industry, AP.CEPID
Associated scholarship(s):19/26617-0 - Unravelling the building blocks of deep learning, BE.EP.PD


Over the last years, several important advances have occurred in the field of artificial intelligence, particularly in machine learning, and applications have been developed in both academia and industry. Deep learning (DL), a class of machine learning algorithms, has shown high capacity, outperforming state-of-the-art techniques in tasks such as image and speech recognition and the simulation of game players. Despite its rapid success, DL still has many unexplored features and possibilities. For instance, the interpretability and representability of DL models, which are fundamental for their acceptance by industry, are not yet fully understood. To enable better applications and reach new developments, user trust must be improved. In this project, techniques to improve the interpretability of DL algorithms will be investigated and proposed. Intrinsic aspects of DL, such as its hierarchy, the generation of distinct representations, the composition of simple elements into complex ones, and the growing abstraction of the data, will guide the developments. Theoretical and computational tools will be used to create new routes for interpreting DL. Among these, a hybrid method will be created that combines the capacity of DL with the interpretability of other algorithms, such as decision tree induction algorithms. Additionally, this project will investigate an application in astronomy: the estimation of galaxy redshifts from photometric data. Redshift estimation is fundamental for obtaining the parameters of astronomical models. The use of DL for this problem is very promising due to the very large quantities of photometric data and the inherent presence of noise in such data. (AU)
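One common way to combine a deep network's capacity with a tree's interpretability, in the spirit of the hybrid method described above, is a surrogate model: a shallow decision tree is fit to the network's predictions so that its rules approximate the network's decision boundary. The sketch below illustrates the general idea only; the dataset, model sizes, and hyperparameters are illustrative assumptions, not the project's actual method.

```python
# Surrogate-tree sketch: approximate a trained neural network with a
# shallow, human-readable decision tree. Illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data (the project targets real photometric data).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": a small multilayer perceptron.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Surrogate: a depth-limited tree trained on the network's *predictions*,
# not the true labels, so it mimics the network rather than the data.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, net.predict(X_train))

# Fidelity: how often the tree agrees with the network on held-out data.
fidelity = (tree.predict(X_test) == net.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(tree))  # the interpretable rule set
```

The key quantity is fidelity (agreement with the network), not accuracy on the true labels: a high-fidelity shallow tree gives a faithful, inspectable summary of what the network has learned.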


Scientific publications
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
MASTELINI, SAULO MARTIELLO; CASSAR, DANIEL R.; ALCOBACA, EDESIO; BOTARI, TIAGO; DE CARVALHO, ANDRE C. P. L. F.; ZANOTTO, EDGAR D. Machine learning unveils composition-property relationships in chalcogenide glasses. ACTA MATERIALIA, v. 240, p. 13-pg. (18/14819-5, 13/07793-6, 17/12491-0, 18/07319-6, 17/06161-7, 13/07375-0)
ALCOBACA, EDESIO; MASTELINI, SAULO MARTIELLO; BOTARI, TIAGO; PIMENTEL, BRUNO ALMEIDA; CASSAR, DANIEL ROBERTO; DE LEON FERREIRA DE CARVALHO, ANDRE CARLOS PONCE; ZANOTTO, EDGAR DUTRA. Explainable Machine Learning Algorithms For Predicting Glass Transition Temperatures. ACTA MATERIALIA, v. 188, p. 92-100. (17/12491-0, 13/07375-0, 18/07319-6, 17/06161-7, 17/20265-0, 13/07793-6, 18/14819-5)
COSCRATO, VICTOR; INACIO, MARCO H. A.; BOTARI, TIAGO; IZBICKI, RAFAEL. NLS: An accurate and yet easy-to-interpret prediction method. NEURAL NETWORKS, v. 162, p. 14-pg. (19/11321-9, 17/06161-7, 17/03363-8)
