Scholarship 23/05783-5 - Machine learning, Neural networks (computing) - BV FAPESP

Investigating the disagreement problem in local explanation methods

Grant number: 23/05783-5
Support Opportunities: Scholarships abroad - Research Internship - Doctorate
Start date: September 01, 2023
End date: August 31, 2024
Field of knowledge: Physical Sciences and Mathematics - Computer Science
Principal Investigator: Luis Gustavo Nonato
Grantee: Priscylla Maria da Silva Sousa
Supervisor: José Claudio Teixeira e Silva Junior
Host Institution: Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo (USP), São Carlos, SP, Brazil
Institution abroad: New York University, United States
Associated to the scholarship: 22/03941-0 - An interpretable predictive model for crime forecasting using Graph Neural Network, BP.DR

Abstract

Responsible AI is a prominent topic today. One of its central issues is the need to understand the predictions made by predictive models. In particular, models used in fields such as healthcare and criminal justice must be reliable and trustworthy: a model's accuracy is not the only thing that matters; understanding how the model behaves with respect to its input data is also essential. To tackle this problem, many explanation methods have been proposed, covering both global explanations and local explanations (specific to each instance). The emergence of distinct machine learning explanation methods has raised several new issues to be investigated. The disagreement problem is one such issue: there are scenarios where the outputs of different explanation methods disagree with each other. This is of particular concern in areas where explanations can have a high impact, such as health and security. Although understanding how often, when, and where explanation methods agree or disagree is essential to increase confidence in explanations, only a few works have investigated the disagreement problem. This project focuses on analyzing the disagreement problem for local explanation methods, aiming to understand in which scenarios disagreements occur. A visualization tool will be implemented to support the analytical process, providing interactive resources to scrutinize the disagreement problem and its relation to other quantities, such as model accuracy and explanation quality. (AU)
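The kind of quantitative comparison the project alludes to can be illustrated with a minimal sketch. The snippet below is not from the project: the attribution vectors, the helper top_k_agreement, and the variable names lime_like and shap_like are all hypothetical. It measures how much two local explanation methods (e.g., LIME and SHAP applied to the same instance of the same model) agree, via top-k feature overlap and rank correlation of absolute attributions.

    # Hypothetical sketch of measuring disagreement between two
    # local explanation methods for a single instance.
    import numpy as np
    from scipy.stats import kendalltau

    def top_k_agreement(attr_a, attr_b, k=3):
        """Fraction of overlap between the k most important features
        (by absolute attribution) reported by two explanation methods."""
        top_a = set(np.argsort(-np.abs(attr_a))[:k])
        top_b = set(np.argsort(-np.abs(attr_b))[:k])
        return len(top_a & top_b) / k

    # Made-up attributions for one instance with five features,
    # standing in for the outputs of two different explainers.
    lime_like = np.array([0.42, -0.10, 0.31, 0.05, -0.22])
    shap_like = np.array([0.05, -0.38, 0.29, 0.41, -0.02])

    print("top-3 agreement:", top_k_agreement(lime_like, shap_like))
    tau, _ = kendalltau(np.abs(lime_like), np.abs(shap_like))
    print("rank correlation (Kendall's tau):", round(tau, 3))

Here the two methods share only one of their top-3 features (agreement of 1/3), the sort of case the proposed visualization tool would surface alongside model accuracy and explanation quality.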
