Interpretability and fairness in machine learning: Capacity-based functions and interaction indices

Grant number: 21/11086-0
Support Opportunities: Scholarships abroad - Research Internship - Post-doctor
Start date: April 01, 2022
End date: March 22, 2023
Field of knowledge: Engineering - Electrical Engineering - Telecommunications
Principal Investigator: Leonardo Tomazeli Duarte
Grantee: Guilherme Dean Pelegrina
Supervisor: Michel Grabisch
Host Institution: Faculdade de Ciências Aplicadas (FCA). Universidade Estadual de Campinas (UNICAMP). Limeira, SP, Brazil
Institution abroad: Université Paris 1 Panthéon-Sorbonne, France
Associated with the scholarship: 20/10572-5 - Novel approaches for fairness and transparency in machine learning problems, BP.PD

Abstract

The research community has been putting considerable effort into developing mechanisms that improve interpretability and fairness in machine learning. Recent works have addressed interpretability by means of a model-agnostic method called SHAP. This method is based on the Shapley value, a classical concept in cooperative game theory that measures the marginal contribution of an attribute to the model output. The idea behind the Shapley value, as well as other importance indices, can be extended to coalitions of attributes. We will therefore investigate whether these indices can be used to understand both the effect of single attributes and the interactions between them in a trained model. Moreover, such interpretations may help us understand how a trained model leads to unfair results.

Most model-agnostic methods for local explanation (e.g., SHAP) approximate the model by a linear interpretable function. The literature also offers capacity-based functions, such as the Choquet integral and the multilinear model, that could be used to fit the model locally while providing a clear picture of the role of each attribute in the output. Indeed, the capacity coefficients of the Choquet integral are directly associated with the Shapley values, and the parameters of the multilinear model are associated with the Banzhaf values, another concept from cooperative game theory. A further goal is therefore to investigate whether these functions can improve interpretability in machine learning.

It is worth mentioning that this internship research project lies in the context of Post-Doctoral fellowship 2020/10572-5, which comprises the investigation of novel approaches to interpretability and fairness in machine learning. (AU)
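
For concreteness, here is a minimal sketch, in Python, of the exact Shapley value and the pairwise Shapley interaction index for a toy game on three attributes. The game values in example_v are invented for illustration, and the helper names are ours rather than part of the SHAP library; exact enumeration over all coalitions is exponential in the number of attributes, which is why SHAP relies on approximations.

import math
from itertools import chain, combinations

def powerset(items):
    """All subsets of items, as tuples."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def shapley_value(v, n, i):
    """Exact Shapley value of attribute i in the game v on n attributes."""
    total = 0.0
    for S in map(frozenset, powerset(set(range(n)) - {i})):
        weight = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
        total += weight * (v(S | {i}) - v(S))  # weighted marginal contribution of i
    return total

def shapley_interaction(v, n, i, j):
    """Exact Shapley interaction index of the pair {i, j}: > 0 means synergy."""
    total = 0.0
    for S in map(frozenset, powerset(set(range(n)) - {i, j})):
        weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 2)
                  / math.factorial(n - 1))
        total += weight * (v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S))
    return total

# Toy game: v(S) is the (hypothetical) model output when only the attributes in S are present.
example_v = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 2.0,
             frozenset({2}): 1.5, frozenset({0, 1}): 4.0, frozenset({0, 2}): 2.5,
             frozenset({1, 2}): 3.5, frozenset({0, 1, 2}): 6.0}
v = example_v.__getitem__

print([round(shapley_value(v, 3, i), 3) for i in range(3)])  # [1.667, 2.667, 1.667], sums to v(N) = 6.0
print(round(shapley_interaction(v, 3, 0, 1), 3))             # 1.25: attributes 0 and 1 act synergistically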
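
In the same spirit, here is a minimal sketch of the two capacity-based functions named in the abstract, the Choquet integral and the multilinear model, together with the Banzhaf value read off the same capacity. The capacity mu below is illustrative only; the Shapley values of mu, computable with shapley_value from the previous sketch, are the importance scores naturally attached to the Choquet model, just as the Banzhaf values are attached to the multilinear model.

from itertools import chain, combinations

def powerset(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def choquet(x, mu):
    """Choquet integral of a non-negative point x w.r.t. a capacity mu (dict over frozensets)."""
    order = sorted(range(len(x)), key=lambda i: x[i])    # attribute indices by increasing value
    out, prev = 0.0, 0.0
    for k, i in enumerate(order):
        out += (x[i] - prev) * mu[frozenset(order[k:])]  # level sets weighted by the capacity
        prev = x[i]
    return out

def multilinear(x, mu):
    """Owen multilinear extension of the capacity mu, evaluated at x in [0, 1]^n."""
    total = 0.0
    for S in map(frozenset, powerset(range(len(x)))):
        p = 1.0
        for i in range(len(x)):
            p *= x[i] if i in S else 1.0 - x[i]
        total += p * mu[S]
    return total

def banzhaf(mu, n, i):
    """Banzhaf value of attribute i: its average marginal contribution over all coalitions."""
    others = powerset(set(range(n)) - {i})
    return sum(mu[frozenset(S) | {i}] - mu[frozenset(S)] for S in others) / 2 ** (n - 1)

# Illustrative 2-attribute capacity with positive interaction: mu(N) > mu({0}) + mu({1}).
mu = {frozenset(): 0.0, frozenset({0}): 0.2, frozenset({1}): 0.3, frozenset({0, 1}): 1.0}

x = [0.6, 0.4]
print(round(choquet(x, mu), 3))                          # 0.44
print(round(multilinear(x, mu), 3))                      # 0.36
print([round(banzhaf(mu, 2, i), 3) for i in range(2)])   # [0.45, 0.55]

At binary inputs both functions coincide with the capacity itself; they differ in how they interpolate between those vertices, which is what makes them candidate surrogate functions for local explanation.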


Scientific publications
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
PELEGRINA, GUILHERME D.; DUARTE, LEONARDO T.; GRABISCH, MICHEL. Interpreting the Contribution of Sensors in Blind Source Extraction by Means of Shapley Values. IEEE SIGNAL PROCESSING LETTERS, v. 30, p. 5-pg. (20/09838-0, 20/10572-5, 21/11086-0)
PELEGRINA, GUILHERME D.; BROTTO, RENAN D. B.; DUARTE, LEONARDO T.; ATTUX, ROMIS; ROMANO, JOAO M. T. Analysis of Trade-offs in Fair Principal Component Analysis Based on Multi-objective Optimization. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), v. N/A, p. 8-pg. (20/09838-0, 19/20899-4, 20/01089-9, 20/10572-5, 21/11086-0)