Comparing LIME and SHAP Global Explanations for Human Activity Recognition

Author(s):
Alves, Patrick; Delgado, Jaime; Gonzalez, Luis; Rocha, Anderson R.; Boccato, Levy; Borin, Edson
Total Authors: 6
Document type: Journal article
Source: INTELLIGENT SYSTEMS, BRACIS 2024, PT III; v. 15414, 15 pp., 2025-01-01.
Abstract

The development of complex machine learning (ML) models has increased in recent years, and the need to understand the decisions made by these models has become essential. In this context, eXplainable Artificial Intelligence (XAI) has emerged as a field of study that aims to provide explanations for the decisions made by ML models. This work presents a comparison between two state-of-the-art XAI techniques, LIME and SHAP, in the context of Human Activity Recognition (HAR). Since LIME provides only local explanations, we present a way to compute global feature importance from LIME explanations through a global aggregation approach, and we use correlation metrics to compare the feature importance provided by LIME and SHAP across different HAR datasets and models. The results show that correlation metrics alone are not enough to conclude whether the two techniques agree, so we also employ a feature removal-and-retrain approach and show that, despite some divergences in the correlation metrics, both XAI techniques successfully identify the most and least important features used by the model for the task. (AU)
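The aggregation-and-comparison pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-instance attribution matrices below are synthetic stand-ins for the outputs of LIME and SHAP on a HAR model, the mean-absolute-attribution aggregation is one common choice of global aggregation, and the hand-rolled Spearman correlation stands in for the paper's correlation metrics.

```python
import numpy as np

# Hypothetical per-instance attribution matrices (n_samples x n_features),
# standing in for LIME and SHAP local explanations of a HAR classifier.
# Both are scaled by the same (assumed) underlying feature importance.
rng = np.random.default_rng(0)
n_samples, n_features = 500, 8
true_importance = np.array([3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.2, 0.1])
lime_local = rng.normal(0.0, 1.0, (n_samples, n_features)) * true_importance
shap_local = rng.normal(0.0, 1.0, (n_samples, n_features)) * true_importance

def global_importance(local_attributions):
    # Global aggregation: mean absolute attribution per feature.
    return np.mean(np.abs(local_attributions), axis=0)

def spearman(a, b):
    # Spearman rank correlation via Pearson correlation of the ranks.
    rank_a = np.argsort(np.argsort(a))
    rank_b = np.argsort(np.argsort(b))
    return np.corrcoef(rank_a, rank_b)[0, 1]

lime_global = global_importance(lime_local)
shap_global = global_importance(shap_local)
rho = spearman(lime_global, shap_global)
print(f"Spearman correlation of LIME vs. SHAP global rankings: {rho:.3f}")
```

In this toy setup the two rankings agree almost perfectly because both attribution matrices share the same generative importance; the paper's point is that on real models and datasets this agreement is only partial, which motivates the feature removal-and-retrain check.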

FAPESP's process: 13/08293-7 - CCES - Center for Computational Engineering and Sciences
Grantee: Munir Salomao Skaf
Support Opportunities: Research Grants - Research, Innovation and Dissemination Centers - RIDC