Are Explanations Helpful? A Comparative Analysis of Explainability Methods in Skin Lesion Classifiers

Author(s):
Paccotacya-Yanque, Rosa Y. G.; Bissoto, Alceu; Avila, Sandra
Total authors: 3
Document type: Scientific article
Source: 2024 20th International Symposium on Medical Information Processing and Analysis (SIPAIM 2024); 5 pp., 2024.
Abstract

Deep Learning has shown outstanding results in computer vision tasks, and healthcare is no exception. However, there is no straightforward way to expose the decision-making process of DL models, and for skin cancer prediction, good accuracy alone is not enough: understanding the model's behavior is crucial for clinical application and reliable outcomes. In this work, we identify desiderata for explanations in skin-lesion models. We analyze seven methods, four based on pixel attribution (Grad-CAM, Score-CAM, LIME, SHAP) and three based on high-level concepts (ACE, ICE, CME), applied to a deep neural network trained on the International Skin Imaging Collaboration Archive. Our findings indicate that while these techniques reveal biases, there is room for improving the comprehensiveness of explanations to achieve transparency in skin-lesion models.
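The pixel-attribution methods named above (Grad-CAM, Score-CAM, LIME, SHAP) all produce a heatmap scoring how much each image region contributes to the prediction. As a rough, dependency-free sketch of that idea, the snippet below implements simple occlusion sensitivity: mask each patch of a toy "image" and record the drop in a stand-in scoring function. The model and image here are hypothetical illustrations, not the paper's classifier or data.

```python
# Toy occlusion-sensitivity sketch of pixel attribution.
# The "model" is a hypothetical stand-in that scores bright pixels
# in the top-left quadrant (a mock "lesion" region).

def model_score(image):
    # Hypothetical classifier score: sum of the top-left 2x2 pixels.
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_map(image, patch=2):
    """Mask each patch with zeros; attribution = score drop."""
    n = len(image)
    base = model_score(image)
    heat = [[0.0] * n for _ in range(n)]
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            masked = [row[:] for row in image]  # copy, then occlude patch
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    masked[r][c] = 0.0
            drop = base - model_score(masked)
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    heat[r][c] = drop
    return heat

image = [[1.0, 1.0, 0.0, 0.0],
         [1.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
heat = occlusion_map(image)
# The top-left patch carries all the attribution; the rest is zero.
```

Gradient-based methods such as Grad-CAM replace this brute-force masking with a single backward pass through the network's feature maps, but the interpretation of the resulting heatmap is the same.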

FAPESP Process: 23/12086-9 - Araceli: Artificial Intelligence in Combating Child Sexual Abuse
Grantee: Sandra Eliza Fontes de Avila
Support type: Research Grant - Regular
FAPESP Process: 20/09838-0 - BI0S - Brazilian Institute of Data Science
Grantee: João Marcos Travassos Romano
Support type: Research Grant - Engineering Research Centers Program
FAPESP Process: 13/08293-7 - CECC - Center for Computational Engineering and Sciences
Grantee: Munir Salomao Skaf
Support type: Research Grant - Research, Innovation and Dissemination Centers (CEPIDs)