RADAR-MIX: How to Uncover Adversarial Attacks in Medical Image Analysis through Explainability

Author(s):
de Aguiar, Erikson J.; Traina, Caetano, Jr.; Traina, Agma J. M.
Total number of authors: 3
Document type: Scientific article
Source: 2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024; 6 pp., 2024.
Abstract

Medical image analysis is an important asset in the clinical process, providing resources to assist physicians in detecting diseases and making accurate diagnoses. Deep Learning (DL) models have been widely applied in these tasks, improving the ability to recognize patterns and enabling accurate and fast diagnosis. However, DL can present issues related to security violations that reduce the system's confidence. Uncovering these attacks before they happen and visualizing their behavior is challenging. Current solutions are limited to a binary analysis of the problem, only classifying a sample as attacked or not attacked. In this paper, we propose the RADAR-MIX framework for uncovering adversarial attacks using quantitative metrics and analysis of the attack's behavior based on visual analysis. RADAR-MIX assists practitioners in checking for the possibility of adversarial examples in medical applications. Our experimental evaluation shows that the DeepFool and Carlini & Wagner (CW) attacks significantly evade ResNet50V2 with a slight noise level of 0.001. Furthermore, our results revealed that gradient-based methods, such as Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP), achieved high attack detection effectiveness, while Local Interpretable Model-agnostic Explanations (LIME) presented low consistency, so uncovering robust attacks with it requires support from visual analysis.
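The detection cue described in the abstract, comparing how consistent an explanation remains when the input is perturbed, can be sketched with a minimal NumPy example. This is a toy linear model with hypothetical values, not the authors' RADAR-MIX implementation; the FGSM-style perturbation stands in for the DeepFool and CW attacks, and input-times-gradient attribution stands in for Grad-CAM/SHAP.

```python
import numpy as np

# Toy linear "classifier": logit = w . x  (stand-in for ResNet50V2)
w = np.array([1.0, -2.0, 0.5, 1.5])   # hypothetical weights
x = np.array([0.2, 0.4, -0.6, 0.1])   # hypothetical input features

def saliency(x):
    # Input-times-gradient attribution; for a linear model the gradient is w.
    return x * w

def perturb(x, eps):
    # FGSM-style perturbation: one step in the sign of the gradient.
    return x + eps * np.sign(w)

def cosine(a, b):
    # Consistency score between two attribution maps.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Explanation consistency before vs. after the attack
c_small = cosine(saliency(x), saliency(perturb(x, 0.001)))  # slight noise, as in the abstract
c_large = cosine(saliency(x), saliency(perturb(x, 0.5)))

print(f"consistency at eps=0.001: {c_small:.4f}")
print(f"consistency at eps=0.5:   {c_large:.4f}")
```

A low consistency score would flag a likely adversarial example; with a slight perturbation (eps = 0.001) the explanation barely shifts, which illustrates why such attacks are hard to detect from quantitative metrics alone and why visual analysis is needed as a complement.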

FAPESP Process 16/17078-0 - Mining, indexing, and visualizing Big Data in the context of clinical decision support systems (MIVisBD)
Grantee: Agma Juci Machado Traina
Support type: Research Grants - Thematic
FAPESP Process 21/08982-3 - Security and privacy in machine learning models for medical images against adversarial attacks
Grantee: Erikson Júlio de Aguiar
Support type: Scholarships in Brazil - Doctorate
FAPESP Process 20/07200-9 - Analyzing complex data related to COVID-19 to support decision making and prognosis
Grantee: Agma Juci Machado Traina
Support type: Research Grants - Regular
FAPESP Process 23/14759-0 - Privacy preservation and backdoor defense: towards federated learning in medical contexts
Grantee: Erikson Júlio de Aguiar
Support type: Scholarships abroad - Research Internship - Doctorate