RADAR-MIX: How to Uncover Adversarial Attacks in Medical Image Analysis through Explainability

Author(s):
de Aguiar, Erikson J.; Traina, Caetano, Jr.; Traina, Agma J. M.
Total Authors: 3
Document type: Journal article
Source: 2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024; v. N/A, p. 6-pg., 2024-01-01.
Abstract

Medical image analysis is an important asset in the clinical process, providing resources to assist physicians in detecting diseases and making accurate diagnoses. Deep Learning (DL) models have been widely applied to these tasks, improving the ability to recognize patterns and enabling accurate and fast diagnosis. However, DL can present security vulnerabilities that reduce confidence in the system. Uncovering these attacks before they happen and visualizing their behavior is challenging. Current solutions are limited to a binary analysis of the problem, only classifying a sample as attacked or not attacked. In this paper, we propose the RADAR-MIX framework for uncovering adversarial attacks using quantitative metrics and analysis of the attack's behavior based on visual analysis. RADAR-MIX assists practitioners in checking for the possibility of adversarial examples in medical applications. Our experimental evaluation shows that the DeepFool and Carlini & Wagner (CW) attacks significantly evade ResNet50V2 with a slight noise level of 0.001. Furthermore, our results revealed that gradient-based methods, such as Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP), achieved high attack-detection effectiveness, while Local Interpretable Model-agnostic Explanations (LIME) presented low consistency, implying the greatest ability to uncover robust attacks when supported by visual analysis. (AU)
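To illustrate the kind of attack the abstract evaluates: DeepFool searches for the smallest perturbation that crosses a model's decision boundary. For a binary *linear* classifier the minimal L2 step has a closed form, which DeepFool iterates via linearization for deep networks. The sketch below shows that closed-form step on a toy linear model; the weights and input are illustrative assumptions, not the paper's ResNet50V2 setting.

```python
import numpy as np

# Toy binary linear classifier: class 1 if w @ x > 0, else class 0.
rng = np.random.default_rng(0)
w = rng.normal(size=16)           # toy classifier weights
x = rng.normal(size=16)           # toy input (a "flattened image")

def predict(v):
    """Return the predicted class (0 or 1) of the toy classifier."""
    return int(w @ v > 0)

# For a linear boundary w @ x = 0, the minimal L2 perturbation is the
# projection of x onto the boundary: r = -(w @ x / ||w||^2) * w.
r = -(w @ x) / (w @ w) * w

# A small overshoot (2%, as in the original DeepFool) ensures the
# perturbed point actually lands on the other side of the boundary.
x_adv = x + 1.02 * r

print(predict(x), predict(x_adv))        # the predicted class flips
print(np.linalg.norm(r))                 # size of the minimal perturbation
```

The flip is guaranteed here because `w @ x_adv = -0.02 * (w @ x)`, so the score's sign reverses while the perturbation stays as small as possible, mirroring how DeepFool and CW evade real CNNs with imperceptible noise.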

FAPESP's process: 16/17078-0 - Mining, indexing and visualizing Big Data in clinical decision support systems (MIVisBD)
Grantee:Agma Juci Machado Traina
Support Opportunities: Research Projects - Thematic Grants
FAPESP's process: 21/08982-3 - Security and privacy in machine learning models to medical images against adversarial attacks
Grantee:Erikson Júlio de Aguiar
Support Opportunities: Scholarships in Brazil - Doctorate
FAPESP's process: 20/07200-9 - Analyzing complex data from COVID-19 to support decision making and prognosis
Grantee:Agma Juci Machado Traina
Support Opportunities: Regular Research Grants
FAPESP's process: 23/14759-0 - Privacy-preserving and backdoors defending: towards federated learning in medical settings
Grantee:Erikson Júlio de Aguiar
Support Opportunities: Scholarships abroad - Research Internship - Doctorate