Multi-label classification of chest x-rays using deep learning

Author(s):
Vinicius Teixeira de Melo
Total Authors: 1
Document type: Master's Dissertation
Place of publication: Campinas, SP.
Institution: Universidade Estadual de Campinas (UNICAMP). Instituto de Computação
Defense date:
Examining board members:
Zanoni Dias; Levy Boccato; Alexandre Mello Ferreira
Advisors: Hélio Pedrini; Zanoni Dias
Abstract

Chest X-ray is one of the most accessible radiological exams for screening and diagnosing possible lung and heart diseases. This type of examination is also used to verify whether devices such as pacemakers, venous catheters, and tubes are correctly positioned. In recent years, considerable attention and effort have been devoted to improving Computer-Aided Diagnosis systems, with the classification of medical images being one of the main problems addressed. Deep learning techniques have been increasingly used to detect and classify pathologies and lesions in chest X-ray images. In this context, we propose a method to classify chest X-ray images, called DuaLAnet, based on deep learning techniques such as convolutional neural networks and attention mechanisms. Our method explores the complementarity between convolutional neural networks and attention modules to guide the learning process for the distinct classes, showing that combining complementary information extracted from chest X-ray images yields better predictions than a single neural network alone. To validate our method, we use the ChestX-ray14 and CheXpert datasets, each containing a wide variety of chest X-ray images annotated with 14 classes. We carried out experiments to determine the best way to initialize the network weights, considering initialization from ImageNet and from the radiography dataset not used in training. In addition, we experimented with four types of architectures and their variations to determine which neural networks to use as feature extractors. We then checked which attention mechanism best suited each chosen feature extractor, among the following options: Class Activation Mapping (CAM), Soft Activation Mapping (SAM), and Feature Pyramid Attention (FPA). Finally, we ran the experiments with the DuaLAnet method, after choosing the settings that best fit each dataset. The results indicate that our method achieves an AUROC score competitive with state-of-the-art methods on the ChestX-ray14 dataset, and they point to several directions for improving the hit rate on the CheXpert dataset. (AU)
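As a rough illustration of the dual-network idea described in the abstract, the sketch below wires two CNN feature extractors, each followed by a simple spatial-attention block, into a single multi-label head with 14 outputs. All concrete choices here (DenseNet-121 and ResNet-50 backbones, the 1x1-convolution attention, concatenation as the fusion rule, BCE loss) are illustrative assumptions only; they are not the dissertation's actual DuaLAnet configuration or its CAM/SAM/FPA modules.

# Minimal sketch of a dual-branch multi-label chest X-ray classifier.
# Backbones, attention block, and fusion rule are assumptions for
# illustration, not the dissertation's exact DuaLAnet architecture.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # ChestX-ray14 and CheXpert both use 14 labels


class SpatialAttention(nn.Module):
    """Simple spatial attention: a 1x1 conv produces a saliency map that
    reweights the backbone's feature map (stand-in for CAM/SAM/FPA)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        weights = torch.sigmoid(self.score(feats))   # (N, 1, H, W)
        return feats * weights                        # attended features


class DualBranchClassifier(nn.Module):
    """Two CNN feature extractors with attention; their pooled features
    are concatenated and fed to a shared multi-label head."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Hypothetical backbone choice; weights could instead be loaded
        # from a model pre-trained on ImageNet or on the other radiography
        # dataset, as in the dissertation's initialization experiments.
        self.branch_a = models.densenet121(weights=None).features
        resnet = models.resnet50(weights=None)
        self.branch_b = nn.Sequential(*list(resnet.children())[:-2])
        self.att_a = SpatialAttention(1024)   # DenseNet-121 feature channels
        self.att_b = SpatialAttention(2048)   # ResNet-50 feature channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1024 + 2048, num_classes)

    def forward(self, x):
        fa = self.pool(self.att_a(self.branch_a(x))).flatten(1)
        fb = self.pool(self.att_b(self.branch_b(x))).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1))  # one logit per label


if __name__ == "__main__":
    model = DualBranchClassifier()
    images = torch.randn(2, 3, 224, 224)                    # dummy batch
    labels = torch.randint(0, 2, (2, NUM_CLASSES)).float()  # dummy labels
    loss = nn.BCEWithLogitsLoss()(model(images), labels)    # multi-label loss
    print(loss.item())

For evaluation, the per-class AUROC the abstract refers to would typically be computed over the 14 sigmoid outputs (for example, with sklearn.metrics.roc_auc_score applied class by class).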

FAPESP's process: 19/20875-8 - Chest X-ray image classification using deep neural networks
Grantee: Vinicius Teixeira de Melo
Support Opportunities: Scholarships in Brazil - Master