
Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks

Author(s):
Lucena, Oeslle [1, 2] ; Souza, Roberto [3, 4, 5] ; Rittner, Leticia [1] ; Frayne, Richard [3, 4, 5] ; Lotufo, Roberto [1]
Total Authors: 5
Affiliation:
[1] Univ Estadual Campinas, Sch Elect & Comp Engn, Dept Comp Engn & Ind Automat, Med Image Comp Lab, Campinas, SP - Brazil
[2] Kings Coll London, Sch Biomed Engn & Imaging Sci, London - England
[3] Univ Calgary, Hotchkiss Brain Inst, Dept Radiol, Calgary, AB - Canada
[4] Univ Calgary, Hotchkiss Brain Inst, Dept Clin Neurosci, Calgary, AB - Canada
[5] Alberta Hlth Serv, Seaman Family Magnet Resonance Res Ctr, Foothills Med Ctr, Calgary, AB - Canada
Total Affiliations: 5
Document type: Journal article
Source: ARTIFICIAL INTELLIGENCE IN MEDICINE; v. 98, p. 48-58, JUL 2019.
Web of Science Citations: 1
Abstract

Manual annotation is considered to be the "gold standard" in medical imaging analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, single-rater manual annotation is most often used in data-driven approaches, biasing the network toward that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus of a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with silver standard masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results showed that we outperformed (i.e., achieved larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotation in the CNN training stage. CONSNet is the first deep learning approach that is fully trained using silver standard data and is, thus, more generalizable.
Using these masks, we eliminated the cost of manual annotation, decreased inter-/intra-rater variability, and avoided CNN segmentation overfitting toward one specific manual annotation guideline, which can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume using a modern high-end GPU. In contrast, many of the other competitive methods have processing times on the order of minutes. (AU)
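To illustrate the two core ideas in the abstract, the sketch below shows (a) a simplified consensus step over binary brain masks and (b) the Dice coefficient used for evaluation. Note the paper uses the STAPLE algorithm, which estimates per-rater performance levels via expectation-maximization; the plain majority vote here is only a hedged, minimal stand-in, and the function names are hypothetical, not from the paper's code.

```python
import numpy as np

def consensus_mask(masks, threshold=0.5):
    """Majority-vote consensus over a list of binary masks.

    Simplified stand-in for STAPLE: each voxel is kept if at least
    `threshold` fraction of the input methods labeled it as brain.
    """
    stack = np.stack(masks).astype(float)   # shape: (n_methods, *volume_shape)
    return (stack.mean(axis=0) >= threshold).astype(np.uint8)

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: three 2x2 masks from three hypothetical extraction methods.
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 1], [1, 0]])
silver = consensus_mask([m1, m2, m3])  # voxels labeled brain by >= 2 of 3 methods
```

In the paper, a consensus mask like `silver` would serve as the training target for the tri-planar 2D U-Net ensemble in place of an expert-drawn gold standard mask.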

FAPESP's process: 13/07559-3 - BRAINN - The Brazilian Institute of Neuroscience and Neurotechnology
Grantee: Fernando Cendes
Support type: Research Grants - Research, Innovation and Dissemination Centers - RIDC
FAPESP's process: 16/18332-8 - Deep learning for brain structures segmentation in MR imaging
Grantee: Oeslle Alexandre Soares de Lucena
Support type: Scholarships in Brazil - Master