(Reference obtained automatically from the Web of Science, based on the FAPESP funding information and the corresponding grant number included in the publication by the authors.)

Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks

Author(s):
Lucena, Oeslle [1, 2] ; Souza, Roberto [3, 4, 5] ; Rittner, Leticia [1] ; Frayne, Richard [3, 4, 5] ; Lotufo, Roberto [1]
Total number of authors: 5
Author affiliation(s):
[1] Univ Estadual Campinas, Sch Elect & Comp Engn, Dept Comp Engn & Ind Automat, Med Image Comp Lab, Campinas, SP - Brazil
[2] Kings Coll London, Sch Biomed Engn & Imaging Sci, London - England
[3] Univ Calgary, Hotchkiss Brain Inst, Dept Radiol, Calgary, AB - Canada
[4] Univ Calgary, Hotchkiss Brain Inst, Dept Clin Neurosci, Calgary, AB - Canada
[5] Alberta Hlth Serv, Seaman Family Magnet Resonance Res Ctr, Foothills Med Ctr, Calgary, AB - Canada
Total number of affiliations: 5
Document type: Journal article
Source: ARTIFICIAL INTELLIGENCE IN MEDICINE; v. 98, p. 48-58, JUL 2019.
Web of Science citations: 1
Abstract

Manual annotation is considered to be the "gold standard" in medical imaging analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, single-rater manual annotation is most often used in data-driven approaches, biasing the network toward that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus from a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with silver standard masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results showed that we outperformed (i.e., achieved larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotation for the CNN training stage. CONSNet is the first deep learning approach that is fully trained using silver standard data and is, thus, more generalizable.
Using these masks, we eliminated the cost of manual annotation, decreased inter-/intra-rater variability, and avoided CNN segmentation overfitting toward one specific manual annotation guideline that can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume on a modern high-end GPU. In contrast, many of the other competitive methods have processing times on the order of minutes. (AU)
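The silver standard masks described above are produced by fusing the outputs of eight automatic brain extraction methods with the STAPLE algorithm, an EM procedure that jointly estimates each rater's sensitivity and specificity together with the consensus segmentation. As an illustration only, the sketch below implements a simplified binary STAPLE in NumPy over flattened masks; the function and variable names are our own, and the paper uses an established STAPLE implementation on full 3D volumes rather than this reduced version.

```python
import numpy as np

def staple_binary(masks, n_iter=20, prior=None):
    """Simplified binary STAPLE via EM (illustrative sketch, not the paper's code).

    masks: (R, N) array of 0/1 decisions from R raters over N voxels.
    Returns (W, p, q): per-voxel consensus foreground probability W,
    and per-rater sensitivity p and specificity q estimates.
    """
    D = np.asarray(masks, dtype=float)              # R x N decision matrix
    R, N = D.shape
    if prior is None:
        prior = D.mean()                            # global foreground prior
    p = np.full(R, 0.9)                             # initial sensitivities
    q = np.full(R, 0.9)                             # initial specificities
    for _ in range(n_iter):
        # E-step: posterior P(true label = 1 | rater decisions) per voxel
        a = prior * np.prod(np.where(D == 1, p[:, None], 1.0 - p[:, None]), axis=0)
        b = (1.0 - prior) * np.prod(np.where(D == 0, q[:, None], 1.0 - q[:, None]), axis=0)
        W = a / np.clip(a + b, 1e-12, None)
        # M-step: re-estimate each rater's performance against the consensus
        p = (D @ W) / np.clip(W.sum(), 1e-12, None)
        q = ((1.0 - D) @ (1.0 - W)) / np.clip((1.0 - W).sum(), 1e-12, None)
    return W, p, q
```

Thresholding W at 0.5 yields a consensus (silver standard) mask; because raters are weighted by their estimated performance, a consistently accurate method counts for more than a noisy one, unlike simple majority voting.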

FAPESP Process: 13/07559-3 - Brazilian Institute of Neuroscience and Neurotechnology - BRAINN
Grantee: Fernando Cendes
Support type: Research Grants - Research, Innovation and Dissemination Centers (RIDCs)
FAPESP Process: 16/18332-8 - Brain structure segmentation in magnetic resonance images using deep learning
Grantee: Oeslle Alexandre Soares de Lucena
Support type: Scholarships in Brazil - Master's