Handling Missing MRI Input Data in Deep Learning Segmentation of Brain
Metastases: A Multi-Center Study
- URL: http://arxiv.org/abs/1912.11966v1
- Date: Fri, 27 Dec 2019 02:49:45 GMT
- Title: Handling Missing MRI Input Data in Deep Learning Segmentation of Brain
Metastases: A Multi-Center Study
- Authors: Endre Grøvik, Darvin Yi, Michael Iv, Elizabeth Tong, Line Brennhaug
Nilsen, Anna Latysheva, Cathrine Saxhaug, Kari Dolven Jacobsen, Åslaug
Helland, Kyrre Eeg Emblem, Daniel Rubin, Greg Zaharchuk
- Abstract summary: A deep learning based segmentation model for automatic segmentation of brain metastases, named DropOut, was trained on multi-sequence MRI.
The segmentation results were compared with the performance of a state-of-the-art DeepLabV3 model.
The DropOut model showed a significantly higher Dice score than the DeepLabV3 model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The purpose was to assess the clinical value of a novel DropOut model for
detecting and segmenting brain metastases, in which a neural network is trained
on four distinct MRI sequences using an input dropout layer, thus simulating
the scenario of missing MRI data by training on the full set and all possible
subsets of the input data. This retrospective, multi-center study evaluated
165 patients with brain metastases. A deep learning based segmentation model
for automatic segmentation of brain metastases, named DropOut, was trained on
multi-sequence MRI from 100 patients, and validated/tested on 10/55 patients.
The segmentation results were compared with the performance of a
state-of-the-art DeepLabV3 model. The MR sequences in the training set included
pre- and post-gadolinium (Gd) T1-weighted 3D fast spin echo, post-Gd
T1-weighted inversion recovery (IR) prepped fast spoiled gradient echo, and 3D
fluid attenuated inversion recovery (FLAIR), whereas the test set did not
include the IR prepped image series. The ground truth was established by
experienced neuroradiologists. The results were evaluated using precision,
recall, Dice score, and receiver operating characteristics (ROC) curve
statistics, while the Wilcoxon rank sum test was used to compare the
performance of the two neural networks. The area under the ROC curve (AUC),
averaged across all test cases, was 0.989 ± 0.029 for the DropOut model and
0.989 ± 0.023 for the DeepLabV3 model (p=0.62). The DropOut model showed a
significantly higher Dice score than the DeepLabV3 model (0.795 ± 0.105
vs. 0.774 ± 0.104, p=0.017), and a significantly lower average false positive
rate of 3.6/patient vs. 7.0/patient (p<0.001) using a 10 mm³ lesion-size limit.
The DropOut model may facilitate accurate detection and segmentation of brain
metastases on a multi-center basis, even when the test cohort is missing MRI
input data.
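The core idea of the input-dropout scheme, training on the full set and on random subsets of the four MRI sequences so the network tolerates missing inputs at test time, can be illustrated with a minimal NumPy sketch. This is a hedged reconstruction, not the authors' implementation: the function names, the 50/50 subset-sampling rate, and the zero-fill convention for dropped channels are illustrative assumptions. The Dice score used for evaluation is also included.

```python
import numpy as np

def input_channel_dropout(volume, rng, p_keep_all=0.5):
    """Randomly zero out MRI input channels (sequences) during training.

    `volume` has shape (channels, H, W) or (channels, D, H, W), one channel
    per MRI sequence. With probability `p_keep_all` the full input is kept;
    otherwise a random non-empty subset of channels is retained and the rest
    are zeroed, simulating missing sequences at test time. The 50/50
    per-channel keep rate below is an assumption for illustration.
    """
    n_channels = volume.shape[0]
    if rng.random() < p_keep_all:
        return volume
    keep = rng.random(n_channels) < 0.5
    if not keep.any():                      # always retain at least one sequence
        keep[rng.integers(n_channels)] = True
    out = volume.copy()
    out[~keep] = 0.0                        # dropped sequences are zero-filled
    return out

def dice_score(pred, truth, eps=1e-7):
    """Dice overlap between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```

In a training loop, `input_channel_dropout` would be applied to each four-channel input volume before the forward pass, so the network sees the full set and all reachable subsets of sequences over the course of training.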
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481] (2024-09-29)
  Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
  This study aims to utilize an MRI-based convolutional neural network for brain cancer detection.
- Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on a Large-Scale Synthetic Neuroimaging Dataset [11.173478552040441] (2024-06-20)
  Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain.
  In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification.
- Classification of Prostate Cancer in 3D Magnetic Resonance Imaging Data based on Convolutional Neural Networks [0.0] (2024-04-16)
  Prostate cancer is a commonly diagnosed cancerous disease among men worldwide.
  CNNs are evaluated on their ability to reliably classify whether an MRI sequence contains malignant lesions.
  The best result was achieved by a ResNet3D, yielding an average precision score of 0.4583 and an ROC AUC score of 0.6214.
- Artificial Intelligence in Fetal Resting-State Functional MRI Brain Segmentation: A Comparative Analysis of 3D UNet, VNet, and HighRes-Net Models [1.2382905694337476] (2023-11-17)
  This study introduced a novel application of artificial intelligence (AI) for automated brain segmentation in fetal resting-state functional MRI (fMRI).
  Three AI models were employed: 3D UNet, VNet, and HighResNet.
  Our findings shed light on the performance of different AI models for fetal resting-state fMRI brain segmentation.
- Predicting recovery following stroke: deep learning, multimodal data and feature selection using explainable AI [3.797471910783104] (2023-10-29)
  Major challenges include the very high dimensionality of neuroimaging data and the relatively small size of the datasets available for learning.
  We introduce a novel approach of training a convolutional neural network (CNN) on images that combine regions-of-interest extracted from MRIs.
  We conclude by proposing how the current models could be improved to achieve even higher levels of accuracy using images from hospital scanners.
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265] (2023-09-13)
  This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
  We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
  Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
- Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946] (2022-05-05)
  A large training dataset is required for a standard deep learning-based model to learn this strategy from data.
  We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
  In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749] (2022-01-31)
  Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples.
  This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
  The proposed pipeline achieved a Dice score of 0.642 ± 0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859 ± 0.112 while detecting artificially induced anomalies.
- Detection of Large Vessel Occlusions using Deep Learning by Deforming Vessel Tree Segmentations [5.408694811103598] (2021-12-03)
  This work uses convolutional neural networks for case-level classification, trained with elastic deformation of the vessel tree segmentation masks to artificially augment training data.
  The neural network classifies the presence of an LVO and the affected hemisphere.
  In a 5-fold cross-validated ablation study, we demonstrate that the suggested augmentation enables us to train robust models even from few datasets.
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176] (2020-05-27)
  The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
  A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
  The performance of the proposed ensemble model in the basal and middle slices was similar to that of an intra-observer study and slightly lower at apical slices.
- Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes [64.21642241351857] (2020-02-12)
  We curated and analyzed a chest computed tomography (CT) dataset of 36,316 volumes from 19,993 unique patients.
  We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
  We also developed a model for multi-organ, multi-disease classification of chest CT volumes.