3-Dimensional Deep Learning with Spatial Erasing for Unsupervised
Anomaly Segmentation in Brain MRI
- URL: http://arxiv.org/abs/2109.06540v1
- Date: Tue, 14 Sep 2021 09:17:27 GMT
- Title: 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised
Anomaly Segmentation in Brain MRI
- Authors: Marcel Bengs, Finn Behrendt, Julia Krüger, Roland Opfer, Alexander Schlaefer
- Abstract summary: We investigate whether exploiting the increased spatial context of MRI volumes, combined with spatial erasing, improves unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the data set size on performance.
Our best-performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
- Score: 55.97060983868787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose. Brain Magnetic Resonance Images (MRIs) are essential for the
diagnosis of neurological diseases. Recently, deep learning methods for
unsupervised anomaly detection (UAD) have been proposed for the analysis of
brain MRI. These methods learn only from healthy brain MRIs and, unlike supervised deep learning, do not require pixel-wise annotated data. While a wide range of UAD methods has been proposed, most of them are 2D and learn only from MRI slices, disregarding that brain lesions are inherently 3D and leaving the spatial context of MRI volumes unexploited.
Methods. We investigate whether exploiting the increased spatial context of MRI volumes, combined with spatial erasing, improves unsupervised anomaly segmentation performance compared to learning from slices. We evaluate and compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the data set size on performance.
Results. Using two publicly available segmentation data sets for evaluation, 3D VAEs outperform their 2D counterparts, highlighting the advantage of volumetric context. Also, our 3D erasing methods allow for further performance improvements. Our best-performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
Conclusions. We propose 3D deep learning methods for UAD in brain MRI combined with 3D erasing and demonstrate that 3D methods clearly outperform their 2D counterparts for anomaly segmentation. Also, our spatial erasing method allows for further performance improvements and reduces the requirement for large data sets.
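
The abstract describes the approach at a high level only. The sketch below makes the two key ingredients concrete: random 3D input erasing as an augmentation on healthy training volumes, and voxel-wise anomaly scoring via the VAE reconstruction error, which can be thresholded into a segmentation and compared against a lesion mask with the Dice score. This is a minimal sketch assuming PyTorch, single-channel volumes of shape (B, 1, D, H, W), an illustrative erasing-box size, and a VAE whose forward pass returns the reconstruction; none of these details are specified in the abstract, and this is not the authors' implementation.

```python
# Minimal sketch of 3D input erasing and reconstruction-based anomaly scoring.
# Assumptions (not taken from the paper): PyTorch, volumes of shape
# (B, 1, D, H, W), a fixed erasing-box size, and a VAE whose forward pass
# returns only the reconstructed volume.
import torch


def erase_3d(volume: torch.Tensor, box=(8, 16, 16), value: float = 0.0) -> torch.Tensor:
    """Zero out one randomly placed 3D box per volume in the batch."""
    volume = volume.clone()
    b, _, D, H, W = volume.shape
    d, h, w = box
    for i in range(b):
        z = torch.randint(0, max(D - d, 1), (1,)).item()
        y = torch.randint(0, max(H - h, 1), (1,)).item()
        x = torch.randint(0, max(W - w, 1), (1,)).item()
        volume[i, :, z:z + d, y:y + h, x:x + w] = value
    return volume


@torch.no_grad()
def anomaly_map(vae: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """Voxel-wise anomaly score: absolute reconstruction error of the VAE."""
    reconstruction = vae(volume)  # assumed to return the reconstructed volume
    return (volume - reconstruction).abs()


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> float:
    """Dice overlap between a binary prediction and the ground-truth lesion mask."""
    pred, target = pred.float(), target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection / (pred.sum() + target.sum() + eps)).item()
```

At training time the erased healthy volume is fed to the 3D VAE; at test time the anomaly map is thresholded (for example at a percentile chosen on healthy validation data, again an assumption) to obtain the binary segmentation that enters dice_score. The reported DICE values of 31.40% and 25.76% come from the paper's own pipeline, not from this sketch.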
Related papers
- Domain Aware Multi-Task Pretraining of 3D Swin Transformer for T1-weighted Brain MRI [4.453300553789746]
We propose novel domain-aware multi-task learning tasks to pretrain a 3D Swin Transformer for brain magnetic resonance imaging (MRI)
Our method considers the domain knowledge in brain MRI by incorporating brain anatomy and morphology as well as standard pretext tasks adapted for 3D imaging in a contrastive learning setting.
Our method outperforms existing supervised and self-supervised methods in three downstream tasks of Alzheimer's disease classification, Parkinson's disease classification, and age prediction tasks.
arXiv Detail & Related papers (2024-10-01T05:21:02Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human visual system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- MinD-3D: Reconstruct High-quality 3D objects in Human Brain [50.534007259536715]
Recon3DMind is an innovative task aimed at reconstructing 3D visuals from Functional Magnetic Resonance Imaging (fMRI) signals.
We present the fMRI-Shape dataset, which includes data from 14 participants and features 360-degree videos of 3D objects.
We propose MinD-3D, a novel and effective three-stage framework specifically designed to decode the brain's 3D visual information from fMRI signals.
arXiv Detail & Related papers (2023-12-12T18:21:36Z)
- CORPS: Cost-free Rigorous Pseudo-labeling based on Similarity-ranking for Brain MRI Segmentation [3.1657395760137406]
We propose a semi-supervised segmentation framework built upon a novel atlas-based pseudo-labeling method and a 3D deep convolutional neural network (DCNN) for 3D brain MRI segmentation.
The experimental results demonstrate the superiority of the proposed framework over the baseline method both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-05-19T14:42:49Z)
- Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed as Homography Loss, is proposed to achieve the goal, which exploits both 2D and 3D information.
Our method outperforms other state-of-the-art methods by a large margin on the KITTI 3D datasets.
arXiv Detail & Related papers (2022-04-02T03:48:03Z)
- Unsupervised Anomaly Detection in 3D Brain MRI using Deep Learning with Multi-Task Brain Age Prediction [53.122045119395594]
Unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results.
We propose deep learning for UAD in 3D brain MRI considering additional age information.
Based on our analysis, we propose a novel deep learning approach for UAD with multi-task age prediction.
arXiv Detail & Related papers (2022-01-31T09:39:52Z)
- Leveraging 3D Information in Unsupervised Brain MRI Segmentation [1.6148039130053087]
Unsupervised Anomaly Detection (UAD) methods are proposed that detect anomalies as outliers with respect to a model of healthy anatomy learned using a Variational Autoencoder (VAE).
Here, we propose to perform UAD in a 3D fashion and compare 2D and 3D VAEs.
As a side contribution, we present a new loss function guaranteeing robust training. Learning is performed using a multicentric dataset of healthy brain MRIs, and segmentation performance is estimated on White-Matter Hyperintensity and tumor lesions.
arXiv Detail & Related papers (2021-01-26T10:04:57Z)
- MRI brain tumor segmentation and uncertainty estimation using 3D-UNet architectures [0.0]
This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data.
We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively; a minimal sketch of the TTD idea follows this list.
The model and uncertainty estimation measurements proposed in this work have been used in the BraTS'20 Challenge for tasks 1 and 3, regarding tumor segmentation and uncertainty estimation.
arXiv Detail & Related papers (2020-12-30T19:28:53Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions for five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
arXiv Detail & Related papers (2020-06-06T09:56:58Z)
- 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we propose a novel strategic 2D representation of volumetric data, namely 2.75D.
As a result, 2D CNNs can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
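
The BraTS uncertainty entry above refers to test-time dropout (TTD) for epistemic and test-time augmentation (TTA) for aleatoric uncertainty. As a hypothetical illustration of the TTD part only, assuming PyTorch and a segmentation network that contains dropout layers, one can keep dropout active at inference, run several stochastic forward passes, and read the per-voxel mean as the prediction and the per-voxel variance as the epistemic uncertainty; this is not the cited paper's implementation.

```python
# Hypothetical sketch of test-time dropout (TTD) for voxel-wise epistemic
# uncertainty; assumes PyTorch and is not the cited paper's implementation.
import torch


@torch.no_grad()
def ttd_uncertainty(model: torch.nn.Module, volume: torch.Tensor, passes: int = 10):
    """Mean prediction and per-voxel variance over stochastic forward passes."""
    model.eval()
    # Re-enable only the dropout layers so that batch norm stays in eval mode.
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            module.train()
    samples = torch.stack([model(volume) for _ in range(passes)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)
```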