Self-Supervised Learning for 3D Medical Image Analysis using 3D SimCLR and Monte Carlo Dropout
- URL: http://arxiv.org/abs/2109.14288v2
- Date: Fri, 1 Oct 2021 08:33:36 GMT
- Title: Self-Supervised Learning for 3D Medical Image Analysis using 3D SimCLR and Monte Carlo Dropout
- Authors: Yamen Ali, Aiham Taleb, Marina M.-C. Höhne and Christoph Lippert
- Abstract summary: We propose a 3D self-supervised method based on the contrastive SimCLR framework.
We show that employing Bayesian neural networks during the inference phase can further enhance the results on the downstream tasks.
- Score: 4.190672342471606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning methods can be used to learn meaningful
representations from unlabeled data that can be transferred to supervised
downstream tasks to reduce the need for labeled data. In this paper, we propose
a 3D self-supervised method that is based on the contrastive (SimCLR) method.
Additionally, we show that employing Bayesian neural networks (with Monte-Carlo
Dropout) during the inference phase can further enhance the results on the
downstream tasks. We showcase our models on two medical imaging segmentation
tasks: i) Brain Tumor Segmentation from 3D MRI, ii) Pancreas Tumor Segmentation
from 3D CT. Our experimental results demonstrate the benefits of our proposed
methods in both downstream data-efficiency and performance.
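
The Monte-Carlo Dropout step described above can be illustrated with a minimal PyTorch sketch: keep dropout layers stochastic at test time, run several forward passes through the fine-tuned 3D segmentation network, and average the predicted probabilities. This is a generic illustration under the assumption of a model containing dropout layers, not the authors' implementation.

```python
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic while the rest of the model stays in eval mode."""
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            module.train()


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, volume: torch.Tensor, n_samples: int = 20):
    """Average softmax outputs over several stochastic forward passes.

    volume: (batch, channels, depth, height, width) input tensor.
    Returns mean class probabilities and a voxel-wise predictive-entropy map.
    """
    model.eval()
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(volume), dim=1) for _ in range(n_samples)]
    )                                   # (n_samples, batch, classes, D, H, W)
    mean_probs = probs.mean(dim=0)      # approximate Bayesian model average
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=1)
    return mean_probs, entropy
```

The averaged probabilities give the final segmentation, and the entropy map provides a voxel-wise uncertainty estimate alongside it.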
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Brain MRI Segmentation using Template-Based Training and Visual Perception Augmentation [0.0]
We introduce a template-based training method to train a 3D U-Net model from scratch using only one population-averaged brain MRI template and its associated segmentation label.
We trained 3D U-Net models for mouse, rat, marmoset, rhesus, and human brain MRI to perform segmentation tasks such as skull-stripping, brain segmentation, and tissue probability mapping.
arXiv Detail & Related papers (2023-08-04T14:53:20Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether increased spatial context, obtained by using MRI volumes combined with spatial erasing, leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the dataset size on performance.
Our best-performing 3D VAE with input erasing achieves an average Dice score of 31.40%, compared to 25.76% for the 2D VAE.
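The proposed 3D input erasing can be sketched roughly as zeroing out a randomly placed cuboid inside each training volume; the size and placement rules below are illustrative assumptions, not the paper's exact scheme.

```python
import torch


def random_3d_erase(volume: torch.Tensor, max_fraction: float = 0.25) -> torch.Tensor:
    """Zero out a randomly placed cuboid in a (channels, depth, height, width) volume.

    max_fraction bounds the erased block's edge length relative to each spatial axis.
    Illustrative variant only; the published erasing strategy may differ.
    """
    erased = volume.clone()
    _, depth, height, width = erased.shape
    sizes = []
    for dim in (depth, height, width):
        max_len = max(1, int(dim * max_fraction))
        sizes.append(int(torch.randint(1, max_len + 1, (1,)).item()))
    d0 = int(torch.randint(0, depth - sizes[0] + 1, (1,)).item())
    h0 = int(torch.randint(0, height - sizes[1] + 1, (1,)).item())
    w0 = int(torch.randint(0, width - sizes[2] + 1, (1,)).item())
    erased[:, d0:d0 + sizes[0], h0:h0 + sizes[1], w0:w0 + sizes[2]] = 0.0
    return erased
```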
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- Medical Transformer: Universal Brain Encoder for 3D MRI Analysis [1.6287500717172143]
Existing 3D-based methods transfer pre-trained models to downstream tasks, but they demand a massive number of parameters to train a model for 3D medical imaging.
We propose a novel transfer learning framework, called Medical Transformer, that effectively models 3D volumetric images in the form of a sequence of 2D image slices.
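The slice-sequence idea can be sketched as follows: a shared 2D encoder embeds each axial slice, and a Transformer models the resulting sequence along the depth axis. The layer sizes and module names are illustrative assumptions, not the published Medical Transformer architecture.

```python
import torch
import torch.nn as nn


class SliceSequenceEncoder(nn.Module):
    """Encode each 2D slice with a shared CNN, then contextualize slices with a Transformer."""

    def __init__(self, in_channels: int = 1, embed_dim: int = 256, num_layers: int = 4):
        super().__init__()
        self.slice_cnn = nn.Sequential(           # tiny per-slice encoder (placeholder)
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim),
        )
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, channels, depth, height, width) -> one token per slice
        b, c, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        tokens = self.slice_cnn(slices).reshape(b, d, -1)   # (batch, depth, embed_dim)
        return self.transformer(tokens)                     # contextualized slice features
```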
arXiv Detail & Related papers (2021-04-28T08:34:21Z)
- Leveraging 3D Information in Unsupervised Brain MRI Segmentation [1.6148039130053087]
Unsupervised Anomaly Detection (UAD) methods detect anomalies as outliers of a healthy model learned using a Variational Autoencoder (VAE).
Here, we propose to perform UAD in a 3D fashion and compare 2D and 3D VAEs.
As a side contribution, we present a new loss function guaranteeing robust training. Learning is performed using a multicentric dataset of healthy brain MRIs, and segmentation performance is estimated on White Matter Hyperintensities and tumor lesions.
arXiv Detail & Related papers (2021-01-26T10:04:57Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D DL models for 3D chest CT scan classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide interpretability of the results.
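For a 3D CNN classifier with global average pooling, CAM can be sketched as a weighted sum of the last convolutional feature maps, using the classifier weights of the predicted class; the tensor shapes below are generic assumptions rather than the paper's exact model.

```python
import torch
import torch.nn.functional as F


def class_activation_map_3d(features: torch.Tensor, fc_weight: torch.Tensor,
                            class_idx: int, output_size) -> torch.Tensor:
    """Compute a 3D CAM from the last conv feature maps and the final linear layer.

    features:  (batch, channels, d, h, w) activations before global average pooling.
    fc_weight: (num_classes, channels) weight matrix of the final linear layer.
    Returns a CAM upsampled to output_size, normalized to [0, 1].
    """
    weights = fc_weight[class_idx].view(1, -1, 1, 1, 1)    # (1, channels, 1, 1, 1)
    cam = (features * weights).sum(dim=1, keepdim=True)    # weighted sum over channels
    cam = F.relu(cam)                                      # keep positive evidence only
    cam = F.interpolate(cam, size=output_size, mode="trilinear", align_corners=False)
    cam = (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)
    return cam.squeeze(1)
```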
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation [0.0]
We present a novel approach to 2D-to-3D transfer learning based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels.
The method is validated with the proposed planar 3D res-u-net network, whose encoder is transferred from the 2D VGG-16.
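The 2D-to-3D weight mapping can be illustrated roughly as follows, under the assumption that "planar 3D kernels" means 3D kernels with a depth of 1 initialized from pre-trained 2D filters; this is a sketch of the idea, not the authors' code.

```python
import torch
import torch.nn as nn


def planar_3d_from_2d(conv2d: nn.Conv2d) -> nn.Conv3d:
    """Build a Conv3d acting only within axial planes, initialized from a 2D layer.

    A (C_out, C_in, k, k) kernel becomes (C_out, C_in, 1, k, k), so the pre-trained
    2D filters are applied slice by slice inside the 3D network.
    """
    k_h, k_w = conv2d.kernel_size
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(1, k_h, k_w),
        stride=(1, *conv2d.stride),
        padding=(0, *conv2d.padding),
        dilation=(1, *conv2d.dilation),
        groups=conv2d.groups,
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2))    # insert a depth axis of size 1
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```

Applied layer by layer to a pre-trained 2D encoder such as VGG-16, this yields a 3D encoder that starts from the 2D weights rather than from scratch.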
arXiv Detail & Related papers (2020-11-23T17:11:50Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions of five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
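One of these proxy tasks, 3D rotation prediction, can be sketched in a few lines: rotate a volume by a random multiple of 90 degrees and train a classifier to predict the rotation. The restriction to a single rotation plane here is a simplifying assumption, not the authors' exact setup.

```python
import torch


def random_rotation_proxy(volume: torch.Tensor):
    """Rotate a (channels, depth, height, width) volume by k * 90 degrees in the H-W plane.

    Returns the rotated volume and k, which serves as the self-supervised label
    for a rotation-prediction classifier (4 classes).
    """
    k = int(torch.randint(0, 4, (1,)).item())
    return torch.rot90(volume, k, dims=(2, 3)), k
```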
arXiv Detail & Related papers (2020-06-06T09:56:58Z)