Learning to segment with limited annotations: Self-supervised
pretraining with regression and contrastive loss in MRI
- URL: http://arxiv.org/abs/2205.13109v1
- Date: Thu, 26 May 2022 02:23:14 GMT
- Title: Learning to segment with limited annotations: Self-supervised
pretraining with regression and contrastive loss in MRI
- Authors: Lavanya Umapathy, Zhiyang Fu, Rohit Philip, Diego Martin, Maria
Altbach, Ali Bilgin
- Abstract summary: We consider two pre-training approaches for driving a deep learning model to learn different representations.
The effect of pretraining techniques is evaluated in two downstream segmentation applications using Magnetic Resonance (MR) images.
We observed that DL models pretrained using self-supervision can be finetuned for comparable performance with fewer labeled datasets.
- Score: 1.419070105368302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining manual annotations for large datasets for supervised training of
deep learning (DL) models is challenging. The availability of large unlabeled
datasets relative to labeled ones motivates the use of self-supervised
pretraining to initialize DL models for subsequent segmentation tasks. In this
work, we consider two pre-training approaches for driving a DL model to learn
different representations using: a) regression loss that exploits spatial
dependencies within an image and b) contrastive loss that exploits semantic
similarity between pairs of images. The effect of pretraining techniques is
evaluated in two downstream segmentation applications using Magnetic Resonance
(MR) images: a) liver segmentation in abdominal T2-weighted MR images and b)
prostate segmentation in T2-weighted MR images of the prostate. We observed
that DL models pretrained using self-supervision can be finetuned for
comparable performance with fewer labeled datasets. We also observed that
initializing the DL model with contrastive-loss-based pretraining
outperformed regression-loss-based pretraining.
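To make the two objectives concrete, here is a minimal PyTorch sketch (illustrative only; the corruption strategy, temperature, and function names are our assumptions, not the paper's exact formulation) of a voxel-wise regression loss and an NT-Xent-style contrastive loss:

```python
import torch
import torch.nn.functional as F

def regression_pretrain_loss(model, corrupted_img, original_img):
    """Regression pretext task: reconstruct the original image from a
    corrupted copy, forcing the model to learn spatial dependencies."""
    recon = model(corrupted_img)               # (B, 1, H, W)
    return F.mse_loss(recon, original_img)

def contrastive_pretrain_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss: embeddings of two views of the same image
    (z1[i], z2[i]) are pulled together; all other pairs in the batch
    are pushed apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)             # (2B, D)
    n = z.size(0)
    sim = z @ z.t() / temperature              # scaled cosine similarities
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))  # exclude self-similarity
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)       # positive of i is i +/- B
```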
Related papers
- Assessing the Performance of the DINOv2 Self-supervised Learning Vision Transformer Model for the Segmentation of the Left Atrium from MRI Images [1.2499537119440245]
DINOv2 is a self-supervised vision transformer trained on natural images; here it is evaluated for left atrium (LA) segmentation from MRI.
We demonstrate its ability to provide accurate and consistent segmentation, achieving a mean Dice score of 0.871 and a Jaccard index of 0.792 with end-to-end fine-tuning.
These results suggest that DINOv2 adapts effectively to MRI with limited data, highlighting its potential as a competitive segmentation tool and encouraging broader use in medical imaging.
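For reference, the Dice score and Jaccard index quoted above are standard overlap metrics; a minimal sketch (hypothetical helper, not from the paper):

```python
import torch

def dice_and_jaccard(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Overlap metrics between two binary segmentation masks."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().float()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (inter + eps) / ((pred | target).sum() + eps)
    return dice.item(), jaccard.item()
```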
arXiv Detail & Related papers (2024-11-14T17:15:51Z) - Train smarter, not harder: learning deep abdominal CT registration on
scarce data [0.8179387741893692]
We explore training strategies to improve convolutional neural network-based image-to-image registration for abdominal imaging.
Guiding registration using segmentations in the training step proved beneficial for deep-learning-based image registration.
Finetuning the pretrained model from the brain MRI dataset to the abdominal CT dataset further improved performance.
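One common way to guide registration with segmentations during training is to add a label-overlap term to the image-similarity loss; a sketch under that assumption (the `warp` function and the weighting are hypothetical, not the paper's implementation):

```python
import torch

def soft_dice(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-7):
    """Differentiable Dice on probabilistic masks in [0, 1]."""
    inter = (a * b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

def registration_loss(moving_img, fixed_img, moving_seg, fixed_seg,
                      flow, warp, lam=1.0):
    """Image similarity (MSE) plus a segmentation-overlap term.
    `warp` applies the displacement field `flow` to its input."""
    warped_img = warp(moving_img, flow)
    warped_seg = warp(moving_seg, flow)
    sim = torch.mean((warped_img - fixed_img) ** 2)
    seg_term = 1.0 - soft_dice(warped_seg, fixed_seg)
    return sim + lam * seg_term
```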
arXiv Detail & Related papers (2022-11-28T19:03:01Z) - Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised
Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital as well as on public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z) - Adaptive Contrast for Image Regression in Computer-Aided Disease
Assessment [22.717658723840255]
We propose the first contrastive learning framework for deep image regression, namely AdaCon.
AdaCon consists of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch.
We demonstrate the effectiveness of AdaCon on two medical image regression tasks.
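AdaCon's exact loss is defined in the paper; purely as an illustration of an adaptive margin for regression, the sketch below encourages embedding distances to track a margin that grows with the distance between continuous labels:

```python
import torch

def adaptive_margin_loss(z: torch.Tensor, y: torch.Tensor, scale: float = 1.0):
    """Illustrative adaptive-margin objective (not AdaCon's exact loss):
    the distance between two embeddings is encouraged to match a margin
    that grows with the distance between their regression labels."""
    d_feat = torch.cdist(z, z)                                 # (B, B)
    margin = scale * torch.cdist(y.view(-1, 1), y.view(-1, 1))
    off_diag = ~torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    return ((d_feat - margin)[off_diag] ** 2).mean()
```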
arXiv Detail & Related papers (2021-12-22T07:13:02Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
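The paper's specific surrogate is not reproduced here; as a generic illustration, a pairwise squared-hinge surrogate of the AUC penalizes positive/negative score pairs whose gap falls short of a margin:

```python
import torch

def pairwise_auc_surrogate(scores: torch.Tensor, labels: torch.Tensor, margin=1.0):
    """Illustrative AUC surrogate: minimizing it pushes positive scores
    above negative scores by at least `margin`, which is what AUC measures."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gaps = pos.view(-1, 1) - neg.view(1, -1)   # (P, N) pairwise score gaps
    return torch.clamp(margin - gaps, min=0.0).pow(2).mean()
```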
arXiv Detail & Related papers (2021-12-14T21:16:52Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform traditional two-step predictions based on state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z) - About Explicit Variance Minimization: Training Neural Networks for
Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces a variance error term into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
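The summary does not define the variance error; one plausible reading (our assumption, not the paper's definition) penalizes the variance of predictions across augmented views of the same input:

```python
import torch
import torch.nn.functional as F

def variance_aware_loss(model, views, target, lam=0.1):
    """Illustrative variance-aware objective: supervised loss on the mean
    prediction over several augmented views of one image, plus a penalty
    on the per-pixel variance across those views."""
    preds = torch.stack([model(v) for v in views])   # (V, B, C, H, W)
    task = F.cross_entropy(preds.mean(dim=0), target)
    var_term = preds.var(dim=0).mean()
    return task + lam * var_term
```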
arXiv Detail & Related papers (2021-05-28T21:34:04Z) - Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z) - DMT: Dynamic Mutual Training for Semi-Supervised Learning [69.17919491907296]
Self-training methods usually rely on single model prediction confidence to filter low-confidence pseudo labels.
We propose mutual training between two different models by a dynamically re-weighted loss function, called Dynamic Mutual Training.
Our experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation.
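A minimal sketch of the mutual-training idea (the re-weighting scheme here is an assumption; DMT's exact weighting may differ): one model's pseudo-labels supervise the other, with per-pixel weights that shrink under disagreement or low confidence.

```python
import torch
import torch.nn.functional as F

def dmt_loss(logits_student, logits_teacher):
    """Illustrative dynamic mutual training step: the teacher's predictions
    become pseudo-labels for the student, and each pixel's loss is
    down-weighted when the two models disagree or the teacher is uncertain."""
    with torch.no_grad():
        probs_t = F.softmax(logits_teacher, dim=1)
        conf_t, pseudo = probs_t.max(dim=1)          # teacher confidence & labels
        probs_s = F.softmax(logits_student, dim=1).detach()
        agree = probs_s.gather(1, pseudo.unsqueeze(1)).squeeze(1)
        weight = conf_t * agree                      # dynamic per-pixel weight
    ce = F.cross_entropy(logits_student, pseudo, reduction='none')
    return (weight * ce).mean()
```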
arXiv Detail & Related papers (2020-04-18T03:12:55Z)