Leveraging Unlabeled Data for 3D Medical Image Segmentation through
Self-Supervised Contrastive Learning
- URL: http://arxiv.org/abs/2311.12617v1
- Date: Tue, 21 Nov 2023 14:03:16 GMT
- Title: Leveraging Unlabeled Data for 3D Medical Image Segmentation through
Self-Supervised Contrastive Learning
- Authors: Sanaz Karimijafarbigloo, Reza Azad, Yury Velichko, Ulas Bagci, Dorit
Merhof
- Abstract summary: Current 3D semi-supervised segmentation methods face significant challenges such as limited consideration of contextual information.
We introduce two distinct subnetworks designed to explore and exploit the discrepancies between them, ultimately correcting erroneous prediction results.
We employ a self-supervised contrastive learning paradigm to distinguish between reliable and unreliable predictions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current 3D semi-supervised segmentation methods face significant challenges
such as limited consideration of contextual information and the inability to
generate reliable pseudo-labels for effective unsupervised data use. To address
these challenges, we introduce two distinct subnetworks designed to explore and
exploit the discrepancies between them, ultimately correcting the erroneous
prediction results. More specifically, we identify regions of inconsistent
predictions and initiate a targeted verification training process. This
procedure strategically fine-tunes and harmonizes the predictions of the
subnetworks, leading to enhanced utilization of contextual information.
Furthermore, to adaptively fine-tune the network's representational capacity
and reduce prediction uncertainty, we employ a self-supervised contrastive
learning paradigm. For this, we use the network's confidence to distinguish
between reliable and unreliable predictions. The model is then trained to
effectively minimize unreliable predictions. Our experimental results for organ
segmentation, obtained from clinical MRI and CT scans, demonstrate the
effectiveness of our approach when compared to state-of-the-art methods. The
codebase is accessible on
\href{https://github.com/xmindflow/SSL-contrastive}{GitHub}.
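The confidence-based split between reliable and unreliable predictions described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their GitHub repository for that); the maximum-softmax-probability confidence measure and the threshold `tau` are assumptions made for the example:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def split_by_confidence(logits, tau=0.8):
    """Split per-voxel predictions into reliable / unreliable sets by
    thresholding the maximum softmax probability (assumed confidence
    measure; `tau` is a hypothetical threshold)."""
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    pseudo_labels = probs.argmax(axis=-1)
    reliable = confidence >= tau  # True -> treated as a reliable pseudo-label
    return pseudo_labels, reliable

# toy example: 4 "voxels", 3 classes
logits = np.array([[4.0, 0.1, 0.2],   # peaked distribution -> reliable
                   [0.5, 0.6, 0.4],   # near-uniform -> unreliable
                   [0.1, 3.5, 0.3],   # peaked -> reliable
                   [1.0, 1.1, 0.9]])  # near-uniform -> unreliable
labels, reliable = split_by_confidence(logits, tau=0.8)
```

In a training loop, reliable voxels would supply pseudo-labels for the supervised-style loss, while unreliable ones would be targeted by the contrastive objective that the paper uses to reduce prediction uncertainty.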
Related papers
- Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning [37.21211404608413]
We propose a shift to deep metric-based meta-learning in EMG PR to supervise the creation of meaningful and interpretable representations.
We derive a robust class proximity-based confidence estimator that leads to a better rejection of incorrect decisions.
arXiv Detail & Related papers (2024-04-17T23:37:50Z) - Estimation and Analysis of Slice Propagation Uncertainty in 3D Anatomy Segmentation [5.791830972526084]
Supervised methods for 3D anatomy segmentation demonstrate superior performance but are often limited by the availability of annotated data.
This limitation has led to a growing interest in self-supervised approaches in tandem with the abundance of available un-annotated data.
Slice propagation has emerged as a self-supervised approach that leverages slice registration as a pretext task to achieve full anatomy segmentation with minimal supervision.
arXiv Detail & Related papers (2024-03-18T22:26:19Z) - Uncertainty Quantification in Deep Neural Networks through Statistical
Inference on Latent Space [0.0]
We develop an algorithm that exploits the latent-space representation of data points fed into the network to assess the accuracy of their prediction.
We show on a synthetic dataset that commonly used methods are mostly overconfident.
In contrast, our method can flag out-of-distribution data points as inaccurately predicted, thus aiding in the automatic detection of outliers.
arXiv Detail & Related papers (2023-05-18T09:52:06Z) - Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised
Medical Image Segmentation [9.745971699005857]
We propose a novel uncertainty-guided mutual consistency learning framework for medical image segmentation.
It integrates intra-task consistency learning from up-to-date predictions for self-ensembling and cross-task consistency learning from task-level regularization to exploit geometric shape information.
Our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods.
arXiv Detail & Related papers (2021-12-05T08:19:41Z) - Uncertainty-Aware Deep Co-training for Semi-supervised Medical Image
Segmentation [4.935055133266873]
We propose a novel uncertainty-aware scheme to make models learn regions purposefully.
Specifically, we employ Monte Carlo Sampling as an estimation method to attain an uncertainty map.
In the backward process, we jointly optimize unsupervised and supervised losses to accelerate the convergence of the network.
arXiv Detail & Related papers (2021-11-23T03:26:24Z) - Guided Point Contrastive Learning for Semi-supervised Point Cloud
Semantic Segmentation [90.2445084743881]
We present a method for semi-supervised point cloud semantic segmentation to adopt unlabeled point clouds in training to boost the model performance.
Inspired by the recent contrastive loss in self-supervised tasks, we propose the guided point contrastive loss to enhance the feature representation and model generalization ability.
arXiv Detail & Related papers (2021-10-15T16:38:54Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In
Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.