Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image
Segmentation
- URL: http://arxiv.org/abs/2105.12924v1
- Date: Thu, 27 May 2021 03:27:58 GMT
- Title: Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image
Segmentation
- Authors: Jinxi Xiang, Zhuowei Li, Wenji Wang, Qing Xia and Shaoting Zhang
- Abstract summary: We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
- Score: 6.889911520730388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has demonstrated significant improvements in medical image
segmentation using a sufficiently large amount of training data with manual
labels. Acquiring representative labels requires expert knowledge and exhaustive
labor. In this paper, we aim to boost the performance of
semi-supervised learning for medical image segmentation with limited labels
using a self-ensembling contrastive learning technique. To this end, we propose
to train an encoder-decoder network at image-level with small amounts of
labeled images, and more importantly, we learn latent representations directly
at feature-level by imposing contrastive loss on unlabeled images. This method
strengthens intra-class compactness and inter-class separability, yielding a
better pixel classifier. Moreover, we devise a student encoder for online
learning and an exponential-moving-average version of it, called the teacher
encoder, to improve performance iteratively in a self-ensembling manner. To
construct contrastive samples from unlabeled images, we investigate two sampling
strategies: region-aware contrastive sampling, which exploits structural
similarity across medical images, and anatomical-aware contrastive sampling,
which builds samples from pseudo-labels. We conduct extensive experiments on an
MRI and a CT segmentation dataset and demonstrate that in a limited label
setting, the proposed method achieves state-of-the-art performance. Moreover,
the anatomical-aware strategy that prepares contrastive samples on-the-fly
using pseudo-labels realizes better contrastive regularization on feature
representations.
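The training scheme described in the abstract can be summarized in a short sketch. The following PyTorch-style code assumes a generic encoder-decoder network with an `encode` feature hook, a fixed EMA momentum, and an InfoNCE-style contrastive loss against a bank of negative features; these are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1.0 - momentum)

def contrastive_loss(query, key, negatives, temperature=0.07):
    """InfoNCE: pull each query toward its teacher key, away from negatives."""
    query, key, negatives = (F.normalize(x, dim=1) for x in (query, key, negatives))
    l_pos = (query * key).sum(dim=1, keepdim=True)   # (N, 1)
    l_neg = query @ negatives.t()                    # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    targets = torch.zeros(len(logits), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)

def train_step(student, teacher, labeled_batch, unlabeled_images, negatives, optimizer):
    images, masks = labeled_batch
    # Image-level supervision on the small labeled set.
    sup_loss = F.cross_entropy(student(images), masks)
    # Feature-level contrastive regularization on unlabeled images:
    # the student embeds one view, the frozen EMA teacher embeds the other.
    q = student.encode(unlabeled_images)             # hypothetical feature hook
    with torch.no_grad():
        k = teacher.encode(unlabeled_images)
    loss = sup_loss + contrastive_loss(q, k, negatives)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                     # self-ensembling update
    return loss.item()
```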
Related papers
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
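For context, the image-text contrastive objective that MLIP builds on can be sketched as a symmetric InfoNCE over matched image and report embeddings; this is a generic CLIP-style formulation, not MLIP's divergence encoder or knowledge-guided components.

```python
import torch
import torch.nn.functional as F

def image_text_contrastive(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: matched image/report pairs are positives,
    all other pairings within the batch serve as negatives."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```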
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies super-pixels with Felzenszwalb's algorithm and performs local contrastive learning using a novel contrastive sampling loss.
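A rough sketch of how superpixel regions could be extracted and turned into region-level embeddings for such local contrastive learning, using scikit-image's Felzenszwalb implementation; the parameter values and the average-pooling of features per region are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
import torch
from skimage.segmentation import felzenszwalb

def region_embeddings(image_2d, feature_map):
    """image_2d: (H, W) grayscale slice; feature_map: (D, H, W) features
    from a pre-training encoder, resized to the image resolution."""
    # Partition the image into superpixels (integer label per pixel).
    segments = felzenszwalb(image_2d, scale=100, sigma=0.8, min_size=50)
    regions = []
    for seg_id in np.unique(segments):
        mask = torch.from_numpy(segments == seg_id)       # (H, W) bool
        # Average-pool the features inside each superpixel -> one embedding.
        regions.append(feature_map[:, mask].mean(dim=1))  # (D,)
    return torch.stack(regions)  # (num_regions, D), ready for a contrastive loss
```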
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrastive learning method to boost scene segmentation.
We extensively evaluate our method on the public robotic surgery dataset EndoVis18 and the public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z)
- Semi-supervised Contrastive Learning for Label-efficient Medical Image Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to pull pixels with the same label together in the embedding space.
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
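A small sketch of what a supervised pixel-level contrastive loss of this kind can look like: sampled pixel embeddings with the same ground-truth label are treated as positives, those with different labels as negatives. The sampling and weighting details are illustrative assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def supervised_pixel_contrastive(embeddings, labels, temperature=0.1):
    """embeddings: (P, D) sampled pixel features; labels: (P,) class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                        # (P, P) similarities
    # Positives: other pixels with the same label (exclude self).
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)
    # Log-softmax over all other pixels, with self excluded from the denominator.
    logits_mask = 1.0 - torch.eye(len(z), device=z.device)
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    # Average over positives for each anchor that has at least one positive.
    pos_count = pos_mask.sum(dim=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean()
```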
arXiv Detail & Related papers (2021-09-15T16:23:48Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve segmentation performance compared to existing methods in both the semi-supervised and transfer learning settings.
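The core idea can be sketched as follows: each 2D slice is tagged with its normalized position in the volume, and slices whose positions are close, even across different volumes, are treated as positive pairs. The position threshold and pairing rule below are illustrative assumptions, not the exact PCL formulation.

```python
import torch

def positional_positive_mask(positions, threshold=0.1):
    """positions: (N,) normalized slice positions in [0, 1], possibly drawn
    from different volumes. Slices closer than `threshold` count as positives."""
    diff = (positions[:, None] - positions[None, :]).abs()   # (N, N)
    mask = (diff < threshold).float()
    mask.fill_diagonal_(0)   # a slice is not its own positive
    return mask              # usable as the positive mask in a contrastive loss
```

Such a mask can then drive the same style of contrastive objective sketched above, with position-based rather than label-based positives.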
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
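One common way to realize such uncertainty guidance, sketched here under the assumption of a mean-teacher setup with Monte Carlo dropout: the teacher's predictive entropy masks out unreliable pixels from the consistency loss. The number of dropout passes and the entropy threshold are illustrative, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(student_logits, teacher, images, passes=8, threshold=0.5):
    """Consistency between student and teacher predictions, restricted to
    pixels where the teacher is confident (low predictive entropy).
    Assumes dropout layers in the teacher remain active at inference."""
    with torch.no_grad():
        # Monte Carlo dropout: average several stochastic teacher predictions.
        probs = torch.stack([F.softmax(teacher(images), dim=1)
                             for _ in range(passes)]).mean(0)
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (N, H, W)
        mask = (entropy < threshold).float()
    student_probs = F.softmax(student_logits, dim=1)
    per_pixel = ((student_probs - probs) ** 2).mean(dim=1)        # (N, H, W)
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1)
```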
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework based on a teacher-student scheme for organ and lesion segmentation.
We show our model is robust to the quality of bounding boxes and achieves performance comparable to fully supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.