GCL: Gradient-Guided Contrastive Learning for Medical Image Segmentation
with Multi-Perspective Meta Labels
- URL: http://arxiv.org/abs/2309.08888v1
- Date: Sat, 16 Sep 2023 05:56:38 GMT
- Title: GCL: Gradient-Guided Contrastive Learning for Medical Image Segmentation
with Multi-Perspective Meta Labels
- Authors: Yixuan Wu, Jintai Chen, Jiahuan Yan, Yiheng Zhu, Danny Z. Chen, Jian
Wu
- Abstract summary: In medical imaging scenarios, ready-made meta labels inherently reveal semantic relationships among images.
In this paper, we propose a gradient-guided method that unifies multi-perspective meta labels, enabling a pre-trained model to attain better high-level semantic recognition ability.
Experiments on four medical image segmentation datasets verify that our new method GCL: (1) learns informative image representations and considerably boosts segmentation performance with limited labels, and (2) shows promising generalizability on out-of-distribution datasets.
- Score: 22.515761041939914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since annotating medical images for segmentation tasks commonly incurs
high costs, it is highly desirable to design an annotation-efficient
method to alleviate the annotation burden. Recently, contrastive learning has
exhibited great potential in learning robust representations to boost
downstream tasks with limited labels. In medical imaging scenarios, ready-made
meta labels (i.e., specific attribute information of medical images) inherently
reveal semantic relationships among images, which have been used to define
positive pairs in previous work. However, the multi-perspective semantics
revealed by various meta labels are usually incompatible and can incur
intractable "semantic contradiction" when combining different meta labels. In
this paper, we tackle the issue of "semantic contradiction" in a
gradient-guided manner using our proposed Gradient Mitigator method, which
systematically unifies multi-perspective meta labels to enable a pre-trained
model to attain a better high-level semantic recognition ability. Moreover, we
emphasize that the fine-grained discrimination ability is vital for
segmentation-oriented pre-training, and develop a novel method called Gradient
Filter to dynamically screen pixel pairs with the most discriminating power
based on the magnitude of gradients. Comprehensive experiments on four medical
image segmentation datasets verify that our new method GCL: (1) learns
informative image representations and considerably boosts segmentation
performance with limited labels, and (2) shows promising generalizability on
out-of-distribution datasets.
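The abstract describes two gradient-based components: a Gradient Mitigator that reconciles the conflicting gradients produced by different meta-label perspectives, and a Gradient Filter that keeps only the pixel pairs whose contrastive gradients are largest in magnitude. The paper's exact formulation is not reproduced here, so the PyTorch-style sketch below is only a minimal illustration of these ideas; every name and detail in it (meta_label_supcon_loss, mitigate_gradient_conflict, gradient_filter_pairs, the keep_ratio parameter, the error-based gradient proxy, the projection-style conflict handling) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: names and formulations are assumptions,
# not the released implementation of GCL.
import torch
import torch.nn.functional as F


def meta_label_supcon_loss(embeddings, meta_labels, temperature=0.1):
    """Supervised contrastive loss where positives share the same meta label
    (e.g., the same scan position or acquisition attribute)."""
    z = F.normalize(embeddings, dim=1)                     # (N, D) unit vectors
    sim = z @ z.t() / temperature                          # (N, N) similarities
    pos = meta_labels.unsqueeze(0) == meta_labels.unsqueeze(1)
    pos = pos & ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim - 1e9 * torch.eye(len(z), device=z.device)   # drop self-pairs from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()


def mitigate_gradient_conflict(per_perspective_grads):
    """Toy gradient "mitigation": when two perspectives' gradients conflict
    (negative dot product), remove the conflicting component before averaging.
    This mirrors generic gradient-surgery ideas, NOT the paper's exact rule."""
    merged = []
    for i, g in enumerate(per_perspective_grads):
        g = g.clone()
        for j, h in enumerate(per_perspective_grads):
            dot = torch.dot(g.flatten(), h.flatten())
            if i != j and dot < 0:
                g = g - (dot / h.norm().pow(2)) * h        # project out the conflicting part
        merged.append(g)
    return torch.stack(merged).mean(dim=0)


def gradient_filter_pairs(pair_sim, pair_is_positive, keep_ratio=0.25):
    """Keep the pixel pairs with the largest (approximate) gradient magnitude.

    For a BCE/InfoNCE-style pairwise loss, the gradient magnitude grows with the
    discrepancy |sigmoid(sim) - target|, used here as a cheap proxy."""
    proxy = (torch.sigmoid(pair_sim) - pair_is_positive.float()).abs()
    k = max(1, int(keep_ratio * pair_sim.numel()))
    return torch.topk(proxy, k).indices                    # most discriminating pairs
```

In a pre-training loop, one such contrastive loss would be computed per meta-label perspective, the resulting per-perspective gradients combined with the mitigation step, and the pair filter applied to the dense pixel-level objective before back-propagation.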
Related papers
- Dual Cross-image Semantic Consistency with Self-aware Pseudo Labeling for Semi-supervised Medical Image Segmentation [14.93815368545141]
Semi-supervised learning has proven highly effective in tackling the challenge of limited labeled training data in medical image segmentation.
We present a new Dual Cross-image Semantic Consistency (DuCiSC) learning framework for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2025-07-29T02:26:56Z) - ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification [16.415582577355536]
Multi-label image classification presents a challenging task in many domains, including computer vision and medical imaging.
Recent advancements have introduced graph-based and transformer-based methods to improve performance and capture label dependencies.
We propose Probabilistic Multi-label Contrastive Learning (ProbMCL), a novel framework to address these challenges.
arXiv Detail & Related papers (2024-01-02T22:15:20Z) - DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition
with Limited Annotations [79.433122872973]
Multi-label image recognition in the low-label regime is a task of great challenge and practical significance.
We leverage the powerful alignment between textual and visual features pretrained with millions of auxiliary image-text pairs.
We introduce an efficient and effective framework called Evidence-guided Dual Context Optimization (DualCoOp++).
arXiv Detail & Related papers (2023-08-03T17:33:20Z) - Semantic Contrastive Bootstrapping for Single-positive Multi-label
Recognition [36.3636416735057]
We present a semantic contrastive bootstrapping (Scob) approach to gradually recover the cross-object relationships.
We then propose a recurrent semantic masked transformer to extract iconic object-level representations.
Extensive experimental results demonstrate that the proposed joint learning framework surpasses the state-of-the-art models.
arXiv Detail & Related papers (2023-07-15T01:59:53Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Cross-level Contrastive Learning and Consistency Constraint for
Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z) - Semi-supervised Contrastive Learning for Label-efficient Medical Image
Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotations to pull pixels with the same label together in the embedding space (a generic sketch of this type of loss appears after this list).
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z) - Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z) - Positional Contrastive Learning for Volumetric Medical Image
Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z) - Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image
Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help the learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)
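Several of the entries above (ARCO, the cross-level contrastive scheme, and especially the supervised local contrastive loss) rely on pixel-level supervised contrastive objectives, where pixels sharing a segmentation label are pulled together in the embedding space. The following PyTorch-style sketch is only a generic illustration of that idea; the function name local_supcon_loss, the sub-sampling step, and all hyper-parameters are assumptions, not any of these papers' released code.

```python
# Generic illustration of a supervised local (pixel-level) contrastive loss;
# details are assumptions, not a specific paper's implementation.
import torch
import torch.nn.functional as F


def local_supcon_loss(features, labels, num_samples=256, temperature=0.07):
    """Pull together embeddings of pixels that share a segmentation label.

    features: (B, D, H, W) dense feature map from the decoder.
    labels:   (B, H, W) integer segmentation labels for the labeled subset."""
    D = features.size(1)
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)    # (B*H*W, D)
    labs = labels.reshape(-1)                               # (B*H*W,)

    # Sub-sample pixels so the pairwise similarity matrix stays small.
    idx = torch.randperm(feats.size(0), device=feats.device)[:num_samples]
    z = F.normalize(feats[idx], dim=1)
    y = labs[idx]

    sim = z @ z.t() / temperature
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~torch.eye(len(y), dtype=torch.bool, device=y.device)
    sim = sim - 1e9 * torch.eye(len(y), device=y.device)    # exclude self-pairs from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```

Sub-sampling the dense feature map keeps the pairwise similarity matrix tractable; the cited papers each add their own pair-selection, stratification, or consistency mechanisms on top of a loss of this shape.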
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all content) and is not responsible for any consequences.