Dual Cross-image Semantic Consistency with Self-aware Pseudo Labeling for Semi-supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2507.21440v1
- Date: Tue, 29 Jul 2025 02:26:56 GMT
- Title: Dual Cross-image Semantic Consistency with Self-aware Pseudo Labeling for Semi-supervised Medical Image Segmentation
- Authors: Han Wu, Chong Wang, Zhiming Cui
- Abstract summary: Semi-supervised learning has proven highly effective in tackling the challenge of limited labeled training data in medical image segmentation. We present a new Dual Cross-image Semantic Consistency (DuCiSC) learning framework for semi-supervised medical image segmentation.
- Score: 14.93815368545141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning has proven highly effective in tackling the challenge of limited labeled training data in medical image segmentation. In general, current approaches, which rely on intra-image pixel-wise consistency training via pseudo-labeling, overlook consistency at more comprehensive semantic levels (e.g., object region) and suffer from a severe discrepancy of extracted features resulting from an imbalanced number of labeled and unlabeled data. To overcome these limitations, we present a new Dual Cross-image Semantic Consistency (DuCiSC) learning framework for semi-supervised medical image segmentation. Concretely, beyond enforcing pixel-wise semantic consistency, DuCiSC proposes dual paradigms to encourage region-level semantic consistency across: 1) labeled and unlabeled images; and 2) labeled and fused images, by explicitly aligning their prototypes. Relying on the dual paradigms, DuCiSC can effectively establish consistent cross-image semantics via prototype representations, thereby addressing the feature discrepancy issue. Moreover, we devise a novel self-aware confidence estimation strategy to accurately select reliable pseudo labels, allowing for exploiting the training dynamics of unlabeled data. Our DuCiSC method is extensively validated on four datasets, including two popular binary benchmarks in segmenting the left atrium and pancreas, a multi-class Automatic Cardiac Diagnosis Challenge dataset, and a challenging scenario of segmenting the inferior alveolar nerve that features complicated anatomical structures, showing superior segmentation results over previous state-of-the-art approaches. Our code is publicly available at https://github.com/ShanghaiTech-IMPACT/DuCiSC.
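The abstract's region-level consistency idea rests on class prototypes: a prototype is typically the masked average of a feature map over the pixels predicted (or labeled) as one class, and prototypes from different images of the same class are then pulled together. The paper's exact losses and fusion scheme are in its released code; the sketch below is only an illustrative, minimal version of prototype extraction and a cosine-distance alignment term, with hypothetical function names.

```python
import numpy as np

def class_prototype(features, mask):
    """Masked average pooling: mean feature vector over one class's pixels.

    features: (C, H, W) feature map; mask: (H, W) binary mask for the class.
    Returns a zero vector if the class is absent from the mask.
    """
    denom = mask.sum()
    if denom == 0:
        return np.zeros(features.shape[0])
    return (features * mask[None]).sum(axis=(1, 2)) / denom

def prototype_alignment_loss(proto_a, proto_b, eps=1e-8):
    """Cosine distance between same-class prototypes from two images.

    Zero when the prototypes point in the same direction; this is one
    common choice, not necessarily the loss used in DuCiSC.
    """
    num = float(np.dot(proto_a, proto_b))
    den = float(np.linalg.norm(proto_a) * np.linalg.norm(proto_b)) + eps
    return 1.0 - num / den
```

In a cross-image setup, `proto_a` would come from a labeled image (using its ground-truth mask) and `proto_b` from an unlabeled or fused image (using its pseudo-label mask), so minimizing the loss aligns their region-level semantics.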
Related papers
- SemSim: Revisiting Weak-to-Strong Consistency from a Semantic Similarity Perspective for Semi-supervised Medical Image Segmentation [18.223854197580145]
Semi-supervised learning (SSL) for medical image segmentation is a challenging yet highly practical task.
We propose a novel framework based on FixMatch, named SemSim, powered by two appealing designs from semantic similarity perspective.
We show that SemSim yields consistent improvements over the state-of-the-art methods across three public segmentation benchmarks.
arXiv Detail & Related papers (2024-10-17T12:31:37Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - Semantic Connectivity-Driven Pseudo-labeling for Cross-domain Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z) - GCL: Gradient-Guided Contrastive Learning for Medical Image Segmentation with Multi-Perspective Meta Labels [22.515761041939914]
In medical imaging scenarios, ready-made meta labels inherently reveal semantic relationships among images.
In this paper, we propose a gradient-guided method that unifies multi-perspective meta labels to enable a pre-trained model to attain better high-level semantic recognition ability.
Experiments on four medical image segmentation datasets verify that our new method GCL: (1) learns informative image representations and considerably boosts segmentation performance with limited labels, and (2) shows promising generalizability on out-of-distribution datasets.
arXiv Detail & Related papers (2023-09-16T05:56:38Z) - DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition with Limited Annotations [79.433122872973]
Multi-label image recognition in the low-label regime is a task of great challenge and practical significance.
We leverage the powerful alignment between textual and visual features pretrained with millions of auxiliary image-text pairs.
We introduce an efficient and effective framework called Evidence-guided Dual Context Optimization (DualCoOp++)
arXiv Detail & Related papers (2023-08-03T17:33:20Z) - PatchCT: Aligning Patch Set and Label Set with Conditional Transport for Multi-Label Image Classification [48.929583521641526]
Multi-label image classification is a prediction task that aims to identify more than one label from a given image.
This paper introduces the conditional transport theory to bridge the acknowledged gap.
We find that by formulating the multi-label classification as a CT problem, we can exploit the interactions between the image and label efficiently.
arXiv Detail & Related papers (2023-07-18T08:37:37Z) - RCPS: Rectified Contrastive Pseudo Supervision for Semi-Supervised Medical Image Segmentation [26.933651788004475]
We propose a novel semi-supervised segmentation method named Rectified Contrastive Pseudo Supervision (RCPS)
RCPS combines a rectified pseudo supervision and voxel-level contrastive learning to improve the effectiveness of semi-supervised segmentation.
Experimental results reveal that the proposed method yields better segmentation performance compared with the state-of-the-art methods in semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-01-13T12:03:58Z) - Co-training with High-Confidence Pseudo Labels for Semi-supervised Medical Image Segmentation [27.833321555267116]
We propose an Uncertainty-guided Collaborative Mean-Teacher (UCMT) for semi-supervised semantic segmentation with the high-confidence pseudo labels.
UCMT consists of two main components: 1) collaborative mean-teacher (CMT) for encouraging model disagreement and performing co-training between the sub-networks, and 2) uncertainty-guided region mix (UMIX) for manipulating the input images according to the uncertainty maps of CMT and facilitating CMT to produce high-confidence pseudo labels.
arXiv Detail & Related papers (2023-01-11T13:48:37Z) - Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z) - Exploring Feature Representation Learning for Semi-supervised Medical Image Segmentation [30.608293915653558]
We present a two-stage framework for semi-supervised medical image segmentation.
The key insight is to explore feature representation learning with labeled and unlabeled (i.e., pseudo-labeled) images.
A stage-adaptive contrastive learning method is proposed, containing a boundary-aware contrastive loss.
We present an aleatoric uncertainty-aware method, namely AUA, to generate higher-quality pseudo labels.
arXiv Detail & Related papers (2021-11-22T05:06:12Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.