C3S3: Complementary Competition and Contrastive Selection for Semi-Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2506.07368v2
- Date: Wed, 25 Jun 2025 05:23:29 GMT
- Title: C3S3: Complementary Competition and Contrastive Selection for Semi-Supervised Medical Image Segmentation
- Authors: Jiaying He, Yitong Lin, Jiahe Chen, Honghui Xu, Jianwei Zheng
- Abstract summary: We introduce C3S3, a novel semi-supervised segmentation model that integrates complementary competition and contrastive selection. This design significantly sharpens boundary delineation and enhances overall precision. The proposed C3S3 undergoes rigorous validation on two publicly accessible datasets.
- Score: 2.3441453628908238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the inherent challenge of insufficiently annotated samples in the medical field, semi-supervised medical image segmentation (SSMIS) offers a promising solution. Despite achieving impressive results in delineating primary target areas, most current methodologies struggle to precisely capture subtle boundary details. This deficiency often leads to significant diagnostic inaccuracies. To tackle this issue, we introduce C3S3, a novel semi-supervised segmentation model that synergistically integrates complementary competition and contrastive selection. This design significantly sharpens boundary delineation and enhances overall precision. Specifically, we develop an Outcome-Driven Contrastive Learning module dedicated to refining boundary localization. Additionally, we incorporate a Dynamic Complementary Competition module that leverages two high-performing sub-networks to generate pseudo-labels, thereby further improving segmentation quality. The proposed C3S3 is rigorously validated on two publicly accessible datasets covering both MRI and CT scans. The results demonstrate that our method outperforms previous cutting-edge competitors. In particular, on the 95HD and ASD metrics, our approach achieves a notable improvement of at least 6%, highlighting a significant advancement. The code is available at https://github.com/Y-TARL/C3S3.
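The abstract does not specify how the Dynamic Complementary Competition module fuses its two sub-networks; as a rough, hypothetical sketch of the general idea (each voxel's pseudo-label comes from whichever sub-network is more confident there), assuming softmax probability maps:

```python
import numpy as np

def competitive_pseudo_labels(probs_a, probs_b):
    """Hypothetical sketch: fuse two sub-networks' softmax outputs by
    picking, per voxel, the prediction of the more confident network.

    probs_a, probs_b: arrays of shape (C, D, H, W) with class probabilities.
    Returns an integer pseudo-label map of shape (D, H, W).
    """
    conf_a = probs_a.max(axis=0)      # per-voxel confidence of network A
    conf_b = probs_b.max(axis=0)      # per-voxel confidence of network B
    labels_a = probs_a.argmax(axis=0)
    labels_b = probs_b.argmax(axis=0)
    # competition: the more confident network supplies each voxel's label
    return np.where(conf_a >= conf_b, labels_a, labels_b)
```

This is only an illustration of the competition principle, not the paper's exact mechanism, which may involve additional selection or weighting steps.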
Related papers
- ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation [11.248082139905865]
We propose a hybrid architecture that models MRI sequences as sequential data. Our method uses a pretrained DeepLabV3 backbone to extract high-level semantic features from each MRI slice and a recurrent convolutional head, built with ConvLSTM layers, to integrate information across slices. Compared to state-of-the-art 2D and 3D segmentation models, our approach demonstrates superior performance in terms of precision, recall, Intersection over Union (IoU), Dice Similarity Coefficient (DSC) and robustness.
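The overlap metrics listed above (IoU and DSC) have standard definitions for binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice Similarity Coefficient and Intersection over Union for
    binary masks of identical shape. `eps` guards against empty masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```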
arXiv Detail & Related papers (2025-06-24T14:56:55Z) - Semi-Supervised Medical Image Segmentation via Dual Networks [1.904929457002693]
We propose an innovative semi-supervised 3D medical image segmentation method to reduce the dependency on large, expert-labeled datasets. We introduce a dual-network architecture to address the limitations of existing methods in using contextual information. Experiments on clinical magnetic resonance imaging demonstrate that our approach outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2025-05-23T09:59:26Z) - MSA-UNet3+: Multi-Scale Attention UNet3+ with New Supervised Prototypical Contrastive Loss for Coronary DSA Image Segmentation [8.850534640462081]
We propose a Supervised Prototypical Contrastive Loss (SPCL) that fuses supervised and prototypical contrastive learning to enhance coronary DSA image segmentation. We implement the proposed SPCL loss within MSA-UNet3+, a Multi-Scale Attention-Enhanced UNet3+ architecture. Experiments on a private coronary DSA dataset show that MSA-UNet3+ outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-04-07T15:35:30Z) - Leveraging Labelled Data Knowledge: A Cooperative Rectification Learning Network for Semi-supervised 3D Medical Image Segmentation [27.94353306813293]
Semi-supervised 3D medical image segmentation aims to achieve accurate segmentation using few labelled data and numerous unlabelled data. The main challenge in the design of semi-supervised learning methods is the effective use of unlabelled data for training. We introduce a new methodology to produce high-quality pseudo-labels for a consistency learning strategy.
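The summary above centers on producing high-quality pseudo-labels for consistency learning; a common, hypothetical recipe (not necessarily this paper's) is to keep only predictions whose softmax confidence clears a threshold:

```python
import numpy as np

def confident_pseudo_labels(probs, threshold=0.9):
    """Keep pseudo-labels only where the softmax confidence exceeds
    `threshold`; uncertain voxels are marked -1 (to be ignored by the loss).

    probs: (C, ...) class-probability array.
    """
    labels = probs.argmax(axis=0)
    conf = probs.max(axis=0)
    labels[conf < threshold] = -1
    return labels
```

The threshold value is an assumption here; in practice it is often scheduled or estimated from the model's uncertainty rather than fixed.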
arXiv Detail & Related papers (2025-02-17T05:29:50Z) - SGTC: Semantic-Guided Triplet Co-training for Sparsely Annotated Semi-Supervised Medical Image Segmentation [12.168303995947795]
We propose a novel Semantic-Guided Triplet Co-training framework. It achieves high-quality medical image segmentation by annotating only three slices of a few volumetric samples. Our method outperforms most state-of-the-art semi-supervised counterparts under sparse annotation settings.
arXiv Detail & Related papers (2024-12-20T03:31:33Z) - Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z) - DHC: Dual-debiased Heterogeneous Co-training Framework for
Class-imbalanced Semi-supervised Medical Image Segmentation [19.033066343869862]
We present a novel Dual-debiased Heterogeneous Co-training (DHC) framework for semi-supervised 3D medical image segmentation.
Specifically, we propose two loss weighting strategies, namely Distribution-aware Debiased Weighting (DistDW) and Difficulty-aware Debiased Weighting (DiffDW)
Our proposed framework brings significant improvements by using pseudo labels for debiasing and alleviating the class imbalance problem.
arXiv Detail & Related papers (2023-07-22T02:16:05Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Enforcing Mutual Consistency of Hard Regions for Semi-supervised Medical
Image Segmentation [68.9233942579956]
We propose a novel mutual consistency network (MC-Net+) to exploit the unlabeled hard regions for semi-supervised medical image segmentation.
The MC-Net+ model is motivated by the observation that deep models trained with limited annotations are prone to output highly uncertain and easily mis-classified predictions.
We compare the segmentation results of the MC-Net+ with five state-of-the-art semi-supervised approaches on three public medical datasets.
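MC-Net+ trains multiple decoders to agree on hard regions; as a generic, hypothetical illustration of a mutual-consistency penalty (not the paper's exact loss, which uses the decoders' soft outputs in a more elaborate cycle), a symmetric mean-squared disagreement between two probability maps:

```python
import numpy as np

def mutual_consistency_loss(probs_a, probs_b):
    """Mean-squared disagreement between two probability maps of identical
    shape; zero exactly when the two predictions agree everywhere."""
    probs_a = np.asarray(probs_a, dtype=float)
    probs_b = np.asarray(probs_b, dtype=float)
    return float(np.mean((probs_a - probs_b) ** 2))
```

Minimizing such a term pushes the decoders toward consensus on the uncertain, easily mis-classified voxels the summary describes.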
arXiv Detail & Related papers (2021-09-21T04:47:42Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray
Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine
Framework and Its Adversarial Examples [74.92488215859991]
We propose a novel 3D-based coarse-to-fine framework to efficiently tackle these challenges.
The proposed 3D-based framework outperforms its 2D counterparts by a large margin, since it can leverage the rich spatial information along all three axes.
We conduct experiments on three datasets, the NIH pancreas dataset, the JHMI pancreas dataset and the JHMI pathological cyst dataset.
arXiv Detail & Related papers (2020-10-29T15:39:19Z) - Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid
Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requests fewer annotations than the supervised learning method.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.