SuperCL: Superpixel Guided Contrastive Learning for Medical Image Segmentation Pre-training
- URL: http://arxiv.org/abs/2504.14737v1
- Date: Sun, 20 Apr 2025 20:57:03 GMT
- Title: SuperCL: Superpixel Guided Contrastive Learning for Medical Image Segmentation Pre-training
- Authors: Shuang Zeng, Lei Zhu, Xinliang Zhang, Hangzhou He, Yanye Lu,
- Abstract summary: We propose a novel contrastive learning approach named SuperCL for medical image segmentation pre-training. Our SuperCL exploits the structural prior and pixel correlation of images by introducing two novel contrastive pair generation strategies. Experiments on 8 medical image datasets indicate our SuperCL outperforms 12 existing methods.
- Score: 17.920724846400585
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Medical image segmentation is a critical yet challenging task, primarily due to the difficulty of obtaining extensive datasets of high-quality, expert-annotated images. Contrastive learning offers a potential but still imperfect solution: most existing methods extract instance-level or pixel-to-pixel representations and thus ignore the characteristics of similar pixel groups within an image. Moreover, for contrastive pair generation, most SOTA methods rely on manually set thresholds, which requires extensive tuning experiments and lacks efficiency and generalization. To address these issues, we propose a novel contrastive learning approach named SuperCL for medical image segmentation pre-training. Specifically, our SuperCL exploits the structural prior and pixel correlation of images by introducing two novel contrastive pair generation strategies: Intra-image Local Contrastive Pairs (ILCP) Generation and Inter-image Global Contrastive Pairs (IGCP) Generation. Since superpixel clustering aligns well with the concept of contrastive pair generation, we utilize the superpixel map to generate pseudo masks for both ILCP and IGCP to guide supervised contrastive learning. We also propose two modules, Average SuperPixel Feature Map Generation (ASP) and Connected Components Label Generation (CCL), to better exploit prior structural information for IGCP. Finally, experiments on 8 medical image datasets indicate our SuperCL outperforms 12 existing methods: it yields more precise predictions in the visualizations and achieves DSC scores 3.15%, 5.44%, and 7.89% higher than the previous best results on MMWHS, CHAOS, and Spleen with 10% annotations. Our code will be released after acceptance.
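The abstract does not give the exact form of the ILCP/IGCP losses, but its core idea, treating pixels that fall in the same superpixel as positives for a supervised contrastive loss, can be sketched in a few lines. The sketch below is a minimal numpy illustration under stated assumptions: `superpixel_ids` stands in for a real superpixel map (e.g. produced by SLIC), embeddings are pre-normalized, and the function name and shapes are illustrative, not the authors' code.

```python
import numpy as np

def superpixel_contrastive_loss(embeddings, superpixel_ids, temperature=0.1):
    """Supervised contrastive loss using superpixel ids as pseudo labels.

    embeddings:     (N, D) L2-normalized pixel features
    superpixel_ids: (N,)   pseudo mask; pixels in the same superpixel act as
                           positives, all other pixels act as negatives
    """
    sim = embeddings @ embeddings.T / temperature        # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                       # drop self-pairs
    # log-softmax over each anchor's row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = superpixel_ids[:, None] == superpixel_ids[None, :]
    np.fill_diagonal(pos, False)
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                                    # anchors with >=1 positive
    # mean negative log-probability of positives, averaged over valid anchors
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return per_anchor.mean()

# Toy check: two superpixels whose pixels share consistent features.
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
ids = np.array([0, 0, 1, 1])
loss_aligned = superpixel_contrastive_loss(feats, ids)
loss_shuffled = superpixel_contrastive_loss(feats, np.array([0, 1, 0, 1]))
```

When the pseudo mask agrees with the feature structure, the loss is near zero; shuffling the ids makes the same features pay a large penalty, which is what drives the pre-training signal.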
Related papers
- Semi-supervised Semantic Segmentation for Remote Sensing Images via Multi-scale Uncertainty Consistency and Cross-Teacher-Student Attention [59.19580789952102]
This paper proposes a novel semi-supervised Multi-Scale Uncertainty and Cross-Teacher-Student Attention (MUCA) model for RS image semantic segmentation tasks. MUCA constrains the consistency among feature maps at different layers of the network by introducing a multi-scale uncertainty consistency regularization. MUCA also utilizes a Cross-Teacher-Student attention mechanism to guide the student network to construct more discriminative feature representations.
arXiv Detail & Related papers (2025-01-18T11:57:20Z) - Low-Contrast-Enhanced Contrastive Learning for Semi-Supervised Endoscopic Image Segmentation [5.70832160492074]
We propose a novel semi-supervised segmentation framework termed LoCo, built on low-contrast-enhanced contrastive learning (LCC).
LCC incorporates two advanced strategies to enhance the distinctiveness of low-contrast pixels, enabling models to segment low-contrast pixels among malignant tumors, benign tumors, and normal tissues.
LoCo achieves state-of-the-art results, significantly outperforming previous methods.
arXiv Detail & Related papers (2024-12-03T09:31:16Z) - Exploring Multi-view Pixel Contrast for General and Robust Image Forgery Localization [4.8454936010479335]
We propose a Multi-view Pixel-wise Contrastive algorithm (MPC) for image forgery localization.
Specifically, we first pre-train the backbone network with the supervised contrastive loss.
Then the localization head is fine-tuned using the cross-entropy loss, resulting in a better pixel localizer.
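The two-stage recipe above (contrastive pre-training of the backbone, then cross-entropy fine-tuning of the localization head) is generic enough to sketch. In the minimal numpy illustration below, random frozen features stand in for a contrastively pre-trained backbone, and a linear head is fitted with pixel-wise cross-entropy; all names, shapes, and the toy labels are assumptions for illustration, not MPC's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pixel features from a contrastively pre-trained
# backbone (stage 1); real features would come from the network.
features = rng.normal(size=(64, 8))            # (num_pixels, feat_dim)
labels = (features[:, 0] > 0).astype(int)      # toy forged / pristine mask

# Stage 2: fine-tune a linear localization head with cross-entropy.
W = np.zeros((8, 2))
lr, n = 0.1, len(labels)
for _ in range(200):
    logits = features @ W
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(n), labels] -= 1.0              # dCE/dlogits
    W -= lr * features.T @ grad / n                # gradient step

# Final cross-entropy of the fitted head (starts at ln 2, about 0.693).
logits = features @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
ce = -np.log(probs[np.arange(n), labels]).mean()
```

Because the backbone is frozen, only the small head is trained, which is why a contrastively pre-trained representation can turn into a pixel localizer with little labeled data.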
arXiv Detail & Related papers (2024-06-19T13:51:52Z) - Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral images (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv Detail & Related papers (2024-03-04T07:40:55Z) - Multi-level Asymmetric Contrastive Learning for Volumetric Medical Image Segmentation Pre-training [17.9004421784014]
We propose a novel contrastive learning framework named MACL for medical image segmentation pre-training. Specifically, we design an asymmetric contrastive learning structure to pre-train the encoder and decoder simultaneously. Experiments on 8 medical image datasets indicate our MACL framework outperforms 11 existing contrastive learning strategies.
arXiv Detail & Related papers (2023-09-21T08:22:44Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Information-guided pixel augmentation for pixel-wise contrastive learning [22.00687816406677]
Pixel-wise contrastive learning helps with pixel-wise tasks such as medical landmark detection.
We propose a pixel augmentation method with a pixel granularity for enhancing unsupervised pixel-wise contrastive learning.
arXiv Detail & Related papers (2022-11-14T05:12:23Z) - Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrastive learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z) - In-N-Out Generative Learning for Dense Unsupervised Video Segmentation [89.21483504654282]
In this paper, we focus on the unsupervised Video Object Segmentation (VOS) task, which learns visual correspondence from unlabeled videos.
We propose the In-aNd-Out (INO) generative learning from a purely generative perspective, which captures both high-level and fine-grained semantics.
Our INO outperforms previous state-of-the-art methods by significant margins.
arXiv Detail & Related papers (2022-03-29T07:56:21Z) - Contrastive Semi-Supervised Learning for 2D Medical Image Segmentation [16.517086214275654]
We present a novel semi-supervised 2D medical segmentation solution that applies Contrastive Learning (CL) on image patches, instead of full images.
These patches are meaningfully constructed using the semantic information of different classes obtained via pseudo labeling.
We also propose a novel consistency regularization scheme, which works in synergy with contrastive learning.
arXiv Detail & Related papers (2021-06-12T15:43:24Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.