Localized Region Contrast for Enhancing Self-Supervised Learning in
Medical Image Segmentation
- URL: http://arxiv.org/abs/2304.03406v1
- Date: Thu, 6 Apr 2023 22:43:13 GMT
- Title: Localized Region Contrast for Enhancing Self-Supervised Learning in
Medical Image Segmentation
- Authors: Xiangyi Yan, Junayed Naushad, Chenyu You, Hao Tang, Shanlin Sun, Kun
Han, Haoyu Ma, James Duncan, Xiaohui Xie
- Abstract summary: We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying super-pixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
- Score: 27.82940072548603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in self-supervised learning have demonstrated that
effective visual representations can be learned from unlabeled images. This has
led to increased interest in applying self-supervised learning to the medical
domain, where unlabeled images are abundant and labeled images are difficult to
obtain. However, most self-supervised learning approaches are modeled as
image-level discriminative or generative proxy tasks, which may not capture the
finer-level representations necessary for dense prediction tasks like multi-organ
segmentation. In this paper, we propose a novel contrastive learning framework
that integrates Localized Region Contrast (LRC) to enhance existing
self-supervised pre-training methods for medical image segmentation. Our
approach involves identifying super-pixels with Felzenszwalb's algorithm and
performing local contrastive learning using a novel contrastive sampling loss.
Through extensive experiments on three multi-organ segmentation datasets, we
demonstrate that integrating LRC into an existing self-supervised method in a
limited annotation setting significantly improves segmentation performance.
Moreover, we show that LRC can also be applied to fully-supervised pre-training
methods to further boost performance.
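For intuition, the core mechanism can be sketched in a few lines: pool dense encoder features over Felzenszwalb super-pixels and contrast corresponding region embeddings from two augmented views. This is a minimal sketch under assumed details (average pooling, one-to-one positive matching, temperature), not the paper's exact contrastive sampling loss or released code.

```python
# Sketch: region-level contrastive learning over Felzenszwalb super-pixels.
# The pooling scheme, positive assignment, and temperature are illustrative
# assumptions, not the paper's exact contrastive sampling loss.
import torch
import torch.nn.functional as F
from skimage.segmentation import felzenszwalb  # pip install scikit-image

def region_embeddings(features, image, scale=100, sigma=0.5, min_size=50):
    """Average-pool dense features over each super-pixel region.

    features: (C, H, W) tensor from an encoder, upsampled to image size.
    image:    (H, W) numpy array used only to compute super-pixels.
    """
    segments = torch.as_tensor(
        felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size),
        device=features.device,
    )
    pooled = [features[:, segments == lbl].mean(dim=1) for lbl in segments.unique()]
    return torch.stack(pooled)  # (num_regions, C)

def local_contrastive_loss(regions_a, regions_b, temperature=0.1):
    """InfoNCE over region embeddings from two augmented views.

    regions_a, regions_b: (N, C) embeddings of the same N super-pixels;
    region i in view A is positive with region i in view B, and every
    other region in view B serves as a negative.
    """
    a = F.normalize(regions_a, dim=1)
    b = F.normalize(regions_b, dim=1)
    logits = a @ b.t() / temperature                  # (N, N) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```

In a pre-training pipeline, such a region-level term would be combined with the existing image-level self-supervised objective, which is the integration the abstract describes.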
Related papers
- Weakly Supervised Intracranial Hemorrhage Segmentation using Head-Wise
Gradient-Infused Self-Attention Maps from a Swin Transformer in Categorical
Learning [0.6269243524465492]
Intracranial hemorrhage (ICH) is a life-threatening medical emergency that requires timely diagnosis and accurate treatment.
Deep learning techniques have emerged as the leading approach for medical image analysis and processing.
We introduce a novel weakly supervised method for ICH segmentation, utilizing a Swin transformer trained on an ICH classification task with categorical labels.
arXiv Detail & Related papers (2023-04-11T00:17:34Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Cross-level Contrastive Learning and Consistency Constraint for
Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Positional Contrastive Learning for Volumetric Medical Image
Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both the semi-supervised and transfer learning settings.
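As a rough illustration of that idea (the distance threshold, temperature, and loss form below are assumptions rather than the paper's specification), positives can be drawn from 2D slices whose normalized depth positions within their volumes are close:

```python
# Sketch: position-based positive pairs for volumetric contrastive learning.
# Slices whose normalized depths differ by less than `threshold` are positives
# in an InfoNCE-style loss; all constants here are illustrative assumptions.
import torch
import torch.nn.functional as F

def positional_contrastive_loss(embeddings, positions, threshold=0.1, temperature=0.1):
    """embeddings: (N, C) slice embeddings; positions: (N,) normalized
    slice depths in [0, 1]. Slices closer than `threshold` along the
    volume axis are treated as positive pairs."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (positions[:, None] - positions[None, :]).abs() < threshold
    pos = pos & ~eye                                  # exclude self-pairs
    log_prob = F.log_softmax(sim.masked_fill(eye, float("-inf")), dim=1)
    if not pos.any():                                 # no nearby slices in the batch
        return sim.new_zeros(())
    return -log_prob[pos].mean()
```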
arXiv Detail & Related papers (2021-06-16T22:15:28Z) - Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image
Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z) - Uncertainty guided semi-supervised segmentation of retinal layers in OCT
images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z) - Revisiting Contrastive Learning for Few-Shot Classification [74.78397993160583]
Instance-discrimination-based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations.
We show how one can incorporate supervision into the instance-discrimination-based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks.
We propose a novel model selection algorithm that can be used in conjunction with a universal embedding trained using CIDS to outperform state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
arXiv Detail & Related papers (2021-01-26T19:58:08Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)