Self-aware and Cross-sample Prototypical Learning for Semi-supervised
Medical Image Segmentation
- URL: http://arxiv.org/abs/2305.16214v1
- Date: Thu, 25 May 2023 16:22:04 GMT
- Title: Self-aware and Cross-sample Prototypical Learning for Semi-supervised
Medical Image Segmentation
- Authors: Zhenxi Zhang, Ran Ran, Chunna Tian, Heng Zhou, Xin Li, Fan Yang,
Zhicheng Jiao
- Abstract summary: Consistency learning plays a crucial role in semi-supervised medical image segmentation.
It enables the effective utilization of limited annotated data while leveraging the abundance of unannotated data.
We propose a self-aware and cross-sample prototypical learning method (SCP-Net) to enhance prediction diversity in consistency learning.
- Score: 10.18427897663732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency learning plays a crucial role in semi-supervised medical image
segmentation as it enables the effective utilization of limited annotated data
while leveraging the abundance of unannotated data. The effectiveness and
efficiency of consistency learning are challenged by prediction diversity and
training stability, which are often overlooked by existing studies. Meanwhile,
the limited quantity of labeled data for training often proves inadequate for
formulating intra-class compactness and inter-class discrepancy of pseudo
labels. To address these issues, we propose a self-aware and cross-sample
prototypical learning method (SCP-Net) to enhance the diversity of prediction
in consistency learning by utilizing a broader range of semantic information
derived from multiple inputs. Furthermore, we introduce a self-aware
consistency learning method that exploits unlabeled data to improve the
compactness of pseudo labels within each class. Moreover, a dual loss
re-weighting method is integrated into the cross-sample prototypical
consistency learning method to improve the reliability and stability of our
model. Extensive experiments on ACDC dataset and PROMISE12 dataset validate
that SCP-Net outperforms other state-of-the-art semi-supervised segmentation
methods and achieves significant performance gains compared to limited
supervised training. Our code will be released soon.
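The abstract describes prototypical consistency learning: class prototypes are pooled from feature maps using the network's probability maps, and prototype-based predictions are compared back against the network's own predictions. The sketch below illustrates that general mechanism in numpy; it is not the authors' SCP-Net implementation, and all function names and the MSE-based consistency term are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_predictions(features, probs, eps=1e-6):
    """features: (C, N) pixel features; probs: (K, N) class probabilities.

    Returns (K, N) cosine similarities of each pixel feature to each
    class prototype, where a prototype is the probability-weighted mean
    of the pixel features assigned to that class.
    """
    # Class prototypes via weighted average pooling, shape (K, C)
    protos = probs @ features.T / (probs.sum(axis=1, keepdims=True) + eps)
    # Normalize both sides so the dot product is a cosine similarity
    f_n = features / (np.linalg.norm(features, axis=0, keepdims=True) + eps)
    p_n = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + eps)
    return p_n @ f_n  # (K, N)

def prototypical_consistency(probs, sims, temperature=0.1):
    """Consistency term between the network's probabilities and the
    prototype-based probabilities (softmax over similarities)."""
    proto_probs = softmax(sims / temperature, axis=0)
    return float(np.mean((probs - proto_probs) ** 2))
```

In the semi-supervised setting, a term like this can be applied to unlabeled images, encouraging pixel features to stay compact around their class prototypes, which is the intra-class compactness the abstract refers to.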
Related papers
- Maximally Separated Active Learning [32.98415531556376]
We propose an active learning method that utilizes fixed equiangular hyperspherical points as class prototypes.
We demonstrate strong performance over existing active learning techniques across five benchmark datasets.
arXiv Detail & Related papers (2024-11-26T14:02:43Z)
- Consistency-Based Semi-supervised Evidential Active Learning for Diagnostic Radiograph Classification [2.3545156585418328]
We introduce a novel Consistency-based Semi-supervised Evidential Active Learning framework (CSEAL)
We leverage predictive uncertainty based on theories of evidence and subjective logic to develop an end-to-end integrated approach.
Our approach can substantially improve accuracy on rarer abnormalities with fewer labelled samples.
arXiv Detail & Related papers (2022-09-05T09:28:31Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised Medical Image Segmentation [9.745971699005857]
We propose a novel uncertainty-guided mutual consistency learning framework for medical image segmentation.
It integrates intra-task consistency learning from up-to-date predictions for self-ensembling and cross-task consistency learning from task-level regularization to exploit geometric shape information.
Our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods.
arXiv Detail & Related papers (2021-12-05T08:19:41Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not be applicable in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Ask-n-Learn: Active Learning via Reliable Gradient Representations for Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated at each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
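The PCL entry above describes prototypes obtained by clustering embeddings and a contrastive objective that pulls each embedding toward its assigned prototype. As a rough illustration of that idea (not the PCL paper's ProtoNCE implementation; the k-means step and function names are simplifying assumptions), one can cluster embeddings into prototypes and score each embedding against them with an InfoNCE-style loss:

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Tiny k-means: x is (N, D); returns (k, D) centroids (prototypes)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then re-estimate
        assign = np.argmin(((x[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = x[assign == j].mean(axis=0)
    return centroids

def proto_nce(z, prototypes, pos_idx, temperature=0.1):
    """Prototype-level InfoNCE: pull embedding z toward its assigned
    prototype (pos_idx), push it away from the other prototypes."""
    z = z / np.linalg.norm(z)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = p @ z / temperature
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[pos_idx]) / np.exp(logits).sum()))
```

Clustering at the prototype level is what lets PCL encode semantic structure beyond instance-wise contrast: embeddings in the same cluster share a positive target rather than each being its own class.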
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.