All-Around Real Label Supervision: Cyclic Prototype Consistency Learning
for Semi-supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2109.13930v1
- Date: Tue, 28 Sep 2021 14:34:06 GMT
- Title: All-Around Real Label Supervision: Cyclic Prototype Consistency Learning
for Semi-supervised Medical Image Segmentation
- Authors: Zhe Xu, Yixin Wang, Donghuan Lu, Lequan Yu, Jiangpeng Yan, Jie Luo,
Kai Ma, Yefeng Zheng and Raymond Kai-yu Tong
- Abstract summary: Semi-supervised learning has substantially advanced medical image segmentation since it alleviates the heavy burden of acquiring the costly expert-examined annotations.
We propose a novel cyclic prototype consistency learning (CPCL) framework, which is constructed by a labeled-to-unlabeled (L2U) forward process and an unlabeled-to-labeled (U2L) backward process.
Our framework turns previous "unsupervised" consistency into new "supervised" consistency, obtaining the "all-around real label supervision" property of our method.
- Score: 41.157552535752224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning has substantially advanced medical image
segmentation since it alleviates the heavy burden of acquiring the costly
expert-examined annotations. In particular, consistency-based approaches have
attracted increasing attention for their superior performance, wherein the real
labels are only utilized to supervise their paired images via supervised loss
while the unlabeled images are exploited by enforcing the perturbation-based
\textit{"unsupervised"} consistency without explicit guidance from those real
labels. However, intuitively, the expert-examined real labels contain more
reliable supervision signals. Observing this, we ask an unexplored but
interesting question: can we exploit the unlabeled data via explicit real label
supervision for semi-supervised training? To this end, we discard the previous
perturbation-based consistency but absorb the essence of non-parametric
prototype learning. Based on the prototypical network, we then propose a novel
cyclic prototype consistency learning (CPCL) framework, which is constructed by
a labeled-to-unlabeled (L2U) prototypical forward process and an
unlabeled-to-labeled (U2L) backward process. Such two processes synergistically
enhance the segmentation network by encouraging more discriminative and compact
features. In this way, our framework turns previous \textit{"unsupervised"}
consistency into new \textit{"supervised"} consistency, obtaining the
\textit{"all-around real label supervision"} property of our method. Extensive
experiments on brain tumor segmentation from MRI and kidney segmentation from
CT images show that our CPCL can effectively exploit the unlabeled data and
outperform other state-of-the-art semi-supervised medical image segmentation
methods.
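The L2U/U2L cycle described above can be illustrated with a minimal sketch. This is not the authors' implementation: the masked-average-pooling prototypes, cosine-similarity segmentation, and the synthetic two-class features below are standard prototypical-network building blocks assumed here for illustration; the paper's actual networks, losses, and data differ.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Class prototype: mean of the feature vectors where mask == 1.
    features: (H, W, C) feature map; mask: (H, W) binary mask."""
    return (features * mask[..., None]).sum(axis=(0, 1)) / (mask.sum() + 1e-8)

def prototype_segment(features, prototypes):
    """Segment a feature map by cosine similarity to each class prototype.
    features: (H, W, C); prototypes: (K, C). Returns an (H, W) label map."""
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    sim = np.einsum("hwc,kc->hwk", f, p)  # per-pixel cosine similarity per class
    return sim.argmax(axis=-1)

# Synthetic 2-class features: shift the two classes in opposite directions.
rng = np.random.default_rng(0)
H, W, C = 8, 8, 4
labeled_feats = rng.normal(size=(H, W, C))
label = np.zeros((H, W), dtype=int)
label[:, 4:] = 1
labeled_feats[label == 1] += 3.0
labeled_feats[label == 0] -= 3.0

# L2U forward: prototypes from the REAL labels segment the unlabeled image.
protos = np.stack([masked_average_pooling(labeled_feats, (label == k).astype(float))
                   for k in range(2)])
unlabeled_feats = rng.normal(size=(H, W, C))
unlabeled_feats[:, :4] += 3.0   # left half resembles class 1
unlabeled_feats[:, 4:] -= 3.0   # right half resembles class 0
pseudo = prototype_segment(unlabeled_feats, protos)

# U2L backward: prototypes from the pseudo-labeled unlabeled image re-segment
# the labeled image; the result is compared against the REAL label, so every
# consistency target in the cycle is grounded in expert annotation.
back_protos = np.stack([masked_average_pooling(unlabeled_feats, (pseudo == k).astype(float))
                        for k in range(2)])
back_pred = prototype_segment(labeled_feats, back_protos)
u2l_agreement = (back_pred == label).mean()
```

In training, the U2L agreement would be turned into a supervised loss against the real label rather than a simple accuracy score.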
Related papers
- SP${ }^3$ : Superpixel-propagated pseudo-label learning for weakly semi-supervised medical image segmentation [10.127428696255848]
SuperPixel-Propagated Pseudo-label learning method is proposed to handle the inadequate supervisory information challenge in weakly semi-supervised segmentation.
Our method achieves state-of-the-art performance on both tumor and organ segmentation datasets under the WSSS setting.
arXiv Detail & Related papers (2024-11-18T15:14:36Z)
- Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation [13.707121013895929]
We present a novel semi-supervised learning method, Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation.
We use distinct decoders for the student and teacher networks while maintaining the same encoder.
To learn from unlabeled data, the teacher network generates pseudo-labels, which are used to augment the training data.
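The teacher-student pseudo-labeling loop can be sketched in a few lines. The EMA update and the confidence threshold below are common choices in this family of methods, assumed here for illustration; the paper's exact update rule and filtering are not specified in this summary.

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    """Exponential moving average: the teacher slowly tracks the student
    (one common teacher-student scheme; not necessarily the paper's)."""
    return {k: decay * teacher_params[k] + (1 - decay) * student_params[k]
            for k in teacher_params}

def make_pseudo_labels(teacher_logits, threshold=0.9):
    """Turn teacher logits (..., K) into hard pseudo-labels, plus a mask
    keeping only pixels whose top softmax probability clears the threshold."""
    e = np.exp(teacher_logits - teacher_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1), probs.max(axis=-1) >= threshold
```

Confident pixels would then join the labeled pool for the student's supervised loss, while low-confidence pixels are ignored.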
arXiv Detail & Related papers (2023-08-31T09:13:34Z)
- COSST: Multi-organ Segmentation with Partially Labeled Datasets Using Comprehensive Supervisions and Self-training [15.639976408273784]
Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated.
It is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential.
We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training.
arXiv Detail & Related papers (2023-04-27T08:55:34Z)
- Inherent Consistent Learning for Accurate Semi-supervised Medical Image Segmentation [30.06702813637713]
We propose a novel Inherent Consistent Learning (ICL) method to learn robust semantic category representations.
The proposed method can outperform the state-of-the-art, especially when the number of annotated data is extremely limited.
arXiv Detail & Related papers (2023-03-24T17:38:03Z)
- Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in the vision-language model, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for the MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
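A feature-level contrastive loss of the kind mentioned above is commonly an InfoNCE objective: pull an anchor feature toward a positive (e.g., another view of the same region) and push it away from negatives. The form below is a standard InfoNCE sketch, not the paper's exact formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor feature vector: low when the anchor is
    close to the positive and far from the negatives (standard form; the
    paper's exact contrastive objective may differ)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In practice the loss is averaged over many anchor positions sampled from the unlabeled feature maps.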
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Every Annotation Counts: Multi-label Deep Supervision for Medical Image Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and student-teacher model.
Our novel training regime for segmentation flexibly makes use of images that are fully labeled, marked with bounding boxes, annotated only with global labels, or not annotated at all, allowing us to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
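The intersection-based filtering described above is straightforward to sketch: a pixel keeps its pseudo-label only if predictions from different augmentations of the same image agree there. The `ignore_index` convention below is an assumption for illustration (255 is a common choice in segmentation pipelines).

```python
import numpy as np

def intersect_pseudo_labels(pred_a, pred_b, ignore_index=255):
    """Keep a pseudo-label only where two augmented views' predictions agree;
    disagreeing pixels are marked with ignore_index and excluded from the loss."""
    return np.where(pred_a == pred_b, pred_a, ignore_index)
```

With more than two augmentations, the same idea extends by requiring agreement across all views.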
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework based on a teacher-student fashion for organ and lesion segmentation.
We show our model is robust to the quality of bounding box and achieves comparable performance compared with full-supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.