Reference-guided Pseudo-Label Generation for Medical Semantic
Segmentation
- URL: http://arxiv.org/abs/2112.00735v1
- Date: Wed, 1 Dec 2021 12:21:24 GMT
- Title: Reference-guided Pseudo-Label Generation for Medical Semantic
Segmentation
- Authors: Constantin Seibold, Simon Reiß, Jens Kleesiek, Rainer Stiefelhagen
- Abstract summary: We propose a novel approach to generate supervision for semi-supervised semantic segmentation.
We use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in a reference set.
We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation while using 95% fewer labeled images.
- Score: 25.76014072179711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Producing densely annotated data is a difficult and tedious task for medical
imaging applications. To address this problem, we propose a novel approach to
generate supervision for semi-supervised semantic segmentation. We argue that
visually similar regions between labeled and unlabeled images likely contain
the same semantics and therefore should share their label. Following this
thought, we use a small number of labeled images as reference material and
match pixels in an unlabeled image to the semantics of the best fitting pixel
in a reference set. This way, we avoid pitfalls such as confirmation bias,
common in purely prediction-based pseudo-labeling. Since our method does not
require any architectural changes or accompanying networks, one can easily
insert it into existing frameworks. We achieve the same performance as a
standard fully supervised model on X-ray anatomy segmentation while using 95%
fewer labeled images. Aside from an in-depth analysis of different aspects of our
proposed method, we further demonstrate the effectiveness of our
reference-guided learning paradigm by comparing our approach against existing
methods for retinal fluid segmentation, where we achieve competitive performance
and improve upon recent work by up to 15% mean IoU.
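The pixel-matching step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of reference-guided pseudo-label generation under the stated idea (copy the label of the most similar reference pixel): it assumes per-pixel features for the unlabeled image and for sampled reference pixels are already available (e.g. from any frozen encoder). The function name, tensor shapes, and the similarity threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of reference-guided pseudo-label generation.
# Assumes precomputed per-pixel features; shapes and threshold are illustrative.
import torch
import torch.nn.functional as F


def reference_guided_pseudo_labels(unlabeled_feats, reference_feats,
                                   reference_labels, min_similarity=0.5,
                                   ignore_index=255):
    """Assign each unlabeled pixel the class of its best-matching reference pixel.

    unlabeled_feats:  (C, H, W)  features of one unlabeled image
    reference_feats:  (N, C)     features of pixels sampled from labeled references
    reference_labels: (N,)       class index of each reference pixel
    """
    C, H, W = unlabeled_feats.shape
    # L2-normalize so the dot product equals cosine similarity.
    query = F.normalize(unlabeled_feats.reshape(C, -1).t(), dim=1)  # (H*W, C)
    keys = F.normalize(reference_feats, dim=1)                      # (N, C)

    sim = query @ keys.t()                                          # (H*W, N)
    best_sim, best_idx = sim.max(dim=1)

    pseudo = reference_labels[best_idx]
    # Pixels without a sufficiently similar reference pixel are marked as
    # "ignore" rather than receiving a potentially noisy label.
    pseudo[best_sim < min_similarity] = ignore_index
    return pseudo.reshape(H, W)


if __name__ == "__main__":
    feats = torch.randn(64, 32, 32)           # unlabeled image features
    ref_feats = torch.randn(500, 64)          # sampled reference pixel features
    ref_labels = torch.randint(0, 5, (500,))  # their ground-truth classes
    print(reference_guided_pseudo_labels(feats, ref_feats, ref_labels).shape)
```

Because the pseudo-labels come from matching against held labeled references rather than from the model's own predictions, this construction avoids reinforcing the model's mistakes, which is the confirmation-bias pitfall mentioned above; the thresholding shown here is one simple way such a matcher could abstain on uncertain pixels.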
Related papers
- Leveraging Fixed and Dynamic Pseudo-labels for Semi-supervised Medical Image Segmentation [7.9449756510822915]
Semi-supervised medical image segmentation has gained growing interest due to its ability to utilize unannotated data.
The current state-of-the-art methods mostly rely on pseudo-labeling within a co-training framework.
We propose a novel approach where multiple pseudo-labels for the same unannotated image are used to learn from the unlabeled data.
arXiv Detail & Related papers (2024-05-12T11:30:01Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm to train semantic segmentation algorithms.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label all pixels for a dataset of nuclei images which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z)
- SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation [52.62441404064957]
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the model trained on a labeled source domain.
Many methods tend to alleviate noisy pseudo labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts.
We propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels.
arXiv Detail & Related papers (2022-04-19T11:16:29Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Pseudo Pixel-level Labeling for Images with Evolving Content [5.573543601558405]
We propose a pseudo-pixel-level label generation technique to reduce the amount of effort for manual annotation of images.
We train two semantic segmentation models with VGG and ResNet backbones on images labeled using our pseudo labeling method and those of a state-of-the-art method.
The results indicate that using our pseudo-labels instead of those generated using the state-of-the-art method in the training process improves the mean-IoU and the frequency-weighted-IoU of the VGG and ResNet-based semantic segmentation models by 3.36%, 2.58%, 10
arXiv Detail & Related papers (2021-05-20T18:14:19Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)