Pseudo-Label Noise Suppression Techniques for Semi-Supervised Semantic
Segmentation
- URL: http://arxiv.org/abs/2210.10426v1
- Date: Wed, 19 Oct 2022 09:46:27 GMT
- Title: Pseudo-Label Noise Suppression Techniques for Semi-Supervised Semantic
Segmentation
- Authors: Sebastian Scherer, Robin Schön and Rainer Lienhart
- Abstract summary: Semi-supervised learning (SSL) can reduce the need for large labelled datasets by incorporating unlabelled data into the training.
Current SSL approaches use a model that is initially trained in a supervised fashion to generate predictions for unlabelled images, called pseudo-labels.
We use three mechanisms to control pseudo-label noise and errors.
- Score: 21.163070161951868
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Semi-supervised learning (SSL) can reduce the need for large labelled
datasets by incorporating unlabelled data into the training. This is
particularly interesting for semantic segmentation, where labelling data is
very costly and time-consuming. Current SSL approaches use a model that is
initially trained in a supervised fashion to generate predictions for
unlabelled images, called pseudo-labels, which are subsequently used to train
a new model from scratch. Since these predictions come from an imperfect
neural network, they inevitably contain errors. However, training with
partially incorrect labels often reduces the final model performance. Thus, it
is crucial to manage the errors and noise of pseudo-labels wisely. In this
work, we use three
mechanisms to control pseudo-label noise and errors: (1) We construct a solid
base framework by mixing unlabelled images with cow-pattern masks, which
reduces the negative impact of wrong pseudo-labels. Nevertheless, wrong
pseudo-labels still hurt performance. Therefore, (2)
we propose a simple and effective loss weighting scheme for pseudo-labels
defined by the feedback of the model trained on these pseudo-labels. This
allows us to soft-weight the pseudo-label training examples based on their
determined confidence score during training. (3) We also study the common
practice of ignoring pseudo-labels with low confidence, and empirically
analyse the effect of pseudo-labels in different confidence ranges on SSL as
well as the contribution of pseudo-label filtering to the achievable
performance gains. We show that our method outperforms state-of-the-art
alternatives on various datasets. Furthermore, we show that our
findings also transfer to other tasks such as human pose estimation. Our code
is available at https://github.com/ChristmasFan/SSL_Denoising_Segmentation.
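The three mechanisms lend themselves to short illustrations. Below is a minimal PyTorch sketch of the cow-pattern mixing in mechanism (1): Gaussian-smoothed noise is thresholded into an irregular binary mask, which then blends two unlabelled images and their pseudo-labels. The helper names and parameters (`sigma`, mix proportion `p`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def cow_mask(height, width, sigma=8.0, p=0.5):
    """Sample a CowMask-style binary mask by thresholding smoothed noise.

    Sketch only: sigma and p are assumed values, not the paper's settings.
    """
    noise = torch.randn(1, 1, height, width)
    # Separable Gaussian blur turns white noise into smooth blobs.
    k = int(4 * sigma) | 1                       # odd kernel size
    xs = torch.arange(k, dtype=torch.float) - k // 2
    g = torch.exp(-xs ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, k)
    noise = F.conv2d(noise, g, padding=(0, k // 2))                  # blur width
    noise = F.conv2d(noise, g.transpose(2, 3), padding=(k // 2, 0))  # blur height
    # Threshold at the p-quantile so roughly a fraction p of pixels is True.
    thresh = torch.quantile(noise.flatten(), p)
    return (noise > thresh).squeeze(0).squeeze(0)  # (H, W) bool mask

def cow_mix(img_a, img_b, lbl_a, lbl_b):
    """Blend two unlabelled images (C, H, W) and their pseudo-labels (H, W)."""
    m = cow_mask(img_a.shape[-2], img_a.shape[-1])
    return torch.where(m, img_a, img_b), torch.where(m, lbl_a, lbl_b)
```

Mechanisms (2) and (3) can be sketched together as a per-pixel weighted cross-entropy: each pseudo-labelled pixel is soft-weighted by a confidence score, and pixels below a cutoff are dropped entirely. In the paper the weights are defined by the feedback of the model trained on the pseudo-labels; here `confidence` is simply assumed to be a per-pixel score in [0, 1], and `tau` is a hypothetical filtering threshold.

```python
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(logits, pseudo_labels, confidence, tau=0.0):
    """Confidence-weighted cross-entropy on pseudo-labels (sketch).

    logits: (B, C, H, W) model outputs; pseudo_labels: (B, H, W) class ids;
    confidence: (B, H, W) assumed per-pixel scores in [0, 1].
    """
    # Per-pixel loss (no reduction) so each pixel can carry its own weight.
    per_pixel = F.cross_entropy(logits, pseudo_labels, reduction="none")
    # (2) soft weighting by confidence; (3) hard filtering below tau.
    weight = confidence * (confidence >= tau)
    return (weight * per_pixel).sum() / weight.sum().clamp_min(1.0)
```

A simple bootstrap choice for `confidence` would be the softmax maximum of the pseudo-labelling model, e.g. `logits.softmax(dim=1).max(dim=1).values`; the paper instead derives the weights from training feedback, which this sketch does not reproduce.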
Related papers
- Reduction-based Pseudo-label Generation for Instance-dependent Partial Label Learning [41.345794038968776]
We propose to leverage reduction-based pseudo-labels to alleviate the influence of incorrect candidate labels.
We show that reduction-based pseudo-labels exhibit greater consistency with the Bayes optimal classifier compared to pseudo-labels directly generated from the predictive model.
arXiv Detail & Related papers (2024-10-28T07:32:20Z)
- Boosting Semi-Supervised Learning by bridging high and low-confidence predictions [4.18804572788063]
Pseudo-labeling is a crucial technique in semi-supervised learning (SSL).
We propose a new method called ReFixMatch, which aims to utilize all of the unlabeled data during training.
arXiv Detail & Related papers (2023-08-15T00:27:18Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- Doubly Robust Self-Training [46.168395767948965]
We introduce doubly robust self-training, a novel semi-supervised algorithm.
We demonstrate the superiority of the doubly robust loss over the standard self-training baseline.
arXiv Detail & Related papers (2023-06-01T00:57:16Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
We propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Debiased Pseudo Labeling in Self-Training [77.83549261035277]
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets.
To mitigate the requirement for labeled data, self-training is widely used in both academia and industry by pseudo labeling on readily-available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z)
- Boosting Semi-Supervised Face Recognition with Noise Robustness [54.342992887966616]
This paper presents an effective solution to semi-supervised face recognition that is robust to the label noise introduced by auto-labelling.
We develop a semi-supervised face recognition solution, named Noise Robust Learning-Labelling (NRoLL), which is based on the robust training ability empowered by GN.
arXiv Detail & Related papers (2021-05-10T14:43:11Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)