Re-distributing Biased Pseudo Labels for Semi-supervised Semantic
Segmentation: A Baseline Investigation
- URL: http://arxiv.org/abs/2107.11279v2
- Date: Mon, 26 Jul 2021 06:11:54 GMT
- Title: Re-distributing Biased Pseudo Labels for Semi-supervised Semantic
Segmentation: A Baseline Investigation
- Authors: Ruifei He, Jihan Yang, Xiaojuan Qi
- Abstract summary: We present a simple and yet effective Distribution Alignment and Random Sampling (DARS) method to produce unbiased pseudo labels.
Our method performs favorably in comparison with state-of-the-art approaches.
- Score: 30.688753736660725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While self-training has advanced semi-supervised semantic segmentation, it
severely suffers from the long-tailed class distribution of real-world semantic
segmentation datasets, which makes the pseudo-labeled data biased toward majority
classes. In this paper, we present a simple and yet effective Distribution
Alignment and Random Sampling (DARS) method to produce unbiased pseudo labels
that match the true class distribution estimated from the labeled data.
Besides, we also contribute a progressive data augmentation and labeling
strategy to facilitate model training with pseudo-labeled data. Experiments on
both Cityscapes and PASCAL VOC 2012 datasets demonstrate the effectiveness of
our approach. Albeit simple, our method performs favorably in comparison with
state-of-the-art approaches. Code will be available at
https://github.com/CVMI-Lab/DARS.
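The abstract describes two ingredients: aligning the pseudo-label class distribution with the distribution estimated from the labeled data, and random sampling to select which predictions become pseudo labels. The sketch below is one plausible reading of that idea, based only on the abstract; the function name, the per-class quota computation, and the `conf_thresh` parameter are my assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def dars_pseudo_labels(probs, target_dist, conf_thresh=0.0, rng=None):
    """Hedged sketch of Distribution Alignment and Random Sampling (DARS).

    probs: (N, C) softmax probabilities for N pixels over C classes.
    target_dist: (C,) class distribution estimated from the labeled set.
    Returns an (N,) array of pseudo labels, with -1 marking ignored pixels.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = probs.shape
    conf = probs.max(axis=1)          # per-pixel confidence
    pred = probs.argmax(axis=1)       # per-pixel predicted class
    labels = np.full(n, -1, dtype=np.int64)
    # Distribution alignment: cap each class at a quota proportional to the
    # class frequency observed in the labeled data.
    quota = np.floor(target_dist * n).astype(int)
    for k, q in enumerate(quota):
        idx = np.flatnonzero((pred == k) & (conf >= conf_thresh))
        if len(idx) > q:
            # Random sampling: draw the class quota at random instead of
            # always keeping the most confident pixels.
            idx = rng.choice(idx, size=q, replace=False)
        labels[idx] = k
    return labels
```

Capping majority classes at their quota is what keeps the pseudo-labeled set from drifting toward head classes; the random draw (rather than a top-confidence cut) avoids selecting only the easiest pixels.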
Related papers
- Graph-Based Semi-Supervised Segregated Lipschitz Learning [0.21847754147782888]
This paper presents an approach to semi-supervised learning for data classification using Lipschitz learning on graphs.
We develop a graph-based semi-supervised learning framework that leverages the properties of the infinity Laplacian to propagate labels in a dataset where only a few samples are labeled.
arXiv Detail & Related papers (2024-11-05T17:16:56Z)
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment
Label-efficient segmentation is widely studied for 3D point clouds due to the difficulty of annotating point clouds densely.
Pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from the noise and variation in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
- Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise [4.90148689564172]
Real-world datasets contain noisy label samples that have no semantic relevance to any class in the dataset.
Most state-of-the-art methods leverage in-distribution (ID) labeled noisy samples as unlabeled data for semi-supervised learning.
We propose incorporating the information from all the training data by leveraging the benefits of self-supervised training.
arXiv Detail & Related papers (2023-08-13T23:33:33Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning [80.05441565830726]
This paper addresses imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm the model performance.
Motivated by this observation, we propose a general pseudo-labeling framework to address the bias.
We term the novel pseudo-labeling framework for imbalanced SSL as Distribution-Aware Semantics-Oriented (DASO) Pseudo-label.
arXiv Detail & Related papers (2021-06-10T11:58:25Z)
- Weakly Supervised Pseudo-Label assisted Learning for ALS Point Cloud Semantic Segmentation [1.4620086904601473]
Competitive point cloud results usually rely on a large amount of labeled data.
In this study, we propose a pseudo-labeling strategy to obtain accurate results with limited ground truth.
arXiv Detail & Related papers (2021-05-05T08:07:21Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.