PseudoSeg: Designing Pseudo Labels for Semantic Segmentation
- URL: http://arxiv.org/abs/2010.09713v2
- Date: Tue, 30 Mar 2021 17:54:47 GMT
- Title: PseudoSeg: Designing Pseudo Labels for Semantic Segmentation
- Authors: Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian,
Jia-Bin Huang, Tomas Pfister
- Abstract summary: We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
- Score: 78.35515004654553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in semi-supervised learning (SSL) demonstrate that a
combination of consistency regularization and pseudo-labeling can effectively
improve image classification accuracy in the low-data regime. Compared to
classification, semantic segmentation tasks incur much higher labeling costs
and thus benefit greatly from data-efficient training methods. However, the
structured outputs of segmentation pose particular difficulties (e.g., in
designing pseudo-labeling and augmentation) for applying existing SSL
strategies. To address this problem, we present a simple and novel re-design
of pseudo-labeling to generate well-calibrated, structured pseudo labels for
training with unlabeled or weakly-labeled data. Our proposed pseudo-labeling
strategy is agnostic to network structure and can be applied in a one-stage
consistency training framework. We demonstrate the effectiveness of the
proposed pseudo-labeling strategy in both low-data and high-data regimes.
Extensive experiments validate that pseudo labels generated by carefully
fusing diverse prediction sources, combined with strong data augmentation,
are crucial to consistency training for segmentation. The source code is
available at
https://github.com/googleinterns/wss.
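As a rough illustration of the one-stage consistency training described above, the sketch below is an assumption based on the abstract, not the authors' released code: pixel-wise pseudo labels from a weakly augmented view supervise the prediction on a strongly augmented view. The paper's calibrated fusion of decoder and attention-based predictions is omitted, and `model`, the augmented views, and the confidence threshold are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def pseudo_label_consistency_loss(model: nn.Module,
                                  weak_view: torch.Tensor,
                                  strong_view: torch.Tensor,
                                  confidence_threshold: float = 0.9) -> torch.Tensor:
    """Cross-entropy between confident pseudo labels (weak view) and strong-view logits."""
    with torch.no_grad():
        weak_logits = model(weak_view)                 # (N, C, H, W)
        probs = F.softmax(weak_logits, dim=1)
        confidence, pseudo_labels = probs.max(dim=1)   # both (N, H, W)
        # Drop low-confidence pixels so noisy pseudo labels are ignored in the loss.
        pseudo_labels[confidence < confidence_threshold] = 255
    strong_logits = model(strong_view)                 # (N, C, H, W)
    return F.cross_entropy(strong_logits, pseudo_labels, ignore_index=255)


if __name__ == "__main__":
    # Toy demo: a 1x1-conv "segmentation model" over 21 classes and random views.
    model = nn.Conv2d(3, 21, kernel_size=1)
    weak_view = torch.rand(2, 3, 64, 64)
    strong_view = weak_view + 0.1 * torch.randn_like(weak_view)  # stand-in for strong augmentation
    # Threshold 0.0 keeps every pixel for this untrained toy model.
    print(pseudo_label_consistency_loss(model, weak_view, strong_view,
                                        confidence_threshold=0.0).item())
```

The `ignore_index` masking mirrors the common practice of dropping low-confidence pixels so that noisy pseudo labels do not dominate the unsupervised loss.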
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from noise and variation in unlabeled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
- LayerMatch: Do Pseudo-labels Benefit All Layers? [77.59625180366115]
Semi-supervised learning offers a promising solution to mitigate the dependency on labeled data.
We develop two layer-specific pseudo-label strategies, termed Grad-ReLU and Avg-Clustering.
Our approach consistently demonstrates exceptional performance on standard semi-supervised learning benchmarks.
arXiv Detail & Related papers (2024-06-20T11:25:50Z)
- Posterior Label Smoothing for Node Classification [2.737276507021477]
We propose a simple yet effective label smoothing for the transductive node classification task.
We design the soft label to encapsulate the local context of the target node through the neighborhood label distribution.
In the following analysis, we find that incorporating global label statistics in posterior computation is the key to the success of label smoothing.
arXiv Detail & Related papers (2024-06-01T11:59:49Z)
- Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation [87.17768598044427]
Traditional semi-supervised learning assumes that the feature distributions of labeled and unlabeled data are consistent.
We propose Self-Supervised Feature Adaptation (SSFA), a generic framework for improving SSL performance when labeled and unlabeled data come from different distributions.
Our proposed SSFA is applicable to various pseudo-label-based SSL learners and significantly improves performance in labeled, unlabeled, and even unseen distributions.
arXiv Detail & Related papers (2024-05-31T03:13:45Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
arXiv Detail & Related papers (2022-02-17T08:09:40Z)
- Weakly Supervised Pseudo-Label assisted Learning for ALS Point Cloud Semantic Segmentation [1.4620086904601473]
Competitive point cloud results usually rely on a large amount of labeled data.
In this study, we propose a pseudo-labeling strategy to obtain accurate results with limited ground truth.
arXiv Detail & Related papers (2021-05-05T08:07:21Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not require domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to erroneous high-confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework, which improves pseudo-labeling accuracy by drastically reducing the amount of noise encountered in the training process (a minimal sketch of this selection idea appears after this list).
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
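As referenced in the UPS entry above, the sketch below illustrates uncertainty-aware pseudo-label selection in its general form. It is an assumption drawn from the summary, not the authors' implementation: a pseudo label is kept only when the averaged prediction is confident and its variability across Monte Carlo dropout passes is low. `model`, the thresholds, and the use of MC dropout as the uncertainty estimate are placeholders for illustration.

```python
import torch
import torch.nn.functional as F


def select_pseudo_labels(model, inputs, num_passes=10,
                         conf_threshold=0.9, uncertainty_threshold=0.05):
    """Keep pseudo labels that are both confident and low-uncertainty (hypothetical helper)."""
    model.train()  # keep dropout layers active for Monte Carlo sampling
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(inputs), dim=1) for _ in range(num_passes)]
        )                                       # (T, N, C)
    mean_probs = probs.mean(dim=0)              # (N, C)
    confidence, pseudo_labels = mean_probs.max(dim=1)
    # Standard deviation of the chosen class's probability across passes.
    class_std = probs.std(dim=0).gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
    keep = (confidence >= conf_threshold) & (class_std <= uncertainty_threshold)
    return pseudo_labels, keep                  # labels plus a boolean selection mask
```

Only the selected subset would then contribute to the pseudo-label loss in the next training round, which is the noise-reduction effect the summary describes.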
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.