PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised
Object Detection
- URL: http://arxiv.org/abs/2203.16317v1
- Date: Wed, 30 Mar 2022 13:59:22 GMT
- Title: PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised
Object Detection
- Authors: Gang Li, Xiang Li, Yujie Wang, Shanshan Zhang, Yichao Wu, Ding Liang
- Abstract summary: We propose Noisy Pseudo box Learning (NPL), which includes Prediction-guided Label Assignment (PLA) and Positive-proposal Consistency Voting (PCV).
On the COCO benchmark, our method, PSEudo labeling and COnsistency training (PseCo), outperforms the SOTA (Soft Teacher) by 2.0, 1.8, and 2.0 points under 1%, 5%, and 10% labelling ratios, respectively.
- Score: 42.75316070378037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we delve into two key techniques in Semi-Supervised Object
Detection (SSOD), namely pseudo labeling and consistency training. We observe
that these two techniques currently neglect some important properties of object
detection, hindering efficient learning on unlabeled data. Specifically, for
pseudo labeling, existing works only focus on the classification score yet fail
to guarantee the localization precision of pseudo boxes; for consistency
training, the widely adopted random-resize training only considers the
label-level consistency but misses the feature-level one, which also plays an
important role in ensuring the scale invariance. To address the problems
incurred by noisy pseudo boxes, we design Noisy Pseudo box Learning (NPL) that
includes Prediction-guided Label Assignment (PLA) and Positive-proposal
Consistency Voting (PCV). PLA relies on model predictions to assign labels,
making it robust even to coarse pseudo boxes, while PCV leverages the regression
consistency of positive proposals to reflect the localization quality of pseudo
boxes. Furthermore, in consistency training, we propose Multi-view
Scale-invariant Learning (MSL) that includes mechanisms of both label- and
feature-level consistency, where feature consistency is achieved by aligning
shifted feature pyramids between two images with identical content but varied
scales. On COCO benchmark, our method, termed PSEudo labeling and COnsistency
training (PseCo), outperforms the SOTA (Soft Teacher) by 2.0, 1.8, 2.0 points
under 1%, 5%, and 10% labelling ratios, respectively. It also significantly
improves the learning efficiency for SSOD, e.g., PseCo halves the training time
of the SOTA approach but achieves even better performance.
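The two unlabeled-data components above lend themselves to short sketches. First, for Positive-proposal Consistency Voting (PCV): the abstract states that the regression consistency of the positive proposals assigned to a pseudo box is used to reflect that box's localization quality. Below is a minimal, unofficial sketch of such a score; the function names (box_iou, pcv_quality), the IoU-to-the-pseudo-box formulation, and the use of the score as a loss weight are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of Positive-proposal Consistency Voting (PCV).
# Assumption: quality = mean IoU between the boxes regressed from the
# positive proposals and the teacher's pseudo box.
import torch

def box_iou(boxes_a: torch.Tensor, boxes_b: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU between two sets of (x1, y1, x2, y2) boxes."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]).clamp(min=0) * (boxes_a[:, 3] - boxes_a[:, 1]).clamp(min=0)
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]).clamp(min=0) * (boxes_b[:, 3] - boxes_b[:, 1]).clamp(min=0)
    lt = torch.max(boxes_a[:, None, :2], boxes_b[None, :, :2])  # (Na, Nb, 2) intersection top-left
    rb = torch.min(boxes_a[:, None, 2:], boxes_b[None, :, 2:])  # (Na, Nb, 2) intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    union = area_a[:, None] + area_b[None, :] - inter
    return inter / union.clamp(min=1e-6)

def pcv_quality(regressed_boxes: torch.Tensor, pseudo_box: torch.Tensor) -> torch.Tensor:
    """Score one pseudo box by how consistently its positive proposals regress to it.

    regressed_boxes: (N, 4) boxes predicted from the N positive proposals assigned to this pseudo box.
    pseudo_box:      (4,) the teacher-generated pseudo box.
    Returns a scalar in [0, 1].
    """
    if regressed_boxes.numel() == 0:
        return pseudo_box.new_tensor(0.0)
    ious = box_iou(regressed_boxes, pseudo_box[None, :]).squeeze(1)  # (N,)
    return ious.mean()
```

Such a score could then down-weight the unsupervised box-regression loss for unreliable pseudo boxes, matching the abstract's intent of reflecting localization quality rather than trusting the classification score alone.

Second, for the feature-level consistency in Multi-view Scale-invariant Learning (MSL): the abstract describes aligning shifted feature pyramids between two views of the same image at different scales. A rough sketch, assuming a 0.5x downsampled second view, a one-level pyramid shift, and an MSE alignment loss (all illustrative choices, not the paper's exact formulation):

```python
# Hedged sketch of MSL feature-level consistency between two views of one image.
from typing import List
import torch
import torch.nn.functional as F

def msl_feature_consistency(pyramid_full: List[torch.Tensor],
                            pyramid_half: List[torch.Tensor]) -> torch.Tensor:
    """Align shifted feature pyramids of a full-resolution view and a 0.5x view.

    pyramid_full: FPN features [P2, P3, ...] of the original image.
    pyramid_half: FPN features of the same image downsampled by 2x.
    Level l of the full view has the same spatial size as level l-1 of the
    half-resolution view, so those pairs are matched.
    """
    losses = []
    for level in range(1, len(pyramid_full)):
        f_full = pyramid_full[level]      # stride 2^level on the original image
        f_half = pyramid_half[level - 1]  # stride 2^(level-1) on the 0.5x image -> same spatial size
        losses.append(F.mse_loss(f_full, f_half))
    return torch.stack(losses).mean()
```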
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike semi-supervised learning, one cannot select the most probable label as the pseudo-label in SSMLL due to multiple semantics contained in an instance.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Cross Pseudo-Labeling for Semi-Supervised Audio-Visual Source
Localization [9.791311361007397]
We propose a novel method named Cross Pseudo-Labeling (XPL), wherein two models learn from each other with the cross-refine mechanism to avoid bias accumulation.
XPL significantly outperforms existing methods, achieving state-of-the-art performance while effectively mitigating confirmation bias.
arXiv Detail & Related papers (2024-03-05T16:28:48Z) - Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object
Detection with Repeated Labels [6.872072177648135]
We propose a novel localization algorithm that adapts well-established ground truth estimation methods.
Our algorithm also shows superior performance during training on the TexBiG dataset.
arXiv Detail & Related papers (2023-09-18T13:08:44Z) - Improving Self-training for Cross-lingual Named Entity Recognition with
Contrastive and Prototype Learning [80.08139343603956]
In cross-lingual named entity recognition, self-training is commonly used to bridge the linguistic gap.
In this work, we aim to improve self-training for cross-lingual NER by combining representation learning and pseudo label refinement.
Our proposed method, namely ContProto mainly comprises two components: (1) contrastive self-training and (2) prototype-based pseudo-labeling.
arXiv Detail & Related papers (2023-05-23T02:52:16Z) - CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
arXiv Detail & Related papers (2022-02-17T08:09:40Z) - Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced
Semi-Supervised Learning [80.05441565830726]
This paper addresses imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm the model performance.
We propose a general pseudo-labeling framework to address the bias motivated by this observation.
We term the novel pseudo-labeling framework for imbalanced SSL as Distribution-Aware Semantics-Oriented (DASO) Pseudo-label.
arXiv Detail & Related papers (2021-06-10T11:58:25Z) - Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z) - In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label
Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.