Pseudo-label Correction and Learning For Semi-Supervised Object
Detection
- URL: http://arxiv.org/abs/2303.02998v1
- Date: Mon, 6 Mar 2023 09:54:15 GMT
- Title: Pseudo-label Correction and Learning For Semi-Supervised Object
Detection
- Authors: Yulin He, Wei Chen, Ke Liang, Yusong Tan, Zhengfa Liang, Yulan Guo
- Abstract summary: We propose two strategies, namely pseudo-label correction and noise-unaware learning.
For pseudo-label correction, we introduce a multi-round refining method and a multi-vote weighting method.
For noise-unaware learning, we introduce a loss weight function that is negatively correlated with the Intersection over Union (IoU) in the regression task.
- Score: 34.030270359567204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pseudo-Labeling has emerged as a simple yet effective technique for
semi-supervised object detection (SSOD). However, the inevitable noise problem
in pseudo-labels significantly degrades the performance of SSOD methods. Recent
advances effectively alleviate the classification noise in SSOD, while the
localization noise, a non-negligible part of SSOD, remains under-addressed. In
this paper, we analyse the localization noise in both the generation and
learning phases, and propose two strategies, namely pseudo-label
correction and noise-unaware learning. For pseudo-label correction, we
introduce a multi-round refining method and a multi-vote weighting method. The
former iteratively refines the pseudo boxes to improve the stability of
predictions, while the latter smoothly self-corrects pseudo boxes by weighing
the scores of surrounding jittered boxes. For noise-unaware learning, we
introduce a loss weight function that is negatively correlated with the
Intersection over Union (IoU) in the regression task, which pulls the predicted
boxes closer to the object and improves localization accuracy. Our proposed
method, Pseudo-label Correction and Learning (PCL), is extensively evaluated on
the MS COCO and PASCAL VOC benchmarks. On MS COCO, PCL outperforms the
supervised baseline by 12.16, 12.11, and 9.57 mAP and the recent SOTA
(Soft Teacher) by 3.90, 2.54, and 2.43 mAP under 1%, 5%, and 10% labeling
ratios, respectively. On PASCAL VOC, PCL improves the supervised baseline by
5.64 mAP and the recent SOTA (Unbiased Teacher v2) by 1.04 mAP on AP50.
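The two correction steps described above lend themselves to a compact sketch. Below is a minimal, illustrative Python rendering of multi-round refining and multi-vote weighting; `refine_fn` and `score_fn` are hypothetical stand-ins for the teacher's box and classification heads, and the jitter scale, sample count, and round count are assumed hyper-parameters, not values from the paper.

```python
import numpy as np

def multi_round_refine(box, refine_fn, rounds=3):
    """Multi-round refining: feed the pseudo box back to the box head
    (`refine_fn`, a stand-in) for several rounds to stabilise it."""
    for _ in range(rounds):
        box = refine_fn(box)
    return box

def jitter_boxes(box, n_samples=20, scale=0.06, rng=None):
    """Sample candidate boxes around a pseudo box (x1, y1, x2, y2) with
    Gaussian noise proportional to its width and height."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    noise = rng.normal(0.0, scale, size=(n_samples, 4)) * np.array([w, h, w, h])
    return box + noise

def multi_vote_correct(box, score_fn, n_samples=20):
    """Multi-vote weighting: self-correct a pseudo box as the
    score-weighted average of its jittered candidates."""
    candidates = jitter_boxes(np.asarray(box, dtype=float), n_samples)
    scores = np.asarray(score_fn(candidates), dtype=float)
    weights = scores / (scores.sum() + 1e-8)  # normalise the votes
    return (weights[:, None] * candidates).sum(axis=0)
```

In practice `score_fn` would evaluate the teacher's classification head on the RoI features of each jittered box; it is deliberately left abstract here.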
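The noise-unaware regression weight can be sketched in the same spirit. The abstract states only that the weight is negatively correlated with IoU; the concrete (1 - IoU)^gamma form below is an assumption for illustration, not the paper's exact function.

```python
import torch
import torch.nn.functional as F

def noise_unaware_reg_loss(pred, target, iou, gamma=1.0):
    """Box regression loss whose weight falls as IoU rises, so low-IoU
    predictions are pulled harder toward the object.

    pred, target: (N, 4) box parameters; iou: (N,) IoU between each
    prediction and its pseudo box. The (1 - IoU)**gamma weight is an
    assumed instantiation of "negatively correlated with IoU".
    """
    per_box = F.smooth_l1_loss(pred, target, reduction="none").sum(dim=-1)
    weight = (1.0 - iou).clamp(min=0.0).pow(gamma)
    return (weight * per_box).mean()
```

With gamma = 0 the weight is constant and the loss reduces to the ordinary unweighted form, which makes the weighting easy to ablate.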
Related papers
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
- PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection [42.75316070378037]
We propose Noisy Pseudo box Learning (NPL) that includes Prediction-guided Label Assignment (PLA) and Positive-proposal Consistency Voting (PCV).
On the COCO benchmark, our method, PSEudo labeling and COnsistency training (PseCo), outperforms the SOTA (Soft Teacher) by 2.0, 1.8, and 2.0 points under 1%, 5%, and 10% labelling ratios.
arXiv Detail & Related papers (2022-03-30T13:59:22Z)
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, generating hard pseudo-labels by a teacher model on unlabeled data as supervisory signals.
We analyze the challenges these methods face through empirical experiments.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- SparseDet: Improving Sparsely Annotated Object Detection with Pseudo-positive Mining [76.95808270536318]
We propose an end-to-end system that learns to separate proposals into labeled and unlabeled regions using Pseudo-positive mining.
While the labeled regions are processed as usual, self-supervised learning is used to process the unlabeled regions.
We conduct exhaustive experiments on five splits of the PASCAL-VOC and COCO datasets, achieving state-of-the-art performance.
arXiv Detail & Related papers (2022-01-12T18:57:04Z)
- Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training [5.874575666947381]
This study delves into semi-supervised object detection to improve detector performance with additional unlabeled data.
We propose a novel two-stage filtering algorithm to generate accurate pseudo-labels.
Our method achieves satisfactory improvements on MS-COCO and VOC benchmarks.
arXiv Detail & Related papers (2021-07-11T12:14:42Z)
- Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not rely on domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to erroneous high-confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo-labeling accuracy by drastically reducing the amount of noise encountered in the training process (a minimal selection sketch follows this entry).
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
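As noted in the UPS entry above, the selection idea admits a minimal sketch: keep a pseudo-label only when the model is both confident and certain. The thresholds and the use of variance across stochastic forward passes (e.g. MC-dropout) as the uncertainty measure are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def select_pseudo_labels(mean_probs, var_probs, conf_thresh=0.9, unc_thresh=0.05):
    """Uncertainty-aware selection (sketch): keep samples whose top-class
    confidence is high AND whose predictive variance is low.

    mean_probs: (N, C) mean class probabilities over T stochastic passes
    var_probs:  (N, C) per-class variance over the same passes
    Returns the indices of selected samples and their hard pseudo-labels.
    """
    labels = mean_probs.argmax(axis=1)
    idx = np.arange(len(labels))
    conf = mean_probs[idx, labels]   # confidence of the predicted class
    unc = var_probs[idx, labels]     # uncertainty of the predicted class
    keep = (conf >= conf_thresh) & (unc <= unc_thresh)
    return idx[keep], labels[keep]
```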