Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection
- URL: http://arxiv.org/abs/2303.14960v1
- Date: Mon, 27 Mar 2023 07:46:58 GMT
- Title: Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection
- Authors: Chang Liu, Weiming Zhang, Xiangru Lin, Wei Zhang, Xiao Tan, Junyu Han,
Xiaomao Li, Errui Ding, Jingdong Wang
- Abstract summary: We propose an Ambiguity-Resistant Semi-supervised Learning (ARSL) method for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
- Score: 98.66771688028426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With basic Semi-Supervised Object Detection (SSOD) techniques, one-stage
detectors generally obtain limited gains compared with two-stage detectors.
We experimentally find that the root cause lies in two kinds of ambiguity: (1)
Selection ambiguity: the selected pseudo labels are less accurate, since
classification scores cannot properly represent localization quality. (2)
Assignment ambiguity: samples are matched with improper labels during
pseudo-label assignment, as the strategy is misguided by missed objects and
inaccurate pseudo boxes. To tackle these problems, we propose
Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Specifically, to alleviate the selection ambiguity, Joint-Confidence Estimation
(JCE) is proposed to jointly quantify the classification and localization
quality of pseudo labels. As for the assignment ambiguity, Task-Separation
Assignment (TSA) is introduced to assign labels based on pixel-level
predictions rather than unreliable pseudo boxes. It employs a
"divide-and-conquer" strategy and separately exploits positives for the
classification and localization task, which is more robust to the assignment
ambiguity. Comprehensive experiments demonstrate that ARSL effectively
mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS
COCO and PASCAL VOC. Codes can be found at
https://github.com/PaddlePaddle/PaddleDetection.
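The selection-ambiguity fix can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: JCE learns a joint estimate inside the detection head, whereas here a classification score and a predicted IoU are simply multiplied to rank pseudo labels by both qualities at once. All function names, the example numbers, and the 0.5 threshold are assumptions for illustration.

```python
def joint_confidence(cls_scores, iou_scores):
    """Fuse classification and localization quality into a single score.

    Hypothetical sketch: multiplying the two qualities merely illustrates
    scoring pseudo labels by both tasks at once, the idea behind JCE.
    """
    return [c * q for c, q in zip(cls_scores, iou_scores)]


def select_pseudo_labels(boxes, cls_scores, iou_scores, thresh=0.5):
    """Keep candidate boxes whose joint confidence clears the threshold,
    instead of filtering on the classification score alone."""
    joint = joint_confidence(cls_scores, iou_scores)
    return [(b, j) for b, j in zip(boxes, joint) if j >= thresh]


# A box with a high class score but poor localization (0.9 * 0.3 = 0.27)
# is rejected, while one that is decent at both tasks (0.8 * 0.85 = 0.68)
# survives -- the case a classification-only score would rank incorrectly.
boxes = [(10, 10, 50, 50), (20, 20, 60, 60)]
kept = select_pseudo_labels(boxes, [0.9, 0.8], [0.3, 0.85])
```

Under a classification-only criterion the first box (score 0.9) would have been the stronger pseudo label; weighting in localization quality reverses that ranking, which is the selection ambiguity the paper targets.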
Related papers
- Towards Adaptive Pseudo-label Learning for Semi-Supervised Temporal Action Localization [10.233225586034665]
Existing methods often filter pseudo labels based on strict conditions, leading to suboptimal pseudo-label ranking and selection.
We propose a novel Adaptive Pseudo-label Learning framework to facilitate better pseudo-label selection.
Our method achieves state-of-the-art performance under various semi-supervised settings.
arXiv Detail & Related papers (2024-07-10T14:00:19Z) - Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - Virtual Category Learning: A Semi-Supervised Learning Method for Dense
Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the promise of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z) - Prompt-based Pseudo-labeling Strategy for Sample-Efficient Semi-Supervised Extractive Summarization [12.582774521907227]
Semi-supervised learning (SSL) is a widely used technique in scenarios where labeled data is scarce and unlabeled data is abundant.
Standard SSL methods follow a teacher-student paradigm to first train a classification model and then use the classifier's confidence values to select pseudo-labels.
We propose a prompt-based pseudo-labeling strategy with LLMs that picks unlabeled examples with more accurate pseudo-labels.
arXiv Detail & Related papers (2023-11-16T04:29:41Z) - Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label
Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z) - Mind the Gap: Polishing Pseudo labels for Accurate Semi-supervised
Object Detection [18.274860417877093]
We propose a dual pseudo-label polishing framework for semi-supervised object detection (SSOD)
Instead of directly exploiting the pseudo labels produced by the teacher detector, we make a first attempt to reduce their deviation from the ground truth.
By doing this, both polishing networks can infer more accurate pseudo labels for unannotated objects.
arXiv Detail & Related papers (2022-07-17T14:07:49Z) - Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
The label mismatch problem has not been fully explored in previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z) - Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z) - Hard Class Rectification for Domain Adaptation [36.58361356407803]
Domain adaptation (DA) aims to transfer knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain)
We propose a novel framework, called Hard Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class problem.
The proposed method is evaluated in both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA)
arXiv Detail & Related papers (2020-08-08T06:21:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.