Semi-supervised Object Detection via Virtual Category Learning
- URL: http://arxiv.org/abs/2207.03433v1
- Date: Thu, 7 Jul 2022 16:59:53 GMT
- Title: Semi-supervised Object Detection via Virtual Category Learning
- Authors: Changrui Chen, Kurt Debattista, Jungong Han
- Abstract summary: This paper proposes to use confusing samples proactively without label correction.
Specifically, a virtual category (VC) is assigned to each confusing sample.
This is achieved by specifying the embedding distance between the training sample and the virtual category as a lower bound on the inter-class distance.
- Score: 68.26956850996976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the costliness of labelled data in real-world applications,
semi-supervised object detectors, underpinned by pseudo labelling, are
appealing. However, handling confusing samples is nontrivial: discarding
valuable confusing samples would compromise the model generalisation while
using them for training would exacerbate the confirmation bias issue caused by
inevitable mislabelling. To solve this problem, this paper proposes to use
confusing samples proactively without label correction. Specifically, a virtual
category (VC) is assigned to each confusing sample such that they can safely
contribute to the model optimisation even without a concrete label. This is
achieved by specifying the embedding distance between the training sample and
the virtual category as the lower bound of the inter-class distance. Moreover,
we also modify the localisation loss to allow high-quality boundaries for
location regression. Extensive experiments demonstrate that the proposed VC
learning significantly surpasses the state-of-the-art, especially with small
amounts of available labels.
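To make the core idea concrete, here is a minimal NumPy sketch of a hinge-style loss in the spirit described by the abstract: the distance from a confusing sample's embedding to its virtual category acts as a lower bound on its distance to every real class prototype, so the sample can contribute a gradient without committing to a concrete label. The function name, the prototype representation, and the hinge form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def vc_margin_loss(embedding, class_protos, vc_proto):
    """Sketch of virtual-category (VC) margin learning.

    embedding:    (d,) feature of one confusing sample
    class_protos: (C, d) prototypes of the C real classes
    vc_proto:     (d,) prototype of the sample's virtual category

    The distance to the VC serves as a lower bound on the
    inter-class distance: any real class prototype that sits
    closer to the sample than the VC incurs a hinge penalty.
    """
    d_vc = np.linalg.norm(embedding - vc_proto)
    d_real = np.linalg.norm(class_protos - embedding, axis=1)
    # Penalise only the real classes that violate the lower bound.
    return float(np.sum(np.maximum(0.0, d_vc - d_real)))
```

When every real class prototype is farther from the sample than the virtual category, the loss is zero; otherwise the violating classes are pushed away, which is how a confusing sample can safely shape the embedding space.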
Related papers
- Reduction-based Pseudo-label Generation for Instance-dependent Partial Label Learning [41.345794038968776]
We propose to leverage reduction-based pseudo-labels to alleviate the influence of incorrect candidate labels.
We show that reduction-based pseudo-labels exhibit greater consistency with the Bayes optimal classifier compared to pseudo-labels directly generated from the predictive model.
arXiv Detail & Related papers (2024-10-28T07:32:20Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10 and CIFAR100 with artificial noise and on real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Improving Generalization of Deep Fault Detection Models in the Presence of Mislabeled Data [1.3535770763481902]
We propose a novel two-step framework for robust training with label noise.
In the first step, we identify outliers (including the mislabeled samples) based on the update in the hypothesis space.
In the second step, we propose different approaches to modifying the training data based on the identified outliers and a data augmentation technique.
arXiv Detail & Related papers (2020-09-30T12:33:25Z)
- Unsupervised Vehicle Re-identification with Progressive Adaptation [26.95027290004128]
Vehicle re-identification (reID) aims at identifying vehicles across different non-overlapping camera views.
We propose a novel progressive adaptation learning method for vehicle reID, named PAL, which infers from the abundant data without annotations.
arXiv Detail & Related papers (2020-06-20T03:59:41Z)
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.