Semi-Supervised Object Detection with Object-wise Contrastive Learning
and Regression Uncertainty
- URL: http://arxiv.org/abs/2212.02747v1
- Date: Tue, 6 Dec 2022 04:37:51 GMT
- Title: Semi-Supervised Object Detection with Object-wise Contrastive Learning
and Regression Uncertainty
- Authors: Honggyu Choi, Zhixiang Chen, Xuepeng Shi, Tae-Kyun Kim
- Abstract summary: We propose a two-step pseudo-label filtering for the classification and regression heads in a teacher-student framework.
By jointly filtering the pseudo-labels for the classification and regression heads, the student network receives better guidance from the teacher network for the object detection task.
- Score: 46.21528260727673
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Semi-supervised object detection (SSOD) aims to boost detection performance
by leveraging extra unlabeled data. The teacher-student framework has been
shown to be promising for SSOD, in which a teacher network generates
pseudo-labels for unlabeled data to assist the training of a student network.
Since the pseudo-labels are noisy, filtering the pseudo-labels is crucial to
exploit the potential of such framework. Unlike existing suboptimal methods, we
propose a two-step pseudo-label filtering for the classification and regression
heads in a teacher-student framework. For the classification head, OCL
(Object-wise Contrastive Learning) regularizes object representation learning
using unlabeled data to improve pseudo-label filtering by enhancing the
discriminativeness of the classification score. This is designed
to pull together objects in the same class and push away objects from different
classes. For the regression head, we further propose RUPL
(Regression-Uncertainty-guided Pseudo-Labeling) to learn the aleatoric
uncertainty of object localization for label filtering. By jointly filtering
the pseudo-labels for the classification and regression heads, the student
network receives better guidance from the teacher network for the object
detection task. Experimental results on the Pascal VOC and MS-COCO datasets
demonstrate the superiority of our proposed method, which achieves competitive
performance compared to existing methods.
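
As a concrete illustration of the object-wise contrastive idea described in the abstract (pulling same-class objects together and pushing different-class objects apart in feature space), the following is a minimal sketch of a supervised-contrastive-style loss over per-object embeddings. The function name, temperature, and batching are illustrative assumptions; the paper's exact OCL formulation may differ.

```python
import torch
import torch.nn.functional as F

def object_wise_contrastive_loss(embeddings: torch.Tensor,
                                 labels: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) per-object features; labels: (N,) class ids."""
    z = F.normalize(embeddings, dim=1)                 # unit-length features
    sim = z @ z.t() / temperature                      # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-similarity
    # Positive pairs are other objects with the same class label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each object's positives; objects with no
    # same-class partner in the batch are skipped.
    masked_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    return -(masked_log_prob.sum(dim=1)[valid] / pos_counts[valid]).mean()
```

In a teacher-student detector, such a loss would plausibly be applied to RoI features of labeled and pseudo-labeled objects alongside the usual detection losses.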
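The regression-uncertainty-guided filtering can likewise be pictured as a head that predicts a per-coordinate variance, trained with a Gaussian negative log-likelihood, with teacher boxes of high predicted uncertainty discarded before being used as pseudo-labels. The sketch below is a minimal illustration under that assumption; the thresholds, head design, and exact loss are hypothetical, not the paper's RUPL definition.

```python
import torch

def regression_nll_loss(pred_deltas: torch.Tensor,
                        pred_log_var: torch.Tensor,
                        target_deltas: torch.Tensor) -> torch.Tensor:
    """Gaussian NLL for box regression: the head predicts box deltas and a
    log-variance per coordinate, so it learns aleatoric localization
    uncertainty alongside the deltas."""
    var = pred_log_var.exp()
    nll = 0.5 * (pred_deltas - target_deltas).pow(2) / var + 0.5 * pred_log_var
    return nll.mean()

def filter_pseudo_labels(boxes: torch.Tensor,
                         scores: torch.Tensor,
                         pred_log_var: torch.Tensor,
                         score_thresh: float = 0.7,
                         var_thresh: float = 0.5) -> torch.Tensor:
    """Two-step filtering sketch: keep teacher boxes whose classification
    score is high enough AND whose mean predicted localization variance is
    low enough (thresholds are illustrative)."""
    mean_var = pred_log_var.exp().mean(dim=1)   # (N,) mean variance per box
    keep = (scores > score_thresh) & (mean_var < var_thresh)
    return boxes[keep]
```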
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Online Pseudo-Label Unified Object Detection for Multiple Datasets Training [0.0]
We propose an Online Pseudo-Label Unified Object Detection scheme.
Our method uses a periodically updated teacher model to generate pseudo-labels for the unlabelled objects in each sub-dataset.
We also propose a category-specific box regression and a pseudo-label RPN head to improve the recall rate of the Region Proposal Network (RPN).
arXiv Detail & Related papers (2024-10-21T01:23:42Z)
- Versatile Teacher: A Class-aware Teacher-student Framework for Cross-domain Adaptation [2.9748058103007957]
We introduce a novel teacher-student model named Versatile Teacher (VT).
VT considers class-specific detection difficulty and employs a two-step pseudo-label selection mechanism to generate more reliable pseudo labels.
Our method demonstrates promising results on three benchmark datasets and extends alignment methods to widely used one-stage detectors.
arXiv Detail & Related papers (2024-05-20T03:31:43Z)
- Credible Teacher for Semi-Supervised Object Detection in Open Scene [106.25850299007674]
In Open Scene Semi-Supervised Object Detection (O-SSOD), unlabeled data may contain unknown objects not observed in the labeled data.
This is detrimental to current methods that mainly rely on self-training, as the added uncertainty lowers the localization and classification precision of pseudo labels.
We propose Credible Teacher, an end-to-end framework to prevent uncertain pseudo labels from misleading the model.
arXiv Detail & Related papers (2024-01-01T08:19:21Z)
- Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant [72.4512562104361]
We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor.
Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg), to disentangle the effects of pseudo labels on the feature extractor and the mask predictor.
arXiv Detail & Related papers (2023-01-18T07:11:24Z)
- Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
The label mismatch problem is not yet fully explored in previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z)
- Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification [84.72303377833732]
Unsupervised object re-identification aims to learn discriminative representations for object retrieval without any annotations.
We propose to estimate pseudo label similarities between consecutive training generations with clustering consensus and refine pseudo labels with temporally propagated and ensembled pseudo labels.
The proposed pseudo label refinery strategy is simple yet effective and can be seamlessly integrated into existing clustering-based unsupervised re-identification methods.
arXiv Detail & Related papers (2021-06-11T02:42:42Z)