Open-Set Semi-Supervised Object Detection
- URL: http://arxiv.org/abs/2208.13722v1
- Date: Mon, 29 Aug 2022 17:04:30 GMT
- Title: Open-Set Semi-Supervised Object Detection
- Authors: Yen-Cheng Liu, Chih-Yao Ma, Xiaoliang Dai, Junjiao Tian, Peter Vajda,
Zijian He, Zsolt Kira
- Abstract summary: Recent developments for Semi-Supervised Object Detection (SSOD) have shown the promise of leveraging unlabeled data to improve an object detector.
We consider a more practical yet challenging problem, Open-Set Semi-Supervised Object Detection (OSSOD).
Our proposed framework effectively addresses the semantic expansion issue and shows consistent improvements on many OSSOD benchmarks.
- Score: 43.464223594166654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent developments for Semi-Supervised Object Detection (SSOD) have shown
the promise of leveraging unlabeled data to improve an object detector.
However, thus far these methods have assumed that the unlabeled data does not
contain out-of-distribution (OOD) classes, which is unrealistic with
larger-scale unlabeled datasets. In this paper, we consider a more practical
yet challenging problem, Open-Set Semi-Supervised Object Detection (OSSOD). We
first find that existing SSOD methods obtain lower performance gains under
open-set conditions, which is caused by semantic expansion: distracting OOD
objects are mispredicted as in-distribution pseudo-labels during
semi-supervised training. To address this problem, we consider online and
offline OOD detection modules integrated with SSOD methods. Through extensive
studies, we find that an offline OOD detector based on a self-supervised
vision transformer performs favorably against online OOD detectors due to its
robustness to the interference of pseudo-labeling. In our experiments, the
proposed framework effectively addresses the semantic expansion
issue and shows consistent improvements on many OSSOD benchmarks, including
large-scale COCO-OpenImages. We also verify the effectiveness of our framework
under different OSSOD conditions, including varying numbers of in-distribution
classes, different degrees of supervision, and different combinations of
unlabeled sets.
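The abstract does not spell out the scoring rule of the offline OOD detector, but the filtering idea can be sketched as: extract box features with a frozen self-supervised backbone (e.g., a DINO-style vision transformer), build per-class prototypes from the labeled data, and discard pseudo-labels whose features are far from every in-distribution prototype. The prototype construction, cosine-similarity score, function names, and threshold below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def build_prototypes(features, labels):
    """Mean feature vector per in-distribution (ID) class from labeled data."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def ood_scores(prototypes, candidate_features):
    """Cosine similarity of each candidate box feature to its nearest ID
    prototype; a low score suggests the candidate is OOD."""
    protos = np.stack(list(prototypes.values()))
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    cands = candidate_features / np.linalg.norm(
        candidate_features, axis=1, keepdims=True)
    return (cands @ protos.T).max(axis=1)

def filter_pseudo_labels(prototypes, candidate_features, threshold=0.5):
    """Boolean mask: True for pseudo-labels kept for SSOD training.
    The threshold value here is a placeholder, not from the paper."""
    return ood_scores(prototypes, candidate_features) >= threshold
```

Because the backbone is frozen and the filtering runs offline, the OOD decision is unaffected by drift in the detector's own pseudo-labels, which is the robustness argument the abstract makes for offline over online OOD detection.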
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed the Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z) - Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z) - Credible Teacher for Semi-Supervised Object Detection in Open Scene [106.25850299007674]
In Open Scene Semi-Supervised Object Detection (O-SSOD), unlabeled data may contain unknown objects not observed in the labeled data.
This is detrimental to current methods that mainly rely on self-training, as greater uncertainty lowers the localization and classification precision of pseudo-labels.
We propose Credible Teacher, an end-to-end framework to prevent uncertain pseudo labels from misleading the model.
arXiv Detail & Related papers (2024-01-01T08:19:21Z) - Semi-Supervised Object Detection with Uncurated Unlabeled Data for
Remote Sensing Images [16.660668160785615]
Semi-supervised object detection (SSOD) methods tackle this issue by generating pseudo-labels for the unlabeled data.
However, real-world situations introduce the possibility of out-of-distribution (OOD) samples being mixed with in-distribution (ID) samples within the unlabeled dataset.
We propose Open-Set Semi-Supervised Object Detection (OSSOD) on uncurated unlabeled data.
arXiv Detail & Related papers (2023-10-09T07:59:31Z) - Occlusion-Aware Detection and Re-ID Calibrated Network for Multi-Object
Tracking [38.36872739816151]
Occlusion-Aware Attention (OAA) module in the detector highlights the object features while suppressing the occluded background regions.
OAA can serve as a modulator that enhances the detector for some potentially occluded objects.
We design a Re-ID embedding matching block based on the optimal transport problem.
arXiv Detail & Related papers (2023-08-30T06:56:53Z) - SOOD: Towards Semi-Supervised Oriented Object Detection [57.05141794402972]
This paper proposes a novel Semi-supervised Oriented Object Detection model, termed SOOD, built upon the mainstream pseudo-labeling framework.
Our experiments show that when trained with the two proposed losses, SOOD surpasses the state-of-the-art SSOD methods under various settings on the DOTA-v1.5 benchmark.
arXiv Detail & Related papers (2023-04-10T11:10:42Z) - SIOD: Single Instance Annotated Per Category Per Image for Object
Detection [67.64774488115299]
We propose the Single Instance annotated Object Detection (SIOD), requiring only one instance annotation for each existing category in an image.
By reducing inter-task (WSOD) or inter-image (SSOD) discrepancies to an intra-image discrepancy, SIOD provides more reliable and rich prior knowledge for mining the remaining unlabeled instances.
Under the SIOD setting, we propose a simple yet effective framework, termed Dual-Mining (DMiner), which consists of a Similarity-based Pseudo Label Generating module (SPLG) and a Pixel-level Group Contrastive Learning module (PGCL).
arXiv Detail & Related papers (2022-03-29T08:49:51Z)
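The occlusion-aware tracking entry above mentions a Re-ID embedding matching block based on the optimal transport problem. A common way to realize such matching is entropic-regularized optimal transport solved with Sinkhorn iterations over a detection-track cost matrix; the sketch below is a generic illustration under that assumption, not that paper's exact formulation.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iters=200):
    """Entropic-regularized optimal transport with uniform marginals.

    cost[i, j] is the Re-ID distance between detection i and track j;
    the returned plan puts high mass on likely detection-track matches.
    """
    n, m = cost.shape
    K = np.exp(-cost / eps)   # Gibbs kernel of the cost matrix
    r = np.full(n, 1.0 / n)   # uniform marginal over detections
    c = np.full(m, 1.0 / m)   # uniform marginal over tracks
    v = np.ones(m)
    for _ in range(n_iters):  # alternating Sinkhorn scaling updates
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

Matches can then be read off with a row-wise argmax or a threshold on the plan entries.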
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.