Revisiting Open World Object Detection
- URL: http://arxiv.org/abs/2201.00471v2
- Date: Tue, 4 Jan 2022 04:58:06 GMT
- Title: Revisiting Open World Object Detection
- Authors: Xiaowei Zhao, Xianglong Liu, Yifan Shen, Yixuan Qiao, Yuqing Ma,
Duorui Wang
- Abstract summary: We find that although the only previous OWOD work constructively puts forward the OWOD definition, its experimental settings are unreasonable.
We propose five fundamental benchmark principles to guide the OWOD benchmark construction.
Our method outperforms other state-of-the-art object detection approaches in terms of both existing and our new metrics.
- Score: 39.49589782316664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open World Object Detection (OWOD), simulating the real dynamic world where
knowledge grows continuously, attempts to detect both known and unknown classes
and incrementally learn the identified unknown ones. We find that although the
only previous OWOD work constructively puts forward the OWOD definition, its
experimental settings are unreasonable, with an illogical benchmark, confusing
metric calculation, and an inappropriate method. In this paper, we rethink the
OWOD experimental setting and propose five fundamental benchmark principles to
guide the OWOD benchmark construction. Moreover, we design two fair evaluation
protocols specific to the OWOD problem, filling the void of evaluating from the
perspective of unknown classes. Furthermore, we introduce a novel and effective
OWOD framework containing an auxiliary Proposal ADvisor (PAD) and a
Class-specific Expelling Classifier (CEC). The non-parametric PAD could assist
the RPN in identifying accurate unknown proposals without supervision, while
CEC calibrates the over-confident activation boundary and filters out confusing
predictions through a class-specific expelling function. Comprehensive
experiments conducted on our fair benchmark demonstrate that our method
outperforms other state-of-the-art object detection approaches in terms of both
existing and our new metrics. Our benchmark and code are available at
https://github.com/RE-OWOD/RE-OWOD.
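The abstract describes the Class-specific Expelling Classifier (CEC) as calibrating an over-confident activation boundary and filtering out confusing predictions. The exact expelling function is not given here; the sketch below is only a loose illustration of the general idea (all names, the thresholding rule, and the unknown-label convention are hypothetical, not the authors' implementation): a proposal whose activation falls below a calibrated per-class boundary is "expelled" from that class, and a proposal expelled from every known class is treated as unknown.

```python
import numpy as np

def expel_confusing(logits, class_thresholds):
    """Hypothetical class-specific expelling sketch (not the paper's exact CEC).

    logits: (N, C) activation scores for N proposals over C known classes.
    class_thresholds: (C,) calibrated per-class activation boundaries.
    Returns per-proposal labels; -1 marks a proposal expelled from all
    known classes, i.e. treated as unknown.
    """
    logits = np.asarray(logits, dtype=float)
    keep = logits >= np.asarray(class_thresholds, dtype=float)  # (N, C) mask
    masked = np.where(keep, logits, -np.inf)  # expel below-boundary activations
    labels = np.argmax(masked, axis=1)
    labels[~keep.any(axis=1)] = -1            # expelled everywhere -> unknown
    return labels

# Two proposals, three known classes: the first clears class 0's boundary,
# the second clears none and is routed to "unknown".
logits = [[2.0, 0.1, 0.3],
          [0.2, 0.1, 0.05]]
thresholds = [1.0, 0.5, 0.5]
print(expel_confusing(logits, thresholds))  # [ 0 -1]
```

How the per-class boundaries are calibrated is the substance of the paper's CEC and is not reproduced here.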
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed the Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Open-set object detection: towards unified problem formulation and benchmarking [2.4374097382908477]
We introduce two benchmarks: a unified VOC-COCO evaluation, and the new OpenImagesRoad benchmark which provides clear hierarchical object definition besides new evaluation metrics.
State-of-the-art methods are extensively evaluated on the proposed benchmarks.
This study provides a clear problem definition, ensures consistent evaluations, and draws new conclusions about effectiveness of OSOD strategies.
arXiv Detail & Related papers (2024-11-08T13:40:01Z)
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips the UAD with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z)
- Semi-DETR: Semi-Supervised Object Detection with Detection Transformers [105.45018934087076]
We analyze the DETR-based framework on semi-supervised object detection (SSOD).
We present Semi-DETR, the first transformer-based end-to-end semi-supervised object detector.
Our method outperforms all state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2023-07-16T16:32:14Z)
- Open-World Object Detection via Discriminative Class Prototype Learning [4.055884768256164]
Open-world object detection (OWOD) is a challenging problem that combines object detection with incremental learning and open-set learning.
We propose a novel and efficient OWOD solution from a prototype perspective, which we call OCPL: Open-world object detection via discriminative Class Prototype Learning.
arXiv Detail & Related papers (2023-02-23T03:05:04Z)
- Dense Learning based Semi-Supervised Object Detection [46.885301243656045]
Semi-supervised object detection (SSOD) aims to facilitate the training and deployment of object detectors with the help of a large amount of unlabeled data.
In this paper, we propose a DenSe Learning based anchor-free SSOD algorithm.
Experiments are conducted on MS-COCO and PASCAL-VOC, and the results show that our proposed DSL method records new state-of-the-art SSOD performance.
arXiv Detail & Related papers (2022-04-15T02:31:02Z)
- OW-DETR: Open-world Detection Transformer [90.56239673123804]
We introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection.
OW-DETR comprises three dedicated components namely, attention-driven pseudo-labeling, novelty classification and objectness scoring.
Our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall.
arXiv Detail & Related papers (2021-12-02T18:58:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.