SKDF: A Simple Knowledge Distillation Framework for Distilling Open-Vocabulary Knowledge to Open-world Object Detector
- URL: http://arxiv.org/abs/2312.08653v2
- Date: Sat, 30 Mar 2024 06:05:40 GMT
- Title: SKDF: A Simple Knowledge Distillation Framework for Distilling Open-Vocabulary Knowledge to Open-world Object Detector
- Authors: Shuailei Ma, Yuefeng Wang, Ying Wei, Jiaqi Fan, Enming Zhang, Xinyu Sun, Peihao Chen
- Abstract summary: We specialize the VLM for OWOD tasks by distilling its open-world knowledge into a language-agnostic detector.
We observe that the combination of a simple knowledge distillation approach and the automatic pseudo-labeling mechanism in OWOD can achieve better performance for unknown object detection.
We propose two benchmarks for evaluating the ability of the open-world detector to detect unknown objects in the open world.
- Score: 8.956773268679811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we attempt to specialize the VLM for OWOD tasks by distilling its open-world knowledge into a language-agnostic detector. Surprisingly, we observe that the combination of a simple knowledge distillation approach and the automatic pseudo-labeling mechanism in OWOD can achieve better performance for unknown object detection, even with a small amount of data. Unfortunately, knowledge distillation for unknown objects severely affects the learning of detectors with conventional structures for known objects, leading to catastrophic forgetting. To alleviate these problems, we propose the down-weight loss function for knowledge distillation from vision-language to single vision modality. Meanwhile, we propose the cascade decoupled decoding structure, which decouples the learning of localization and recognition to reduce the impact of category interactions between known and unknown objects on the localization learning process. Ablation experiments demonstrate that both are effective in mitigating the impact of open-world knowledge distillation on the learning of known objects. Additionally, to alleviate the current lack of comprehensive benchmarks for evaluating the ability of open-world detectors to detect unknown objects in the open world, we propose two benchmarks, named "StandardSet♥" and "IntensiveSet♠" respectively, based on the complexity of their testing scenarios. Comprehensive experiments performed on OWOD, MS-COCO, and our proposed benchmarks demonstrate the effectiveness of our methods. The code and proposed dataset are available at https://github.com/xiaomabufei/SKDF.
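The down-weight loss function is only named in the abstract, not specified. As a minimal sketch of the idea, assuming a DETR-style detector whose region embeddings are distilled toward frozen VLM (e.g. CLIP) region features, one plausible form down-weights the distillation term on pseudo-labeled unknown boxes so it interferes less with the supervised learning of known objects. The function name, the cosine-alignment form, and the down_weight factor are all illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def down_weighted_kd_loss(region_embeds, vlm_embeds, is_unknown, down_weight=0.1):
    """Sketch of a down-weighted distillation loss (assumed form, not SKDF's exact one).

    region_embeds: (N, D) detector embeddings for matched boxes.
    vlm_embeds:    (N, D) target region features from a frozen VLM such as CLIP.
    is_unknown:    (N,) bool mask, True for pseudo-labeled unknown boxes.
    """
    # Per-box cosine alignment between detector and VLM embeddings.
    region = F.normalize(region_embeds, dim=-1)
    target = F.normalize(vlm_embeds, dim=-1)
    per_box = 1.0 - (region * target).sum(dim=-1)

    # Down-weight the distillation signal on unknowns so it does not
    # overwhelm the supervised learning of known categories.
    weights = torch.where(is_unknown,
                          torch.full_like(per_box, down_weight),
                          torch.ones_like(per_box))
    return (weights * per_box).mean()
```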
Related papers
- Open-World Object Detection with Instance Representation Learning [1.8749305679160366]
We propose a method to train an object detector that can both detect novel objects and extract semantically rich features in open-world conditions.
Our method learns a robust and generalizable feature space, outperforming other OWOD-based feature extraction methods.
arXiv Detail & Related papers (2024-09-24T13:13:34Z)
- Learning Background Prompts to Discover Implicit Knowledge for Open Vocabulary Object Detection [101.15777242546649]
Open vocabulary object detection (OVD) aims to learn an object detector capable of recognizing objects from both base and novel categories.
Recent advances leverage knowledge distillation to transfer insightful knowledge from pre-trained large-scale vision-language models to the task of object detection.
We present a novel OVD framework, termed LBP, that learns background prompts to harness implicit background knowledge.
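The mechanism behind "learning background prompts" is only named in this summary. A rough, hypothetical sketch of the general idea: score region features against frozen class text embeddings plus a small set of learnable background embeddings, so background regions have explicit prototypes to match instead of being forced onto object classes. All names and shapes below are assumptions, not the LBP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptClassifier(nn.Module):
    """Hypothetical sketch: classify regions against frozen class text
    embeddings plus K learnable background prompt embeddings."""

    def __init__(self, text_embeds: torch.Tensor, num_bg_prompts: int = 4):
        super().__init__()
        self.register_buffer("text_embeds", F.normalize(text_embeds, dim=-1))
        dim = text_embeds.shape[-1]
        self.bg_prompts = nn.Parameter(torch.randn(num_bg_prompts, dim) * 0.02)
        self.logit_scale = nn.Parameter(torch.tensor(100.0).log())

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # Scores over [num_classes + num_bg_prompts] classes per region.
        bg = F.normalize(self.bg_prompts, dim=-1)
        weights = torch.cat([self.text_embeds, bg], dim=0)
        feats = F.normalize(region_feats, dim=-1)
        return self.logit_scale.exp() * feats @ weights.t()
```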
arXiv Detail & Related papers (2024-06-01T17:32:26Z)
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- Unsupervised Recognition of Unknown Objects for Open-World Object Detection [28.787586991713535]
Open-World Object Detection (OWOD) extends the object detection problem to a realistic and dynamic scenario.
Current OWOD models, such as ORE and OW-DETR, focus on pseudo-labeling regions with high objectness scores as unknowns.
This paper proposes a novel approach that learns an unsupervised discriminative model to recognize true unknown objects.
arXiv Detail & Related papers (2023-08-31T08:17:29Z)
- Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
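PCA-based localization is only named here; a minimal sketch under common assumptions is to project a feature map onto its first principal component and threshold it into a coarse foreground mask, from which boxes can be derived. The sign handling and the mean threshold are illustrative choices, not necessarily this paper's.

```python
import torch

def pca_foreground_mask(feat_map: torch.Tensor) -> torch.Tensor:
    """Threshold the first principal component of a (C, H, W) feature
    map into a coarse foreground mask. A sketch, not the paper's code."""
    c, h, w = feat_map.shape
    x = feat_map.reshape(c, h * w).t()            # (HW, C) feature per location
    x = x - x.mean(dim=0, keepdim=True)           # center the features
    _, _, v = torch.pca_lowrank(x, q=1, center=False)
    proj = (x @ v[:, 0]).reshape(h, w)            # 1st principal component map
    mask = proj > proj.mean()
    if mask.float().mean() > 0.5:                 # resolve PCA sign ambiguity:
        mask = ~mask                              # assume foreground is smaller
    return mask
```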
arXiv Detail & Related papers (2023-07-07T04:03:48Z)
- Detecting the open-world objects with the help of the Brain [20.00772846521719]
Open World Object Detection (OWOD) is a novel computer vision task with a considerable challenge.
OWOD algorithms are expected to detect unseen/unknown objects and incrementally learn them.
We propose leveraging the VLM as the "Brain" of the open-world detector by simply generating unknown labels.
arXiv Detail & Related papers (2023-03-21T06:44:02Z)
- Open-World Object Detection via Discriminative Class Prototype Learning [4.055884768256164]
Open-world object detection (OWOD) is a challenging problem that combines object detection with incremental learning and open-set learning.
We propose a novel and efficient OWOD solution from a prototype perspective, which we call OCPL: Open-world object detection via discriminative Class Prototype Learning.
arXiv Detail & Related papers (2023-02-23T03:05:04Z)
- Open World DETR: Transformer based Open World Object Detection [60.64535309016623]
We propose a two-stage training approach named Open World DETR for open world object detection based on Deformable DETR.
We fine-tune the class-specific components of the model with a multi-view self-labeling strategy and a consistency constraint.
Our proposed method outperforms other state-of-the-art open world object detection methods by a large margin.
arXiv Detail & Related papers (2022-12-06T13:39:30Z)
- Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation [36.79599282372021]
We propose a hierarchical visual-language knowledge distillation method, i.e., HierKD, for open-vocabulary one-stage detection.
Our method significantly surpasses the previous best one-stage detector with 11.9% and 6.7% AP50 gains.
arXiv Detail & Related papers (2022-03-20T16:31:49Z)
- OW-DETR: Open-world Detection Transformer [90.56239673123804]
We introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection.
OW-DETR comprises three dedicated components, namely attention-driven pseudo-labeling (see the sketch below), novelty classification, and objectness scoring.
Our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall.
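Attention-driven pseudo-labeling is summarized only at a high level here. A rough sketch of the underlying idea, with assumed names and details: score each unmatched query box by the mean feature activation inside it (a proxy for attention-based objectness) and take the top-k as unknown pseudo-labels.

```python
import torch

def attention_pseudo_labels(feat_map, boxes, matched, top_k=5):
    """Pick top-k unmatched boxes by mean activation inside the box
    (an attention/objectness proxy). A sketch, not OW-DETR's exact scheme."""
    # feat_map: (C, H, W); boxes: (N, 4) normalized x1, y1, x2, y2;
    # matched: (N,) bool, True where a box was matched to a known class.
    act = feat_map.mean(dim=0)                    # (H, W) activation map
    h, w = act.shape
    scores = []
    for x1, y1, x2, y2 in boxes.tolist():
        c1 = min(int(x1 * w), w - 1); r1 = min(int(y1 * h), h - 1)
        c2 = max(int(x2 * w), c1 + 1); r2 = max(int(y2 * h), r1 + 1)
        scores.append(act[r1:r2, c1:c2].mean())
    scores = torch.stack(scores)
    scores[matched] = float("-inf")               # ignore known-class boxes
    k = min(top_k, int((~matched).sum()))
    return torch.topk(scores, k).indices          # unknown pseudo-label indices
```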
arXiv Detail & Related papers (2021-12-02T18:58:30Z)
- Towards Open World Object Detection [68.79678648726416]
ORE, the Open World Object Detector, is based on contrastive clustering and energy-based unknown identification (sketched below).
We find that identifying and characterizing unknown instances helps to reduce confusion in an incremental object detection setting.
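The energy-based part can be illustrated with the standard free-energy score over classification logits: lower energy suggests a known class, higher energy a candidate unknown. The temperature and threshold below are placeholders; ORE actually fits distributions over the energies of known and unknown instances rather than using a fixed cutoff.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Helmholtz free energy of the logits: E(x) = -T * logsumexp(logits / T)."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def flag_unknowns(logits: torch.Tensor, threshold: float = -5.0) -> torch.Tensor:
    # Placeholder cutoff; ORE fits energy distributions for knowns vs. unknowns.
    return energy_score(logits) > threshold
```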
arXiv Detail & Related papers (2021-03-03T18:58:18Z)