Dense Relation Distillation with Context-aware Aggregation for Few-Shot
Object Detection
- URL: http://arxiv.org/abs/2103.17115v1
- Date: Tue, 30 Mar 2021 05:34:49 GMT
- Title: Dense Relation Distillation with Context-aware Aggregation for Few-Shot
Object Detection
- Authors: Hanzhe Hu, Shuai Bai, Aoxue Li, Jinshi Cui, Liwei Wang
- Abstract summary: Few-shot object detection is challenging since the fine-grained features of novel objects are easily overlooked when only a few examples are available.
We propose Dense Relation Distillation with Context-aware Aggregation (DCNet) to tackle the few-shot detection problem.
- Score: 18.04185751827619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional deep learning based methods for object detection require a large
amount of bounding box annotations for training, and such high-quality annotated data is
expensive to obtain. Few-shot object detection, which learns to adapt to novel classes with
only a few annotated examples, is very challenging since the fine-grained features of novel
objects are easily overlooked when only a few examples are available. In this work, aiming to
fully exploit the features of annotated novel objects and to capture fine-grained features of
query objects, we propose Dense Relation Distillation with Context-aware Aggregation (DCNet)
to tackle the few-shot detection problem. Built on a meta-learning based framework, the Dense
Relation Distillation module targets fully exploiting support features: support and query
features are densely matched, covering all spatial locations in a feed-forward fashion. This
abundant use of guidance information endows the model with the capability to handle common
challenges such as appearance changes and occlusions. Moreover, to better capture scale-aware
features, the Context-aware Aggregation module adaptively harnesses features from different
scales for a more comprehensive feature representation. Extensive experiments show that our
proposed approach achieves state-of-the-art results on the PASCAL VOC and MS COCO datasets.
Code will be made available at https://github.com/hzhupku/DCNet.
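
The two modules named in the abstract lend themselves to a compact illustration. Below is a minimal PyTorch-style sketch of how dense query-support matching and scale-adaptive RoI fusion could be wired up. The class names, tensor shapes, key dimension, pooling scales, and the attention formulation are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch under assumed shapes and formulations; not the official DCNet code.
# Dense relation distillation is modeled as key/value cross-attention from support
# features to every spatial location of the query feature map; context-aware
# aggregation as a softmax-gated fusion of RoI features pooled at several scales.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseRelationDistillation(nn.Module):
    """Densely match every query location against all support locations."""

    def __init__(self, channels: int, key_dim: int = 128):
        super().__init__()
        self.query_key = nn.Conv2d(channels, key_dim, 1)
        self.support_key = nn.Conv2d(channels, key_dim, 1)
        self.support_value = nn.Conv2d(channels, channels, 1)

    def forward(self, query_feat, support_feat):
        # query_feat: (B, C, Hq, Wq); support_feat: (B, C, Hs, Ws)
        b, c, hq, wq = query_feat.shape
        q = self.query_key(query_feat).flatten(2)        # (B, K, Hq*Wq)
        k = self.support_key(support_feat).flatten(2)    # (B, K, Hs*Ws)
        v = self.support_value(support_feat).flatten(2)  # (B, C, Hs*Ws)
        # Affinity between every query location and every support location.
        affinity = torch.einsum("bkq,bks->bqs", q, k)    # (B, Hq*Wq, Hs*Ws)
        attn = F.softmax(affinity / q.shape[1] ** 0.5, dim=-1)
        distilled = torch.einsum("bqs,bcs->bcq", attn, v)
        distilled = distilled.reshape(b, c, hq, wq)
        # Residual fusion keeps the original query evidence.
        return query_feat + distilled


class ContextAwareAggregation(nn.Module):
    """Adaptively fuse RoI features pooled at several resolutions."""

    def __init__(self, channels: int, scales=(4, 8, 12)):
        super().__init__()
        self.scales = scales
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(scales), 1),
        )

    def forward(self, roi_feat):
        # roi_feat: (N, C, H, W) pooled RoI features.
        weights = F.softmax(self.gate(roi_feat), dim=1)  # (N, S, 1, 1)
        out_size = roi_feat.shape[-2:]
        fused = 0
        for i, s in enumerate(self.scales):
            pooled = F.adaptive_avg_pool2d(roi_feat, s)
            up = F.interpolate(pooled, size=out_size, mode="bilinear",
                               align_corners=False)
            fused = fused + weights[:, i:i + 1] * up
        return fused


if __name__ == "__main__":
    query = torch.randn(2, 256, 32, 32)
    support = torch.randn(2, 256, 16, 16)
    enriched = DenseRelationDistillation(256)(query, support)   # (2, 256, 32, 32)
    rois = torch.randn(8, 256, 12, 12)
    fused = ContextAwareAggregation(256)(rois)                  # (8, 256, 12, 12)
    print(enriched.shape, fused.shape)
```

The residual connection in DenseRelationDistillation lets the distilled support guidance augment rather than replace the query evidence, and the softmax gate in ContextAwareAggregation is one simple way to weight the pooling scales adaptively; both choices are design assumptions for this sketch.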
Related papers
- Adaptive Guidance Learning for Camouflaged Object Detection [23.777432551429396]
This paper proposes an adaptive guidance learning network, dubbed AGLNet, to guide accurate camouflaged feature learning.
Experiments on three widely used COD benchmark datasets demonstrate that the proposed method achieves significant performance improvements.
arXiv Detail & Related papers (2024-05-05T06:21:58Z)
- Few-shot Oriented Object Detection with Memorable Contrastive Learning in Remote Sensing Images [11.217630579076237]
Few-shot object detection (FSOD) has garnered significant research attention in the field of remote sensing.
We propose a novel FSOD method for remote sensing images called Few-shot Oriented object detection with Memorable Contrastive learning (FOMC)
Specifically, we employ oriented bounding boxes instead of traditional horizontal bounding boxes to learn a better feature representation for arbitrary-oriented aerial objects.
arXiv Detail & Related papers (2024-03-20T08:15:18Z)
- Fine-Grained Prototypes Distillation for Few-Shot Object Detection [8.795211323408513]
Few-shot object detection (FSOD) aims at extending a generic detector for novel object detection with only a few training examples.
In general, methods based on meta-learning employ an additional support branch to encode novel examples into class prototypes.
New methods are required to capture the distinctive local context for more robust novel object detection.
arXiv Detail & Related papers (2024-01-15T12:12:48Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Discovery-and-Selection: Towards Optimal Multiple Instance Learning for Weakly Supervised Object Detection [86.86602297364826]
We propose a discovery-and-selection approach fused with multiple instance learning (DS-MIL).
Our proposed DS-MIL approach can consistently improve the baselines, reporting state-of-the-art performance.
arXiv Detail & Related papers (2021-10-18T07:06:57Z)
- Dynamic Relevance Learning for Few-Shot Object Detection [6.550840743803705]
We propose a dynamic relevance learning model, which utilizes the relationships between all support images and Regions of Interest (RoIs) on the query images to construct a dynamic graph convolutional network (GCN).
The proposed model achieves the best overall performance, demonstrating its effectiveness in learning more generalized features.
arXiv Detail & Related papers (2021-08-04T18:29:42Z)
- Meta Faster R-CNN: Towards Accurate Few-Shot Object Detection with Attentive Feature Alignment [33.446875089255876]
Few-shot object detection (FSOD) aims to detect objects using only a few examples.
We propose a meta-learning based few-shot object detection method by transferring meta-knowledge learned from data-abundant base classes to data-scarce novel classes.
arXiv Detail & Related papers (2021-04-15T19:01:27Z)
- Multi-scale Interactive Network for Salient Object Detection [91.43066633305662]
We propose the aggregate interaction modules to integrate the features from adjacent levels.
To obtain more efficient multi-scale features, the self-interaction modules are embedded in each decoder unit.
Experimental results on five benchmark datasets demonstrate that the proposed method without any post-processing performs favorably against 23 state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-17T15:41:37Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)