Dynamic Relevance Learning for Few-Shot Object Detection
- URL: http://arxiv.org/abs/2108.02235v3
- Date: Wed, 22 Mar 2023 13:26:59 GMT
- Title: Dynamic Relevance Learning for Few-Shot Object Detection
- Authors: Weijie Liu, Chong Wang, Haohe Li, Shenghao Yu and Jiafei Wu
- Abstract summary: We propose a dynamic relevance learning model, which utilizes the relationship between all support images and the Regions of Interest (RoIs) on the query images to construct a dynamic graph convolutional network (GCN).
The proposed model achieves the best overall performance, demonstrating its effectiveness in learning more generalized features.
- Score: 6.550840743803705
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Expensive bounding-box annotations have limited the development of the object
detection task. It is therefore necessary to focus on the more challenging task of
few-shot object detection, which requires the detector to recognize objects of
novel classes with only a few training samples. Many popular existing methods that
adopt a meta-learning-style training scheme, such as the Meta R-CNN series, have
achieved promising performance. However, the support data is only used as class
attention to guide the detection of each query image, and the relevance between the
support and query branches remains unexploited. Moreover, many recent works
treat the support data and query images as independent branches without
considering the relationship between them. To address this issue, we propose a
dynamic relevance learning model, which utilizes the relationship between all
support images and the Regions of Interest (RoIs) on the query images to construct a
dynamic graph convolutional network (GCN). By adjusting the prediction
distribution of the base detector using the output of this GCN, the proposed
model serves as a hard auxiliary classification task that implicitly guides the
detector toward better class representations. Comprehensive experiments have
been conducted on the Pascal VOC and MS-COCO datasets. The proposed model achieves
the best overall performance, demonstrating its effectiveness in learning more
generalized features. Our code is available at
https://github.com/liuweijie19980216/DRL-for-FSOD.
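For readers who want the gist in code, below is a minimal, illustrative PyTorch sketch of the mechanism described in the abstract: support-class prototypes and query RoI features form the nodes of a graph whose adjacency (relevance) is recomputed for every query, and the propagated node features yield auxiliary class logits that are blended into the base detector's prediction distribution. All names here (DynamicRelevanceGCN, adjust_prediction) and design details are assumptions made for illustration only, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRelevanceGCN(nn.Module):
    """Toy module: build a graph over support-class prototypes and query RoI
    features, recompute the adjacency (relevance) per query, and produce
    auxiliary class logits for the RoIs."""

    def __init__(self, feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.edge_proj = nn.Linear(feat_dim, hidden_dim)  # embeds nodes for relevance scores
        self.node_proj = nn.Linear(feat_dim, hidden_dim)  # graph-convolution transform

    def forward(self, support_feats: torch.Tensor, roi_feats: torch.Tensor) -> torch.Tensor:
        # support_feats: (C, D), one prototype per support class
        # roi_feats:     (R, D), RoI features pooled from the query image
        nodes = torch.cat([support_feats, roi_feats], dim=0)         # (C + R, D)

        # Dynamic adjacency: pairwise relevance recomputed for every query image.
        emb = self.edge_proj(nodes)                                   # (C + R, H)
        adj = F.softmax(emb @ emb.t() / emb.size(-1) ** 0.5, dim=-1)  # (C + R, C + R)

        # One graph-convolution step: aggregate neighbours, then transform.
        propagated = F.relu(self.node_proj(adj @ nodes))              # (C + R, H)

        # Auxiliary logits: similarity of each RoI node to each class node.
        num_classes = support_feats.size(0)
        cls_nodes, roi_nodes = propagated[:num_classes], propagated[num_classes:]
        return roi_nodes @ cls_nodes.t()                              # (R, C)


def adjust_prediction(base_logits: torch.Tensor,
                      aux_logits: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend the auxiliary distribution into the base detector's class scores."""
    return F.log_softmax(base_logits, dim=-1) + alpha * F.log_softmax(aux_logits, dim=-1)
```

The blending weight alpha and the single-layer graph convolution are simplifications; the point is only that the auxiliary classifier is driven by a graph whose edges are recomputed from the current support set and query RoIs rather than being fixed in advance.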
Related papers
- Fine-Grained Prototypes Distillation for Few-Shot Object Detection [8.795211323408513]
Few-shot object detection (FSOD) aims at extending a generic detector for novel object detection with only a few training examples.
In general, methods based on meta-learning employ an additional support branch to encode novel examples into class prototypes.
New methods are required to capture the distinctive local context for more robust novel object detection.
arXiv Detail & Related papers (2024-01-15T12:12:48Z)
- Query Adaptive Few-Shot Object Detection with Heterogeneous Graph Convolutional Networks [33.446875089255876]
Few-shot object detection (FSOD) aims to detect never-seen objects using few examples.
We propose a novel FSOD model using heterogeneous graph convolutional networks.
arXiv Detail & Related papers (2021-12-17T22:08:15Z)
- Experience feedback using Representation Learning for Few-Shot Object Detection on Aerial Images [2.8560476609689185]
The performance of our method is assessed on DOTA, a large-scale remote sensing image dataset.
In particular, the evaluation highlights some intrinsic weaknesses of the few-shot object detection task.
arXiv Detail & Related papers (2021-09-27T13:04:53Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Dense Relation Distillation with Context-aware Aggregation for Few-Shot Object Detection [18.04185751827619]
Few-shot object detection is challenging since the fine-grained features of novel objects can easily be overlooked when only a few samples are available.
We propose Dense Relation Distillation with Context-aware Aggregation (DCNet) to tackle the few-shot detection problem.
arXiv Detail & Related papers (2021-03-30T05:34:49Z)
- Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)