Few-Shot Object Detection via Knowledge Transfer
- URL: http://arxiv.org/abs/2008.12496v1
- Date: Fri, 28 Aug 2020 06:35:27 GMT
- Title: Few-Shot Object Detection via Knowledge Transfer
- Authors: Geonuk Kim, Hong-Gyu Jung, Seong-Whan Lee
- Abstract summary: Conventional methods for object detection usually require substantial amounts of training data and annotated bounding boxes.
In this paper, we introduce few-shot object detection via knowledge transfer, which aims to detect objects from only a few training examples.
- Score: 21.3564383157159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional methods for object detection usually require substantial amounts of training data and annotated bounding boxes. With only a few training examples and annotations, object detectors easily overfit and fail to generalize, which exposes a practical weakness of such detectors. Humans, on the other hand, can easily master new reasoning rules from only a few demonstrations by using previously learned knowledge. In this paper, we introduce few-shot object detection via knowledge transfer, which aims to detect objects from only a few training examples. Central to our method is prototypical knowledge transfer with an attached meta-learner. The meta-learner takes support-set images that include the few examples of the novel categories and the base categories, and predicts a prototype vector representing each category. The prototypes then reweight each RoI (Region-of-Interest) feature vector from a query image to remodel the R-CNN predictor heads. To facilitate this remodeling, we predict the prototypes under a graph structure that propagates information from the correlated base categories to the novel categories, with explicit guidance from prior knowledge representing the correlations among categories. Extensive experiments on the PASCAL VOC dataset verify the effectiveness of the proposed method.
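As a reading aid, here is a minimal, illustrative sketch of the two mechanisms the abstract describes: graph-structured propagation that refines category prototypes using prior correlations between base and novel categories, and prototype-based reweighting of query RoI features before the R-CNN predictor heads. This is not the authors' implementation; the module names, feature dimensions, normalized-adjacency propagation rule, and sigmoid channel gating are assumptions made for illustration.

```python
# Illustrative sketch only: module names, dimensions, and the propagation /
# reweighting rules are assumptions, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeGraphPropagation(nn.Module):
    """Refine per-category prototypes with one graph-convolution step.

    `adj` encodes prior knowledge about category correlations; each row is
    normalized so a prototype mixes in features of its correlated categories.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        self.linear = nn.Linear(feat_dim, feat_dim)

    def forward(self, prototypes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # prototypes: (C, D); adj: (C, C) with self-loops on the diagonal.
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return F.relu(self.linear(adj @ prototypes))


class PrototypeReweightedHead(nn.Module):
    """Reweight RoI features with category prototypes, then score each class."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.cls_score = nn.Linear(feat_dim, 1)  # one score per (RoI, class) pair

    def forward(self, roi_feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        # roi_feats: (R, D) query RoI vectors; prototypes: (C, D).
        # Channel-wise gating: each class prototype reweights each RoI vector.
        gated = roi_feats.unsqueeze(1) * torch.sigmoid(prototypes).unsqueeze(0)  # (R, C, D)
        return self.cls_score(gated).squeeze(-1)  # (R, C) class logits


if __name__ == "__main__":
    C, D, R = 20, 256, 64  # categories (base + novel), feature dim, RoIs
    support_protos = torch.randn(C, D)              # prototypes from the meta-learner
    adj = torch.rand(C, C)                          # prior category-correlation graph
    adj.fill_diagonal_(1.0)
    roi_feats = torch.randn(R, D)                   # RoI features from a query image

    protos = PrototypeGraphPropagation(D)(support_protos, adj)
    logits = PrototypeReweightedHead(D)(roi_feats, protos)
    print(logits.shape)  # torch.Size([64, 20])
```

In the actual method the prototypes come from a meta-learner over support images and the reweighted features feed the full R-CNN predictor heads; the sketch only shows the classification path.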
Related papers
- Few-Shot Object Detection with Sparse Context Transformers [37.106378859592965]
Few-shot detection is a major task in pattern recognition that seeks to localize objects using models trained with only a few labeled examples.
We propose a novel sparse context transformer (SCT) that effectively leverages object knowledge in the source domain and automatically learns a sparse context from only a few training images in the target domain.
We evaluate the proposed method on two challenging few-shot object detection benchmarks, and empirical results show that the proposed method obtains competitive performance compared to the related state-of-the-art.
arXiv Detail & Related papers (2024-02-14T17:10:01Z)
- Spatial Reasoning for Few-Shot Object Detection [21.3564383157159]
We propose a spatial reasoning framework that detects novel objects with only a few training examples in a context.
We employ a graph convolutional network in which the RoIs and their relatedness are defined as nodes and edges, respectively.
We demonstrate that the proposed method significantly outperforms the state-of-the-art methods and verify its efficacy through extensive ablation studies.
arXiv Detail & Related papers (2022-11-02T12:38:08Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting (see the distillation-loss sketch after this list).
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Few-Shot Object Detection: A Survey [4.266990593059534]
Few-shot object detection aims to learn from few object instances of new categories in the target domain.
We categorize approaches according to their training scheme and architectural layout.
We introduce commonly used datasets and their evaluation protocols and analyze reported benchmark results.
arXiv Detail & Related papers (2021-12-22T07:08:53Z)
- Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z)
- Experience feedback using Representation Learning for Few-Shot Object Detection on Aerial Images [2.8560476609689185]
The performance of our method is assessed on DOTA, a large-scale remote sensing image dataset.
It highlights, in particular, some intrinsic weaknesses of the few-shot object detection task.
arXiv Detail & Related papers (2021-09-27T13:04:53Z)
- Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z)
- Closing the Generalization Gap in One-Shot Object Detection [92.82028853413516]
We show that the key to strong few-shot detection models may not lie in sophisticated metric learning approaches, but instead in scaling the number of categories.
Future data annotation efforts should therefore focus on wider datasets and annotate a larger number of categories.
arXiv Detail & Related papers (2020-11-09T09:31:17Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
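For the knowledge-distillation step mentioned in the Incremental-DETR summary above, the following is a minimal, hedged sketch of distilling a frozen base detector's class predictions into the detector being fine-tuned, so that base-class knowledge is not forgotten while learning novel classes. The temperature value, the KL-divergence form, and the logit shapes are illustrative assumptions, not details from that paper.

```python
# Illustrative sketch only: a soft-label distillation loss on class logits.
import torch
import torch.nn.functional as F


def class_distillation_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student class distributions."""
    # Both tensors: (num_predictions, num_base_classes), computed on the same image.
    t = temperature
    teacher_prob = F.softmax(teacher_logits / t, dim=-1)
    student_log_prob = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_prob, teacher_prob, reduction="batchmean") * (t * t)


if __name__ == "__main__":
    teacher = torch.randn(100, 60)  # frozen base detector's class logits
    student = torch.randn(100, 60)  # logits of the detector being fine-tuned
    print(class_distillation_loss(student, teacher).item())
```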