Leveraging Bottom-Up and Top-Down Attention for Few-Shot Object
Detection
- URL: http://arxiv.org/abs/2007.12104v1
- Date: Thu, 23 Jul 2020 16:12:04 GMT
- Authors: Xianyu Chen, Ming Jiang, Qi Zhao
- Abstract summary: Few-shot object detection aims at detecting objects with few annotated examples.
We propose an attentive few-shot object detection network (AttFDNet) that takes advantage of both top-down and bottom-up attention.
- Score: 31.1548809359908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot object detection aims at detecting objects with few annotated
examples, which remains a challenging research problem yet to be explored.
Recent studies have shown the effectiveness of self-learned top-down attention
mechanisms in object detection and other vision tasks. The top-down attention,
however, is less effective at improving the performance of few-shot detectors.
Due to the insufficient training data, object detectors cannot effectively
generate attention maps for few-shot examples. To improve the performance and
interpretability of few-shot object detectors, we propose an attentive few-shot
object detection network (AttFDNet) that takes advantage of both top-down
and bottom-up attention. Being task-agnostic, the bottom-up attention serves as
a prior that helps detect and localize naturally salient objects. We further
address specific challenges in few-shot object detection by introducing two
novel loss terms and a hybrid few-shot learning strategy. Experimental results
and visualization demonstrate the complementary nature of the two types of
attention and their roles in few-shot object detection. Codes are available at
https://github.com/chenxy99/AttFDNet.
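The abstract's core idea, combining a task-agnostic bottom-up saliency prior with learned top-down attention to modulate detector features, can be illustrated with a minimal sketch. The function name, the convex-combination fusion, and all shapes below are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def fuse_attention(features, bottom_up, top_down, alpha=0.5):
    """Modulate a feature map with a convex combination of a task-agnostic
    bottom-up saliency map and a learned top-down attention map.
    Both maps are min-max normalized to [0, 1] before fusion."""
    def normalize(m):
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m
    attn = alpha * normalize(bottom_up) + (1 - alpha) * normalize(top_down)
    return features * attn[..., None]  # broadcast the map over channels

# toy example: 4x4 spatial grid with 8 feature channels
feats = np.ones((4, 4, 8))
saliency = np.random.rand(4, 4)  # bottom-up prior (e.g., from a saliency model)
learned = np.random.rand(4, 4)   # top-down attention from the detector
out = fuse_attention(feats, saliency, learned)
print(out.shape)  # (4, 4, 8)
```

The weight `alpha` here is a hypothetical knob trading off the saliency prior against the learned attention; in the few-shot regime the abstract suggests the prior carries more of the load, since the top-down maps are trained on too few examples.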
Related papers
- Visible and Clear: Finding Tiny Objects in Difference Map [50.54061010335082]
We introduce a self-reconstruction mechanism into the detection model and discover a strong correlation between it and tiny objects.
Specifically, we insert a reconstruction head into the neck of the detector and construct a difference map between the reconstructed image and the input, which is highly sensitive to tiny objects.
We further develop a Difference Map Guided Feature Enhancement (DGFE) module to make the representation of tiny features clearer.
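The difference-map idea above can be sketched in a few lines: regions the reconstruction fails to explain, which are often small objects, light up in the per-pixel difference. This is a toy illustration under assumed array shapes, not the paper's reconstruction head.

```python
import numpy as np

def difference_map(image, reconstruction):
    """Per-pixel absolute difference between the input image and its
    reconstruction; poorly reconstructed regions produce high values."""
    return np.abs(image.astype(np.float32) - reconstruction.astype(np.float32))

img = np.zeros((8, 8), dtype=np.float32)
img[3, 4] = 1.0                             # a "tiny object": one bright pixel
recon = np.zeros((8, 8), dtype=np.float32)  # the reconstruction misses it
dmap = difference_map(img, recon)
print(dmap.max(), dmap.argmax())  # peak sits exactly at the tiny object
```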
arXiv Detail & Related papers (2024-05-18T12:22:26Z) - Few-shot Oriented Object Detection with Memorable Contrastive Learning in Remote Sensing Images [11.217630579076237]
Few-shot object detection (FSOD) has garnered significant research attention in the field of remote sensing.
We propose a novel FSOD method for remote sensing images called Few-shot Oriented object detection with Memorable Contrastive learning (FOMC).
Specifically, we employ oriented bounding boxes instead of traditional horizontal bounding boxes to learn a better feature representation for arbitrary-oriented aerial objects.
arXiv Detail & Related papers (2024-03-20T08:15:18Z) - Incremental-DETR: Incremental Few-Shot Object Detection via
Self-Supervised Learning [60.64535309016623]
We propose the Incremental-DETR that does incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
arXiv Detail & Related papers (2022-05-09T05:08:08Z) - Task-Focused Few-Shot Object Detection for Robot Manipulation [1.8275108630751844]
We develop a manipulation method based solely on detection, then introduce task-focused few-shot object detection to learn new objects and settings.
In experiments with our interactive approach to few-shot learning, we train a robot to manipulate objects directly from detection (ClickBot).
arXiv Detail & Related papers (2022-01-28T21:52:05Z) - A Survey of Self-Supervised and Few-Shot Object Detection [19.647681501581225]
Self-supervised methods aim at learning representations from unlabeled data which transfer well to downstream tasks such as object detection.
Few-shot object detection is about training a model on novel (unseen) object classes with little data.
In this survey, we review and characterize the most recent approaches on few-shot and self-supervised object detection.
arXiv Detail & Related papers (2021-10-27T18:55:47Z) - One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the action possibilities that objects in an image afford.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
arXiv Detail & Related papers (2021-08-08T14:53:10Z) - Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed when it is evaluated solely on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z) - Few-shot Object Detection with Self-adaptive Attention Network for
Remote Sensing Images [11.938537194408669]
We propose a few-shot object detector which is designed for detecting novel objects provided with only a few examples.
In order to fit the object detection setting, our proposed few-shot detector concentrates on relations at the object level rather than over the full image.
The experiments demonstrate the effectiveness of the proposed method in few-shot scenes.
arXiv Detail & Related papers (2020-09-26T13:44:58Z) - Few-shot Object Detection with Feature Attention Highlight Module in
Remote Sensing Images [10.92844145381214]
We propose a few-shot object detector which is designed for detecting novel objects based on only a few examples.
Our model is composed of a feature-extractor, a feature attention highlight module as well as a two-stage detection backend.
Experiments demonstrate the effectiveness of the proposed method for few-shot cases.
arXiv Detail & Related papers (2020-09-03T12:38:49Z) - Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting in which totally unseen and few-shot categories co-occur during inference.
We propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for Zero-shot detection and Few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z) - Progressive Object Transfer Detection [84.48927705173494]
We propose a novel Progressive Object Transfer Detection (POTD) framework.
First, POTD can leverage various object supervision of different domains effectively into a progressive detection procedure.
Second, POTD consists of two delicate transfer stages, i.e., Low-Shot Transfer Detection (LSTD) and Weakly-Supervised Transfer Detection (WSTD).
arXiv Detail & Related papers (2020-02-12T00:16:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.