Scope Head for Accurate Localization in Object Detection
- URL: http://arxiv.org/abs/2005.04854v2
- Date: Tue, 12 May 2020 02:07:38 GMT
- Title: Scope Head for Accurate Localization in Object Detection
- Authors: Geng Zhan, Dan Xu, Guo Lu, Wei Wu, Chunhua Shen, Wanli Ouyang
- Abstract summary: We propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent.
With our concise and effective design, the proposed ScopeNet achieves state-of-the-art results on COCO.
- Score: 135.9979405835606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing anchor-based and anchor-free object detectors in multi-stage or one-stage pipelines have achieved very promising detection performance. However, they still suffer from the design difficulty of hand-crafted 2D anchor definitions and the learning complexity of 1D direct location regression. To tackle these issues, we propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent. This approach quantizes the prediction space and employs a coarse-to-fine strategy for localization. It achieves the flexibility of regression-based anchor-free methods while producing more precise predictions. In addition, an inherent anchor-selection score is learned to indicate the localization quality of each detection, and we propose to better represent the confidence of a detection box by combining the category-classification score with the anchor-selection score. With this concise and effective design, ScopeNet achieves state-of-the-art results on COCO.
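The abstract does not spell out the formulas, but its two core ideas (a quantized, coarse-to-fine search over candidate locations and a detection confidence that fuses the category-classification score with the learned anchor-selection score) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the bin layout, the refinement rule, and the simple product used to combine the two scores are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of quantized coarse-to-fine localization for a single
# box coordinate, plus score fusion. The bin layout, refinement rule, and
# product-based score combination are illustrative assumptions, not
# ScopeNet's actual formulation.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_localize(scores_coarse, scores_fine, lo, hi):
    """Pick a coarse bin over [lo, hi], then refine inside it.

    scores_coarse: (K,) logits for K coarse bins covering [lo, hi]
    scores_fine:   (K, M) logits for M fine sub-bins inside each coarse bin
    Returns the centre of the selected fine sub-bin and a selection score.
    """
    K, M = scores_fine.shape
    k = int(np.argmax(scores_coarse))            # coarse step: choose a bin
    m = int(np.argmax(scores_fine[k]))           # fine step: refine within it
    bin_w = (hi - lo) / K
    sub_w = bin_w / M
    coord = lo + k * bin_w + (m + 0.5) * sub_w   # centre of the fine sub-bin
    # Selection score: confidence of the chosen coarse and fine bins.
    sel = float(softmax(scores_coarse)[k] * softmax(scores_fine[k])[m])
    return coord, sel

def detection_confidence(cls_score, anchor_sel_score):
    """Fuse classification and anchor-selection scores; a plain product is
    one simple choice (assumed here, not specified in the abstract)."""
    return cls_score * anchor_sel_score

# Toy usage with random per-bin logits for one box coordinate.
rng = np.random.default_rng(0)
coord, sel = coarse_to_fine_localize(rng.normal(size=8),
                                     rng.normal(size=(8, 4)),
                                     lo=0.0, hi=128.0)
print(coord, detection_confidence(0.9, sel))
```

In a full detector, one such quantized head could be predicted per box coordinate, and the fused confidence could be used in place of the raw classification score when ranking boxes.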
Related papers
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Semi-Supervised and Long-Tailed Object Detection with CascadeMatch [91.86787064083012]
We propose a novel pseudo-labeling-based detector called CascadeMatch.
Our detector features a cascade network architecture, which has multi-stage detection heads with progressive confidence thresholds.
We show that CascadeMatch surpasses existing state-of-the-art semi-supervised approaches in handling long-tailed object detection.
arXiv Detail & Related papers (2023-05-24T07:09:25Z)
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets while handling two shifts: domain shift and category shift.
The main challenge is to correctly distinguish unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a model adapted to the known target classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- Efficient Person Search: An Anchor-Free Approach [86.45858994806471]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
To achieve this goal, state-of-the-art models typically add a re-id branch upon two-stage detectors like Faster R-CNN.
In this work, we present an anchor-free approach to efficiently tackle this challenging task by introducing several dedicated designs.
arXiv Detail & Related papers (2021-09-01T07:01:33Z)
- Modulating Localization and Classification for Harmonized Object Detection [40.82723262074911]
We propose a mutual learning framework to modulate the two tasks.
In particular, the two tasks are forced to learn from each other with a novel mutual labeling strategy.
We achieve a significant performance gain over the baseline detectors on the COCO dataset.
arXiv Detail & Related papers (2021-03-16T10:36:02Z)
- Dynamic Anchor Learning for Arbitrary-Oriented Object Detection [4.247967690041766]
Arbitrary-oriented objects widely appear in natural scenes, aerial photographs, remote sensing images, etc.
Current rotation detectors use plenty of anchors with different orientations to achieve spatial alignment with ground truth boxes.
We propose a dynamic anchor learning (DAL) method, which utilizes the newly defined matching degree.
arXiv Detail & Related papers (2020-12-08T01:30:06Z)
- Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection [3.6488662460683794]
We propose a new anchor matching criterion guided, during the training phase, by the optimization of both the localization and the classification tasks.
Despite the simplicity of the proposed method, our experiments with different state-of-the-art deep learning architectures on PASCAL VOC and MS COCO datasets demonstrate the effectiveness and generality of our Mutual Guidance strategy.
arXiv Detail & Related papers (2020-09-29T15:15:26Z)
- Probabilistic Anchor Assignment with IoU Prediction for Object Detection [9.703212439661097]
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
arXiv Detail & Related papers (2020-07-16T04:26:57Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)