Location-Aware Box Reasoning for Anchor-Based Single-Shot Object Detection
- URL: http://arxiv.org/abs/2007.06233v1
- Date: Mon, 13 Jul 2020 08:24:41 GMT
- Title: Location-Aware Box Reasoning for Anchor-Based Single-Shot Object Detection
- Authors: Wenchi Ma, Kaidong Li, Guanghui Wang
- Abstract summary: Single-shot object detectors suffer from poor box quality because they lack a pre-selection stage for box proposals.
We propose location-aware anchor-based reasoning (LAAR) for bounding boxes.
LAAR takes both the location and classification confidences into consideration for the quality evaluation of bounding boxes.
- Score: 19.669531374307805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the majority of object detection frameworks, the confidence of instance classification is used as the quality criterion for predicted bounding boxes, as in the confidence-based ranking of non-maximum suppression (NMS). However, the quality of a bounding box, which reflects its spatial accuracy, is not determined by the classification score alone. Compared with detectors built on a region proposal network (RPN), single-shot object detectors suffer from poorer box quality because they lack a pre-selection stage for box proposals. In this paper, we target single-shot object detectors and propose location-aware anchor-based reasoning (LAAR) for bounding boxes. LAAR takes both the location and classification confidences into account when evaluating box quality. We introduce a novel network block that learns the relative location between anchors and ground truths, denoted as a localization score, which serves as a location reference during inference. The localization score is predicted by an independent regression branch and calibrates bounding box quality, so that the best-qualified bounding boxes can be picked in NMS. Experiments on the MS COCO and PASCAL VOC benchmarks demonstrate that the proposed location-aware framework enhances the performance of current anchor-based single-shot object detection frameworks and yields consistent and robust detection results.
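Below is a minimal NumPy sketch of the ranking idea the abstract describes: each detection carries both a classification confidence and a predicted localization score, and NMS ranks candidates by a fusion of the two so that the best-localized boxes survive suppression. The geometric-mean fusion rule and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def box_iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def location_aware_nms(boxes, cls_conf, loc_score, iou_thr=0.5):
    """Rank detections by a fused classification/localization quality and
    suppress overlaps, so well-localized boxes are kept preferentially."""
    quality = np.sqrt(cls_conf * loc_score)  # assumed fusion rule (geometric mean)
    order = np.argsort(-quality)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[box_iou(boxes[i], boxes[rest]) <= iou_thr]
    return keep
```

Under this fusion, a box with classification confidence 0.9 but localization score 0.3 (quality about 0.52) ranks below one with 0.7 and 0.8 (quality about 0.75), which is the calibration effect the abstract aims at.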
Related papers
- Rank-DETR for High Quality Object Detection [52.82810762221516]
A highly performant object detector requires accurate ranking for the bounding box predictions.
In this work, we introduce a simple and highly performant DETR-based object detector by proposing a series of rank-oriented designs.
arXiv Detail & Related papers (2023-10-13T04:48:32Z)
- Localization-Guided Track: A Deep Association Multi-Object Tracking Framework Based on Localization Confidence of Detections [4.565826090373598]
Localization confidence is applied in MOT for the first time, taking both the appearance clarity and the localization accuracy of detection boxes into account.
Our proposed method outperforms the compared state-of-the-art tracking methods.
arXiv Detail & Related papers (2023-09-18T13:45:35Z)
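As a purely illustrative aside, here is one way a detection's localization confidence could enter a tracking association step: discounting the matching cost of well-localized detections. The `associate` function, the center-distance cost, and the `alpha` weight are hypothetical; this is not the paper's actual algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, det_loc_conf, alpha=0.5):
    """Match existing tracks to new detections with a distance cost that is
    discounted for detections whose boxes are confidently localized."""
    # Pairwise center distances, shape (num_tracks, num_detections).
    dist = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    # Discount the cost of well-localized detections (assumed weighting rule).
    cost = dist * (1.0 - alpha * det_loc_conf[None, :])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```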
- Confidence-driven Bounding Box Localization for Small Object Detection [30.906712428887147]
We present the Confidence-driven Bounding Box Localization (C-BBL) method to rectify the gradients.
C-BBL quantizes continuous labels into grids and formulates two-hot ground-truth labels.
We demonstrate the generalizability of C-BBL to different label systems and its effectiveness for high-resolution detection.
arXiv Detail & Related papers (2023-03-03T09:19:08Z)
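The two-hot encoding mentioned in the C-BBL summary can be sketched as follows: a continuous regression target is quantized onto a grid, and the ground-truth mass is split across the two nearest bins in proportion to proximity. The grid choice and function name are assumptions for illustration.

```python
import numpy as np

def two_hot_label(value, grid):
    """Quantize a continuous regression target onto a 1-D grid and spread the
    ground truth over the two nearest bins, weighted by proximity."""
    idx = int(np.clip(np.searchsorted(grid, value) - 1, 0, len(grid) - 2))
    left, right = grid[idx], grid[idx + 1]
    w_right = (value - left) / (right - left)
    label = np.zeros(len(grid))
    label[idx] = 1.0 - w_right
    label[idx + 1] = w_right
    return label

# A box offset of 2.3 on an integer grid 0..7 puts weight 0.7 on bin 2 and 0.3 on bin 3.
print(two_hot_label(2.3, np.arange(8.0)))
```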
- Probabilistic Ranking-Aware Ensembles for Enhanced Object Detections [50.096540945099704]
We propose a novel ensemble, the Probabilistic Ranking-Aware Ensemble (PRAE), that refines the confidence of bounding boxes produced by detectors.
We also introduce a bandit approach to address the confidence imbalance problem caused by handling different numbers of boxes.
arXiv Detail & Related papers (2021-05-07T09:37:06Z)
- Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection [78.11775981796367]
GFLV2 (ResNet-101) achieves 46.2 AP at 14.6 FPS, surpassing the previous state-of-the-art ATSS baseline (43.6 AP at 14.6 FPS) by absolute 2.6 AP on COCO test-dev.
Code will be available at https://github.com/implus/GFocalV2.
arXiv Detail & Related papers (2020-11-25T17:06:37Z)
- Probabilistic Anchor Assignment with IoU Prediction for Object Detection [9.703212439661097]
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
arXiv Detail & Related papers (2020-07-16T04:26:57Z)
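A hedged sketch of the adaptive separation described above: score each candidate anchor for a ground-truth box, fit a two-component mixture to the scores, and treat the higher-scoring mode as positive. The use of a Gaussian mixture and the choice of anchor score are assumptions here, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def probabilistic_anchor_split(anchor_scores):
    """Fit a two-component Gaussian mixture to per-ground-truth anchor scores
    and return a boolean mask marking anchors in the higher-mean component."""
    x = np.asarray(anchor_scores, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    positive_component = int(np.argmax(gmm.means_.ravel()))
    return gmm.predict(x) == positive_component

scores = np.array([0.05, 0.08, 0.10, 0.70, 0.75, 0.80])  # e.g. cls-score * IoU per anchor
print(probabilistic_anchor_split(scores))  # -> [False False False  True  True  True]
```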
- Dive Deeper Into Box for Object Detection [49.923586776690115]
We propose a box reorganization method (DDBNet), which can dive deeper into the box for more accurate localization.
Experimental results show that our method is effective, leading to state-of-the-art performance for object detection.
arXiv Detail & Related papers (2020-07-15T07:49:05Z)
- Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection [85.53263670166304]
One-stage detectors basically formulate object detection as dense classification and localization.
A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization.
This paper delves into the representations of the above three fundamental elements: quality estimation, classification and localization.
arXiv Detail & Related papers (2020-06-08T07:24:33Z)
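The "individual prediction branch" trend this summary refers to can be illustrated with a minimal PyTorch head: classification and regression branches plus a separate per-anchor localization-quality output. Channel sizes, naming, and the sigmoid output range are assumptions.

```python
import torch
import torch.nn as nn

class HeadWithQualityBranch(nn.Module):
    """A dense detection head with an extra branch that predicts a per-anchor
    localization quality score alongside classification and box regression."""
    def __init__(self, in_channels=256, num_classes=80, num_anchors=1):
        super().__init__()
        self.cls = nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1)
        self.reg = nn.Conv2d(in_channels, num_anchors * 4, 3, padding=1)
        self.quality = nn.Conv2d(in_channels, num_anchors, 3, padding=1)

    def forward(self, feat):
        cls_logits = self.cls(feat)                      # class logits per anchor
        box_offsets = self.reg(feat)                     # 4 box offsets per anchor
        loc_quality = torch.sigmoid(self.quality(feat))  # quality score in [0, 1]
        return cls_logits, box_offsets, loc_quality

head = HeadWithQualityBranch()
outs = head(torch.randn(1, 256, 32, 32))  # shapes: (1,80,32,32), (1,4,32,32), (1,1,32,32)
```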
- Scope Head for Accurate Localization in Object Detection [135.9979405835606]
We propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent.
With our concise and effective design, the proposed ScopeNet achieves state-of-the-art results on COCO.
arXiv Detail & Related papers (2020-05-11T04:00:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.