Dive Deeper Into Box for Object Detection
- URL: http://arxiv.org/abs/2007.14350v1
- Date: Wed, 15 Jul 2020 07:49:05 GMT
- Title: Dive Deeper Into Box for Object Detection
- Authors: Ran Chen, Yong Liu, Mengdan Zhang, Shu Liu, Bei Yu, and Yu-Wing Tai
- Abstract summary: We propose a box reorganization method (DDBNet), which can dive deeper into the box for more accurate localization.
Experimental results show that our method is effective, leading to state-of-the-art performance for object detection.
- Score: 49.923586776690115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anchor-free methods have defined the new frontier in state-of-the-art object
detection research, where accurate bounding box estimation is the key to the
success of these methods. However, even when a bounding box has the highest
confidence score, it is still far from perfect at localization. To this end, we
propose a box reorganization method (DDBNet), which can dive deeper into the box
for more accurate localization. In the first step, drifted boxes are filtered
out because their contents are inconsistent with the target semantics.
Next, the selected boxes are broken into boundaries, and the well-aligned
boundaries are searched and grouped into a set of optimal boxes that tighten
instances more precisely. Experimental results show that our method
is effective, leading to state-of-the-art performance for object detection.
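The boundary-level reorganization described in the abstract can be sketched as follows. This is a minimal illustration, assuming hypothetical per-boundary confidence scores as inputs; it is not the paper's actual scoring or grouping procedure.

```python
# Sketch of boundary-level box reorganization in the spirit of DDBNet:
# each candidate box is split into four boundaries (left, top, right,
# bottom), and a new box is assembled from the best-scoring boundary on
# each side. The per-boundary scores are assumed inputs, not the paper's
# actual scoring function.

def reorganize_boxes(boxes, boundary_scores):
    """boxes: list of (x1, y1, x2, y2) tuples.
    boundary_scores: list of (s_left, s_top, s_right, s_bottom),
    one tuple per box."""
    idx = range(len(boxes))
    best_left = max(idx, key=lambda i: boundary_scores[i][0])
    best_top = max(idx, key=lambda i: boundary_scores[i][1])
    best_right = max(idx, key=lambda i: boundary_scores[i][2])
    best_bottom = max(idx, key=lambda i: boundary_scores[i][3])
    # Take each edge from the candidate whose boundary aligns best.
    return (boxes[best_left][0], boxes[best_top][1],
            boxes[best_right][2], boxes[best_bottom][3])
```

For example, if one candidate has a well-aligned left edge and another has better top, right, and bottom edges, the assembled box mixes edges from both candidates.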
Related papers
- Theoretically Achieving Continuous Representation of Oriented Bounding Boxes [64.15627958879053]
This paper endeavors to completely solve the issue of discontinuity in Oriented Bounding Box representation.
We propose a novel representation method called Continuous OBB (COBB) which can be readily integrated into existing detectors.
For fairness and transparency of experiments, we have developed a modularized benchmark based on the open-source deep learning framework Jittor's detection toolbox JDet for OOD evaluation.
arXiv Detail & Related papers (2024-02-29T09:27:40Z) - Shape-IoU: More Accurate Metric considering Bounding Box Shape and Scale [5.8666339171606445]
The Shape IoU method can calculate the loss by focusing on the shape and scale of the bounding box itself.
Our method can effectively improve detection performance and outperform existing methods, achieving state-of-the-art performance in different detection tasks.
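Shape-IoU builds on the standard IoU between axis-aligned boxes. A minimal plain-IoU reference is given below; the shape- and scale-aware weighting proposed in the paper is not reproduced here.

```python
def iou(a, b):
    """Standard intersection-over-union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

IoU-based losses such as Shape-IoU typically start from `1 - iou(pred, target)` and add penalty terms; the specific shape and scale terms are what distinguish the method above.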
arXiv Detail & Related papers (2023-12-29T16:05:02Z) - Point2RBox: Combine Knowledge from Synthetic Visual Patterns for End-to-end Oriented Object Detection with Single Point Supervision [81.60564776995682]
We present Point2RBox, an end-to-end solution for point-supervised object detection.
Our method uses a lightweight paradigm, yet it achieves a competitive performance among point-supervised alternatives.
arXiv Detail & Related papers (2023-11-23T15:57:41Z) - Split, Merge, and Refine: Fitting Tight Bounding Boxes via Over-Segmentation and Iterative Search [15.29167642670379]
We propose a novel framework for finding a set of tight bounding boxes of a 3D shape via over-segmentation and iterative merging and refinement.
Through thorough evaluation, we demonstrate full coverage, tightness, and an adequate number of bounding boxes, without requiring any training data or supervision.
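The iterative merging step can be illustrated with a toy sketch. The area-slack merging criterion below is hypothetical, standing in for the paper's actual refinement objective, and operates on 2D boxes rather than 3D ones for brevity.

```python
def box_area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def merge_box(a, b):
    """Smallest axis-aligned box enclosing both a and b."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def greedy_merge(boxes, slack=1.2):
    """Repeatedly merge a pair of boxes whenever the merged box stays
    'tight', i.e. its area does not exceed the pair's combined area by
    more than `slack` (a hypothetical criterion for illustration)."""
    boxes = list(boxes)
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                m = merge_box(boxes[i], boxes[j])
                if box_area(m) <= slack * (box_area(boxes[i]) + box_area(boxes[j])):
                    boxes[i] = m
                    del boxes[j]
                    changed = True
                    break
            if changed:
                break
    return boxes
```

Two adjacent over-segmented boxes merge into one tight box, while a distant box survives untouched because merging it would inflate the enclosing area past the slack threshold.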
arXiv Detail & Related papers (2023-04-10T00:25:15Z) - Rigidity-Aware Detection for 6D Object Pose Estimation [60.88857851869196]
Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose.
We propose a rigidity-aware detection method exploiting the fact that, in 6D pose estimation, the target objects are rigid.
Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary.
arXiv Detail & Related papers (2023-03-22T09:02:54Z) - Weakly Supervised Image Segmentation Beyond Tight Bounding Box Annotations [5.000514512377416]
This study investigates whether it is possible to maintain good segmentation performance when loose bounding boxes are used as supervision.
The proposed polar transformation based MIL formulation works for both tight and loose bounding boxes.
The results demonstrate that the proposed approach achieves state-of-the-art performance for bounding boxes at all precision levels.
arXiv Detail & Related papers (2023-01-28T02:11:36Z) - H2RBox: Horizontal Box Annotation is All You Need for Oriented Object Detection [63.66553556240689]
Oriented object detection emerges in many applications from aerial images to autonomous driving.
Many existing detection benchmarks are annotated with horizontal bounding boxes only, which are also less costly than fine-grained rotated boxes.
This paper proposes a simple yet effective oriented object detection approach called H2RBox.
arXiv Detail & Related papers (2022-10-13T05:12:45Z) - Boundary Distribution Estimation for Precise Object Detection [12.247010914825971]
In the field of object detection, the task of object localization is typically accomplished through a dedicated branch that emphasizes bounding box regression.
This branch traditionally predicts the object's position by regressing the box's center position and scaling factors.
In this paper, we address the shortcomings of previous methods through theoretical analysis and experimental verification.
Our approach enhances the accuracy of bounding box localization by refining the box edges based on the estimated distribution at the object's boundary.
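Refining a box edge from an estimated boundary distribution is reminiscent of distribution-based box regression, where the edge coordinate is taken as the expectation of a discrete distribution over candidate positions. The sketch below assumes hypothetical bin positions and logits; the paper's exact parameterization is not reproduced.

```python
import math

def refine_edge(logits, bins):
    """Estimate an edge coordinate as the expectation of a softmax
    distribution over candidate bin positions. `logits` are unnormalized
    scores (one per bin), `bins` are candidate coordinates; both are
    hypothetical inputs for illustration."""
    m = max(logits)  # subtract max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    total = sum(weights)
    return sum(w * b for w, b in zip(weights, bins)) / total
```

A sharply peaked distribution yields an edge close to the peak's bin, while a flat or skewed distribution shifts the refined edge toward the distribution's mass, which is how the estimated boundary uncertainty feeds back into localization.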
arXiv Detail & Related papers (2021-11-02T06:58:22Z) - Oriented Bounding Boxes for Small and Freely Rotated Objects [7.6997148655751895]
A novel object detection method is presented that handles freely rotated objects of arbitrary sizes.
The method encodes the precise location and orientation of features of the target objects at grid cell locations.
Evaluations on the xView and DOTA datasets show that the proposed method uniformly improves performance over existing state-of-the-art methods.
arXiv Detail & Related papers (2021-04-24T02:04:49Z) - DeepStrip: High Resolution Boundary Refinement [60.00241966809684]
We propose to convert regions of interest into strip images and compute a boundary prediction in the strip domain.
To detect the target boundary, we present a framework with two prediction layers.
We enforce a matching consistency and C0 continuity regularization to the network to reduce false alarms.
arXiv Detail & Related papers (2020-03-25T22:44:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.