AFDet: Anchor Free One Stage 3D Object Detection
- URL: http://arxiv.org/abs/2006.12671v2
- Date: Tue, 30 Jun 2020 07:03:40 GMT
- Title: AFDet: Anchor Free One Stage 3D Object Detection
- Authors: Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Yu Wang, Sijia Chen, Li
Huang, Yuan Li
- Abstract summary: High-efficiency point cloud 3D object detection is important for many robotics applications including autonomous driving.
Most previous works try to solve it using anchor-based detection methods, which come with two drawbacks: post-processing is relatively complex and computationally expensive, and tuning anchor parameters is tricky.
We are the first to address these drawbacks with an anchor-free, Non-Maximum Suppression-free one-stage detector called AFDet.
- Score: 9.981769027320551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-efficiency point cloud 3D object detection operating on embedded
systems is important for many robotics applications, including autonomous driving. Most
previous works try to solve it using anchor-based detection methods which come
with two drawbacks: post-processing is relatively complex and computationally
expensive; tuning anchor parameters is tricky. We are the first to address
these drawbacks with an anchor-free and Non-Maximum Suppression-free one-stage
detector called AFDet. The entire AFDet pipeline runs efficiently on a CNN
accelerator or a GPU thanks to its simplified post-processing. Without bells and
whistles, our proposed AFDet performs competitively with other one-stage
anchor-based methods on the KITTI validation set and the Waymo Open Dataset
validation set.
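The NMS-free claim rests on how detections are read out of the network: objects are predicted as peaks on a per-class heatmap, and a max-pooling pass keeps only local maxima, so no box-overlap suppression is ever needed. Below is a minimal sketch of that decoding step (PyTorch; the shapes and function name are illustrative, not the paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def decode_peaks(heatmap, top_k=50, kernel=3):
    """Read detections off a per-class center heatmap without NMS.

    heatmap: (B, C, H, W) sigmoid scores, C = number of classes.
    A cell survives only if it equals the maximum of its local
    neighborhood, so overlapping boxes never need to be suppressed.
    """
    pad = (kernel - 1) // 2
    pooled = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    peaks = heatmap * (pooled == heatmap).float()  # zero out non-maxima
    b, c, h, w = peaks.shape
    scores, flat_idx = torch.topk(peaks.view(b, -1), top_k)
    cls = flat_idx // (h * w)          # class of each peak
    ys = (flat_idx % (h * w)) // w     # grid row
    xs = flat_idx % w                  # grid column
    return scores, cls, ys, xs
```

In AFDet, the remaining regression heads (offset, z-value, size, orientation) are then sampled at the selected peak locations, which is essentially the entire post-processing.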
Related papers
- Open-Vocabulary Affordance Detection in 3D Point Clouds [6.4274167612662465]
The Open-Vocabulary Affordance Detection (OpenAD) method can detect an unbounded number of affordances in 3D point clouds.
Our proposed method enables zero-shot detection and can detect previously unseen affordances.
arXiv Detail & Related papers (2023-03-04T12:26:47Z)
- 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information, employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z)
- FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection [11.962073589763676]
Existing 3D detectors significantly improve the accuracy by adopting a two-stage paradigm.
The sparsity of point clouds, especially for the points far away, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects.
We propose a novel multi-modality two-stage approach named FusionRCNN, which effectively and efficiently fuses point clouds and camera images in the Regions of Interest (RoI).
FusionRCNN significantly improves the strong SECOND baseline by 6.14% mAP and outperforms competing two-stage approaches.
arXiv Detail & Related papers (2022-09-22T02:07:25Z)
- Embracing Single Stride 3D Object Detector with Sparse Transformer [63.179720817019096]
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to input scene size is significantly smaller compared to 2D detection cases.
Many 3D detectors directly follow the common practice of 2D detectors, which downsample the feature maps even after quantizing the point clouds.
We propose Single-stride Sparse Transformer (SST) to maintain the original resolution from the beginning to the end of the network.
arXiv Detail & Related papers (2021-12-13T02:12:02Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We achieve this by converting the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
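The re-calibration idea is to fuse the classification confidence with a predicted localization IoU so that well-classified but poorly localized boxes are down-ranked. A minimal sketch of one common form of such a scheme (the exponent form and the default alpha are assumptions for illustration, not necessarily this paper's exact formula):

```python
def recalibrate_score(cls_score: float, pred_iou: float, alpha: float = 0.5) -> float:
    """Blend classification score with predicted box IoU.

    Assumed form: score = cls^(1 - alpha) * iou^alpha. With alpha = 0
    this is the raw classification score; with alpha = 1 the ranking
    is driven entirely by predicted localization quality.
    """
    iou = max(pred_iou, 0.0)  # an IoU regression head may output negatives
    return (cls_score ** (1.0 - alpha)) * (iou ** alpha)
```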
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- LiDAR R-CNN: An Efficient and Universal 3D Object Detector [20.17906188581305]
LiDAR-based 3D detection in point clouds is essential to the perception system of autonomous driving.
We present LiDAR R-CNN, a second-stage detector that can generally improve any existing 3D detector.
In particular, based on one variant of PointPillars, our method could achieve new state-of-the-art results with minor cost.
arXiv Detail & Related papers (2021-03-29T03:01:21Z)
- FCOS: A simple and strong anchor-free object detector [111.87691210818194]
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion.
Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes.
In contrast, our proposed detector FCOS is anchor box free, as well as proposal free.
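Concretely, per-pixel prediction means every feature-map location inside a ground-truth box becomes a positive sample and regresses its distances to the box's four sides. A minimal sketch of that target assignment (PyTorch; a single ground-truth box for brevity):

```python
import torch

def fcos_targets(points, gt_box):
    """FCOS-style regression targets for one ground-truth box.

    points: (N, 2) feature-map locations projected to image (x, y).
    gt_box: (x1, y1, x2, y2). Each location inside the box regresses
    its distances (l, t, r, b) to the four sides -- no anchors needed.
    """
    x, y = points[:, 0], points[:, 1]
    x1, y1, x2, y2 = gt_box
    l, t = x - x1, y - y1
    r, b = x2 - x, y2 - y
    ltrb = torch.stack([l, t, r, b], dim=1)
    inside = ltrb.min(dim=1).values > 0   # positive samples lie inside the box
    # Center-ness target: 1 at the box center, near 0 at the edges,
    # used to down-weight low-quality off-center predictions.
    rx = (torch.minimum(l, r) / torch.maximum(l, r).clamp(min=1e-6)).clamp(min=0)
    ry = (torch.minimum(t, b) / torch.maximum(t, b).clamp(min=1e-6)).clamp(min=0)
    centerness = (rx * ry).sqrt()
    return ltrb, inside, centerness
```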
arXiv Detail & Related papers (2020-06-14T01:03:39Z)
- Detection in Crowded Scenes: One Proposal, Multiple Predictions [79.28850977968833]
We propose a proposal-based object detector aimed at detecting highly overlapped instances in crowded scenes.
The key to our approach is letting each proposal predict a set of correlated instances rather than a single one, as in previous proposal-based frameworks.
Our detector obtains 4.9% AP gains on the challenging CrowdHuman dataset and a 1.0% $\text{MR}^{-2}$ improvement on the CityPersons dataset.
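Mechanically, this is a change to the detection head: instead of one box per proposal, the head emits K scored boxes, so two heavily overlapped people sharing one proposal can both survive. A toy sketch of such a head (PyTorch; the dimensions, K, and class names are illustrative, not the paper's implementation):

```python
import torch.nn as nn

class MultiInstanceHead(nn.Module):
    """Each proposal's RoI feature predicts K instances instead of one."""

    def __init__(self, in_dim=1024, k=2, num_classes=1):
        super().__init__()
        self.k = k
        self.cls = nn.Linear(in_dim, k * num_classes)  # K scores per proposal
        self.reg = nn.Linear(in_dim, k * 4)            # K box deltas per proposal

    def forward(self, feats):  # feats: (N, in_dim) RoI features
        n = feats.shape[0]
        scores = self.cls(feats).view(n, self.k, -1)
        boxes = self.reg(feats).view(n, self.k, 4)
        return scores, boxes
```

Training matches the K predictions to overlapping ground truths as a set (the paper uses an earth mover's distance loss), and inference uses a set-aware NMS so predictions from the same proposal do not suppress each other.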
arXiv Detail & Related papers (2020-03-20T09:48:53Z)
- 3DSSD: Point-based 3D Single Stage Object Detector [61.67928229961813]
We present a point-based 3D single-stage object detector, named 3DSSD, that achieves a good balance between accuracy and efficiency.
Our method outperforms all state-of-the-art voxel-based single-stage methods by a large margin and performs comparably to two-stage point-based methods.
arXiv Detail & Related papers (2020-02-24T12:01:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.