Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object
Detection from Point Clouds
- URL: http://arxiv.org/abs/2004.00186v1
- Date: Wed, 1 Apr 2020 01:21:23 GMT
- Title: Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object
Detection from Point Clouds
- Authors: Guodong Xu, Wenxiao Wang, Zili Liu, Liang Xie, Zheng Yang, Haifeng
Liu, Deng Cai
- Abstract summary: We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on the KITTI dataset show that DENFI markedly improves the performance of the baseline single-stage detector.
- Score: 32.916690488130506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D object detection based on point clouds has become more and more popular.
Some methods propose localizing 3D objects directly from raw point clouds to
avoid information loss. However, these methods come with complex structures and
significant computational overhead, limiting their broader application in
real-time scenarios. Some methods choose to transform the point cloud data into
compact tensors first and leverage off-the-shelf 2D detectors to propose 3D
objects, which is much faster and achieves state-of-the-art results. However,
because of the inconsistency between 2D and 3D data, we argue that the
performance of compact tensor-based 3D detectors is restricted if we use 2D
detectors without corresponding modification. Specifically, the distribution of
points in a point cloud is uneven: most points gather on the boundaries of
objects, while detectors designed for 2D data extract features evenly across
the whole feature map. Motivated by this
observation, we propose DENse Feature Indicator (DENFI), a universal module
that helps 3D detectors focus on the densest region of the point clouds in a
boundary-aware manner. Moreover, DENFI is lightweight and guarantees real-time
speed when applied to 3D object detectors. Experiments on the KITTI dataset
show that DENFI markedly improves the baseline single-stage detector, which
achieves new state-of-the-art mAP among previous 3D detectors, including both
two-stage and multi-sensor fusion methods, while running at 34 FPS.
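
The observation driving DENFI, that LiDAR points concentrate on object boundaries while 2D convolutions treat all locations of the projected tensor equally, can be illustrated with a small feature-gating module. The sketch below is not the paper's actual DENFI design: the module name DensityGate, the single-convolution gate, and the precomputed per-cell point-count map are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class DensityGate(nn.Module):
    """Minimal sketch: re-weight BEV features by local point density.

    Not the paper's DENFI module; a stand-in for the idea of letting a
    2D detection head attend to the densest (boundary) regions.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Learn a spatial gate from a log-scaled point-count map.
        self.gate = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feats, point_counts):
        # feats:        (B, C, H, W) BEV feature map from the backbone
        # point_counts: (B, 1, H, W) raw LiDAR points falling in each cell
        density = torch.log1p(point_counts)  # compress the dynamic range
        return feats * self.gate(density)    # emphasize dense boundary cells

if __name__ == "__main__":
    feats = torch.randn(2, 64, 200, 176)     # a typical KITTI BEV grid
    counts = torch.randint(0, 30, (2, 1, 200, 176)).float()
    print(DensityGate(64)(feats, counts).shape)  # torch.Size([2, 64, 200, 176])
```

Because the gate is a single 3x3 convolution and an element-wise product, a module of this kind adds almost no latency, which is consistent with the abstract's claim that DENFI is lightweight enough for real-time detection.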
Related papers
- 3D Small Object Detection with Dynamic Spatial Pruning [62.72638845817799]
We propose an efficient feature pruning strategy for 3D small object detection.
We present a multi-level 3D detector named DSPDet3D which benefits from high spatial resolution.
It takes less than 2 s to directly process a whole building consisting of more than 4500k points while detecting almost all objects.
arXiv Detail & Related papers (2023-05-05T17:57:04Z)
- Multi-Sem Fusion: Multimodal Semantic Fusion for 3D Object Detection [11.575945934519442]
LiDAR and camera fusion techniques are promising for achieving 3D object detection in autonomous driving.
Most multi-modal 3D object detection frameworks integrate semantic knowledge from 2D images into 3D LiDAR point clouds.
We propose a general multi-modal fusion framework, Multi-Sem Fusion (MSF), which fuses semantic information from the scene-parsing results of both 2D images and 3D point clouds.
arXiv Detail & Related papers (2022-12-10T10:54:41Z)
- Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection [85.08249413137558]
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors.
Small, distant, and incomplete objects with sparse or few points are often hard to detect.
We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space.
arXiv Detail & Related papers (2022-11-23T16:01:06Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion-prediction-based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We convert the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression (see the sketch after this list).
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then, object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- 3D Object Detection Method Based on YOLO and K-Means for Image and Point Clouds [1.9458156037869139]
LiDAR-based 3D object detection and classification are essential for autonomous driving.
This paper proposes a 3D object detection method based on point clouds and images.
arXiv Detail & Related papers (2020-04-21T04:32:36Z)
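
The IoU-based confidence re-calibration referenced in the anchor-free entry above admits a simple generic sketch: an auxiliary branch predicts the IoU of each box with its ground truth, and that prediction is fused into the classification score before NMS. The multiplicative form and the exponent alpha below are assumptions borrowed from common single-stage practice, not the exact scheme of that paper.

```python
import torch

def recalibrate_confidence(cls_score: torch.Tensor,
                           iou_pred: torch.Tensor,
                           alpha: float = 0.5) -> torch.Tensor:
    """Generic sketch of IoU-based confidence re-calibration.

    The multiplicative fusion and exponent `alpha` are assumptions,
    not the precise scheme of the paper above.
    """
    # iou_pred comes from an auxiliary branch trained to predict the
    # IoU between each predicted box and its matched ground truth.
    iou_pred = iou_pred.clamp(0.0, 1.0)
    # Suppress boxes that are confidently classified but poorly
    # localized, improving the score/IoU correlation before NMS.
    return cls_score * iou_pred.pow(alpha)

if __name__ == "__main__":
    cls_score = torch.tensor([0.90, 0.90, 0.30])  # classification scores
    iou_pred = torch.tensor([0.80, 0.20, 0.90])   # predicted box IoUs
    print(recalibrate_confidence(cls_score, iou_pred))
    # tensor([0.8050, 0.4025, 0.2846]) -- the badly localized box drops
```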