RSN: Range Sparse Net for Efficient, Accurate LiDAR 3D Object Detection
- URL: http://arxiv.org/abs/2106.13365v1
- Date: Fri, 25 Jun 2021 00:23:55 GMT
- Title: RSN: Range Sparse Net for Efficient, Accurate LiDAR 3D Object Detection
- Authors: Pei Sun, Weiyue Wang, Yuning Chai, Gamaleldin Elsayed, Alex Bewley,
Xiao Zhang, Cristian Sminchisescu, Dragomir Anguelov
- Abstract summary: Range Sparse Net (RSN) is a simple, efficient, and accurate 3D object detector.
RSN predicts foreground points from range images and applies sparse convolutions on the selected foreground points to detect objects.
RSN ranks first on the Waymo Open Dataset leaderboard under the APH/LEVEL 1 metrics for LiDAR-based pedestrian and vehicle detection.
- Score: 44.024530632421836
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The detection of 3D objects from LiDAR data is a critical component in most
autonomous driving systems. Safe, high speed driving needs larger detection
ranges, which are enabled by new LiDARs. These larger detection ranges require
more efficient and accurate detection models. Towards this goal, we propose
Range Sparse Net (RSN), a simple, efficient, and accurate 3D object detector in
order to tackle real time 3D object detection in this extended detection
regime. RSN predicts foreground points from range images and applies sparse
convolutions on the selected foreground points to detect objects. The
lightweight 2D convolutions on dense range images result in significantly
fewer selected foreground points, thus enabling the later sparse convolutions
in RSN to operate efficiently. Combining features from the range image further
enhances detection accuracy. RSN runs at more than 60 frames per second on a
150m x 150m detection region on Waymo Open Dataset (WOD) while being more
accurate than previously published detectors. As of 11/2020, RSN is ranked
first in the WOD leaderboard based on the APH/LEVEL 1 metrics for LiDAR-based
pedestrian and vehicle detection, while being several times faster than
alternatives.
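The pipeline described in the abstract (a lightweight 2D network segments foreground pixels on the dense range image, and only the surviving 3D points are passed to a sparse-convolution detection stage) can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the layer sizes, the 0.5 score threshold, and the toy range-image shapes are placeholders, and the final sparse-convolution box-regression stage (typically built with a dedicated sparse-convolution library) is only indicated by a comment.

```python
# Minimal sketch of RSN-style foreground point selection (illustrative only).
import torch
import torch.nn as nn


class ForegroundSegmenter(nn.Module):
    """Lightweight 2D CNN over the dense range image; layer sizes are assumptions."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # per-pixel foreground logit
        )

    def forward(self, range_image: torch.Tensor) -> torch.Tensor:
        return self.net(range_image).squeeze(1)  # (B, H, W) logits


def select_foreground_points(range_image, xyz, segmenter, threshold=0.5):
    """Keep only the 3D points whose range-image pixel is predicted as foreground.

    range_image: (B, C, H, W) per-pixel features (e.g. range, intensity, elongation)
    xyz:         (B, 3, H, W) Cartesian coordinates of each range-image pixel
    Returns a list of (N_i, 3) foreground point sets, one per batch element.
    """
    scores = torch.sigmoid(segmenter(range_image))  # (B, H, W) foreground probability
    keep = scores > threshold                        # boolean foreground mask
    points = xyz.permute(0, 2, 3, 1)                 # (B, H, W, 3)
    # The selected points would next be voxelized and processed by a sparse
    # convolution backbone to regress 3D boxes; that stage is omitted here.
    return [points[b][keep[b]] for b in range(points.shape[0])]


if __name__ == "__main__":
    seg = ForegroundSegmenter(in_channels=3)
    rng = torch.rand(1, 3, 64, 2650)  # toy range image (shape is an assumption)
    xyz = torch.rand(1, 3, 64, 2650)  # toy per-pixel 3D coordinates
    fg = select_foreground_points(rng, xyz, seg)
    print(fg[0].shape)                # (N, 3) points handed to the sparse stage
```

The point of the sketch is the data flow: dense 2D convolutions are cheap on the range image, and only the comparatively small set of selected foreground points needs to be handled by the more expensive 3D detection stage.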
Related papers
- Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z)
- Towards Long-Range 3D Object Detection for Autonomous Vehicles [4.580520623362462]
3D object detection at long range is crucial for ensuring the safety and efficiency of self-driving vehicles.
Most current state-of-the-art LiDAR-based methods are range-limited due to the sparsity of points at long range.
We investigate two ways to improve long range performance of current LiDAR based 3D detectors.
arXiv Detail & Related papers (2023-10-07T13:39:46Z)
- An Empirical Analysis of Range for 3D Object Detection [70.54345282696138]
We present an empirical analysis of far-field 3D detection using the long-range detection dataset Argoverse 2.0.
Near-field LiDAR measurements are dense and optimally encoded by small voxels, while far-field measurements are sparse and are better encoded with large voxels.
We propose simple techniques to efficiently ensemble models for long-range detection that improve efficiency by 33% and boost accuracy by 3.2% CDS.
arXiv Detail & Related papers (2023-08-08T05:29:26Z)
- Super Sparse 3D Object Detection [48.684300007948906]
LiDAR-based 3D object detection contributes increasingly to long-range perception in autonomous driving.
To enable efficient long-range detection, we first propose a fully sparse object detector termed FSD.
FSD++ generates residual points, which indicate the point changes between consecutive frames.
arXiv Detail & Related papers (2023-01-05T17:03:56Z)
- PointPillars Backbone Type Selection For Fast and Accurate LiDAR Object Detection [0.0]
We present the results of experiments on the impact of backbone selection of a deep convolutional neural network on detection accuracy and speed.
We chose the PointPillars network, which is characterised by a simple architecture, high speed, and modularity that allows for easy expansion.
arXiv Detail & Related papers (2022-09-30T06:18:14Z)
- Fully Sparse 3D Object Detection [57.05834683261658]
We build a fully sparse 3D object detector (FSD) for long-range LiDAR-based object detection.
FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module.
SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances.
arXiv Detail & Related papers (2022-07-20T17:01:33Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimation and 3D Object Detection [3.5488685789514736]
SLS-Fusion is a new approach to fuse data from 4-beam LiDAR and a stereo camera via a neural network for depth estimation.
Since 4-beam LiDAR is cheaper than the well-known 64-beam LiDAR, this approach is also classified as a low-cost sensors-based method.
arXiv Detail & Related papers (2021-03-05T23:10:09Z)
- LRPD: Long Range 3D Pedestrian Detection Leveraging Specific Strengths of LiDAR and RGB [12.650574326251023]
The current state-of-the-art on the KITTI benchmark performs suboptimally in detecting the position of pedestrians at long range.
We propose an approach specifically targeting long range 3D pedestrian detection (LRPD), leveraging the density of RGB and the precision of LiDAR.
This leads to a significant improvement in mAP at long range compared to the current state-of-the-art.
arXiv Detail & Related papers (2020-06-17T09:27:38Z)