NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for
Autonomous Driving
- URL: http://arxiv.org/abs/2209.14499v1
- Date: Thu, 29 Sep 2022 01:30:34 GMT
- Title: NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for
Autonomous Driving
- Authors: Alexander Popov, Patrik Gebhardt, Ke Chen, Ryan Oldja, Heeseok Lee,
Shane Murray, Ruchi Bhargava, Nikolai Smolyanskiy
- Abstract summary: We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
- Score: 57.03126447713602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting obstacles is crucial for safe and efficient autonomous driving. To
this end, we present NVRadarNet, a deep neural network (DNN) that detects
dynamic obstacles and drivable free space using automotive RADAR sensors. The
network utilizes temporally accumulated data from multiple RADAR sensors to
detect dynamic obstacles and compute their orientation in a top-down bird's-eye
view (BEV). The network also regresses drivable free space to detect
unclassified obstacles. Our DNN is the first of its kind to utilize sparse
RADAR signals in order to perform obstacle and free space detection in real
time from RADAR data only. The network has been successfully used for
perception on our autonomous vehicles in real self-driving scenarios. The
network runs faster than real time on an embedded GPU and shows good
generalization across geographic regions.
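As a rough illustration of the kind of architecture the abstract describes, the sketch below assumes the temporally accumulated RADAR detections have already been rasterized into a BEV feature grid, and pairs a small convolutional backbone with an obstacle head (class scores plus a sin/cos orientation) and a free-space head. Everything here, including the layer sizes and the BEVRadarNetSketch name, is an assumption rather than the paper's actual network.

```python
# Hypothetical sketch (not the authors' architecture): a small BEV network with an
# obstacle head and a drivable-free-space head, assuming accumulated RADAR points
# have already been rasterized into a BEV grid.
import torch
import torch.nn as nn

class BEVRadarNetSketch(nn.Module):
    def __init__(self, in_channels=8, num_classes=4):
        super().__init__()
        # Shared convolutional backbone over the BEV grid
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Obstacle head: per-cell class scores plus a (sin, cos) orientation encoding
        self.obstacle_head = nn.Conv2d(64, num_classes + 2, 1)
        # Free-space head: per-cell drivable-space score
        self.freespace_head = nn.Conv2d(64, 1, 1)

    def forward(self, bev):  # bev: (N, C, H, W) accumulated RADAR BEV grid
        feat = self.backbone(bev)
        obstacles = self.obstacle_head(feat)                    # (N, num_classes + 2, H/4, W/4)
        freespace = torch.sigmoid(self.freespace_head(feat))    # (N, 1, H/4, W/4)
        return obstacles, freespace

# Example: a 256x256 BEV grid with 8 accumulated feature channels
net = BEVRadarNetSketch()
obstacles, freespace = net(torch.zeros(1, 8, 256, 256))
```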
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Deep Radar Inverse Sensor Models for Dynamic Occupancy Grid Maps [0.0]
We propose a deep learning-based Inverse Sensor Model (ISM) to learn the mapping from sparse radar detections to polar measurement grids.
Our approach is the first to learn a single-frame measurement grid in polar coordinates from radars with a limited field of view.
This enables us to flexibly use one or more radar sensors without retraining the network and without requiring 360° sensor coverage.
arXiv Detail & Related papers (2023-05-21T09:09:23Z)
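For intuition on the polar measurement grids mentioned in the entry above, here is a minimal sketch, not taken from the paper, of binning sparse radar detections into range and azimuth cells; the grid resolution and field of view are arbitrary assumptions.

```python
# Illustrative only (not the paper's ISM): rasterize sparse radar detections into a
# polar measurement grid of range x azimuth cells.
import numpy as np

def polar_measurement_grid(xy, max_range=100.0, range_bins=100,
                           fov_deg=120.0, azimuth_bins=120):
    """xy: (N, 2) detections in the sensor frame (x forward, y left)."""
    rng = np.hypot(xy[:, 0], xy[:, 1])
    az = np.degrees(np.arctan2(xy[:, 1], xy[:, 0]))
    # Keep detections inside the assumed range and field of view
    keep = (rng < max_range) & (np.abs(az) < fov_deg / 2)
    r_idx = (rng[keep] / max_range * range_bins).astype(int)
    a_idx = ((az[keep] + fov_deg / 2) / fov_deg * azimuth_bins).astype(int)
    grid = np.zeros((range_bins, azimuth_bins), dtype=np.float32)
    np.add.at(grid, (r_idx, a_idx), 1.0)   # count detections per polar cell
    return grid

grid = polar_measurement_grid(np.random.uniform(-50, 50, size=(200, 2)))
```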
- ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals [13.035425992944543]
We introduce ERASE-Net, an Efficient RAdar SEgmentation Network that semantically segments raw radar signals.
We show that our method achieves superior performance on the radar semantic segmentation task compared to the state-of-the-art (SOTA) technique.
arXiv Detail & Related papers (2022-09-26T18:23:22Z)
- How to Build a Curb Dataset with LiDAR Data for Autonomous Driving [11.632427050596728]
Video cameras and 3D LiDARs are mounted on autonomous vehicles for curb detection.
Camera-based curb detection methods suffer from challenging illumination conditions.
A dataset with curb annotations, or an efficient curb labeling approach, is therefore in high demand.
arXiv Detail & Related papers (2021-10-08T08:32:37Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end, deep-learning-driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
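A minimal sketch of the clustering-based variant described in the entry above, under the assumption that each radar detection already carries a predicted semantic class: points are grouped per class with DBSCAN to form instances. The eps and min_samples values are illustrative, not from the paper.

```python
# Assumed clustering scheme: per-class DBSCAN over radar detection points.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_instances(points_xy, semantic_labels, eps=2.0, min_samples=2):
    """points_xy: (N, 2) detections; semantic_labels: (N,) predicted class ids."""
    instance_ids = -np.ones(len(points_xy), dtype=int)
    next_id = 0
    for cls in np.unique(semantic_labels):
        mask = semantic_labels == cls
        clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy[mask])
        for c in np.unique(clusters):
            if c == -1:                      # DBSCAN noise stays unassigned
                continue
            instance_ids[np.flatnonzero(mask)[clusters == c]] = next_id
            next_id += 1
    return instance_ids

ids = cluster_instances(np.random.rand(50, 2) * 40, np.random.randint(0, 3, 50))
```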
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
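For reference, a complex-valued convolution can be emulated with two real-valued convolutions via (a + ib)(w + iv) = (aw - bv) + i(av + bw); the layer below is a hedged illustration of that idea, not the paper's exact implementation.

```python
# Hedged illustration of a complex-valued convolution built from two real Conv2d layers.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # real part of weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # imaginary part of weights

    def forward(self, real, imag):
        out_real = self.conv_r(real) - self.conv_i(imag)
        out_imag = self.conv_i(real) + self.conv_r(imag)
        return out_real, out_imag

# Example on a toy complex "radar snapshot" split into real/imaginary planes
layer = ComplexConv2d(1, 8)
re, im = layer(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```

Keeping the real and imaginary parts separate throughout the network is what allows phase information to survive the filtering, which is the property the abstract highlights.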
- Fooling LiDAR Perception via Adversarial Trajectory Perturbation [13.337443990751495]
LiDAR point clouds collected from a moving vehicle are functions of its trajectory, because the sensor motion needs to be compensated to avoid distortions.
Could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation?
We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which requires tampering with the raw LiDAR readings, small adversarial perturbations of a self-driving car's trajectory are enough.
arXiv Detail & Related papers (2021-03-29T04:34:31Z)
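A toy example of the mechanism exploited in the entry above, not the paper's attack: motion compensation maps sensor-frame points into a common frame using the estimated vehicle pose, so even a centimeter-level perturbation of that pose displaces the entire compensated point cloud.

```python
# Toy 2D motion-compensation example showing how a perturbed pose shifts the cloud.
import numpy as np

def compensate(points_xy, pose):
    """points_xy: (N, 2) in the sensor frame; pose: (x, y, yaw) of the vehicle."""
    x, y, yaw = pose
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return points_xy @ rot.T + np.array([x, y])

points = np.random.uniform(-20, 20, size=(100, 2))
clean = compensate(points, (10.0, 5.0, 0.3))
# A perturbation of a few centimeters and milliradians on the estimated pose...
perturbed = compensate(points, (10.0 + 0.05, 5.0 - 0.03, 0.3 + 0.002))
# ...shifts every compensated point by a bounded but nonzero amount.
max_shift = np.abs(perturbed - clean).max()
```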
- Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework that incorporates network enhancement and pruning search with reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to automatically provide a unified scheme for both network enhancement and pruning search.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
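The early-fusion idea in the entry above can be pictured as rasterizing both sensors into the same BEV grid and stacking the results as channels; the sketch below is a loose illustration under that assumption, not RadarNet's actual implementation.

```python
# Assumed early-fusion sketch: shared BEV rasterization for LiDAR and Radar points.
import numpy as np

def bev_occupancy(points_xy, grid=256, extent=80.0):
    """points_xy: (N, 2) in the vehicle frame; extent: metres covered per side."""
    idx = ((points_xy + extent / 2) / extent * grid).astype(int)
    keep = (idx >= 0).all(axis=1) & (idx < grid).all(axis=1)
    bev = np.zeros((grid, grid), dtype=np.float32)
    np.add.at(bev, (idx[keep, 0], idx[keep, 1]), 1.0)   # occupancy counts per cell
    return bev

lidar_bev = bev_occupancy(np.random.uniform(-40, 40, (5000, 2)))
radar_bev = bev_occupancy(np.random.uniform(-40, 40, (300, 2)))
fused = np.stack([lidar_bev, radar_bev])   # (2, H, W) early-fused input tensor
```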
- Temporal Pulses Driven Spiking Neural Network for Fast Object Recognition in Autonomous Driving [65.36115045035903]
We propose an approach that addresses the object recognition problem directly with raw temporal pulses using a spiking neural network (SNN).
Evaluated on various datasets, our proposed method shows performance comparable to state-of-the-art methods while achieving remarkable time efficiency.
arXiv Detail & Related papers (2020-01-24T22:58:55Z)
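As a minimal picture of how raw temporal pulses can drive spiking computation, here is a leaky integrate-and-fire neuron; the leak, threshold, and input encoding are assumptions, not the paper's model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron driven by a pulse train.
import numpy as np

def lif_neuron(pulse_train, leak=0.9, threshold=1.0):
    """pulse_train: (T,) input pulse amplitudes over time; returns output spikes."""
    v, spikes = 0.0, np.zeros_like(pulse_train)
    for t, pulse in enumerate(pulse_train):
        v = leak * v + pulse          # leaky integration of incoming pulses
        if v >= threshold:            # fire and reset when the threshold is crossed
            spikes[t] = 1.0
            v = 0.0
    return spikes

out = lif_neuron(np.random.rand(100) * 0.3)
```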