Efficient and Robust LiDAR-Based End-to-End Navigation
- URL: http://arxiv.org/abs/2105.09932v1
- Date: Thu, 20 May 2021 17:52:37 GMT
- Title: Efficient and Robust LiDAR-Based End-to-End Navigation
- Authors: Zhijian Liu, Alexander Amini, Sibo Zhu, Sertac Karaman, Song Han,
Daniela Rus
- Abstract summary: We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
- Score: 132.52661670308606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has been used to demonstrate end-to-end neural network learning
for autonomous vehicle control from raw sensory input. While LiDAR sensors
provide reliably accurate information, existing end-to-end driving solutions
are mainly based on cameras since processing 3D data requires a large memory
footprint and computation cost. On the other hand, increasing the robustness of
these systems is also critical; however, even estimating the model's
uncertainty is very challenging due to the cost of sampling-based methods. In
this paper, we present an efficient and robust LiDAR-based end-to-end
navigation framework. We first introduce Fast-LiDARNet that is based on sparse
convolution kernel optimization and hardware-aware model design. We then
propose Hybrid Evidential Fusion that directly estimates the uncertainty of the
prediction from only a single forward pass and then fuses the control
predictions intelligently. We evaluate our system on a full-scale vehicle and
demonstrate lane-stable as well as navigation capabilities. In the presence of
out-of-distribution events (e.g., sensor failures), our system significantly
improves robustness and reduces the number of takeovers in the real world.
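The robustness ingredient described in the abstract is Hybrid Evidential Fusion: the network emits evidential parameters in a single forward pass, so uncertainty is available without sampling, and the control predictions are then fused with that uncertainty. The sketch below illustrates one plausible form of this, assuming a deep-evidential-regression (Normal-Inverse-Gamma) output head and inverse-uncertainty weighting; the function names and the exact fusion rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def evidential_uncertainty(gamma, nu, alpha, beta):
    """Epistemic uncertainty of a Normal-Inverse-Gamma (NIG) output.

    gamma: predicted control value (e.g., steering curvature)
    nu, alpha, beta: evidential parameters from the same forward pass
    (standard deep-evidential-regression formula; assumed here, not
    taken verbatim from the paper).
    """
    return beta / (nu * (alpha - 1.0))

def fuse_predictions(preds, uncertainties, eps=1e-8):
    """Fuse several control predictions by inverse-uncertainty weighting.

    One plausible reading of "fuses the control predictions
    intelligently": confident branches dominate, uncertain branches
    (e.g., after a sensor failure) are down-weighted.
    """
    preds = np.asarray(preds, dtype=float)
    weights = 1.0 / (np.asarray(uncertainties, dtype=float) + eps)
    weights /= weights.sum()
    return float(np.dot(weights, preds)), weights

# Example: two branches predict steering; branch 0 is confident,
# branch 1 is not (e.g., degraded sensor input).
u0 = evidential_uncertainty(gamma=0.12, nu=5.0, alpha=3.0, beta=0.4)  # small
u1 = evidential_uncertainty(gamma=0.55, nu=0.5, alpha=1.2, beta=0.9)  # large
fused, w = fuse_predictions([0.12, 0.55], [u0, u1])
print(f"uncertainties: {u0:.3f}, {u1:.3f}  weights: {w.round(3)}  fused: {fused:.3f}")
```

Because the uncertainty comes from the evidential parameters of a single forward pass, this fusion adds essentially no inference cost compared with sampling-based (e.g., ensemble or dropout) uncertainty estimates.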
Related papers
- An Efficient Approach to Generate Safe Drivable Space by LiDAR-Camera-HDmap Fusion [13.451123257796972]
We propose an accurate and robust perception module for drivable space extraction in Autonomous Vehicles (AVs).
Our work introduces a robust easy-to-generalize perception module that leverages LiDAR, camera, and HD map data fusion.
Our approach is tested on a real dataset, and its reliability is verified during the daily operation (including in harsh snowy weather) of our autonomous shuttle, WATonoBus.
arXiv Detail & Related papers (2024-10-29T17:54:02Z) - Optical Flow Matters: an Empirical Comparative Study on Fusing Monocular Extracted Modalities for Better Steering [37.46760714516923]
This research introduces a new end-to-end method that exploits multimodal information from a single monocular camera to improve the steering predictions for self-driving cars.
By focusing on the fusion of RGB imagery with depth completion information or optical flow data, we propose a framework that integrates these modalities through both early and hybrid fusion techniques (a minimal fusion sketch appears after this list).
arXiv Detail & Related papers (2024-09-18T09:36:24Z) - Deep Learning-Based Robust Multi-Object Tracking via Fusion of mmWave Radar and Camera Sensors [6.166992288822812]
Multi-Object Tracking plays a critical role in ensuring safer and more efficient navigation through complex traffic scenarios.
This paper presents a novel deep learning-based method that integrates radar and camera data to enhance the accuracy and robustness of Multi-Object Tracking in autonomous driving systems.
arXiv Detail & Related papers (2024-07-10T21:09:09Z) - Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ achieves accuracy comparable to fully supervised alternatives while using five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z) - TrajectoryNAS: A Neural Architecture Search for Trajectory Prediction [0.0]
Trajectory prediction is a critical component of autonomous driving systems.
This paper introduces TrajectoryNAS, a pioneering method that focuses on utilizing point cloud data for trajectory prediction.
arXiv Detail & Related papers (2024-03-18T11:48:41Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - Bayesian Optimization and Deep Learning forsteering wheel angle
prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model (namely BOST-LSTM) that proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and
On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data to support accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)