Probabilistic End-to-End Vehicle Navigation in Complex Dynamic
Environments with Multimodal Sensor Fusion
- URL: http://arxiv.org/abs/2005.01935v1
- Date: Tue, 5 May 2020 03:48:10 GMT
- Title: Probabilistic End-to-End Vehicle Navigation in Complex Dynamic
Environments with Multimodal Sensor Fusion
- Authors: Peide Cai, Sukai Wang, Yuxiang Sun, Ming Liu
- Abstract summary: All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability, utilizing information from the camera, lidar and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
- Score: 16.018962965273495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: All-day and all-weather navigation is a critical capability for autonomous
driving, which requires proper reaction to varied environmental conditions and
complex agent behaviors. Recently, with the rise of deep learning, end-to-end
control for autonomous vehicles has been well studied. However, most works are
solely based on visual information, which can be degraded by challenging
illumination conditions such as dim light or total darkness. In addition, they
usually generate and apply deterministic control commands without considering
the uncertainties in the future. In this paper, based on imitation learning, we
propose a probabilistic driving model with multi-perception capability utilizing
the information from the camera, lidar and radar. We further evaluate its
driving performance online on our new driving benchmark, which includes various
environmental conditions (e.g., urban and rural areas, traffic densities,
weather and times of the day) and dynamic obstacles (e.g., vehicles,
pedestrians, motorcyclists and bicyclists). The results suggest that our
proposed model outperforms baselines and achieves excellent generalization
performance in unseen environments with heavy traffic and extreme weather.
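The abstract's key design point is replacing a deterministic control command with a probabilistic one, predicted from fused camera, lidar and radar features. As a minimal illustration of that idea (not the authors' actual architecture), the sketch below late-fuses toy per-sensor features and outputs a Gaussian mean and variance over the steering command instead of a point estimate; all function names, dimensions and the random-projection "encoders" are hypothetical stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(sensor_input, out_dim=32):
    # Stand-in for a learned per-sensor encoder (a CNN in a real model);
    # here just a fixed random linear projection followed by tanh.
    seed = hash(sensor_input.shape) % 2**32  # deterministic per sensor shape
    w = np.random.default_rng(seed).normal(size=(sensor_input.size, out_dim))
    return np.tanh(sensor_input.ravel() @ w)

def fuse_and_predict(camera, lidar, radar):
    # Late fusion: concatenate per-sensor features, then predict a Gaussian
    # (mean, variance) over the steering command rather than a single value.
    fused = np.concatenate([extract_features(camera),
                            extract_features(lidar),
                            extract_features(radar)])
    head = np.random.default_rng(42).normal(size=(fused.size, 2))
    mean, log_var = fused @ head
    return mean, np.exp(log_var)  # exp keeps the variance positive

camera = rng.normal(size=(8, 8, 3))   # toy image
lidar  = rng.normal(size=(64, 3))     # toy point cloud
radar  = rng.normal(size=(16, 4))     # toy radar returns

mean, var = fuse_and_predict(camera, lidar, radar)
print(f"steering mean={mean:.3f}, variance={var:.3f}")
```

A downstream planner can then act on the predicted variance, e.g. falling back to a conservative maneuver when uncertainty is high, which is the practical benefit a probabilistic head offers over a deterministic one.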
Related papers
- DeepIPCv2: LiDAR-powered Robust Environmental Perception and Navigational Control for Autonomous Vehicle [7.642646077340124]
DeepIPCv2 is an autonomous driving model that perceives the environment using a LiDAR sensor for more robust drivability.
DeepIPCv2 takes a set of LiDAR point clouds as the main perception input.
arXiv Detail & Related papers (2023-07-13T09:23:21Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks under varying traffic conditions by 15%, using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain
Adaptation [152.60469768559878]
SHIFT is the largest multi-task synthetic dataset for autonomous driving.
It presents discrete and continuous shifts in cloudiness, rain and fog intensity, time of day, and vehicle and pedestrian density.
Our dataset and benchmark toolkit are publicly available at www.vis.xyz/shift.
arXiv Detail & Related papers (2022-06-16T17:59:52Z)
- Transferable and Adaptable Driving Behavior Prediction [34.606012573285554]
We propose HATN, a hierarchical framework to generate high-quality, transferable, and adaptable predictions for driving behaviors.
We demonstrate our algorithms in the task of trajectory prediction for real traffic data at intersections and roundabouts from the INTERACTION dataset.
arXiv Detail & Related papers (2022-02-10T16:46:24Z)
- Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous
Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Calibration of Human Driving Behavior and Preference Using Naturalistic
Traffic Data [5.926030548326619]
We show how the model can be inverted to estimate driver preferences from naturalistic traffic data.
One distinct advantage of our approach is the drastically reduced computational burden.
arXiv Detail & Related papers (2021-05-05T01:20:03Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Multimodal End-to-End Learning for Autonomous Steering in Adverse Road
and Weather Conditions [0.0]
We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
arXiv Detail & Related papers (2020-10-28T12:38:41Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for
autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.