Multimodal End-to-End Learning for Autonomous Steering in Adverse Road and Weather Conditions
- URL: http://arxiv.org/abs/2010.14924v2
- Date: Tue, 29 Jun 2021 11:45:19 GMT
- Title: Multimodal End-to-End Learning for Autonomous Steering in Adverse Road and Weather Conditions
- Authors: Jyri Maanpää, Josef Taher, Petri Manninen, Leo Pakola, Iaroslav Melekhov and Juha Hyyppä
- Abstract summary: We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving is challenging in adverse road and weather conditions, in
which there might be no lane lines, the road might be covered in snow, and
visibility might be poor. We extend previous work on end-to-end learning for
autonomous steering to operate in these adverse real-life conditions with
multimodal data. We collected 28 hours of driving data in several road and
weather conditions and trained convolutional neural networks to predict the car
steering wheel angle from front-facing color camera images and lidar range and
reflectance data. We compared the performance of CNN models based on the
different modalities, and our results show that the lidar modality improves the
performance of the different multimodal sensor-fusion models. We also performed
on-road tests with different models, and the results support this observation.
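As a concrete illustration of the setup described in the abstract, the sketch below shows a two-branch convolutional network that regresses a steering wheel angle from a front-facing camera image and a lidar range/reflectance image. This is a minimal sketch, not the authors' model: the layer sizes, the 2-channel lidar encoding, and the MSE training objective are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' released code) of a two-branch CNN that
# regresses a steering wheel angle from a camera image and a lidar
# range/reflectance image. Layer sizes and the 2-channel lidar encoding are
# illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv -> batch norm -> ReLU -> 2x2 max pool."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class SteeringNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Camera branch: 3-channel RGB front view.
        self.camera_encoder = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 128),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Lidar branch: range + reflectance rendered as a 2-channel image.
        self.lidar_encoder = nn.Sequential(
            conv_block(2, 32), conv_block(32, 64), conv_block(64, 128),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenated features -> single steering-angle output.
        self.head = nn.Sequential(
            nn.Linear(128 + 128, 64), nn.ReLU(inplace=True), nn.Linear(64, 1),
        )

    def forward(self, camera: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.camera_encoder(camera), self.lidar_encoder(lidar)], dim=1
        )
        return self.head(fused)


if __name__ == "__main__":
    model = SteeringNet()
    camera = torch.randn(4, 3, 128, 256)  # batch of RGB frames
    lidar = torch.randn(4, 2, 128, 256)   # range + reflectance channels
    target = torch.randn(4, 1)            # steering wheel angles
    loss = nn.MSELoss()(model(camera, lidar), target)
    loss.backward()
    print(loss.item())
```

The single-modality baselines compared in the paper would correspond to using only one of the two branches; the fusion variant sketched here simply concatenates the pooled features before the regression head.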
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets for autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Vision-Guided Forecasting -- Visual Context for Multi-Horizon Time Series Forecasting [0.6947442090579469]
We tackle multi-horizon forecasting of vehicle states by fusing the two modalities.
We design and experiment with 3D convolutions for visual features extraction and 1D convolutions for features extraction from speed and steering angle traces.
We show that we are able to forecast a vehicle's state to various horizons, while outperforming the current state-of-the-art results on the related task of driving state estimation.
arXiv Detail & Related papers (2021-07-27T08:52:40Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention (see the fusion sketch after this list).
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present a new dataset consisting of real-world images collected under various adverse weather conditions called DAWN.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z)
- Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion [16.018962965273495]
All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability, utilizing information from the camera, lidar and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
arXiv Detail & Related papers (2020-05-05T03:48:10Z)
- VTGNet: A Vision-based Trajectory Generation Network for Autonomous Vehicles in Urban Environments [26.558394047144006]
We develop an uncertainty-aware end-to-end trajectory generation method based on imitation learning.
Under various weather and lighting conditions, our network can reliably generate trajectories in different urban environments.
The proposed method achieves better cross-scene/platform driving results than the state-of-the-art (SOTA) end-to-end control method.
arXiv Detail & Related papers (2020-04-27T06:17:55Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
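In the spirit of the attention-based image-lidar fusion mentioned in the TransFuser entry above, the sketch below fuses camera and lidar feature maps with multi-head self-attention over their concatenated tokens. It is not the published TransFuser architecture: the feature dimensions, the single attention layer, and the mean pooling are illustrative assumptions.

```python
# Minimal sketch of attention-based camera-lidar feature fusion. This is an
# illustrative assumption, not the published TransFuser architecture.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse image and lidar feature maps with multi-head self-attention."""

    def __init__(self, dim: int = 128, heads: int = 4) -> None:
        super().__init__()
        self.attention = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_feats: torch.Tensor, lidar_feats: torch.Tensor) -> torch.Tensor:
        # image_feats, lidar_feats: (batch, channels, H, W) feature maps.
        tokens = torch.cat(
            [image_feats.flatten(2).transpose(1, 2),   # (B, H*W, C)
             lidar_feats.flatten(2).transpose(1, 2)],  # (B, H*W, C)
            dim=1,
        )
        attended, _ = self.attention(tokens, tokens, tokens)
        fused = self.norm(tokens + attended)           # residual connection
        return fused.mean(dim=1)                       # pooled fused feature


if __name__ == "__main__":
    fusion = CrossModalFusion(dim=128)
    image_feats = torch.randn(2, 128, 8, 16)
    lidar_feats = torch.randn(2, 128, 8, 16)
    print(fusion(image_feats, lidar_feats).shape)      # torch.Size([2, 128])
```

The pooled fused feature could feed a downstream head such as the steering regressor sketched after the abstract; letting every token attend to tokens from both modalities is what distinguishes this from simple feature concatenation.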
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.