LiDAR-as-Camera for End-to-End Driving
- URL: http://arxiv.org/abs/2206.15170v1
- Date: Thu, 30 Jun 2022 10:06:49 GMT
- Title: LiDAR-as-Camera for End-to-End Driving
- Authors: Ardi Tampuu, Romet Aidla, Jan Are van Gent, Tambet Matiisen
- Abstract summary: Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels.
These measurements originate from the same sensor, rendering them perfectly aligned in time and space.
We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least on par with camera-based models in the tested conditions.
In the second direction of study, we show that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and a low-level driving command, e.g. the steering angle, as output. However, depth sensing has been shown in simulation to make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least on par with camera-based models in the tested conditions, with the gap widening when the models must generalize to new weather conditions. In the second direction of study, we show that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does.
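To make the abstract's two directions concrete, below is a minimal sketch, assuming PyTorch: (1) a small CNN that regresses a steering angle from the 3-channel LiDAR image, and (2) the two off-policy metrics being compared, mean absolute error and temporal smoothness of the prediction sequence. The architecture, tensor shapes, and names are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' network): map a 3-channel
# surround-view LiDAR image (depth, intensity, ambient) to a steering
# angle, then compute the two off-policy metrics from the abstract.
import torch
import torch.nn as nn

class LidarImageDriver(nn.Module):
    """Toy CNN regressor; input is a (B, 3, H, W) LiDAR-image tensor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(48, 1)  # predicted steering angle

    def forward(self, lidar_image):
        return self.head(self.features(lidar_image).flatten(1))

def mean_absolute_error(pred, target):
    # Standard off-policy metric: average deviation from recorded steering.
    return (pred - target).abs().mean()

def temporal_smoothness(pred):
    # Mean absolute change between consecutive predictions over a recorded
    # drive; lower values mean a smoother, less jittery command sequence.
    return (pred[1:] - pred[:-1]).abs().mean()

# Usage on a recorded (off-policy) sequence of LiDAR images:
model = LidarImageDriver()
frames = torch.rand(100, 3, 128, 1024)  # assumed surround-image resolution
with torch.no_grad():
    preds = model(frames).squeeze(1)
targets = torch.rand(100)               # recorded human steering angles
print(mean_absolute_error(preds, targets), temporal_smoothness(preds))
```

The paper's point is that both numbers are computed offline on recorded data, yet temporal smoothness predicts on-policy driving ability as well as the mean absolute error does.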
Related papers
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose the use of digital twins built with the CARLA simulator to generate a large dataset representative of a specific real-world camera.
arXiv Detail & Related papers (2024-07-11T10:41:20Z)
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser sampling at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- How to Build a Curb Dataset with LiDAR Data for Autonomous Driving
Video cameras and 3D LiDARs are mounted on autonomous vehicles for curb detection.
Camera-based curb detection methods struggle under challenging illumination conditions.
A dataset with curb annotations, or an efficient curb-labeling approach, is therefore in high demand.
arXiv Detail & Related papers (2021-10-08T08:32:37Z)
- Data-driven vehicle speed detection from synthetic driving simulator images
We explore the use of synthetic images generated from a driving simulator to address vehicle speed detection.
We generate thousands of images with variability corresponding to multiple speeds, different vehicle types and colors, and lighting and weather conditions.
Two approaches to mapping the image sequence to an output speed (regression) are studied: a CNN-GRU and a 3D-CNN.
arXiv Detail & Related papers (2021-04-20T11:26:13Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion; a minimal cross-attention sketch appears after this list.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Incorporating Orientations into End-to-end Driving Model for Steering Control
We present a novel end-to-end deep neural network model for autonomous driving.
It takes a monocular image sequence as input and directly generates the steering control angle.
Our dataset includes multiple driving scenarios, such as urban, country, and off-road.
arXiv Detail & Related papers (2021-03-10T03:14:41Z)
- ISETAuto: Detecting vehicles with depth and radiance information
We compare the performance of a ResNet for vehicle detection in complex daytime driving scenes.
A hybrid system that combines a depth map and a radiance image achieves higher average precision than using either depth or radiance alone.
arXiv Detail & Related papers (2021-01-06T01:37:43Z)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data-imbalance problem in traffic light datasets, caused mainly by the small number of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
- Depth Sensing Beyond LiDAR Range
We propose a novel three-camera system that utilizes small field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)
- LIBRE: The Multiple 3D LiDAR Dataset
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
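Regarding the Multi-Modal Fusion Transformer (TransFuser) entry above: below is a minimal sketch, assuming PyTorch, of attention-based fusion of image and LiDAR representations. It shows a single cross-attention step in which image tokens query LiDAR tokens; the published TransFuser interleaves transformer fusion blocks at multiple feature resolutions, so the module, shapes, and names here are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch of attention-based image/LiDAR feature fusion in the
# spirit of TransFuser; a single cross-attention step, not the published model.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, lidar_tokens):
        # Image tokens act as queries over LiDAR tokens; the attention
        # weights decide which LiDAR features each image location absorbs.
        fused, _ = self.attn(img_tokens, lidar_tokens, lidar_tokens)
        return self.norm(img_tokens + fused)  # residual connection + norm

fusion = CrossModalFusion()
img = torch.rand(2, 64, 256)     # (batch, image patch tokens, channels)
lidar = torch.rand(2, 64, 256)   # (batch, BEV LiDAR patch tokens, channels)
print(fusion(img, lidar).shape)  # torch.Size([2, 64, 256])
```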