A Lightweight Machine Learning Pipeline for LiDAR-simulation
- URL: http://arxiv.org/abs/2208.03130v1
- Date: Fri, 5 Aug 2022 12:45:53 GMT
- Title: A Lightweight Machine Learning Pipeline for LiDAR-simulation
- Authors: Richard Marcus, Niklas Knoop, Bernhard Egger and Marc Stamminger
- Abstract summary: We propose a lightweight approach for more realistic LiDAR simulation.
The central idea is to cast the simulation as an image-to-image translation problem.
This strategy makes it possible to skip the sensor-specific, expensive, and complex LiDAR physics simulation.
- Score: 8.18203294574182
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Virtual testing is a crucial task to ensure safety in autonomous driving, and
sensor simulation is an important task in this domain. Most current LiDAR
simulations are very simplistic and are mainly used to perform initial tests,
while the majority of insights are gathered on the road. In this paper, we
propose a lightweight approach for more realistic LiDAR simulation that learns
a real sensor's behavior from test drive data and transfers this behavior to the
virtual domain. The central idea is to cast the simulation as an
image-to-image translation problem. We train our pix2pix-based architecture on
two real-world data sets, namely the popular KITTI data set and the Audi
Autonomous Driving Dataset, both of which provide RGB and LiDAR images. We apply
this network to synthetic renderings and show that it generalizes sufficiently
from real images to simulated images. This strategy makes it possible to skip the
sensor-specific, expensive, and complex LiDAR physics simulation in our
synthetic world, and it avoids both oversimplification and the large domain gap
that an overly clean synthetic environment would otherwise introduce.
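Because the approach treats LiDAR simulation as image-to-image translation, a practical prerequisite is representing a LiDAR scan as a 2D image that can be paired with an RGB frame. The sketch below shows a common spherical (range-image) projection; it is a minimal illustration, not the authors' code, and the resolution, vertical field of view, and channel layout are assumptions rather than values from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's pipeline):
# project an (N, 4) LiDAR point cloud [x, y, z, intensity] into a 2D
# range/intensity image suitable for a pix2pix-style network.
import numpy as np

def lidar_to_range_image(points, h=64, w=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Spherical projection of a point cloud into an (h, w, 2) image
    holding range and intensity. If several points land in the same
    pixel, the last one written wins (kept simple on purpose)."""
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                    # elevation angle
    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Map angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w           # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * h    # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    img = np.zeros((h, w, 2), dtype=np.float32)
    img[v, u, 0] = r
    img[v, u, 1] = intensity
    return img
```

A pix2pix-style generator could then be trained on paired RGB and LiDAR images from KITTI or A2D2 and later applied to renderings from the synthetic world, which is the strategy the abstract describes.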
Related papers
- RaSim: A Range-aware High-fidelity RGB-D Data Simulation Pipeline for Real-world Applications [55.24463002889]
We focus on depth data synthesis and develop a range-aware RGB-D data simulation pipeline (RaSim).
In particular, high-fidelity depth data is generated by imitating the imaging principle of real-world sensors.
RaSim can be directly applied to real-world scenarios without any finetuning and excels at downstream RGB-D perception tasks.
arXiv Detail & Related papers (2024-04-05T08:52:32Z) - GAN-Based LiDAR Intensity Simulation [3.8697834534260447]
We train GANs to translate between camera images and LiDAR scans from real test drives.
We evaluate the LiDAR simulation by testing how well an object detection network generalizes between real and synthetic point clouds.
arXiv Detail & Related papers (2023-11-26T20:44:09Z) - Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z) - sim2real: Cardiac MR Image Simulation-to-Real Translation via
Unsupervised GANs [0.4433315630787158]
We provide image simulation on virtual XCAT subjects with varying anatomies.
We propose sim2real translation network to improve image realism.
Our usability experiments suggest that sim2real data shows good potential to augment training data and boost the performance of a segmentation algorithm.
arXiv Detail & Related papers (2022-08-09T16:06:06Z) - VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and
Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - Testing the Safety of Self-driving Vehicles by Simulating Perception and
Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z) - LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation in safety-critical scenarios.
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
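The LiDARsim entry above describes a hybrid recipe: ray casting yields a physics-based range image, and a network then predicts deviations from it (for example dropped or perturbed returns). Purely as an illustration of that idea, and not as LiDARsim's actual architecture, a minimal refiner might look like the sketch below; all layer sizes, channel counts, and the hard drop threshold are assumptions.

```python
# Loose sketch (assumptions, not LiDARsim's real code): refine a physics-based
# raycast range image with a learned residual and a per-pixel drop mask.
import torch
import torch.nn as nn

class RaycastRefiner(nn.Module):
    """Predicts a per-pixel range correction and a drop probability for a
    raycast range image. Layers and channel counts are placeholders."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # channel 0: residual, channel 1: drop logit
        )

    def forward(self, raycast_range):
        out = self.net(raycast_range)
        residual, drop_logit = out[:, :1], out[:, 1:]
        refined = raycast_range + residual      # learned deviation from physics
        drop_prob = torch.sigmoid(drop_logit)   # chance the return is lost
        keep_mask = (drop_prob < 0.5).float()   # hard mask, for illustration only
        return refined * keep_mask              # dropped returns become zero range

# Example: refine a batch of two 64x1024 raycast range images (ranges in meters).
model = RaycastRefiner()
raycast = torch.rand(2, 1, 64, 1024) * 80.0
simulated = model(raycast)
```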