GAN-Based LiDAR Intensity Simulation
- URL: http://arxiv.org/abs/2311.15415v1
- Date: Sun, 26 Nov 2023 20:44:09 GMT
- Title: GAN-Based LiDAR Intensity Simulation
- Authors: Richard Marcus, Felix Gabel, Niklas Knoop and Marc Stamminger
- Abstract summary: We train GANs to translate between camera images and LiDAR scans from real test drives.
We test the performance of the LiDAR simulation by testing how well an object detection network generalizes between real and synthetic point clouds.
- Score: 3.8697834534260447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic vehicle sensor simulation is an important element in developing
autonomous driving. As physics-based implementations of visual sensors like
LiDAR are complex in practice, data-based approaches promise solutions. Using
pairs of camera images and LiDAR scans from real test drives, GANs can be
trained to translate between them. For this process, we contribute two
additions. First, we exploit the camera images, acquiring segmentation data and
dense depth maps as additional input for training. Second, we test the
performance of the LiDAR simulation by testing how well an object detection
network generalizes between real and synthetic point clouds to enable
evaluation without ground truth point clouds. Combining both, we simulate LiDAR
point clouds and demonstrate their realism.
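The abstract's first contribution feeds segmentation data and dense depth maps into the GAN alongside the camera image. A minimal sketch of how such a multi-channel conditioning input could be assembled (the function name, channel layout, and class count are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def build_generator_input(rgb, seg_labels, depth, num_classes):
    """Stack camera image, one-hot segmentation, and dense depth into one
    multi-channel conditioning tensor for an image-to-LiDAR GAN (sketch)."""
    # One-hot encode the per-pixel semantic labels.
    seg_onehot = np.eye(num_classes, dtype=np.float32)[seg_labels]  # (h, w, C)
    # Normalize all channels to [0, 1] so they share a comparable scale.
    rgb_n = rgb.astype(np.float32) / 255.0
    depth_n = (depth / depth.max()).astype(np.float32)[..., None]
    return np.concatenate([rgb_n, seg_onehot, depth_n], axis=-1)

# Toy example: a 4x4 "image" with 3 segmentation classes.
rgb = np.random.randint(0, 256, (4, 4, 3))
seg = np.random.randint(0, 3, (4, 4))
depth = np.random.rand(4, 4) + 0.1
x = build_generator_input(rgb, seg, depth, num_classes=3)
print(x.shape)  # (4, 4, 7): 3 RGB + 3 one-hot + 1 depth channel
```

A translation network (e.g. a pix2pix-style generator) would then map this stacked input to a LiDAR intensity image.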
Related papers
- PCGen: Point Cloud Generator for LiDAR Simulation [10.692184635629792]
Existing methods generate data that are less noisy and more complete than real point clouds.
We propose FPA raycasting and surrogate model raydrop.
With minimal training data, the surrogate model can generalize to different geographies and scenes.
Results show that object detection models trained on simulated data can achieve results similar to models trained on real data.
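Surrogate raydrop can be sketched as stochastically removing simulated returns according to a per-point drop probability. The heuristic below merely stands in for a trained surrogate model and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_raydrop(points, intensity, drop_prob_fn, rng):
    """Drop simulated LiDAR returns according to a learned per-point
    drop probability (here replaced by a stand-in heuristic)."""
    p_drop = drop_prob_fn(points, intensity)
    keep = rng.random(len(points)) >= p_drop
    return points[keep], intensity[keep]

def heuristic_drop_prob(points, intensity):
    # Stand-in for a trained surrogate: distant, low-intensity returns
    # are more likely to be dropped, as on a real sensor.
    dist = np.linalg.norm(points, axis=1)
    return np.clip(0.01 * dist + 0.5 * (1.0 - intensity), 0.0, 1.0)

pts = rng.uniform(-50, 50, (1000, 3))
inten = rng.random(1000)
kept_pts, kept_inten = surrogate_raydrop(pts, inten, heuristic_drop_prob, rng)
print(len(kept_pts), "of", len(pts), "rays kept")
```

Removing returns this way counteracts the over-completeness of raycast point clouds relative to real scans.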
arXiv Detail & Related papers (2022-10-17T04:13:21Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- A Lightweight Machine Learning Pipeline for LiDAR-simulation [8.18203294574182]
We propose a lightweight approach for more realistic LiDAR simulation.
The central idea is to cast the simulation into an image-to-image translation problem.
This strategy makes it possible to skip the sensor-specific, expensive, and complex LiDAR physics simulation.
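Casting LiDAR simulation as image-to-image translation presupposes a 2D image representation of the scan; a common choice is a spherical range-image projection, sketched below (the resolution and field-of-view values are illustrative assumptions):

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project a LiDAR point cloud into a 2D range image via spherical
    coordinates, enabling image-to-image translation on scans (sketch)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                    # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1, 1))  # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Map angles to pixel coordinates.
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    valid = (v >= 0) & (v < h)
    img[v[valid], u[valid]] = r[valid]  # store range; 0 means no return
    return img

pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 1.0]])
img = pointcloud_to_range_image(pts)
print(img.shape)  # (64, 1024)
```

With scans in this form, standard 2D translation networks can be applied and the output projected back to a point cloud.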
arXiv Detail & Related papers (2022-08-05T12:45:53Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for perception algorithms-testing on long-tail events and end-to-end closed-loop evaluation on safety-critical scenarios.
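The physics-plus-learning combination described above can be sketched as raycast depths corrected by a network-predicted deviation and masked by learned raydrop; the lambdas below are stand-ins for the learned components, not LiDARsim's actual models:

```python
import numpy as np

def hybrid_lidar_depths(raycast_depth, residual_model, drop_mask_model):
    """Hybrid simulation sketch: start from physics-based raycast depths,
    add network-predicted deviations, then mask dropped rays."""
    depth = raycast_depth + residual_model(raycast_depth)
    keep = drop_mask_model(raycast_depth)
    return np.where(keep, depth, 0.0)  # 0 marks a dropped (no-return) ray

# Illustrative stand-ins for the learned components.
residual = lambda d: 0.02 * np.sin(d)  # small per-ray perturbation
keep_mask = lambda d: d < 80.0         # drop distant, weak returns

rays = np.array([5.0, 40.0, 120.0])
out = hybrid_lidar_depths(rays, residual, keep_mask)
print(out)
```

The raycaster supplies geometric structure while the learned terms account for the residual gap to real sensor behavior.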
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.