SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving
- URL: http://arxiv.org/abs/2005.03844v2
- Date: Thu, 25 Jun 2020 05:37:24 GMT
- Title: SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving
- Authors: Zhenpei Yang, Yuning Chai, Dragomir Anguelov, Yin Zhou, Pei Sun,
Dumitru Erhan, Sean Rafferty, Henrik Kretzschmar
- Abstract summary: We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
- Score: 27.948417322786575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving system development is critically dependent on the ability
to replay complex and diverse traffic scenarios in simulation. In such
scenarios, the ability to accurately simulate the vehicle sensors such as
cameras, lidar or radar is essential. However, current sensor simulators
leverage gaming engines such as Unreal or Unity, requiring manual creation of
environments, objects and material properties. Such approaches have limited
scalability and fail to produce realistic approximations of camera, lidar, and
radar data without significant additional work.
In this paper, we present a simple yet effective approach to generate
realistic scenario sensor data, based only on a limited amount of lidar and
camera data collected by an autonomous vehicle. Our approach uses
texture-mapped surfels to efficiently reconstruct the scene from an initial
vehicle pass or set of passes, preserving rich information about object 3D
geometry and appearance, as well as the scene conditions. We then leverage a
SurfelGAN network to reconstruct realistic camera images for novel positions
and orientations of the self-driving vehicle and moving objects in the scene.
We demonstrate our approach on the Waymo Open Dataset and show that it can
synthesize realistic camera data for simulated scenarios. We also create a
novel dataset that contains cases in which two self-driving vehicles observe
the same scene at the same time. We use this dataset to provide additional
evaluation and demonstrate the usefulness of our SurfelGAN model.
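As a concrete point of reference for the two-stage pipeline in the abstract, the minimal sketch below renders a texture-mapped surfel cloud from a novel camera pose using a simple z-buffer splat. This covers only the classical first stage; the surfel data, camera intrinsics, and poses are all made up, and the SurfelGAN generator that would translate the raw render into a realistic image is not implemented here.

```python
import numpy as np

def render_surfels(centers, colors, K, R, t, hw=(120, 160)):
    """Project colored surfels through a pinhole camera with a z-buffer.

    centers: (N, 3) surfel positions in world coordinates
    colors:  (N, 3) RGB in [0, 1]
    K:       (3, 3) camera intrinsics; R, t: world-to-camera pose
    """
    h, w = hw
    cam = centers @ R.T + t                    # world -> camera frame
    z = cam[:, 2]
    front = z > 0.1                            # keep surfels in front of the camera
    uv = (cam[front] / z[front, None]) @ K.T   # perspective projection
    px = np.rint(uv[:, :2]).astype(int)

    image = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    for (u, v), depth, c in zip(px, z[front], colors[front]):
        if 0 <= u < w and 0 <= v < h and depth < zbuf[v, u]:
            zbuf[v, u] = depth                 # nearest surfel wins
            image[v, u] = c
    return image

# Hypothetical scene: a random slab of colored surfels seen from a shifted pose.
rng = np.random.default_rng(0)
centers = np.column_stack([rng.uniform(-5, 5, 2000),
                           rng.uniform(-1, 1, 2000),
                           rng.uniform(4, 12, 2000)])
colors = rng.uniform(0, 1, (2000, 3))
K = np.array([[100.0, 0, 80], [0, 100.0, 60], [0, 0, 1]])
render = render_surfels(centers, colors, K, np.eye(3), np.array([0.5, 0, 0]))
# In the paper, a surfel render like this is fed to the SurfelGAN generator,
# which translates it into a realistic camera image.
```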
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views by transforming pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
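The aerial-imagery paper above uses a NeRF as the intermediate scene representation. Novel views from a NeRF come out of a standard volume-rendering quadrature along each camera ray; the sketch below implements just that compositing step with made-up density and color samples, where a real system would query a trained network at each sample point.

```python
import numpy as np

def composite_ray(sigma, rgb, deltas):
    """Standard NeRF volume-rendering quadrature for one ray.

    sigma:  (S,)   densities at S samples along the ray
    rgb:    (S, 3) colors at those samples
    deltas: (S,)   distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                          # opacity per segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance
    weights = trans * alpha                                        # per-sample contribution
    return weights @ rgb                                           # expected ray color

# Made-up samples: a mostly empty ray that hits a red surface near sample 40.
S = 64
sigma = np.zeros(S); sigma[40:44] = 8.0
rgb = np.tile([0.2, 0.3, 0.8], (S, 1)); rgb[40:44] = [0.9, 0.1, 0.1]
deltas = np.full(S, 0.05)
print(composite_ray(sigma, rgb, deltas))  # dominated by the red surface
```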
- Reconstructing Objects in-the-wild for Realistic Sensor Simulation [41.55571880832957]
We present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data.
We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data.
Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views.
arXiv Detail & Related papers (2023-11-09T18:58:22Z)
- CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation [44.83732884335725]
Sensor simulation involves modeling traffic participants, such as vehicles, with high quality appearance and articulated geometry.
Current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise.
We present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry.
arXiv Detail & Related papers (2023-11-02T17:56:59Z)
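CADSim's core idea is to fit deformable CAD templates to sensor data by gradient descent through a differentiable renderer. As a heavily simplified toy analogue, the sketch below fits the parameters of a parametric shape (a circle standing in for a CAD template) to noisy observed points using analytic gradients; the data and shape family are made up for illustration.

```python
import numpy as np

# Toy analogue of template fitting: optimize the pose/size parameters of a
# simple parametric shape so that it explains observed points, via analytic
# gradients of the mean squared fit residual.
rng = np.random.default_rng(1)
true_c, true_r = np.array([2.0, -1.0]), 1.5
angles = rng.uniform(0, 2 * np.pi, 200)
pts = true_c + true_r * np.column_stack([np.cos(angles), np.sin(angles)])
pts += rng.normal(0, 0.03, pts.shape)           # sensor noise

c, r = np.zeros(2), 1.0                         # initial template parameters
lr = 0.1
for step in range(300):
    d = pts - c                                 # (N, 2) offsets to the center
    dist = np.linalg.norm(d, axis=1)            # distance of each point to center
    resid = dist - r                            # signed error to the circle
    # Gradients of mean((dist - r)^2) w.r.t. center and radius.
    grad_c = (-2 * resid[:, None] * d / dist[:, None]).mean(axis=0)
    grad_r = (-2 * resid).mean()
    c -= lr * grad_c
    r -= lr * grad_r

print(c, r)  # converges near the true center (2, -1) and radius 1.5
```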
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)
- LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation on safety-critical scenarios.
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
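LiDARsim's two stages, as summarized above, are a physics-based ray cast followed by a learned model that perturbs its output toward real sensor statistics. The sketch below mirrors that split under strong simplifications: the "scene" is a flat ground plane, and the residual network is a tiny untrained MLP with made-up random weights standing in for the model trained on real scans.

```python
import numpy as np

def raycast_ground(origins, dirs, ground_z=0.0):
    """Physics-based stage: intersect rays with a flat ground plane.

    Returns the range along each ray, or inf where the ray never hits.
    """
    tz = (ground_z - origins[:, 2]) / dirs[:, 2]
    return np.where(tz > 0, tz, np.inf)

def learned_residual(features, W1, b1, W2, b2):
    """Stand-in for the learned stage: a tiny MLP predicting a per-ray
    deviation (e.g., range noise) from ray features. Weights here are
    random; the real model is trained on real scans."""
    h = np.maximum(features @ W1 + b1, 0.0)       # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)

rng = np.random.default_rng(2)
n = 1000
origins = np.tile([0.0, 0.0, 2.0], (n, 1))        # sensor 2 m above ground
az = rng.uniform(0, 2 * np.pi, n)
el = rng.uniform(-0.4, -0.1, n)                   # downward-looking beams
dirs = np.column_stack([np.cos(el) * np.cos(az),
                        np.cos(el) * np.sin(az),
                        np.sin(el)])

ranges = raycast_ground(origins, dirs)            # physics-based ranges
feats = np.column_stack([dirs, ranges])           # per-ray features (n, 4)
W1, b1 = rng.normal(0, 0.1, (4, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 1)), np.zeros(1)
simulated = ranges + learned_residual(feats, W1, b1, W2, b2)
```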
This list is automatically generated from the titles and abstracts of the papers on this site.