DriveGAN: Towards a Controllable High-Quality Neural Simulation
- URL: http://arxiv.org/abs/2104.15060v1
- Date: Fri, 30 Apr 2021 15:30:05 GMT
- Title: DriveGAN: Towards a Controllable High-Quality Neural Simulation
- Authors: Seung Wook Kim, Jonah Philion, Antonio Torralba, Sanja Fidler
- Abstract summary: We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
- Score: 147.6822288981004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic simulators are critical for training and verifying robotics
systems. While most of the contemporary simulators are hand-crafted, a
scalable way to build simulators is to use machine learning to learn how the
environment behaves in response to an action, directly from data. In this work,
we aim to learn to simulate a dynamic environment directly in pixel-space, by
watching unannotated sequences of frames and their associated action pairs. We
introduce a novel high-quality neural simulator referred to as DriveGAN that
achieves controllability by disentangling different components without
supervision. In addition to steering controls, it also includes controls for
sampling features of a scene, such as the weather and the locations of
non-player objects. Since DriveGAN is a fully differentiable simulator, it
further allows for re-simulation of a given video sequence, letting an agent
drive through a recorded scene again, possibly taking different actions. We
train DriveGAN on multiple datasets, including 160 hours of real-world driving
data. We showcase that our approach greatly surpasses the performance of
previous data-driven simulators, and allows for new features not explored
before.
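The core idea the abstract describes, learning a simulator that advances the environment one step per agent action, and replaying a recorded sequence under different actions thanks to differentiability, can be sketched with a toy model. This is a hypothetical illustration, not the DriveGAN architecture: the linear maps stand in for a learned encoder, dynamics network, and decoder, and all names here are invented for the sketch.

```python
import numpy as np

class ToyNeuralSimulator:
    """Toy action-conditioned dynamics model (stand-in for a neural simulator)."""

    def __init__(self, latent_dim=8, action_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        # Linear "dynamics": z_{t+1} = A z_t + B a_t.
        # A real simulator would use learned networks here instead.
        self.A = rng.normal(scale=0.1, size=(latent_dim, latent_dim))
        self.B = rng.normal(scale=0.1, size=(latent_dim, action_dim))

    def step(self, z, action):
        # Advance the latent state given the agent's action (e.g. steering, speed).
        return self.A @ z + self.B @ np.asarray(action, dtype=float)

    def resimulate(self, z0, actions):
        # Because every step is differentiable, a recorded scene can be
        # replayed from the same initial state under a different action sequence.
        z = np.asarray(z0, dtype=float)
        trajectory = [z]
        for a in actions:
            z = self.step(z, a)
            trajectory.append(z)
        return np.stack(trajectory)

sim = ToyNeuralSimulator()
traj = sim.resimulate(np.zeros(8), [(0.1, 1.0)] * 5)
print(traj.shape)  # (6, 8): the initial state plus 5 simulated steps
```

In an actual pixel-space simulator, the latent state would be produced by encoding a frame and decoded back to an image after each step; the toy model only keeps the action-conditioned state-transition structure.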
Related papers
- SimGen: Simulator-conditioned Driving Scene Generation [50.03358485083602]
We introduce a simulator-conditioned scene generation framework called SimGen.
SimGen learns to generate diverse driving scenes by mixing data from the simulator and the real world.
It achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator.
arXiv Detail & Related papers (2024-06-13T17:58:32Z) - GarchingSim: An Autonomous Driving Simulator with Photorealistic Scenes
and Minimalist Workflow [24.789118651720045]
We introduce an autonomous driving simulator with photorealistic scenes.
The simulator is able to communicate with external algorithms through ROS2 or Socket.IO.
We implement a highly accurate vehicle dynamics model within the simulator to enhance the realism of the vehicle's physical effects.
arXiv Detail & Related papers (2024-01-28T23:26:15Z) - Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z) - UniSim: A Neural Closed-Loop Sensor Simulator [76.79818601389992]
We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
arXiv Detail & Related papers (2023-08-03T17:56:06Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - SimNet: Learning Reactive Self-driving Simulations from Real-world
Observations [10.035169936164504]
We present an end-to-end trainable machine learning system capable of realistically simulating driving experiences.
This can be used for the verification of self-driving system performance without relying on expensive and time-consuming road testing.
arXiv Detail & Related papers (2021-05-26T05:14:23Z) - LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation in safety-critical scenarios.
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
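The hybrid recipe the LiDARsim summary describes, a physics-based ray cast followed by a network that predicts deviations from it, can be sketched as below. This is a hypothetical illustration under strong simplifying assumptions: the scene is a flat wall and the "network" is a placeholder residual function, not the paper's model.

```python
import numpy as np

def raycast_depths(num_beams, wall_distance=10.0):
    # Idealized physics step: every beam hits a flat wall at a known distance,
    # so depth grows with beam angle as distance / cos(angle).
    angles = np.linspace(-0.5, 0.5, num_beams)
    return wall_distance / np.cos(angles)

def learned_residual(depths, rng):
    # Stand-in for a deep network predicting per-beam deviations from the
    # physics-based result (sensor noise, material effects, dropped returns).
    return rng.normal(scale=0.05, size=depths.shape)

rng = np.random.default_rng(0)
physics = raycast_depths(64)          # pure physics-based simulation
realistic = physics + learned_residual(physics, rng)  # physics + learned correction
print(realistic.shape)  # (64,)
```

The design point is that the network only has to model the gap between idealized physics and real sensor data, which is an easier learning problem than generating realistic point clouds from scratch.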
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.