UniSim: A Neural Closed-Loop Sensor Simulator
- URL: http://arxiv.org/abs/2308.01898v1
- Date: Thu, 3 Aug 2023 17:56:06 GMT
- Title: UniSim: A Neural Closed-Loop Sensor Simulator
- Authors: Ze Yang, Yun Chen, Jingkang Wang, Sivabalan Manivasagam, Wei-Chiu Ma,
Anqi Joyce Yang, Raquel Urtasun
- Abstract summary: We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
- Score: 76.79818601389992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rigorously testing autonomy systems is essential for making safe self-driving
vehicles (SDV) a reality. It requires one to generate safety-critical scenarios
beyond what can be collected safely in the world, as many scenarios happen
rarely on public roads. To accurately evaluate performance, we need to test the
SDV on these scenarios in closed-loop, where the SDV and other actors interact
with each other at each timestep. Previously recorded driving logs provide a
rich resource to build these new scenarios from, but for closed-loop
evaluation, we need to modify the sensor data based on the new scene
configuration and the SDV's decisions, as actors might be added or removed and
the trajectories of existing actors and the SDV will differ from the original
log. In this paper, we present UniSim, a neural sensor simulator that takes a
single recorded log captured by a sensor-equipped vehicle and converts it into
a realistic closed-loop multi-sensor simulation. UniSim builds neural feature
grids to reconstruct both the static background and dynamic actors in the
scene, and composites them together to simulate LiDAR and camera data at new
viewpoints, with actors added or removed and at new placements. To better
handle extrapolated views, we incorporate learnable priors for dynamic objects,
and leverage a convolutional network to complete unseen regions. Our
experiments show UniSim can simulate realistic sensor data with a small domain
gap on downstream tasks. With UniSim, we demonstrate closed-loop evaluation of
an autonomy system on safety-critical scenarios as if it were in the real
world.
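The core composition step can be pictured as follows. This is a minimal sketch, not the authors' code: the grid shapes, origins, and the decode() helper are illustrative assumptions, and a real system would use learned MLP decoders and density-aware blending rather than the simple actor override shown here.

```python
# Sketch: query a static-scene feature grid and per-actor grids along a camera
# ray, then alpha-composite the samples to render one pixel.
import numpy as np

def decode(feature):
    """Toy decoder mapping a grid feature to (density, rgb); a real model uses an MLP."""
    return float(np.abs(feature[0])), np.clip(feature[1:4], 0.0, 1.0)

def query_grid(grid, point, origin, voxel_size):
    """Nearest-neighbour lookup of a feature grid at a 3D point (None if outside)."""
    idx = np.floor((point - origin) / voxel_size).astype(int)
    if np.any(idx < 0) or np.any(idx >= grid.shape[:3]):
        return None
    return grid[tuple(idx)]

def render_ray(ray_o, ray_d, static_grid, actors, t_near=0.5, t_far=50.0, n=64):
    """Alpha-composite static background and dynamic actors along one ray."""
    ts = np.linspace(t_near, t_far, n)
    color, transmittance = np.zeros(3), 1.0
    for i in range(n - 1):
        p = ray_o + ts[i] * ray_d
        feat = query_grid(static_grid, p,
                          origin=np.array([-25.0, -25.0, -2.0]), voxel_size=0.5)
        # Dynamic actors live in their own grids; transform the sample into
        # each actor's canonical frame before the lookup.
        for world_to_actor, actor_grid in actors:
            p_local = world_to_actor[:3, :3] @ p + world_to_actor[:3, 3]
            f = query_grid(actor_grid, p_local,
                           origin=np.array([-2.5, -1.5, -1.0]), voxel_size=0.25)
            if f is not None:
                feat = f  # crude override; a real renderer blends by density
        if feat is None:
            continue
        sigma, rgb = decode(feat)
        alpha = 1.0 - np.exp(-sigma * (ts[i + 1] - ts[i]))
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# Usage: a coarse static grid, one actor at the world origin, one ray along +x.
static = np.random.rand(100, 100, 8, 4).astype(np.float32)
actor = (np.eye(4), np.random.rand(20, 12, 8, 4).astype(np.float32))
pixel = render_ray(np.zeros(3), np.array([1.0, 0.0, 0.0]), static, [actor])
```

Because actors are separate grids composed at render time, the same machinery supports adding, removing, or repositioning them for closed-loop rollouts.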
Related papers
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build a 4D world representation and generate realistic, controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
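As a rough illustration of what a "4D world representation" might look like in code, here is a minimal sketch: a static 3D occupancy volume plus per-timestep actor states. All class and field names are hypothetical; the paper's representation is considerably richer.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ActorState:
    actor_id: int
    pose: np.ndarray      # 4x4 world-from-actor transform
    velocity: np.ndarray  # m/s

@dataclass
class World4D:
    occupancy: np.ndarray                         # static (X, Y, Z) voxel grid
    timeline: dict = field(default_factory=dict)  # timestep -> list[ActorState]

    def actors_at(self, t: int):
        return self.timeline.get(t, [])

# Usage: an empty 100 m x 100 m x 8 m world with one scripted actor at t = 0.
world = World4D(occupancy=np.zeros((200, 200, 16), dtype=bool))
world.timeline[0] = [ActorState(1, np.eye(4), np.array([5.0, 0.0, 0.0]))]
```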
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- NeuroNCAP: Photorealistic Closed-loop Safety Testing for Autonomous Driving [19.709153559084093]
We present a versatile NeRF-based simulator for testing autonomous driving software systems.
The simulator learns from sequences of real-world driving sensor data.
We use our simulator to test the responses of AD models to safety-critical scenarios inspired by Euro NCAP.
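A closed-loop safety test of this kind reduces to a render-act-advance loop. The sketch below assumes hypothetical interfaces (render_from_nerf, DrivingStack, step_ego); it is not NeuroNCAP's API, just the shape of the loop.

```python
import numpy as np

def render_from_nerf(ego_pose, actor_poses):
    """Placeholder for rendering a camera frame from a NeRF trained on real logs."""
    return np.zeros((128, 256, 3), dtype=np.float32)

class DrivingStack:
    """Placeholder for the autonomy system under test."""
    def act(self, image):
        return 0.5, 0.0  # (acceleration m/s^2, steering angle)

def step_ego(pose, accel, steer, dt=0.1):
    """Toy kinematics: nudge the ego vehicle forward along x."""
    pose = pose.copy()
    pose[0, 3] += max(accel, 0.0) * dt
    return pose

def run_scenario(stack, actors_at, horizon=50):
    """Closed loop: render, act, advance; score the run by closest approach."""
    ego, min_gap = np.eye(4), np.inf
    for t in range(horizon):
        frame = render_from_nerf(ego, actors_at(t))
        accel, steer = stack.act(frame)
        ego = step_ego(ego, accel, steer)
        for pose in actors_at(t):
            min_gap = min(min_gap, float(np.linalg.norm(ego[:3, 3] - pose[:3, 3])))
    return min_gap

# Usage: a stopped car 10 m ahead, in the spirit of a Euro NCAP stationary-target test.
stopped_car = np.eye(4)
stopped_car[0, 3] = 10.0
print(run_scenario(DrivingStack(), lambda t: [stopped_car]))
```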
arXiv Detail & Related papers (2024-04-11T14:03:16Z)
- CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation [44.83732884335725]
Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry.
Current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise.
We present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry.
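The underlying fit can be illustrated with a toy version: optimize a few shape parameters of a template so the deformed model matches sparse, noisy observations. The sketch below fits per-axis scales with a one-sided nearest-point loss and finite-difference gradients; CADSim itself uses richer part-aware CAD priors and true differentiable rendering.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.uniform(-1.0, 1.0, size=(200, 3))       # stand-in CAD vertices
true_scales = np.array([2.3, 1.1, 0.8])                # "real" vehicle proportions
observed = (template * true_scales)[rng.choice(200, 60, replace=False)]
observed = observed + 0.02 * rng.normal(size=observed.shape)  # sparse + noisy

def loss(scales):
    """One-sided nearest-point (Chamfer-style) distance from observations to model."""
    pts = template * scales
    d = np.linalg.norm(observed[:, None, :] - pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

scales = np.ones(3)
for step in range(200):
    # Finite-difference gradient over the three shape parameters.
    grad = np.array([(loss(scales + 1e-3 * e) - loss(scales - 1e-3 * e)) / 2e-3
                     for e in np.eye(3)])
    scales -= 0.5 * grad

print("recovered scales:", np.round(scales, 2))  # should approach [2.3, 1.1, 0.8]
```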
arXiv Detail & Related papers (2023-11-02T17:56:59Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high-fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
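The multimodal idea boils down to one simulator step returning synchronized observations for every sensor type, which a perception-to-control policy consumes. The MultiSensorSim class and policy below are illustrative stubs, not VISTA's actual API.

```python
import numpy as np

class MultiSensorSim:
    """Stub: one step yields synchronized observations for all sensor types."""
    def step(self, steering):
        return {
            "rgb": np.zeros((120, 160, 3), dtype=np.uint8),
            "lidar": np.zeros((1024, 3), dtype=np.float32),  # x, y, z points
            "events": np.zeros((120, 160), dtype=np.int8),   # signed event counts
        }

def policy(obs):
    """Toy perception-to-control: steer toward the brightest image column."""
    col_mean = obs["rgb"].mean(axis=(0, 2))
    offset = float(np.argmax(col_mean)) / obs["rgb"].shape[1] - 0.5
    return -offset

sim, steering = MultiSensorSim(), 0.0
for _ in range(10):  # closed loop: observe, act, repeat
    obs = sim.step(steering)
    steering = policy(obs)
```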
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
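One common way to realize such invariant representations is to add an alignment penalty between simulated and real feature statistics while supervising only on synthetic labels. The sketch below uses a simple moment-matching penalty as a stand-in for the paper's objective; the architecture, loss weight, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
head = nn.Linear(16, 2)  # e.g. a BEV cell is occupied / free
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

for step in range(100):
    sim_x = torch.randn(32, 64)         # labeled synthetic features
    sim_y = torch.randint(0, 2, (32,))  # labels exist only in simulation
    real_x = torch.randn(32, 64) + 0.5  # unlabeled real features (domain shift)

    z_sim, z_real = encoder(sim_x), encoder(real_x)
    task_loss = nn.functional.cross_entropy(head(z_sim), sim_y)
    # Moment matching: push mean sim and real features together so the head
    # trained on simulation transfers to real inputs.
    align_loss = (z_sim.mean(0) - z_real.mean(0)).pow(2).sum()
    loss = task_loss + 0.1 * align_loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```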
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
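Disentanglement is what makes such a simulator controllable: if the latent code factors into independent parts, each can be edited on its own. A toy illustration, with a stub generator standing in for DriveGAN's learned decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(z_scene, z_action):
    """Stub generator; a trained model would decode the latents into a video frame."""
    return np.outer(z_scene, z_action)  # placeholder "frame"

z_scene_a, z_scene_b = rng.normal(size=16), rng.normal(size=16)
z_action = rng.normal(size=16)

frame_a = generate(z_scene_a, z_action)             # original scene
frame_b = generate(z_scene_b, z_action)             # same driving action, new scene
frame_c = generate(z_scene_a, rng.normal(size=16))  # same scene, new action
```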
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
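The adversarial search can be sketched as black-box optimization over trajectory perturbations, keeping whichever perturbation most reduces a safety margin. simulate_margin below is a stand-in for rolling out the full LiDAR-based autonomy stack, which is what AdvSim actually perturbs against.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_margin(actor_traj):
    """Stub: closest approach (m) between the ego rollout and the actor."""
    ego_traj = np.stack([np.linspace(0.0, 50.0, 50), np.zeros(50)], axis=1)
    return float(np.linalg.norm(ego_traj - actor_traj, axis=1).min())

# Recorded actor trajectory: a parallel lane with a 4 m lateral offset.
base = np.stack([np.linspace(5.0, 55.0, 50), np.full(50, 4.0)], axis=1)
best_delta = np.zeros_like(base)
best_margin = simulate_margin(base)

for _ in range(200):  # black-box random search over bounded perturbations
    delta = np.clip(best_delta + rng.normal(0.0, 0.3, base.shape), -3.0, 3.0)
    margin = simulate_margin(base + delta)
    if margin < best_margin:  # a smaller margin means a more safety-critical scene
        best_delta, best_margin = delta, margin

print(f"closest approach after perturbation: {best_margin:.2f} m")
```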
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
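The two-stage pipeline can be sketched as: splat textured surfels into the novel view with a z-buffer to get a coarse render, then refine it with an image-to-image network. The refiner below is an identity placeholder for the learned SurfelGAN generator, and the translation-only camera is a simplification.

```python
import numpy as np

def splat_surfels(points, colors, cam_pos, f=100.0, h=120, w=160):
    """Project surfel centers into a translation-only pinhole camera, z-buffered."""
    img = np.zeros((h, w, 3), dtype=np.float32)
    depth = np.full((h, w), np.inf)
    for p, c in zip(points - cam_pos, colors):
        if p[2] <= 0.1:  # behind or too close to the camera
            continue
        u = int(f * p[0] / p[2] + w / 2)
        v = int(f * p[1] / p[2] + h / 2)
        if 0 <= v < h and 0 <= u < w and p[2] < depth[v, u]:
            depth[v, u], img[v, u] = p[2], c
    return img

def refine(coarse):
    """Identity placeholder for the learned GAN generator that fixes artifacts."""
    return coarse

rng = np.random.default_rng(0)
surfels = rng.uniform([-5.0, -2.0, 4.0], [5.0, 2.0, 20.0], size=(5000, 3))
colors = rng.uniform(0.0, 1.0, size=(5000, 3))
novel_view = refine(splat_surfels(surfels, colors, cam_pos=np.array([1.0, 0.0, 0.0])))
```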
arXiv Detail & Related papers (2020-05-08T04:01:14Z)