VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and
Policy Learning for Autonomous Vehicles
- URL: http://arxiv.org/abs/2111.12083v1
- Date: Tue, 23 Nov 2021 18:58:10 GMT
- Title: VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and
Policy Learning for Autonomous Vehicles
- Authors: Alexander Amini, Tsun-Hsuan Wang, Igor Gilitschenski, Wilko
Schwarting, Zhijian Liu, Song Han, Sertac Karaman, Daniela Rus
- Abstract summary: We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full-scale autonomous vehicle.
- Score: 131.2240621036954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation has the potential to transform the development of robust
algorithms for mobile agents deployed in safety-critical scenarios. However,
the poor photorealism and lack of diverse sensor modalities of existing
simulation engines remain key hurdles towards realizing this potential. Here,
we present VISTA, an open source, data-driven simulator that integrates
multiple types of sensors for autonomous vehicles. Using high fidelity,
real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and
event-based cameras, enabling the rapid generation of novel viewpoints in
simulation and thereby enriching the data available for policy learning with
corner cases that are difficult to capture in the physical world. Using VISTA,
we demonstrate the ability to train and test perception-to-control policies
across each of the sensor types and showcase the power of this approach via
deployment on a full-scale autonomous vehicle. The policies learned in VISTA
exhibit sim-to-real transfer without modification and greater robustness than
those trained exclusively on real-world data.
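To make the closed-loop, perception-to-control training described above concrete, the sketch below replays a recorded real-world trace, renders observations for the (possibly perturbed) ego viewpoint, and steps a driving policy against them. All names here (DataDrivenSim, the trace file, the toy reward) are hypothetical placeholders for illustration only, not the actual VISTA API.

```python
import random

class DataDrivenSim:
    """Hypothetical stand-in for a trace-replay, data-driven simulator."""

    def __init__(self, trace_path):
        self.trace_path = trace_path
        self.t = 0

    def reset(self):
        # Start replaying the recorded trace from a random offset so the
        # policy sees diverse initial conditions.
        self.t = random.randint(0, 1000)
        return self._render()

    def step(self, steering):
        # Advance the ego vehicle; deviating from the recorded path is what
        # would trigger novel-view synthesis for the next observation.
        self.t += 1
        obs = self._render()
        lane_deviation = abs(steering) * 0.1        # toy dynamics
        reward = 1.0 - lane_deviation               # stay near the lane center
        done = lane_deviation > 0.5 or self.t > 2000
        return obs, reward, done

    def _render(self):
        # Placeholder for multimodal rendering (RGB / LiDAR / event frames).
        return {"rgb": [0.0] * 10}

def train(policy, sim, episodes=10):
    """Closed-loop training: the policy's own actions decide which novel
    viewpoints the simulator must synthesize next."""
    for _ in range(episodes):
        obs, done = sim.reset(), False
        while not done:
            action = policy(obs)
            obs, reward, done = sim.step(action)
            # A policy update from (obs, action, reward) would happen here.

if __name__ == "__main__":
    train(lambda obs: 0.0, DataDrivenSim("example_trace.h5"))
```

The key point is the loop structure: because observations are derived from real recorded data rather than hand-built assets, off-trace actions force the simulator to synthesize novel viewpoints, which is where the corner-case data for robust policy learning comes from.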
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
  Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
  We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
  arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation [44.83732884335725]
  Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry.
  Current reconstruction approaches struggle on in-the-wild sensor data due to its sparsity and noise.
  We present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry.
  arXiv Detail & Related papers (2023-11-02T17:56:59Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
  We show that data-driven traffic simulation can be formulated as a world model.
  We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
  Experiments on the open motion dataset show that TrafficBots can simulate realistic multi-agent behaviors.
  arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
  We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
  This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
  This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world.
  arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
  We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
  Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups.
  arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Data generation using simulation technology to improve perception mechanism of autonomous vehicles [0.0]
  We demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
  We also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
  arXiv Detail & Related papers (2022-07-01T03:42:33Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
  We focus on the use of labels in the synthetic domain alone.
  Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
  We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
  arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
  We present a simple yet effective approach to generate realistic scenario sensor data.
  Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
  We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
  arXiv Detail & Related papers (2020-05-08T04:01:14Z)
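A recurring primitive in the papers above, and in VISTA itself, is synthesizing what a sensor would see from a viewpoint that was never actually driven. As a rough sketch of the underlying geometry only, the snippet below reprojects a pixel with known depth from a source camera into a laterally offset camera, assuming a simple pinhole model; the intrinsics, depth, and offset are made-up illustrative values, not taken from any of these papers.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point, in pixels).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def reproject(u, v, depth, R, t):
    """Warp pixel (u, v) with metric depth from the source camera into a
    target camera related by rotation R and translation t (source -> target)."""
    # Back-project the pixel to a 3D point in the source camera frame.
    p_src = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Move the point into the target camera frame.
    p_tgt = R @ p_src + t
    # Project back to pixel coordinates in the target image.
    uvw = K @ p_tgt
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: the "novel view" camera sits 1 m to the left of the source camera
# (no rotation), so scene content shifts to the right in the new image.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])   # lateral offset in meters
print(reproject(400.0, 250.0, depth=10.0, R=R, t=t))
```

Naive reprojection like this leaves disocclusions and appearance gaps; the works above fill them with, for example, learned refinement of surfel renderings (SurfelGAN) or a neural radiance field as the intermediate representation.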
This list is automatically generated from the titles and abstracts of the papers on this site.