Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation
- URL: http://arxiv.org/abs/2111.07971v1
- Date: Mon, 15 Nov 2021 18:37:43 GMT
- Title: Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation
- Authors: David Acuna, Jonah Philion, Sanja Fidler
- Abstract summary: We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
- Score: 98.51313127382937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving relies on a huge volume of real-world data to be labeled
to high precision. Alternative solutions seek to exploit driving simulators
that can generate large amounts of labeled data with a plethora of content
variations. However, the domain gap between the synthetic and real data
remains, raising the following important question: What are the best ways to
utilize a self-driving simulator for perception tasks? In this work, we build
on top of recent advances in domain-adaptation theory, and from this
perspective, propose ways to minimize the reality gap. We primarily focus on
the use of labels in the synthetic domain alone. Our approach introduces both a
principled way to learn neural-invariant representations and a theoretically
inspired view on how to sample the data from the simulator. Our method is easy
to implement in practice as it is agnostic of the network architecture and the
choice of the simulator. We showcase our approach on the bird's-eye-view
vehicle segmentation task with multi-sensor data (cameras, lidar) using an
open-source simulator (CARLA), and evaluate the entire framework on a
real-world dataset (nuScenes). Last but not least, we show what types of
variations (e.g. weather conditions, number of assets, map design, and color
diversity) matter to perception networks when trained with driving simulators,
and which ones can be compensated for with our domain adaptation technique.
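The abstract's "principled way to learn neural-invariant representations" is in the spirit of domain-adversarial training, where the feature extractor receives a reversed gradient from a domain discriminator. The toy sketch below illustrates that gradient-reversal idea on a one-weight "extractor" and a logistic domain discriminator with hand-derived gradients; all names (`w`, `v`, `lam`, `adversarial_grads`) are illustrative assumptions, not the paper's actual architecture or method.

```python
import math

# Toy illustration of domain-adversarial feature learning via gradient
# reversal (Ganin & Lempitsky, 2015). Everything here is a minimal sketch,
# not the paper's implementation.
w, v = 1.0, 0.5  # extractor weight, domain-discriminator weight

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_grads(x, domain_label, lam=1.0):
    """Gradients of the binary domain-classification loss.

    The discriminator weight v gets the ordinary gradient (it learns to
    tell synthetic from real features), while the gradient reaching the
    extractor weight w is multiplied by -lam: the extractor is pushed
    toward features the discriminator cannot separate.
    """
    f = w * x                        # extracted feature
    p = sigmoid(v * f)               # P(domain = real | feature)
    dloss_dlogit = p - domain_label  # gradient of BCE loss w.r.t. the logit
    grad_v = dloss_dlogit * f        # normal gradient for the discriminator
    grad_w = -lam * dloss_dlogit * v * x  # reversed gradient for the extractor
    return grad_v, grad_w

# One sample, labeled as coming from the real domain (label 1.0):
gv, gw = adversarial_grads(x=2.0, domain_label=1.0)
print(round(gv, 4), round(gw, 4))  # prints: -0.5379 0.2689
```

Note the opposite signs: the same loss signal trains the discriminator to separate domains while driving the shared features toward domain invariance, which is what makes a simulator-trained perception network transfer to real data.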
Related papers
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous
Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and
Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and
Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Improving Generalization of Transfer Learning Across Domains Using
Spatio-Temporal Features in Autonomous Driving [45.655433907239804]
Vehicle simulation can be used to learn in the virtual world, and the acquired skills can be transferred to handle real-world scenarios.
Visual elements capturing vehicle dynamics are intuitively crucial for human decision making during driving.
We propose a CNN+LSTM transfer learning framework to extract the spatio-temporal features representing vehicle dynamics from scenes.
arXiv Detail & Related papers (2021-03-15T03:26:06Z)
- Domain Adaptation Through Task Distillation [5.371337604556311]
Deep networks devour millions of precisely annotated images to build their powerful representations.
Recognition datasets, in contrast, exist in many interesting domains, simulated or real, and are easy to label and extend.
We use these recognition datasets to link up a source and target domain to transfer models between them in a task distillation framework.
arXiv Detail & Related papers (2020-08-27T04:44:49Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and
Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Virtual to Real adaptation of Pedestrian Detectors [9.432150710329607]
ViPeD is a new synthetically generated set of images collected with the graphical engine of the video game GTA V (Grand Theft Auto V).
We propose two different Domain Adaptation techniques suitable for the pedestrian detection task, but possibly applicable to general object detection.
Experiments show that the network trained with ViPeD can generalize over unseen real-world scenarios better than the detector trained over real-world data.
arXiv Detail & Related papers (2020-01-09T14:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.