Exiting the Simulation: The Road to Robust and Resilient Autonomous
Vehicles at Scale
- URL: http://arxiv.org/abs/2210.10876v1
- Date: Wed, 19 Oct 2022 20:32:43 GMT
- Title: Exiting the Simulation: The Road to Robust and Resilient Autonomous
Vehicles at Scale
- Authors: Richard Chakra
- Abstract summary: This paper presents the current state-of-the-art simulation frameworks and methodologies used in the development of autonomous driving systems.
A synthesis of the key challenges surrounding autonomous driving simulation is presented.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past two decades, autonomous driving has been catalyzed into reality
by the growing capabilities of machine learning. This paradigm shift possesses
significant potential to transform the future of mobility and reshape our
society as a whole. With the recent advances in perception, planning, and
control capabilities, autonomous driving technologies are being rolled out for
public trials, yet we remain far from being able to rigorously ensure the
resilient operations of these systems across the long-tailed nature of the
driving environment. Given the limitations of real-world testing, autonomous
vehicle simulation stands as the critical component in exploring the edge of
autonomous driving capabilities, developing the robust behaviors required for
successful real-world operation, and enabling the extraction of hidden risks
from these complex systems prior to deployment. This paper presents the current
state-of-the-art simulation frameworks and methodologies used in the
development of autonomous driving systems, with a focus on outlining how
simulation is used to build the resiliency required for real-world operation
and the methods developed to bridge the gap between simulation and reality. A
synthesis of the key challenges surrounding autonomous driving simulation is
presented, specifically highlighting the opportunities to further advance the
ability to continuously learn in simulation and effectively transfer the
learning into the real-world - enabling autonomous vehicles to exit the
guardrails of simulation and deliver robust and resilient operations at scale.
Related papers
- GAIA-1: A Generative World Model for Autonomous Driving [9.578453700755318]
We introduce GAIA-1 ('Generative AI for Autonomy'), a generative world model that generates realistic driving scenarios.
Emerging properties from our model include learning high-level structures and scene dynamics, contextual awareness, generalization, and understanding of geometry.
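As a rough illustration of the general action-conditioned world-model idea (not GAIA-1's actual architecture), a minimal sketch might be a causal transformer over discretized frame tokens conditioned on past driving actions; all module names, dimensions, and the toy inputs below are hypothetical:
```python
# Minimal, generic autoregressive world-model sketch (illustrative only; it does
# not reproduce GAIA-1's actual tokenizer, world model, or video decoder).
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, vocab_size=1024, action_dim=2, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.action_proj = nn.Linear(action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, frame_tokens, actions):
        # frame_tokens: (B, T) discrete visual tokens; actions: (B, T, action_dim)
        x = self.token_emb(frame_tokens) + self.action_proj(actions)
        T = x.size(1)
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=causal_mask)
        return self.head(h)        # logits over the next visual token at each step

model = TinyWorldModel()
tokens = torch.randint(0, 1024, (2, 16))   # toy batch of frame-token sequences
actions = torch.randn(2, 16, 2)            # toy (steering, acceleration) history
print(model(tokens, actions).shape)        # torch.Size([2, 16, 1024])
```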
arXiv Detail & Related papers (2023-09-29T09:20:37Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
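A minimal sketch of this interface, assuming a toy PyTorch encoder-decoder and a policy that only ever sees the translated outputs (none of this is the actual Sim2Seg or RCAN model):
```python
# Illustrative sketch of the Sim2Seg idea: map a (possibly domain-randomized)
# RGB image to segmentation + depth, and run the driving policy on those
# outputs so the policy never consumes raw pixels. Sizes are hypothetical.
import torch
import torch.nn as nn

class ImageToSegDepth(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.ConvTranspose2d(64, n_classes, 4, stride=4)
        self.depth_head = nn.ConvTranspose2d(64, 1, 4, stride=4)

    def forward(self, rgb):
        h = self.encoder(rgb)
        return self.seg_head(h), self.depth_head(h)

class DrivingPolicy(nn.Module):
    """Policy that consumes segmentation + depth instead of raw RGB."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes + 1, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),        # e.g. steering and throttle
        )

    def forward(self, seg_logits, depth):
        return self.net(torch.cat([seg_logits, depth], dim=1))

translator, policy = ImageToSegDepth(), DrivingPolicy()
rgb = torch.rand(1, 3, 64, 64)               # toy simulated frame
seg, depth = translator(rgb)
print(policy(seg, depth).shape)              # torch.Size([1, 2])
```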
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
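A minimal sketch of such a planner-style policy head that outputs acceleration and steering angle, assuming a flattened observation vector and placeholder layer sizes (not the paper's architecture):
```python
# Toy planner head: maps an observation vector to bounded acceleration and
# steering commands. Observation layout and layer widths are placeholders.
import torch
import torch.nn as nn

class RLPlanner(nn.Module):
    def __init__(self, obs_dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.accel_head = nn.Linear(256, 1)   # longitudinal acceleration
        self.steer_head = nn.Linear(256, 1)   # steering angle

    def forward(self, obs):
        h = self.body(obs)
        # tanh keeps both commands in a bounded, normalized range
        return torch.tanh(self.accel_head(h)), torch.tanh(self.steer_head(h))

planner = RLPlanner()
obs = torch.randn(1, 128)                    # toy observation
accel, steer = planner(obs)
print(accel.item(), steer.item())
```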
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Data generation using simulation technology to improve perception mechanism of autonomous vehicles [0.0]
We will demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
We will also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
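One simple way such real/simulated data mixing can be expressed, as a hedged sketch with toy stand-in datasets rather than the paper's actual pipeline:
```python
# Sketch: concatenate real-world and simulated perception datasets and let a
# single DataLoader shuffle across both. Dataset classes are toy stand-ins.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ToyImageDataset(Dataset):
    """Stand-in for a real or simulated perception dataset."""
    def __init__(self, n_samples, source):
        self.n_samples, self.source = n_samples, source

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        image = torch.rand(3, 64, 64)              # placeholder image tensor
        label = torch.randint(0, 10, (1,)).item()  # placeholder class label
        return image, label, self.source

real_data = ToyImageDataset(1000, source="real")
sim_data = ToyImageDataset(5000, source="sim")     # cheap to generate at scale
mixed = ConcatDataset([real_data, sim_data])
loader = DataLoader(mixed, batch_size=32, shuffle=True)

images, labels, sources = next(iter(loader))
print(images.shape, len(sources))                  # mixed real/sim batch
```
A WeightedRandomSampler could be swapped in to control the real-to-simulated ratio per batch, but the concatenation above is the simplest form of the idea.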
arXiv Detail & Related papers (2022-07-01T03:42:33Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- SimNet: Learning Reactive Self-driving Simulations from Real-world Observations [10.035169936164504]
We present an end-to-end trainable machine learning system capable of realistically simulating driving experiences.
This can be used for the verification of self-driving system performance without relying on expensive and time-consuming road testing.
arXiv Detail & Related papers (2021-05-26T05:14:23Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show that TrafficSim generates significantly more realistic and diverse traffic scenarios compared to a diverse set of baselines.
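A rough sketch of the implicit-latent-variable idea, with placeholder dimensions and simple MLP encoders rather than TrafficSim's actual model: a shared scene latent is sampled once and decoded into a joint action for every agent, so agent behaviors are coupled.
```python
# Toy joint actor policy with a shared scene latent (illustrative only).
import torch
import torch.nn as nn

class JointLatentPolicy(nn.Module):
    def __init__(self, agent_feat_dim=16, latent_dim=8, action_dim=2):
        super().__init__()
        self.scene_encoder = nn.Sequential(nn.Linear(agent_feat_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(agent_feat_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, agent_feats):
        # agent_feats: (B, N_agents, agent_feat_dim)
        pooled = self.scene_encoder(agent_feats).mean(dim=1)      # permutation-invariant scene summary
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterized scene latent
        z_per_agent = z.unsqueeze(1).expand(-1, agent_feats.size(1), -1)
        actions = self.decoder(torch.cat([agent_feats, z_per_agent], dim=-1))
        return actions, mu, logvar

policy = JointLatentPolicy()
feats = torch.randn(4, 10, 16)                 # 4 scenes, 10 agents each
actions, mu, logvar = policy(feats)
print(actions.shape)                           # torch.Size([4, 10, 2])
```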
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained with RL in simulation often do not transfer directly to the real world, a difficulty known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
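A minimal sketch of a point-cloud observation encoder in the PointNet style, with illustrative sizes rather than the paper's exact network: a shared per-point MLP followed by a max-pool yields a fixed-size, permutation-invariant observation for the navigation policy.
```python
# Toy point-cloud encoder + policy (illustrative only).
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.per_point = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points):
        # points: (B, N_points, 3) -- xyz coordinates
        per_point_feats = self.per_point(points)
        return per_point_feats.max(dim=1).values   # (B, feat_dim), order-invariant

class NavigationPolicy(nn.Module):
    def __init__(self, feat_dim=64, action_dim=2):
        super().__init__()
        self.encoder = PointCloudEncoder(feat_dim)
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, action_dim))

    def forward(self, points):
        return self.head(self.encoder(points))

policy = NavigationPolicy()
cloud = torch.randn(1, 2048, 3)                # toy LiDAR-style point cloud
print(policy(cloud).shape)                     # torch.Size([1, 2])
```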
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.