Mind the Gap! A Study on the Transferability of Virtual vs
Physical-world Testing of Autonomous Driving Systems
- URL: http://arxiv.org/abs/2112.11255v1
- Date: Tue, 21 Dec 2021 14:28:35 GMT
- Authors: Andrea Stocco, Brian Pulfer, Paolo Tonella
- Abstract summary: We leverage the Donkey Car open-source framework to empirically compare testing of SDCs when deployed on a physical small-scale vehicle vs its virtual simulated counterpart.
While a large number of testing results do transfer between virtual and physical environments, we also identified critical shortcomings that contribute to the reality gap between the virtual and physical world.
- Score: 6.649715954440713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe deployment of self-driving cars (SDC) necessitates thorough simulated
and in-field testing. Most testing techniques consider virtualized SDCs within
a simulation environment, whereas less effort has been directed towards
assessing whether such techniques transfer to and are effective with a physical
real-world vehicle. In this paper, we leverage the Donkey Car open-source
framework to empirically compare testing of SDCs when deployed on a physical
small-scale vehicle vs its virtual simulated counterpart. In our empirical
study, we investigate the transferability of behavior and failure exposure
between virtual and real-world environments on a vast set of corrupted and
adversarial settings. While a large number of testing results do transfer
between virtual and physical environments, we also identified critical
shortcomings that contribute to the reality gap between the virtual and
physical world, threatening the potential of existing testing solutions when
applied to physical SDCs.
Related papers
- Behavioural gap assessment of human-vehicle interaction in real and virtual reality-based scenarios in autonomous driving [7.588679613436823]
We present a first and innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation.
In the experiment, a pedestrian attempts to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI).
Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
arXiv Detail & Related papers (2024-07-04T17:20:17Z)
- How does Simulation-based Testing for Self-driving Cars match Human Perception? [5.742965094549775]
This study investigates the factors that determine how humans perceive self-driving cars test cases as safe, unsafe, realistic, or unrealistic.
Our findings indicate that the human assessment of the safety and realism of failing and passing test cases can vary based on different factors.
arXiv Detail & Related papers (2024-01-26T09:58:12Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In a vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments [20.105395754497202]
We study the performance and reasoning capacities of real physical agents, trained in simulation and deployed to two different physical environments.
We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modelling any sim2real transfer.
arXiv Detail & Related papers (2021-11-29T16:27:29Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Coverage-based Scene Fuzzing for Virtual Autonomous Driving Testing [7.820464285404852]
This paper proposes a coverage-driven fuzzing technique to automatically generate diverse configuration parameters to form new driving scenes.
Experimental results show that our fuzzing method can significantly reduce the cost in deriving new risky scenes from the initial setup designed by testers.
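The summary describes generating diverse scene configurations and keeping those that extend coverage. The sketch below is an illustrative guess at that general idea, not the paper's actual algorithm: the parameter names, their ranges, and the grid-based coverage metric are all invented for illustration.

```python
import random

# Hypothetical scene parameters; the paper's real configuration space is not given here.
PARAM_RANGES = {
    "ego_speed_kmh": (10, 120),
    "rain_intensity": (0.0, 1.0),
    "n_pedestrians": (0, 10),
}

def mutate(scene):
    """Randomly perturb one configuration parameter within its range."""
    key = random.choice(list(PARAM_RANGES))
    lo, hi = PARAM_RANGES[key]
    new = dict(scene)
    new[key] = random.randint(lo, hi) if isinstance(lo, int) else random.uniform(lo, hi)
    return new

def coverage_bucket(scene):
    """Discretize a scene into a coarse coverage cell (10 bins per parameter)."""
    return tuple(
        int(10 * (scene[k] - lo) / (hi - lo))
        for k, (lo, hi) in PARAM_RANGES.items()
    )

def fuzz(seed_scene, iterations=1000):
    """Keep only mutated scenes that land in a previously unseen coverage cell."""
    seen = {coverage_bucket(seed_scene)}
    corpus = [seed_scene]
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cell = coverage_bucket(candidate)
        if cell not in seen:  # new coverage -> keep the scene for further mutation
            seen.add(cell)
            corpus.append(candidate)
    return corpus
```

In a real setup, the coverage signal would come from executing each scene in the simulator rather than from the parameters alone.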
arXiv Detail & Related papers (2021-06-02T00:49:59Z)
- Worsening Perception: Real-time Degradation of Autonomous Vehicle Perception Performance for Simulation of Adverse Weather Conditions [47.529411576737644]
This study explores the potential of using a simple, lightweight image augmentation system in an autonomous racing vehicle.
With minimal adjustment, the prototype system can replicate the effects of both water droplets on the camera lens, and fading light conditions.
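A minimal sketch of lightweight image degradation in that spirit, assuming NumPy uint8 RGB frames; the fading and droplet effects below are crude stand-ins for illustration, not the prototype system described in the paper.

```python
import numpy as np

def fade_light(image, factor=0.4):
    """Darken an RGB image to approximate fading light (factor in (0, 1])."""
    return (image.astype(np.float32) * factor).clip(0, 255).astype(np.uint8)

def water_droplets(image, n_drops=20, radius=6, rng=None):
    """Flatten small square patches to their mean colour, crudely mimicking
    droplets on the camera lens."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(n_drops):
        cy = rng.integers(radius, h - radius)
        cx = rng.integers(radius, w - radius)
        patch = out[cy - radius:cy + radius, cx - radius:cx + radius]
        # Mean-colour fill is a cheap stand-in for refraction blur.
        patch[:] = patch.mean(axis=(0, 1)).astype(np.uint8)
    return out
```

Such augmentations would be applied to the camera stream at runtime, between image capture and the perception model.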
arXiv Detail & Related papers (2021-03-03T23:49:02Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, RL policies trained in simulation often do not work directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- RoboTHOR: An Open Simulation-to-Real Embodied AI Platform [56.50243383294621]
We introduce RoboTHOR to democratize research in interactive and embodied visual AI.
We show that a significant gap exists between the performance of models trained in simulation when tested in simulation versus in carefully constructed physical analogs of the simulated environments.
arXiv Detail & Related papers (2020-04-14T20:52:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.