NeuroNCAP: Photorealistic Closed-loop Safety Testing for Autonomous Driving
- URL: http://arxiv.org/abs/2404.07762v4
- Date: Tue, 23 Apr 2024 07:29:18 GMT
- Title: NeuroNCAP: Photorealistic Closed-loop Safety Testing for Autonomous Driving
- Authors: William Ljungbergh, Adam Tonderski, Joakim Johnander, Holger Caesar, Kalle Åström, Michael Felsberg, Christoffer Petersson
- Abstract summary: We present a versatile NeRF-based simulator for testing autonomous driving software systems.
The simulator learns from sequences of real-world driving sensor data.
We use our simulator to test the responses of AD models to safety-critical scenarios inspired by Euro NCAP.
- Score: 19.709153559084093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a versatile NeRF-based simulator for testing autonomous driving (AD) software systems, designed with a focus on sensor-realistic closed-loop evaluation and the creation of safety-critical scenarios. The simulator learns from sequences of real-world driving sensor data and enables reconfigurations and renderings of new, unseen scenarios. In this work, we use our simulator to test the responses of AD models to safety-critical scenarios inspired by the European New Car Assessment Programme (Euro NCAP). Our evaluation reveals that, while state-of-the-art end-to-end planners excel in nominal driving scenarios in an open-loop setting, they exhibit critical flaws when navigating our safety-critical scenarios in a closed-loop setting. This highlights the need for advancements in the safety and real-world usability of end-to-end planners. By publicly releasing our simulator and scenarios as an easy-to-run evaluation suite, we invite the research community to explore, refine, and validate their AD models in controlled, yet highly configurable and challenging sensor-realistic environments. Code and instructions can be found at https://github.com/atonderski/neuro-ncap
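To make the closed-loop setting concrete, the following sketch shows the kind of rollout loop such an evaluation implies: at every step the planner is fed sensor data rendered from the state its own previous actions produced, and the episode is scored on collision and clearance. All class and method names are illustrative assumptions, not the neuro-ncap API; the linked repository contains the actual evaluation suite.

```python
# Minimal closed-loop rollout sketch. None of the names below come from the
# neuro-ncap codebase; they are stand-ins for the pieces the abstract
# describes: a sensor-realistic simulator, a planner under test, and a
# collision-based verdict.
import numpy as np


class ConstantVelocityPlanner:
    """Toy planner under test: keeps current heading and speed."""

    def plan(self, image, ego_state):
        # ego_state = [x, y, heading, speed]; action = [acceleration, steering rate]
        return np.array([0.0, 0.0])


class StraightCrossingScenario:
    """Toy stand-in for a rendered scenario: a stationary actor 20 m ahead."""

    def initial_ego_state(self):
        return np.array([0.0, 0.0, 0.0, 8.0])

    def render(self, ego_state):
        # A real simulator would return a NeRF-rendered camera image here.
        return np.zeros((224, 224, 3), dtype=np.uint8)

    def step(self, ego_state, action, dt):
        x, y, heading, speed = ego_state
        accel, steer = action
        speed = max(0.0, speed + accel * dt)
        heading = heading + steer * dt
        return np.array([x + speed * np.cos(heading) * dt,
                         y + speed * np.sin(heading) * dt,
                         heading, speed])

    def distance_to_nearest_actor(self, ego_state):
        return float(np.hypot(ego_state[0] - 20.0, ego_state[1])) - 2.0  # 2 m footprint


def closed_loop_episode(simulator, planner, horizon=100, dt=0.1):
    """Roll the planner out inside the simulator and report collision metrics."""
    ego_state = simulator.initial_ego_state()
    min_clearance = float("inf")
    for _ in range(horizon):
        image = simulator.render(ego_state)   # closed loop: planner sees its own consequences
        action = planner.plan(image, ego_state)
        ego_state = simulator.step(ego_state, action, dt)
        min_clearance = min(min_clearance, simulator.distance_to_nearest_actor(ego_state))
        if min_clearance <= 0.0:
            break
    return {"collision": min_clearance <= 0.0, "min_clearance": min_clearance}


print(closed_loop_episode(StraightCrossingScenario(), ConstantVelocityPlanner()))
```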
Related papers
- ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training robust planners.
arXiv Detail & Related papers (2024-09-12T08:26:33Z)
- PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show PAFOT can effectively generate safety-critical scenarios that crash ADSs and can find collisions within a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z)
- UniSim: A Neural Closed-Loop Sensor Simulator [76.79818601389992]
We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
arXiv Detail & Related papers (2023-08-03T17:56:06Z) - PEM: Perception Error Model for Virtual Testing of Autonomous Vehicles [20.300846259643137]
We define Perception Error Models (PEM) in this article.
A PEM is a virtual simulation component that enables analysis of the impact of perception errors on AV safety.
We demonstrate the usefulness of PEM-based virtual tests, by evaluating camera, LiDAR, and camera-LiDAR setups.
arXiv Detail & Related papers (2023-02-23T10:54:36Z)
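As a rough illustration of the PEM idea summarized above, the sketch below interposes a statistical error model between ground-truth objects and a planner instead of rendering sensor data; the interface, miss rate, and noise level are assumptions for the sake of the example, not values from the paper.

```python
# Hedged sketch of a perception-error-model style test: ground-truth objects
# are corrupted with misses and position noise before being handed to the
# planner, skipping sensor simulation entirely.
import random


def apply_perception_error_model(gt_objects, miss_rate=0.1, pos_sigma=0.3, rng=None):
    """Return simulated detections: some boxes dropped, the remainder jittered."""
    rng = rng or random.Random(0)
    detections = []
    for obj in gt_objects:  # obj = {"x": ..., "y": ..., "cls": ...}
        if rng.random() < miss_rate:
            continue        # simulated false negative
        detections.append({
            "x": obj["x"] + rng.gauss(0.0, pos_sigma),
            "y": obj["y"] + rng.gauss(0.0, pos_sigma),
            "cls": obj["cls"],
        })
    return detections


ground_truth = [{"x": 20.0, "y": 0.5, "cls": "pedestrian"},
                {"x": 35.0, "y": -1.2, "cls": "vehicle"}]
print(apply_perception_error_model(ground_truth))
```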
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
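A toy version of the adversarial idea in AdvSim above: search for bounded perturbations of an actor trajectory that minimize its clearance to a fixed ego plan. The random-search procedure and the cost used here are deliberate simplifications, not the paper's actual optimization.

```python
# Toy black-box search for a safety-critical actor trajectory: random lateral
# perturbations (within a physical bound) are scored by the minimum clearance
# they induce against a fixed ego plan.
import numpy as np


def min_clearance(ego_traj, actor_traj):
    return float(np.min(np.linalg.norm(ego_traj - actor_traj, axis=1)))


def adversarial_lateral_perturbation(ego_traj, actor_traj, bound=1.5, iters=200, seed=0):
    """Search per-waypoint lateral offsets (up to `bound` metres) that minimize clearance."""
    rng = np.random.default_rng(seed)
    best_traj, best_cost = actor_traj, min_clearance(ego_traj, actor_traj)
    for _ in range(iters):
        lateral = rng.uniform(-bound, bound, size=(len(actor_traj), 1))
        candidate = actor_traj + lateral * np.array([[0.0, 1.0]])  # perturb y only
        cost = min_clearance(ego_traj, candidate)
        if cost < best_cost:
            best_traj, best_cost = candidate, cost
    return best_traj, best_cost


t = np.linspace(0.0, 5.0, 26)[:, None]
ego = np.hstack([8.0 * t, np.zeros_like(t)])           # ego drives straight at 8 m/s
actor = np.hstack([8.0 * t, 3.0 + np.zeros_like(t)])   # actor alongside, one lane over
_, clearance = adversarial_lateral_perturbation(ego, actor)
print(f"clearance after perturbation: {clearance:.2f} m (was 3.00 m)")
```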
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Towards Automated Safety Coverage and Testing for Autonomous Vehicles with Reinforcement Learning [0.3683202928838613]
Validation puts the autonomous vehicle system to the test in scenarios or situations that the system would likely encounter in everyday driving.
We propose using reinforcement learning (RL) to generate failure examples and unexpected traffic situations for the AV software implementation.
arXiv Detail & Related papers (2020-05-22T19:00:38Z)
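A minimal sketch of reinforcement-learning-style failure search in the spirit of the entry above: an epsilon-greedy bandit chooses the cut-in gap of an adversarial actor and is rewarded for forcing the ego's scripted braking rule into a closer call. The environment, braking rule, and all constants are invented placeholders, not the paper's method.

```python
# Epsilon-greedy bandit over discrete cut-in gaps; reward is the negative
# closest approach it forces on a scripted ego braking model.
import random


def rollout(cut_in_gap):
    """Toy episode: actor cuts in `cut_in_gap` metres ahead, reward = -closest approach."""
    ego_speed, ego_pos, actor_pos = 10.0, 0.0, cut_in_gap
    closest = cut_in_gap
    for _ in range(50):                          # 5 s at 10 Hz
        gap = actor_pos - ego_pos
        closest = min(closest, gap)
        if gap < 15.0:                           # ego brakes at 3 m/s^2 when gap < 15 m
            ego_speed = max(0.0, ego_speed - 3.0 * 0.1)
        ego_pos += ego_speed * 0.1
        actor_pos += 5.0 * 0.1                   # actor cruises at 5 m/s after cutting in
    return -closest


actions = [5.0, 10.0, 20.0, 30.0]                # candidate cut-in gaps in metres
values, counts = [0.0] * len(actions), [0] * len(actions)
rng = random.Random(0)
for _ in range(200):
    greedy = max(range(len(actions)), key=lambda i: values[i])
    a = rng.randrange(len(actions)) if rng.random() < 0.2 else greedy
    reward = rollout(actions[a])
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean value estimate
print("most failure-inducing cut-in gap:", actions[max(range(len(actions)), key=lambda i: values[i])])
```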