Towards Scenario-based Safety Validation for Autonomous Trains with Deep
Generative Models
- URL: http://arxiv.org/abs/2310.10635v1
- Date: Mon, 16 Oct 2023 17:55:14 GMT
- Title: Towards Scenario-based Safety Validation for Autonomous Trains with Deep
Generative Models
- Authors: Thomas Decker, Ananta R. Bhattarai, and Michael Lebacher
- Abstract summary: We report our practical experiences regarding the utility of data simulation with deep generative models for scenario-based validation.
We demonstrate the capabilities of semantically editing railway scenes with deep generative models to make a limited amount of test data more representative.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern AI techniques open up ever-increasing possibilities for autonomous
vehicles, but how to appropriately verify the reliability of such systems
remains unclear. A common approach is to conduct safety validation based on a
predefined Operational Design Domain (ODD) describing specific conditions under
which a system under test is required to operate properly. However, collecting
sufficient realistic test cases to ensure comprehensive ODD coverage is
challenging. In this paper, we report our practical experiences regarding the
utility of data simulation with deep generative models for scenario-based ODD
validation. We consider the specific use case of a camera-based rail-scene
segmentation system designed to support autonomous train operation. We
demonstrate the capabilities of semantically editing railway scenes with deep
generative models to make a limited amount of test data more representative. We
also show how our approach helps to analyze the degree to which a system
complies with typical ODD requirements. Specifically, we focus on evaluating
proper operation under different lighting and weather conditions as well as
while transitioning between them.
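To make the evaluation procedure concrete, here is a minimal sketch, assuming a hypothetical generative editor `edit_scene` and a segmentation model `segment` (both stand-ins, not the paper's actual implementation): each test image is semantically edited into different ODD lighting/weather conditions, and the system under test is scored per condition via mean IoU, under the assumption that such edits preserve the pixel-level ground-truth labels.

```python
"""Sketch of per-condition ODD evaluation with generatively edited test data.
`edit_scene` and `segment` are hypothetical callables standing in for the deep
generative editor and the rail-scene segmentation system under test."""
import numpy as np

ODD_CONDITIONS = ["day", "night", "rain", "snow", "fog"]

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over the classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")

def evaluate_odd_coverage(images, labels, segment, edit_scene, num_classes=19):
    """Return per-condition mIoU of segment() on edited variants of the test set.

    Assumes lighting/weather edits keep scene geometry, so the original
    ground-truth segmentation masks remain valid for the edited images.
    """
    scores = {}
    for condition in ODD_CONDITIONS:
        per_image = []
        for img, gt in zip(images, labels):
            edited = edit_scene(img, condition)  # e.g. GAN/diffusion-based day->night edit
            pred = segment(edited)               # semantic map from the system under test
            per_image.append(mean_iou(pred, gt, num_classes))
        scores[condition] = float(np.nanmean(per_image))
    return scores
```

Reporting such a per-condition score table (optionally extended with interpolated edits such as dusk or light-to-heavy rain) mirrors the paper's goal of checking ODD compliance under distinct conditions and while transitioning between them.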
Related papers
- AdvDiffuser: Generating Adversarial Safety-Critical Driving Scenarios via Guided Diffusion [6.909801263560482]
AdvDiffuser is an adversarial framework for generating safety-critical driving scenarios through guided diffusion.
We show that AdvDiffuser can be applied to various tested systems with minimal warm-up episode data.
arXiv Detail & Related papers (2024-10-11T02:03:21Z)
- SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation [55.87169702896249]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift.
We propose a framework to evaluate DA methods and present a fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment.
Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications.
arXiv Detail & Related papers (2024-07-16T12:52:29Z)
- GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation [0.14999444543328289]
Goal-conditioned Scenario Generation (GOOSE) is a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios.
We demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
arXiv Detail & Related papers (2024-06-06T08:59:08Z)
- Hard Cases Detection in Motion Prediction by Vision-Language Foundation Models [16.452638202694246]
This work explores the potential of Vision-Language Foundation Models (VLMs) in detecting hard cases in autonomous driving.
We introduce a feasible pipeline where VLMs, fed with sequential image frames with designed prompts, effectively identify challenging agents or scenarios.
We show the effectiveness and feasibility of incorporating our pipeline with state-of-the-art methods on the NuScenes dataset.
arXiv Detail & Related papers (2024-05-31T16:35:41Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Discovering Decision Manifolds to Assure Trusted Autonomous Systems [0.0]
We propose an optimization-based search technique for capturing the range of correct and incorrect responses a system could exhibit.
The resulting decision manifold provides a more detailed understanding of system reliability than traditional testing or Monte Carlo simulations.
In this proof-of-concept, we apply our method to a software-in-the-loop evaluation of an autonomous vehicle.
arXiv Detail & Related papers (2024-02-12T16:55:58Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution has previously been proposed for adapting MOT to domain shift under test-time conditions.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Robust Monocular Depth Estimation under Challenging Conditions [81.57697198031975]
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications (see the illustrative interval-propagation sketch after this list).
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles appears imminent despite many safety challenges that have yet to be resolved.
Existing standards focus on deterministic processes where the validation requires only a set of test cases that cover the requirements.
This paper presents a new approach to computing the statistical characteristics of a system's behaviour by biasing automatically generated test cases towards worst-case scenarios.
arXiv Detail & Related papers (2020-03-04T04:35:22Z)
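As a loose illustration of the interval-analysis idea named in the semi-formal verification entry above, the following sketch (generic interval-bound propagation with assumed toy weights, not the cited paper's actual method) propagates a bounded observation box through a small decision network and checks a hypothetical safety property on the output logits.

```python
"""Generic interval-bound propagation through a toy decision network.
Illustrative only; weights, inputs, and the safety property are assumptions."""
import numpy as np

def propagate_interval(lower, upper, weights, biases):
    """Propagate an axis-aligned input box through affine layers with ReLU."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius      # sound bound for an affine map
        lower, upper = new_center - new_radius, new_center + new_radius
        if i < len(weights) - 1:             # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy policy: 2 observations -> 8 hidden units -> 2 action logits ("brake", "go").
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 2)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]

x = np.array([0.3, -0.1])                    # nominal observation
eps = 0.05                                   # assumed bounded sensor noise
lo, hi = propagate_interval(x - eps, x + eps, weights, biases)

# Hypothetical safety property: "brake" (logit 0) must dominate "go" (logit 1)
# for every observation in the box; lo[0] > hi[1] is a sufficient certificate.
print("verified safe:", bool(lo[0] > hi[1]))
```

The check is conservative: the output box over-approximates the reachable logits, so a passing certificate is sound, while a failing one may simply reflect loose bounds.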