Simulative Performance Analysis of an AD Function with Road Network
Variation
- URL: http://arxiv.org/abs/2308.04446v1
- Date: Tue, 1 Aug 2023 15:25:51 GMT
- Title: Simulative Performance Analysis of an AD Function with Road Network
Variation
- Authors: Daniel Becker and Guido Küppers and Lutz Eckstein
- Abstract summary: We propose a method to automatically test a set of scenarios in many variations.
Those variations are not applied to traffic participants around the ADF, but to the road network to show that parameters regarding the road topology also influence the performance of such an ADF.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated driving functions (ADFs) have become increasingly popular in recent
years. However, their safety must be assured. Thus, the verification and
validation of these functions is still an important open issue in research and
development. To achieve this efficiently, scenario-based testing has been
established as a valuable methodology among researchers, industry, and
authorities. Simulations are a powerful way to test those scenarios
reproducibly. In this paper, we propose a method to automatically test a set of
scenarios in many variations. In contrast to related approaches, those
variations are not applied to traffic participants around the ADF, but to the
road network to show that parameters regarding the road topology also influence
the performance of such an ADF. We present a continuous tool chain to set up
scenarios, vary them, run simulations, and finally evaluate the performance
with a set of key performance indicators (KPIs).
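The abstract's tool chain can be pictured as a small pipeline: enumerate road-network variants, run each through the simulator, and score the resulting logs with KPIs. The sketch below is a minimal Python illustration of that flow, not the authors' implementation; the parameter names (curve radius, lane width, number of lanes), the KPI definitions (minimum time-to-collision, maximum lateral deviation, goal completion), and the simulate() stub are all assumptions standing in for the paper's actual simulation backend and road-network export.

```python
# Minimal sketch of a scenario-variation pipeline: vary road-network
# parameters, "simulate" each variant, and aggregate KPIs.
# All parameters, KPIs, and the simulate() stub are illustrative
# assumptions, not the paper's actual tool chain.
import itertools
import random
from dataclasses import dataclass
from statistics import mean


@dataclass
class RoadNetworkVariant:
    """One variation of the road topology (hypothetical parameters)."""
    curve_radius_m: float
    lane_width_m: float
    num_lanes: int


def generate_variants() -> list[RoadNetworkVariant]:
    """Enumerate a grid of road-network parameters to vary."""
    radii = [50.0, 100.0, 250.0]   # tighter to wider curves
    widths = [2.75, 3.25, 3.75]    # narrow to generous lanes
    lanes = [1, 2]
    return [RoadNetworkVariant(r, w, n)
            for r, w, n in itertools.product(radii, widths, lanes)]


def simulate(variant: RoadNetworkVariant, seed: int = 0) -> dict:
    """Placeholder for running the ADF on this variant in a simulator.

    A real tool chain would export the variant to a road-network format,
    launch the simulation, and parse its logs; here we only synthesize
    plausible-looking measurements so the sketch is runnable.
    """
    rng = random.Random(hash((seed, variant.curve_radius_m,
                              variant.lane_width_m, variant.num_lanes)))
    return {
        "min_ttc_s": max(0.1, rng.gauss(2.5, 0.8)),   # min time-to-collision
        "max_lat_dev_m": abs(rng.gauss(0.2, 0.15)),   # max lateral deviation
        "goal_reached": rng.random() > 0.05,
    }


def evaluate(results: list[dict]) -> dict:
    """Aggregate per-run measurements into scenario-level KPIs."""
    return {
        "mean_min_ttc_s": mean(r["min_ttc_s"] for r in results),
        "worst_lat_dev_m": max(r["max_lat_dev_m"] for r in results),
        "success_rate": mean(1.0 if r["goal_reached"] else 0.0
                             for r in results),
    }


if __name__ == "__main__":
    runs = [simulate(v) for v in generate_variants()]
    print(evaluate(runs))
```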
Related papers
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Adaptive Testing Environment Generation for Connected and Automated Vehicles with Dense Reinforcement Learning [7.6589102528398065]
We develop an adaptive testing environment that bolsters evaluation robustness by incorporating multiple surrogate models.
We propose the dense reinforcement learning method and devise a new adaptive policy with high sample efficiency.
arXiv Detail & Related papers (2024-02-29T15:42:33Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urgency of safety in driving systems, no solution for adapting MOT to domain shift under test-time conditions has previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation [1.4530711901349282]
We propose to validate test-time adaptation methods using datasets for autonomous driving, namely CLAD-C and SHIFT.
We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift.
We enhance the well-established self-training framework by incorporating a small memory buffer to increase model stability.
arXiv Detail & Related papers (2023-09-18T19:34:23Z)
- Better Practices for Domain Adaptation [62.70267990659201]
Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The lack of a clear validation protocol for DA has led to bad practices in the literature.
We show challenges across all three branches of domain adaptation methodology.
arXiv Detail & Related papers (2023-09-07T17:44:18Z)
- Benchmarking Test-Time Adaptation against Distribution Shifts in Image Classification [77.0114672086012]
Test-time adaptation (TTA) is a technique aimed at enhancing the generalization performance of models by leveraging unlabeled samples solely during prediction.
We present a benchmark that systematically evaluates 13 prominent TTA methods and their variants on five widely used image classification datasets.
arXiv Detail & Related papers (2023-07-06T16:59:53Z)
- From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
arXiv Detail & Related papers (2023-06-18T09:54:33Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- A Survey on Scenario-Based Testing for Automated Driving Systems in High-Fidelity Simulation [26.10081199009559]
Testing the system on the road is the approach closest to the real world and the most desirable, but it is incredibly costly.
A popular alternative is to evaluate an ADS's performance in some well-designed challenging scenarios, a.k.a. scenario-based testing.
High-fidelity simulators have been widely used in this setting to maximize flexibility and convenience in testing what-if scenarios.
arXiv Detail & Related papers (2021-12-02T03:41:33Z)
- Efficient falsification approach for autonomous vehicle validation using a parameter optimisation technique based on reinforcement learning [6.198523595657983]
The widescale deployment of Autonomous Vehicles (AV) appears to be imminent despite many safety challenges that are yet to be resolved.
The uncertainties in the behaviour of the traffic participants and the dynamic world cause reactions in advanced autonomous systems.
This paper presents an efficient falsification method to evaluate the System Under Test.
arXiv Detail & Related papers (2020-11-16T02:56:13Z)
- Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method [20.280573307366627]
We propose a generative framework to create safety-critical scenarios for evaluating task algorithms.
We demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods.
arXiv Detail & Related papers (2020-03-02T21:26:03Z)