SOTIF-Compliant Scenario Generation Using Semi-Concrete Scenarios and
Parameter Sampling
- URL: http://arxiv.org/abs/2308.07025v1
- Date: Mon, 14 Aug 2023 09:25:24 GMT
- Title: SOTIF-Compliant Scenario Generation Using Semi-Concrete Scenarios and
Parameter Sampling
- Authors: Lukas Birkemeyer, Julian Fuchs, Alessio Gambi, Ina Schaefer
- Abstract summary: The SOTIF standard requires scenario-based testing to verify and validate Advanced Driver Assistance Systems and Automated Driving Systems.
Existing scenario generation approaches either focus on exploring or exploiting the scenario space.
This paper proposes semi-concrete scenarios and parameter sampling to generate SOTIF-compliant test suites.
- Score: 6.195203785530687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The SOTIF standard (ISO 21448) requires scenario-based testing to verify and
validate Advanced Driver Assistance Systems and Automated Driving Systems but
does not suggest any practical way to do so effectively and efficiently.
Existing scenario generation approaches either focus on exploring or exploiting
the scenario space. This generally leads to test suites that cover many known
cases but potentially miss edge cases or focused test suites that are effective
but also contain less diverse scenarios. To generate SOTIF-compliant test
suites that achieve higher coverage and find more faults, this paper proposes
semi-concrete scenarios and combines them with parameter sampling to adequately
balance scenario space exploration and exploitation. Semi-concrete scenarios
enable combinatorial scenario generation techniques that systematically explore
the scenario space, while parameter sampling allows for the exploitation of
continuous parameters. Our experimental results show that the proposed concept
can generate more effective test suites than state-of-the-art coverage-based
sampling. Moreover, our results show that including a feedback mechanism to
drive parameter sampling further increases test suites' effectiveness.
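To make the concept concrete, here is a minimal sketch (not the authors' implementation) of how combinatorial exploration of discrete scenario elements can be combined with sampling of the remaining continuous parameters; all element names and parameter ranges below are illustrative assumptions.

import itertools
import random

# Discrete scenario elements spanning an (illustrative) scenario space.
road_layouts = ["straight", "curve", "intersection"]
actor_maneuvers = ["cut_in", "hard_brake", "cross_walkway"]
weather = ["clear", "rain", "fog"]

# Exploration: every combination of discrete elements yields one
# semi-concrete scenario whose continuous parameters remain open.
semi_concrete = [
    {"road": r, "maneuver": m, "weather": w}
    for r, m, w in itertools.product(road_layouts, actor_maneuvers, weather)
]

# Exploitation: sample the continuous parameters of each semi-concrete
# scenario to obtain concrete, executable test scenarios.
def concretize(scenario, rng, n_samples=5):
    samples = []
    for _ in range(n_samples):
        concrete = dict(scenario)
        concrete["ego_speed_kmh"] = rng.uniform(20.0, 130.0)
        concrete["actor_distance_m"] = rng.uniform(5.0, 80.0)
        samples.append(concrete)
    return samples

rng = random.Random(42)
test_suite = [c for s in semi_concrete for c in concretize(s, rng)]
print(len(test_suite), "concrete test scenarios generated")

The feedback mechanism evaluated in the paper would replace the uniform sampling above with a strategy that biases later samples toward parameter regions where earlier simulations exposed critical behavior.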
Related papers
- Scenario-Wise Rec: A Multi-Scenario Recommendation Benchmark [54.93461228053298]
We introduce our benchmark, Scenario-Wise Rec, which comprises 6 public datasets and 12 benchmark models, along with a training and evaluation pipeline.
We aim for this benchmark to offer researchers valuable insights from prior work, enabling the development of novel models.
arXiv Detail & Related papers (2024-12-23T08:15:34Z)
- LAMBDA: Covering the Multimodal Critical Scenarios for Automated Driving Systems by Search Space Quantization [33.87626198349963]
Black-Box Optimization (BBO) was introduced to accelerate scenario-based testing of automated driving systems (ADSs).
All the subspaces representing danger in the logical scenario space, rather than only the most critical concrete scenario, play a significant role in safety evaluation.
We propose LAMBDA (Latent-Action Monte-Carlo Beam Search with Density Adaption) to solve BBO problems.
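As an illustration of searching a logical scenario's parameter space for a dangerous subspace rather than a single worst case, here is a generic black-box sampling loop; it is not the LAMBDA algorithm, and the criticality function is a placeholder assumption.

import random

# Logical scenario: parameter ranges rather than concrete values (illustrative).
PARAM_RANGES = {"ego_speed_kmh": (20.0, 130.0), "ttc_s": (0.5, 5.0)}

def criticality(params):
    # Placeholder for a simulation run; low time-to-collision at high
    # speed is treated as more critical.
    return params["ego_speed_kmh"] / 130.0 + (5.0 - params["ttc_s"]) / 4.5

def sample(rng):
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

# Keep every sampled point whose criticality exceeds a threshold,
# approximating the dangerous subspace instead of only its most
# critical concrete scenario.
rng = random.Random(0)
dangerous = [p for p in (sample(rng) for _ in range(1000))
             if criticality(p) > 1.5]
print(len(dangerous), "samples fall in the estimated dangerous subspace")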
arXiv Detail & Related papers (2024-11-30T15:57:05Z)
- Balancing Diversity and Risk in LLM Sampling: How to Select Your Method and Parameter for Open-Ended Text Generation [60.493180081319785]
We propose a systematic way to estimate the capacity of a truncation sampling method by considering the trade-off between diversity and risk at each decoding step.
Our work offers a comprehensive comparison of existing truncation sampling methods and serves as a practical user guideline for their parameter selection.
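As a generic illustration of the diversity-versus-risk trade-off in truncation sampling (plain nucleus/top-p sampling, not the estimation procedure proposed in that paper):

import numpy as np

def top_p_sample(probs, p, rng):
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches p, renormalize, and sample. A smaller
    # p lowers risk (fewer unlikely tokens) but also reduces diversity.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()
    return rng.choice(kept, p=kept_probs)

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
for p in (0.5, 0.9):
    draws = [top_p_sample(probs, p, rng) for _ in range(1000)]
    print("p =", p, "->", len(set(draws)), "distinct tokens sampled")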
arXiv Detail & Related papers (2024-08-24T14:14:32Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the exemplars included in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- AutoSAM: Towards Automatic Sampling of User Behaviors for Sequential Recommender Systems [48.461157194277504]
We propose a general automatic sampling framework, named AutoSAM, to non-uniformly treat historical behaviors.
Specifically, AutoSAM augments the standard sequential recommendation architecture with an additional sampler layer to adaptively learn the skew distribution of the raw input.
We theoretically design multi-objective sampling rewards including Future Prediction and Sequence Perplexity, and then optimize the whole framework in an end-to-end manner.
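A minimal sketch of the sampler-layer idea under stated assumptions (a PyTorch-style module with made-up dimensions; this is not the AutoSAM implementation and it omits the reward-based, end-to-end training):

import torch
import torch.nn as nn

class BehaviorSampler(nn.Module):
    # Scores each historical interaction and draws a subset non-uniformly,
    # so the downstream sequential recommender sees an adaptively skewed
    # view of the raw behavior sequence.
    def __init__(self, item_dim):
        super().__init__()
        self.score = nn.Linear(item_dim, 1)

    def forward(self, history_emb, keep=5):
        logits = self.score(history_emb).squeeze(-1)      # (seq_len,)
        probs = torch.softmax(logits, dim=-1)             # learned, skewed keep-probabilities
        idx = torch.multinomial(probs, num_samples=keep)  # non-uniform subset
        return history_emb[idx], probs

history = torch.randn(20, 32)        # 20 past interactions, 32-dim embeddings
sampler = BehaviorSampler(item_dim=32)
kept, probs = sampler(history, keep=5)
print(kept.shape)                    # torch.Size([5, 32])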
arXiv Detail & Related papers (2023-11-01T09:25:21Z)
- Is Scenario Generation Ready for SOTIF? A Systematic Literature Review [3.1491385041570146]
We perform a Systematic Literature Review to identify techniques that generate scenarios complying with the requirements of the SOTIF standard.
We investigate which details of the real world are covered by generated scenarios, whether scenarios are specific to a system under test or generic, and whether scenarios are designed to minimize the set of unknown and hazardous scenarios.
arXiv Detail & Related papers (2023-08-04T11:59:21Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- An Application of Scenario Exploration to Find New Scenarios for the Development and Testing of Automated Driving Systems in Urban Scenarios [2.480533141352916]
This work aims to find relevant, interesting, or critical parameter sets within logical scenarios by utilizing Bayesian optimization and Gaussian processes.
A list of ideas this work leads to and should be investigated further is presented.
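A minimal sketch of that approach under illustrative assumptions (two made-up logical-scenario parameters, a placeholder criticality function, scikit-learn's Gaussian process, and an upper-confidence-bound acquisition):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Illustrative logical scenario with two open parameters (not from the paper).
BOUNDS = np.array([[20.0, 130.0],   # ego speed [km/h]
                   [5.0, 80.0]])    # distance to crossing pedestrian [m]

def criticality(x):
    # Placeholder for a simulation run returning a criticality score.
    speed, dist = x
    return np.exp(-((speed - 90.0) / 25.0) ** 2 - ((dist - 15.0) / 10.0) ** 2)

def random_points(n, rng):
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n, 2))

rng = np.random.default_rng(1)
X = random_points(5, rng)                    # initial design
y = np.array([criticality(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):                          # Bayesian-optimization loop
    gp.fit(X, y)
    candidates = random_points(500, rng)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.5 * std                   # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    y = np.append(y, criticality(x_next))

print("Most critical parameter set found:", X[np.argmax(y)])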
arXiv Detail & Related papers (2022-05-17T09:47:32Z)
- A Survey on Scenario-Based Testing for Automated Driving Systems in High-Fidelity Simulation [26.10081199009559]
Testing the system on the road is the approach closest to the real world and the most desirable one, but it is incredibly costly.
A popular alternative is to evaluate an ADS's performance in some well-designed challenging scenarios, a.k.a. scenario-based testing.
High-fidelity simulators have been widely used in this setting to maximize flexibility and convenience in testing what-if scenarios.
arXiv Detail & Related papers (2021-12-02T03:41:33Z)
- Addressing the IEEE AV Test Challenge with Scenic and VerifAI [10.221093591444731]
This paper summarizes our formal approach to testing autonomous vehicles (AVs) in simulation for the IEEE AV Test Challenge.
We demonstrate a systematic testing framework leveraging our previous work on formally-driven simulation for intelligent cyber-physical systems.
arXiv Detail & Related papers (2021-08-20T04:51:27Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
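As a simplified illustration of metric-based ranking (a minimum time-to-collision metric on made-up trajectory data; the paper's actual metrics are more involved):

# Rank recorded scenarios by a simple criticality metric: the minimum
# time-to-collision (TTC) observed along each trajectory.
def min_ttc(trajectory):
    # trajectory: list of (gap_m, closing_speed_mps) samples over time
    ttcs = [gap / speed for gap, speed in trajectory if speed > 0]
    return min(ttcs) if ttcs else float("inf")

scenarios = {
    "cut_in_01": [(30.0, 2.0), (18.0, 6.0), (9.0, 6.5)],
    "hard_brake_02": [(25.0, 1.0), (22.0, 1.5), (20.0, 1.2)],
}
ranked = sorted(scenarios, key=lambda name: min_ttc(scenarios[name]))
print(ranked)  # most critical (lowest minimum TTC) first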
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.