Search-based Test-Case Generation by Monitoring Responsibility Safety Rules
- URL: http://arxiv.org/abs/2005.00326v1
- Date: Sat, 25 Apr 2020 10:10:11 GMT
- Title: Search-based Test-Case Generation by Monitoring Responsibility Safety Rules
- Authors: Mohammad Hekmatnejad, Bardh Hoxha and Georgios Fainekos
- Abstract summary: We propose a method for screening and classifying simulation-based driving test data to be used for training and testing controllers.
Our framework is distributed with the publicly available S-TALIRO and Sim-ATAV tools.
- Score: 2.1270496914042996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The safety of Automated Vehicles (AV) as Cyber-Physical Systems (CPS) depends
on the safety of their constituent modules (software and hardware) and on their
rigorous integration. Deep Learning is one of the dominant techniques used for
perception, prediction, and decision making in AVs. The accuracy of predictions
and decision-making is highly dependent on the tests used for training the
underlying deep-learning models. In this work, we propose a method for screening and
classifying simulation-based driving test data to be used for training and
testing controllers. Our method is based on monitoring and falsification
techniques, which lead to a systematic, automated procedure for generating and
selecting qualified test data. We used Responsibility Sensitive Safety (RSS)
rules as our qualifier specifications to filter out the random tests that do
not satisfy the RSS assumptions. The remaining tests therefore cover driving
scenarios in which the controlled vehicle does not respond safely to its
environment. Our framework is distributed with the publicly available S-TALIRO
and Sim-ATAV tools.
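As a rough illustration of the qualifier idea (not the paper's actual implementation, which builds on the monitoring machinery of S-TALIRO and Sim-ATAV), the sketch below checks simulated car-following traces against the RSS minimum safe longitudinal distance and keeps only the traces in which the ego vehicle violates it. The trace format, parameter values, and function names are illustrative assumptions.

```python
# Minimal sketch: screen simulated driving traces with the RSS longitudinal
# safe-distance rule, keeping only traces where the ego vehicle gets closer
# to its lead vehicle than the rule allows (candidate "unsafe response" tests).
# All parameter values and the trace layout are assumptions for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    gap: float     # longitudinal gap to the lead vehicle [m]
    v_ego: float   # ego (rear) vehicle speed [m/s]
    v_lead: float  # lead (front) vehicle speed [m/s]


# Illustrative RSS parameters (not taken from the paper)
RHO = 0.5          # ego response time [s]
A_ACCEL_MAX = 3.0  # max ego acceleration during the response time [m/s^2]
B_BRAKE_MIN = 4.0  # minimum braking deceleration the ego guarantees [m/s^2]
B_BRAKE_MAX = 8.0  # maximum braking deceleration assumed for the lead [m/s^2]


def rss_safe_longitudinal_distance(v_rear: float, v_front: float) -> float:
    """Minimum safe gap per the RSS longitudinal rule for same-direction traffic."""
    d = (v_rear * RHO
         + 0.5 * A_ACCEL_MAX * RHO ** 2
         + (v_rear + RHO * A_ACCEL_MAX) ** 2 / (2 * B_BRAKE_MIN)
         - v_front ** 2 / (2 * B_BRAKE_MAX))
    return max(0.0, d)


def violates_rss(trace: List[Sample]) -> bool:
    """True if the ego gap drops below the RSS safe distance at any time step."""
    return any(s.gap < rss_safe_longitudinal_distance(s.v_ego, s.v_lead)
               for s in trace)


def screen_tests(traces: List[List[Sample]]) -> List[List[Sample]]:
    """Keep only the traces in which the ego responds unsafely (rule violated)."""
    return [t for t in traces if violates_rss(t)]
```

In the paper's framework this role is played by formal monitors for the RSS rules evaluated over full simulation traces, combined with falsification-driven search; the constants above would instead come from the RSS parameters assumed for the scenario under test.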
Related papers
- Automated System-level Testing of Unmanned Aerial Systems [2.2249176072603634]
A major requirement of international safety standards is to perform rigorous system-level testing of avionics software systems.
The proposed approach (AITester) utilizes model-based testing and artificial intelligence (AI) techniques to automatically generate, execute, and evaluate various test scenarios.
arXiv Detail & Related papers (2024-03-23T14:47:26Z)
- Simulation-based Safety Assurance for an AVP System incorporating Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of AD/ADAS safety-critical applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z)
- Identifying and Explaining Safety-critical Scenarios for Autonomous Vehicles via Key Features [5.634825161148484]
This paper uses Instance Space Analysis (ISA) to identify the significant features of test scenarios that affect their ability to reveal the unsafe behaviour of AVs.
ISA identifies the features that best differentiate safety-critical scenarios from normal driving and visualises the impact of these features on test scenario outcomes (safe/unsafe) in 2D.
To test the predictive ability of the identified features, we train five Machine Learning classifiers to classify test scenarios as safe or unsafe.
arXiv Detail & Related papers (2022-12-15T00:52:47Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Towards Automated Safety Coverage and Testing for Autonomous Vehicles with Reinforcement Learning [0.3683202928838613]
Validation puts the autonomous vehicle system to the test in scenarios or situations that the system would likely encounter in everyday driving.
We propose using reinforcement learning (RL) to generate failure examples and unexpected traffic situations for the AV software implementation.
arXiv Detail & Related papers (2020-05-22T19:00:38Z)
- Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles seems imminent despite many safety challenges that are yet to be resolved.
Existing standards focus on deterministic processes where the validation requires only a set of test cases that cover the requirements.
This paper presents a new approach to compute the statistical characteristics of a system's behaviour by biasing automatically generated test cases towards the worst case scenarios.
arXiv Detail & Related papers (2020-03-04T04:35:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.