Simulation-based Safety Assurance for an AVP System incorporating
Learning-Enabled Components
- URL: http://arxiv.org/abs/2311.03362v1
- Date: Thu, 28 Sep 2023 09:00:31 GMT
- Title: Simulation-based Safety Assurance for an AVP System incorporating
Learning-Enabled Components
- Authors: Hasan Esen, Brian Hsuan-Cheng Liao
- Abstract summary: Testing, verification, and validation of safety-critical AD/ADAS applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
- Score: 0.6526824510982802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There have been major developments in Automated Driving (AD) and Driving
Assist Systems (ADAS) in recent years. However, their safety assurance, and thus
the methodologies for testing, verification, and validation of safety-critical
AD/ADAS applications, remains one of the main challenges. Inevitably, AI is also
penetrating AD/ADAS applications, for example in object detection. Despite its
important benefits, the adoption of such learning-enabled components and systems
in safety-critical scenarios means that conventional testing approaches (e.g.,
distance-based testing in automotive) quickly become infeasible. Similarly,
safety engineering approaches usually assume model-based components and do not
handle learning-enabled ones well. The authors have participated in the publicly
funded project FOCETA and developed an Automated Valet Parking (AVP) use case.
As the baseline implementation is imperfect by nature, it offers room for
continuous improvement based on modelling, verification, validation, and
monitoring techniques. In this publication, we explain the simulation-based
development platform that is designed to verify and validate safety-critical
learning-enabled systems in continuous engineering loops.
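To make the continuous engineering loop concrete, here is a minimal sketch of one iteration: simulate parking scenarios in closed loop, monitor a safety property at runtime, and collect failing cases to feed the next improvement cycle. All names and numbers (run_scenario, safety_monitor, the clearance margin) are illustrative assumptions; the abstract does not describe the FOCETA platform's actual interfaces.

```python
# Illustrative sketch of a simulation-based verify-validate-improve loop
# for an AVP system. Every interface here is a hypothetical placeholder,
# not the FOCETA platform's API.
from dataclasses import dataclass
import random

@dataclass
class ScenarioResult:
    min_distance_m: float  # closest approach to any obstacle during the run
    collision: bool

def run_scenario(seed: int) -> ScenarioResult:
    """Stand-in for a closed-loop simulation of the AVP stack."""
    random.seed(seed)
    d = random.uniform(0.0, 5.0)
    return ScenarioResult(min_distance_m=d, collision=d < 0.2)

def safety_monitor(res: ScenarioResult, margin_m: float = 0.5) -> bool:
    """Runtime check of the safety property: keep a minimum clearance."""
    return (not res.collision) and res.min_distance_m >= margin_m

def engineering_loop(n_scenarios: int = 1000) -> list:
    """One loop iteration: simulate, monitor, and collect failing seeds,
    which would feed back into retraining / redesign of the components."""
    return [s for s in range(n_scenarios)
            if not safety_monitor(run_scenario(s))]

if __name__ == "__main__":
    failures = engineering_loop()
    print(f"{len(failures)} of 1000 scenarios violated the safety property")
```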
Related papers
- On STPA for Distributed Development of Safe Autonomous Driving: An Interview Study [0.7851536646859475]
System-Theoretic Process Analysis (STPA) is a novel method applied in safety-related fields like defense and aerospace.
STPA assumes prerequisites that are not fully valid in automotive system engineering, which involves distributed system development and multiple abstraction levels in design.
This can be seen as a maintainability challenge in continuous development and deployment.
arXiv Detail & Related papers (2024-03-14T15:56:02Z)
- Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
arXiv Detail & Related papers (2023-11-13T14:56:14Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security. (A toy sketch of such a classifier appears after this list.)
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy. (A minimal CBF safety-filter sketch appears after this list.)
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- An Empirical Analysis of the Use of Real-Time Reachability for the Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm for the implementation of the simplex architecture to assure the safety of a 1/10 scale open source autonomous vehicle platform.
Our approach abstracts away the need to analyze the underlying controller, focusing instead on the effects of the controller's decisions on the system's future states. (A toy simplex-switch sketch appears after this list.)
arXiv Detail & Related papers (2022-05-03T11:12:29Z)
- Complete Agent-driven Model-based System Testing for Autonomous Systems [0.0]
A novel approach to testing complex autonomous transportation systems is described.
It is intended to mitigate some of the most critical problems regarding verification and validation.
arXiv Detail & Related papers (2021-10-25T01:55:24Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to those of formal verifiers on standard benchmarks.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications. (An interval-propagation sketch appears after this list.)
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems [30.638615396429536]
Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS.
We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling.
A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems.
arXiv Detail & Related papers (2020-05-06T17:31:51Z)
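For the behavioral driver-authentication entry, a toy sketch of a Random-Forest classifier over windowed driving features, trained on synthetic stand-in data; the paper's actual features, datasets, and the SMARTCAN/GANCAN attacks are not reproduced here, and every value below is an assumption for illustration.

```python
# Toy behavioral driver-authentication sketch: classify which driver
# produced a window of driving features (e.g. mean speed, brake pressure).
# Data is synthetic; real systems would extract features from CAN traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 200 windows x 5 features; driver 1's feature means are shifted by 0.7.
X = rng.normal(size=(200, 5)) + np.repeat([[0.0], [0.7]], 100, axis=0)
y = np.repeat([0, 1], 100)  # driver identity label per window

# Train on even-indexed windows, evaluate on the held-out odd-indexed ones.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```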
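For the control-barrier-function entry, a minimal sketch of the pointwise CBF condition used as a safety filter: a single integrator x' = u avoiding a circular obstacle. The paper's model-uncertainty-aware reformulation and event-triggered data collection go well beyond this; the dynamics, gain, and geometry here are assumptions.

```python
# Minimal CBF safety filter for x' = u with barrier h(x) = |x - x_obs|^2 - r^2
# (h >= 0 is the safe set). Enforces grad_h(x) . u >= -alpha * h(x) by
# projecting the nominal input onto the half-space when it violates it.
import numpy as np

def cbf_filter(x, u_des, x_obs, r, alpha=1.0):
    h = float(np.dot(x - x_obs, x - x_obs) - r**2)
    a = 2.0 * (x - x_obs)   # gradient of h; assumes x != x_obs
    b = -alpha * h          # safety condition: a . u >= b
    if a @ u_des >= b:
        return u_des        # nominal input is already safe
    # Closed-form solution of min ||u - u_des||^2  s.t.  a . u >= b:
    return u_des + ((b - a @ u_des) / (a @ a)) * a

x = np.array([1.0, 0.0])    # robot just outside an obstacle of radius 0.9
u = cbf_filter(x, u_des=np.array([-1.0, 0.0]), x_obs=np.zeros(2), r=0.9)
print(u)                    # unsafe component of the nominal input is removed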
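For the real-time reachability entry, a toy sketch of the simplex idea: a short-horizon worst-case reachability check guards the advanced (possibly learning-enabled) controller and falls back to a safe braking action otherwise. The 1-D vehicle model and all numbers are illustrative assumptions, not the paper's algorithm.

```python
# Simplex-style switch guarded by a worst-case reachability check for a
# 1-D vehicle approaching a wall. All parameters are illustrative.
def reachable_max_position(x, v, a_max, horizon, brake_decel):
    """Worst case: accelerate at a_max for `horizon`, then brake to a stop."""
    v_end = v + a_max * horizon
    x_end = x + v * horizon + 0.5 * a_max * horizon**2
    return x_end + v_end**2 / (2.0 * brake_decel)  # add final braking distance

def simplex_step(x, v, wall_position, u_advanced, u_safe_brake):
    """Use the advanced controller only if the worst-case reachable set
    stays clear of the wall; otherwise switch to the verified fallback."""
    if reachable_max_position(x, v, a_max=2.0, horizon=0.5,
                              brake_decel=6.0) < wall_position:
        return u_advanced
    return u_safe_brake

print(simplex_step(x=0.0, v=5.0, wall_position=10.0,
                   u_advanced=+1.5, u_safe_brake=-6.0))
```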
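For the semi-formal verification entry, a minimal interval-analysis sketch: propagate a box of inputs through one linear + ReLU layer to obtain guaranteed bounds on a decision network's outputs. The weights and bounds are arbitrary assumptions; the paper's full method for decision-making tasks is not reproduced.

```python
# Interval bound propagation through W @ x + b followed by ReLU.
# A safety property verified on the output box holds for every input in it.
import numpy as np

def interval_linear(lo, hi, W, b):
    """Tight elementwise bounds of W @ x + b for x in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

W = np.array([[1.0, -2.0], [0.5, 0.5]])   # arbitrary example weights
b = np.array([0.1, -0.1])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input box

out_lo, out_hi = interval_linear(lo, hi, W, b)
out_lo, out_hi = np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)  # ReLU is monotone
print(out_lo, out_hi)  # guaranteed output bounds for the whole input box
```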
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.