Quantifying Assurance in Learning-enabled Systems
- URL: http://arxiv.org/abs/2006.10345v1
- Date: Thu, 18 Jun 2020 08:11:50 GMT
- Title: Quantifying Assurance in Learning-enabled Systems
- Authors: Erfan Asaadi, Ewen Denney, Ganesh Pai
- Abstract summary: Dependability assurance of systems embedding machine learning components is a key step for their use in safety-critical applications.
This paper develops a quantitative notion of assurance that an LES is dependable, as a core component of its assurance case.
We illustrate the utility of assurance measures by application to a real-world autonomous aviation system.
- Score: 3.0938904602244355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dependability assurance of systems embedding machine learning (ML) components, so-called learning-enabled systems (LESs), is a key step for their use in safety-critical applications. In emerging standardization and guidance efforts, there is a growing consensus on the value of using assurance cases for that purpose. This paper develops a quantitative notion of assurance that an LES is dependable, as a core component of its assurance case, extending our prior work that applied to ML components. Specifically, we characterize LES assurance in the form of assurance measures: a probabilistic quantification of confidence that an LES possesses system-level properties associated with functional capabilities and dependability attributes. We illustrate the utility of assurance measures by applying them to a real-world autonomous aviation system, also describing their role in i) guiding high-level, runtime risk mitigation decisions and ii) serving as a core component of the associated dynamic assurance case.
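To make the notion of an assurance measure concrete, the following is a minimal illustrative sketch, not the paper's actual model: it treats an assurance measure as the posterior probability that a system-level property, here a hypothetical success rate of some capability, meets a required level, using a simple Beta-Binomial model. The property, threshold, and observation counts are assumptions introduced purely for illustration.

```python
# Minimal, hypothetical sketch of an "assurance measure" as posterior confidence.
# NOTE: not the paper's implementation; the property, threshold, and counts
# below are illustrative assumptions.
from scipy.stats import beta

# Hypothetical observed outcomes of a system-level capability
# (e.g., successful runs of a function across test scenarios).
successes, failures = 93, 7

# Uniform Beta(1, 1) prior over the true success probability.
posterior = beta(1 + successes, 1 + failures)

# Assumed required level for the property, e.g., "success rate >= 0.90".
required_level = 0.90

# Assurance measure: posterior confidence that the property holds.
assurance = posterior.sf(required_level)  # P(p >= required_level | data)
print(f"Confidence that success rate >= {required_level}: {assurance:.3f}")
```

A higher value of this measure would indicate greater confidence that the property holds given the evidence observed so far; the paper's approach quantifies such confidence at the system level rather than for a single component.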
Related papers
- SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation [56.10557932893919]
We present SafetyAnalyst, a novel LLM safety moderation framework.
Given a prompt, SafetyAnalyst creates a structured "harm-benefit tree" and then aggregates this structured representation into a harmfulness score.
arXiv Detail & Related papers (2024-10-22T03:38:37Z) - Automating Semantic Analysis of System Assurance Cases using Goal-directed ASP [1.2189422792863451]
We present our approach to enhancing Assurance 2.0 with semantic rule-based analysis capabilities.
We examine the unique semantic aspects of assurance cases, such as logical consistency, adequacy, indefeasibility, etc.
arXiv Detail & Related papers (2024-08-21T15:22:43Z) - ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z) - Towards Quantification of Assurance for Learning-enabled Components [3.0938904602244355]
This paper develops a notion of assurance for LECs based on i) identifying the relevant dependability attributes, and ii) quantifying those attributes and the associated uncertainty.
We identify the applicable quantitative measures of assurance, and characterize the associated uncertainty using a non-parametric Bayesian approach.
We additionally discuss the relevance and contribution of LEC assurance to system-level assurance, the generalizability of our approach, and the associated challenges.
arXiv Detail & Related papers (2023-01-21T17:34:05Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z) - Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS) [16.579772998870233]
We introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS).
AMLAS comprises a set of safety case patterns and a process for integrating safety assurance into the development of ML components.
arXiv Detail & Related papers (2021-02-02T15:41:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.