Towards Quantification of Assurance for Learning-enabled Components
- URL: http://arxiv.org/abs/2301.08980v1
- Date: Sat, 21 Jan 2023 17:34:05 GMT
- Title: Towards Quantification of Assurance for Learning-enabled Components
- Authors: Erfan Asaadi and Ewen Denney and Ganesh Pai
- Abstract summary: This paper develops a notion of assurance for LECs based on i) identifying the relevant dependability attributes, and ii) quantifying those attributes and the associated uncertainty.
We identify the applicable quantitative measures of assurance, and characterize the associated uncertainty using a non-parametric Bayesian approach.
We additionally discuss the relevance and contribution of LEC assurance to system-level assurance, the generalizability of our approach, and the associated challenges.
- Score: 3.0938904602244355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perception, localization, planning, and control, high-level functions often
organized in a so-called pipeline, are amongst the core building blocks of
modern autonomous (ground, air, and underwater) vehicle architectures. These
functions are increasingly being implemented using learning-enabled components
(LECs), i.e., (software) components leveraging knowledge acquisition and
learning processes such as deep learning. Providing quantified component-level
assurance as part of a wider (dynamic) assurance case can be useful in
supporting both pre-operational approval of LECs (e.g., by regulators), and
runtime hazard mitigation, e.g., using assurance-based failover configurations.
This paper develops a notion of assurance for LECs based on i) identifying the
relevant dependability attributes, and ii) quantifying those attributes and the
associated uncertainty, using probabilistic techniques. We give a practical
grounding for our work using an example from the aviation domain: an autonomous
taxiing capability for an unmanned aircraft system (UAS), focusing on the
application of LECs as sensors in the perception function. We identify the
applicable quantitative measures of assurance, and characterize the associated
uncertainty using a non-parametric Bayesian approach, namely Gaussian process
regression. We additionally discuss the relevance and contribution of LEC
assurance to system-level assurance, the generalizability of our approach, and
the associated challenges.
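As a rough illustration of the non-parametric Bayesian step the abstract describes, the sketch below fits a Gaussian process regressor to hypothetical observations of a quantitative assurance measure and reports the posterior mean with an uncertainty band. It is only a minimal sketch under assumed data, an assumed operating-condition feature, and an assumed kernel choice, using scikit-learn rather than the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of characterizing an
# assurance measure and its uncertainty with Gaussian process regression,
# as the abstract describes. The data, the operating-condition feature, and
# the kernel choice are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical observations: an operating condition (e.g., position along a
# taxiway, in metres) and the measured value of a quantitative assurance
# measure for the perception LEC (e.g., accuracy on held-out runs).
x_obs = np.array([[10.0], [50.0], [120.0], [200.0], [310.0], [400.0]])
y_obs = np.array([0.97, 0.95, 0.91, 0.93, 0.88, 0.90])

# Non-parametric Bayesian model: a GP with a smooth RBF kernel plus a white
# noise term for observation noise.
kernel = 1.0 * RBF(length_scale=100.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_obs, y_obs)

# The posterior mean and standard deviation give the assurance estimate and
# its uncertainty at unobserved operating conditions.
x_query = np.linspace(0.0, 450.0, 10).reshape(-1, 1)
mean, std = gp.predict(x_query, return_std=True)
for x, m, s in zip(x_query.ravel(), mean, std):
    print(f"condition={x:6.1f}  assurance~{m:.3f}  band=[{m - 2 * s:.3f}, {m + 2 * s:.3f}]")
```

A posterior band of roughly two standard deviations is one way such a quantified measure could feed a dynamic assurance case or an assurance-based failover decision of the kind the abstract mentions; the actual measures and models used in the paper may differ.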
Related papers
- ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Improving decision-making via risk-based active learning: Probabilistic discriminative classifiers [0.0]
Descriptive labels for measured data corresponding to health-states of monitored systems are often unavailable.
One approach to dealing with this problem is risk-based active learning.
The current paper demonstrates several advantages of using an alternative type of classifier -- discriminative models.
arXiv Detail & Related papers (2022-06-23T10:51:42Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Curriculum Learning for Safe Mapless Navigation [71.55718344087657]
This work investigates the effects of Curriculum Learning (CL)-based approaches on the agent's performance.
In particular, we focus on the safety aspect of robotic mapless navigation, comparing CL-based approaches against a standard end-to-end (E2E) training strategy.
arXiv Detail & Related papers (2021-12-23T12:30:36Z)
- Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Quantifying Assurance in Learning-enabled Systems [3.0938904602244355]
Dependability assurance of systems embedding machine learning components is a key step for their use in safety-critical applications.
This paper develops a quantitative notion of assurance that an LES is dependable, as a core component of its assurance case.
We illustrate the utility of assurance measures by application to a real world autonomous aviation system.
arXiv Detail & Related papers (2020-06-18T08:11:50Z)
- Probabilistic Guarantees for Safe Deep Reinforcement Learning [6.85316573653194]
Deep reinforcement learning has been successfully applied to many control tasks, but the application of such agents in safety-critical scenarios has been limited due to safety concerns.
We propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning agents in such settings.
arXiv Detail & Related papers (2020-05-14T15:42:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.