STPA for Learning-Enabled Systems: A Survey and A New Practice
- URL: http://arxiv.org/abs/2302.10588v2
- Date: Mon, 17 Jul 2023 15:56:59 GMT
- Title: STPA for Learning-Enabled Systems: A Survey and A New Practice
- Authors: Yi Qi, Yi Dong, Siddartha Khastgir, Paul Jennings, Xingyu Zhao,
Xiaowei Huang
- Abstract summary: Systems Theoretic Process Analysis (STPA) is a systematic approach for hazard analysis that has been used across many industrial sectors including transportation, energy, and defense.
The trend of using Machine Learning (ML) in safety-critical systems has led to the need to extend STPA to Learning-Enabled Systems (LESs).
We present a systematic survey of 31 papers, summarising them from five perspectives (attributes of concern, objects under study, modifications, derivatives and processes being modelled).
We introduce DeepSTPA, which enhances STPA from two aspects that are missing from the state-of-the-practice.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systems Theoretic Process Analysis (STPA) is a systematic approach for hazard
analysis that has been used across many industrial sectors including
transportation, energy, and defense. The unstoppable trend of using Machine
Learning (ML) in safety-critical systems has led to the pressing need of
extending STPA to Learning-Enabled Systems (LESs). Although works have been
carried out on various example LESs, without a systematic review, it is unclear
how effective and generalisable the extended STPA methods are, and whether
further improvements can be made. To this end, we present a systematic survey
of 31 papers, summarising them from five perspectives (attributes of concern,
objects under study, modifications, derivatives and processes being modelled).
Furthermore, we identify room for improvement and accordingly introduce
DeepSTPA, which enhances STPA from two aspects that are missing from the
state-of-the-practice: (i) Control loop structures are explicitly extended to
identify hazards from the data-driven development process spanning the ML
lifecycle; (ii) Fine-grained functionalities are modelled at the layer-wise
levels of ML models to detect root causes. We demonstrate and compare DeepSTPA
and STPA through a case study on an autonomous emergency braking system.
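The abstract describes DeepSTPA's two enhancements only at a high level. As a purely illustrative sketch, assuming names, lifecycle stages, and AEB components that are not the paper's actual artefacts, the first enhancement can be pictured as a control structure graph that includes ML-lifecycle stages alongside the runtime control loop, so hazard analysis can traverse the data-driven development process as well:

```python
# Hypothetical sketch of an STPA control structure extended to the ML lifecycle,
# in the spirit of DeepSTPA's first enhancement. Stage names, node kinds, and the
# AEB components below are illustrative assumptions, not the paper's artefacts.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # "controller", "actuator", "sensor", "process", or "ml_lifecycle"

@dataclass
class Edge:
    source: str
    target: str
    label: str  # control action or feedback signal

@dataclass
class ControlStructure:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add(self, node: Node):
        self.nodes[node.name] = node

    def connect(self, source: str, target: str, label: str):
        self.edges.append(Edge(source, target, label))

    def development_hazard_points(self):
        """Edges touching ML-lifecycle nodes: candidate places to look for
        hazards introduced by the data-driven development process."""
        lifecycle = {n.name for n in self.nodes.values() if n.kind == "ml_lifecycle"}
        return [e for e in self.edges if e.source in lifecycle or e.target in lifecycle]

# Runtime loop of an autonomous emergency braking (AEB) system ...
aeb = ControlStructure()
aeb.add(Node("perception_dnn", "controller"))
aeb.add(Node("brake_actuator", "actuator"))
aeb.add(Node("camera", "sensor"))
aeb.add(Node("vehicle", "process"))
# ... extended with ML-lifecycle stages feeding the learned controller.
for stage in ["data_collection", "labelling", "training", "verification", "deployment"]:
    aeb.add(Node(stage, "ml_lifecycle"))
aeb.connect("perception_dnn", "brake_actuator", "brake command")
aeb.connect("camera", "perception_dnn", "image stream")
aeb.connect("training", "perception_dnn", "trained weights")
aeb.connect("data_collection", "training", "training set")
print(aeb.development_hazard_points())  # edges worth analysing for development-process hazards
```

The second enhancement, layer-wise modelling of the ML model's functionality, would refine the single "perception_dnn" node into per-layer sub-functions to localise root causes; it is not sketched here.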
Related papers
- From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
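Gradient-ascent unlearning, as referenced in the entry above, fine-tunes the model so that its loss on the data to be forgotten *increases*. A minimal PyTorch sketch of one such update follows; the loss cap is an illustrative guard against excessive unlearning and not the controlling methods proposed in that paper:

```python
# Minimal sketch of gradient-ascent (GA) unlearning: take gradient steps that
# increase the loss on the forget set. The loss cap is an assumed, illustrative
# safeguard against over-unlearning, not the paper's specific mechanism.
import torch
import torch.nn.functional as F

def ga_unlearning_step(model, optimizer, batch, loss_cap=5.0):
    inputs, targets = batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    if loss.item() >= loss_cap:      # already "forgotten enough" on this batch
        return loss.item()
    (-loss).backward()               # ascend the loss instead of descending it
    optimizer.step()
    return loss.item()

# Usage: iterate over the forget set for a few epochs.
# for batch in forget_loader:
#     ga_unlearning_step(model, optimizer, batch)
```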
- Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection [0.0]
We consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model.
We establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection.
arXiv Detail & Related papers (2024-03-12T11:38:45Z)
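The certification framework above pairs inherently safe design with run-time error detection. One generic form of run-time error detection, sketched below as an assumption rather than that paper's mechanism, gates the network's output on a confidence test and falls back to a conservative default when the test fails:

```python
# Generic run-time error detection: accept the network's decision only when a
# confidence test passes, otherwise fall back to a conservative default.
# The softmax-confidence test and threshold are illustrative choices.
import torch
import torch.nn.functional as F

SAFE_FALLBACK = -1  # e.g. "hand control to a certified conventional subsystem"

@torch.no_grad()
def guarded_predict(model, x, threshold=0.9):
    probs = F.softmax(model(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    # Reject low-confidence outputs instead of acting on them.
    return torch.where(conf >= threshold, pred, torch.full_like(pred, SAFE_FALLBACK))
```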
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
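The Adversarial Rate metric above classifies models by their susceptibility to perturbations, and the paper computes it through formal verification. The sketch below only estimates a sampled, empirical analogue, counting how often a bounded perturbation changes the policy's action; the sampling scheme and names are assumptions:

```python
# Empirical stand-in for an "adversarial rate": the fraction of states for which
# some sampled perturbation within an L-infinity ball flips the policy's action.
# Random sampling only lower-bounds susceptibility; the paper uses formal verification.
import torch

@torch.no_grad()
def empirical_adversarial_rate(policy, states, epsilon=0.05, samples=64):
    flipped = 0
    for s in states:                                   # s: 1-D state tensor
        base = policy(s.unsqueeze(0)).argmax(dim=-1)
        for _ in range(samples):
            delta = (torch.rand_like(s) * 2 - 1) * epsilon
            if policy((s + delta).unsqueeze(0)).argmax(dim=-1) != base:
                flipped += 1
                break
        # if no sampled perturbation flips the action, this state counts as robust
    return flipped / len(states)
```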
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
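The comparative study above probes robustness to sensor faults and noise and generalisation to OOD samples. A minimal sketch of the kind of perturbation test involved, with additive Gaussian sensor noise as an assumed example perturbation:

```python
# Compare a model's accuracy on clean inputs against inputs corrupted by additive
# Gaussian "sensor noise" -- one simple instance of the perturbations such a
# robustness study sweeps over. The noise model is an illustrative assumption.
import torch

@torch.no_grad()
def accuracy_under_noise(model, inputs, labels, sigma=0.0):
    noisy = inputs + sigma * torch.randn_like(inputs)
    preds = model(noisy).argmax(dim=-1)
    return (preds == labels).float().mean().item()

# Usage: sweep noise levels and watch the accuracy degrade.
# for sigma in [0.0, 0.05, 0.1, 0.2]:
#     print(sigma, accuracy_under_noise(model, X_test, y_test, sigma))
```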
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- A Review of Machine Learning Methods Applied to Structural Dynamics and Vibroacoustic [0.0]
Three main applications in Structural Dynamics and Vibroacoustic (SD&V) have taken advantage of Machine Learning (ML).
In Structural Health Monitoring, ML detection and prognosis lead to safe operation and optimized maintenance schedules.
System identification and control design are leveraged by ML techniques in Active Noise Control and Active Vibration Control.
The so-called ML-based surrogate models provide fast alternatives to costly simulations, enabling robust and optimized product design.
arXiv Detail & Related papers (2022-04-13T13:16:21Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Safe-Critical Modular Deep Reinforcement Learning with Temporal Logic through Gaussian Processes and Control Barrier Functions [3.5897534810405403]
Reinforcement learning (RL) is a promising approach but has had limited success in real-world applications.
In this paper, we propose a learning-based control framework consisting of several aspects.
We show that such an ECBF-based modular deep RL algorithm achieves near-perfect success rates and guards safety with high probability.
arXiv Detail & Related papers (2021-09-07T00:51:12Z)
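The modular deep RL framework above guards safety with (exponential) control barrier functions. As a heavily simplified, discrete-time illustration, not the paper's ECBF/Gaussian-process formulation, a safety filter can override the RL action whenever the candidate action would violate a barrier condition of the form h(x') >= (1 - alpha) * h(x):

```python
# Simplified discrete-time control-barrier-function (CBF) safety filter.
# h(x) >= 0 defines the safe set; an action is admissible if the next state keeps
# h(x') >= (1 - alpha) * h(x). Toy 1-D dynamics and barrier are illustrative
# assumptions, not the paper's ECBF formulation.
import numpy as np

def h(x):
    return 1.0 - abs(x)          # safe set: |x| <= 1

def step(x, u, dt=0.1):
    return x + dt * u            # toy single-integrator dynamics

def cbf_filter(x, u_rl, alpha=0.5, candidates=np.linspace(-3.0, 3.0, 61)):
    """Return the RL action if it is admissible; otherwise the admissible action closest to it."""
    def admissible(u):
        return h(step(x, u)) >= (1 - alpha) * h(x)
    if admissible(u_rl):
        return u_rl
    safe = [u for u in candidates if admissible(u)]
    # No admissible candidate: hold still as a last-resort fallback (illustrative).
    return min(safe, key=lambda u: abs(u - u_rl)) if safe else 0.0

print(cbf_filter(x=0.8, u_rl=3.0))  # aggressive action is overridden by a smaller safe one
```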
- Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
arXiv Detail & Related papers (2020-06-18T04:54:11Z)
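The last entry studies systems whose transitions are linear in a known feature embedding, roughly x_{t+1} = A* phi(x_t, u_t) + noise, and alternates trajectory planning, tracking, and re-estimation. The re-estimation step reduces to linear regression on the collected transitions; a minimal sketch follows, where the feature map and dimensions are assumptions:

```python
# Re-estimation step for a system of the form x_{t+1} = A_star @ phi(x_t, u_t) + noise:
# ordinary least squares on all transitions collected so far. The feature map and
# dimensions are illustrative assumptions, not the paper's setup.
import numpy as np

def phi(x, u):
    """Known feature embedding of a state-action pair (example choice)."""
    return np.concatenate([x, u, np.sin(x), [1.0]])

def estimate_dynamics(states, actions, next_states):
    """Least-squares estimate of A in x' ~= A @ phi(x, u) from logged transitions."""
    Phi = np.stack([phi(x, u) for x, u in zip(states, actions)])   # (T, d_phi)
    X_next = np.stack(next_states)                                 # (T, d_x)
    A_hat, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)           # (d_phi, d_x)
    return A_hat.T                                                 # (d_x, d_phi)

# In the active-learning loop, this estimate is refreshed after every new trajectory
# ("trajectory planning -> trajectory tracking -> re-estimation").
```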
This list is automatically generated from the titles and abstracts of the papers in this site.