STEAM & MoSAFE: SOTIF Error-and-Failure Model & Analysis for AI-Enabled
Driving Automation
- URL: http://arxiv.org/abs/2312.09559v2
- Date: Tue, 9 Jan 2024 03:46:45 GMT
- Title: STEAM & MoSAFE: SOTIF Error-and-Failure Model & Analysis for AI-Enabled
Driving Automation
- Authors: Krzysztof Czarnecki and Hiroshi Kuwajima
- Abstract summary: This paper defines the SOTIF Temporal Error and Failure Model (STEAM) as a refinement of the SOTIF cause-and-effect model.
It also proposes the Model-based SOTIF Analysis of Failures and Errors (MoSAFE) method, which allows STEAM to be instantiated from system-design models.
- Score: 4.820785104084241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driving Automation Systems (DAS) are subject to complex road environments and
vehicle behaviors and increasingly rely on sophisticated sensors and Artificial
Intelligence (AI). These properties give rise to unique safety faults stemming
from specification insufficiencies and technological performance limitations,
where sensors and AI introduce errors that vary in magnitude and temporal
patterns, posing potential safety risks. The Safety of the Intended
Functionality (SOTIF) standard emerges as a promising framework for addressing
these concerns, focusing on scenario-based analysis to identify hazardous
behaviors and their causes. Although the current standard provides a basic
cause-and-effect model and high-level process guidance, it lacks concepts
required to identify and evaluate hazardous errors, especially within the
context of AI.
This paper introduces two key contributions to bridge this gap. First, it
defines the SOTIF Temporal Error and Failure Model (STEAM) as a refinement of
the SOTIF cause-and-effect model, offering a comprehensive system-design
perspective. STEAM refines error definitions, introduces error sequences, and
classifies them as error sequence patterns, providing particular relevance to
systems employing advanced sensors and AI. Second, this paper proposes the
Model-based SOTIF Analysis of Failures and Errors (MoSAFE) method, which allows
instantiating STEAM based on system-design models by deriving hazardous error
sequence patterns at module level from hazardous behaviors at vehicle level via
weakest precondition reasoning. Finally, the paper presents a case study
centered on an automated speed-control feature, illustrating the practical
applicability of the refined model and the MoSAFE method in addressing complex
safety challenges in DAS.
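To make the weakest-precondition step concrete, the following is a minimal, hypothetical illustration for a speed-control feature; the braking rule, the threshold d_brake, and the duration tau are assumptions introduced for exposition and are not taken from the paper.

```latex
% Hypothetical illustration of the MoSAFE weakest-precondition step (the braking rule,
% the threshold d_brake, and the duration tau are assumptions, not from the paper).
% Vehicle-level hazardous behavior: no deceleration while a slower lead vehicle stays
% within the braking distance over an interval of length tau:
\[
B_H \;\equiv\; \forall t \in [t_0, t_0 + \tau]:\;
a_{\mathrm{cmd}}(t) = 0 \;\wedge\; d(t) \le d_{\mathrm{brake}} .
\]
% Assumed module-level design: the controller brakes iff the estimated gap \hat{d}(t)
% drops below the threshold, i.e., a_cmd(t) = 0 iff \hat{d}(t) > d_brake. Pushing B_H
% backwards through this rule yields the weakest precondition on the perception output,
% i.e., the hazardous error-sequence pattern at the module level:
\[
\mathrm{wp}(B_H) \;\equiv\; \forall t \in [t_0, t_0 + \tau]:\;
e_d(t) \;=\; \hat{d}(t) - d(t) \;>\; d_{\mathrm{brake}} - d(t) \;\ge\; 0 ,
\]
% a sustained overestimation of the gap; transient errors shorter than tau do not satisfy
% the temporal condition and are therefore not hazardous under this design.
```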
Related papers
- From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted the analysis on three case studies.
We find that the key concepts and steps of conducting the analysis readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - High-Dimensional Fault Tolerance Testing of Highly Automated Vehicles Based on Low-Rank Models [39.139025989575686]
Fault Injection (FI) testing is conducted to evaluate the safety level of HAVs.
To fully cover test cases, various driving scenarios and fault settings should be considered.
We propose to accelerate FI testing under the low-rank Smoothness Regularized Matrix Factorization framework.
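As a rough, hypothetical sketch of the underlying idea (a plain low-rank factorization, not the paper's smoothness-regularized formulation), a sparsely executed scenario-by-fault outcome matrix can be completed and the untested combinations prioritized:

```python
# Illustrative only: complete a scenario-by-fault test-outcome matrix from a sparse
# sample of executed fault-injection (FI) runs via low-rank matrix factorization.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_faults, rank = 40, 30, 4

# Outcome matrix: 1 = safety violation, 0 = pass; NaN = (scenario, fault) pair not yet run.
observed = np.full((n_scenarios, n_faults), np.nan)
run_idx = rng.choice(n_scenarios * n_faults, size=200, replace=False)   # sparse FI sample
observed.flat[run_idx] = rng.integers(0, 2, size=200).astype(float)

mask = ~np.isnan(observed)
target = np.nan_to_num(observed)

U = rng.normal(scale=0.1, size=(n_scenarios, rank))
V = rng.normal(scale=0.1, size=(n_faults, rank))
lr, lam = 0.05, 0.01
for _ in range(500):                         # gradient descent on the observed entries only
    residual = mask * (U @ V.T - target)
    U -= lr * (residual @ V + lam * U)
    V -= lr * (residual.T @ U + lam * V)

predicted = U @ V.T                          # estimated violation likelihood for every pair
candidates = np.argwhere(~mask)              # untested (scenario, fault) pairs
ranked = candidates[np.argsort(-predicted[~mask])]   # run the riskiest untested cases first
```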
arXiv Detail & Related papers (2024-07-28T14:27:13Z) - PVF (Parameter Vulnerability Factor): A Scalable Metric for Understanding AI Vulnerability Against SDCs in Model Parameters [7.652441604508354]
Parameter Vulnerability Factor (PVF) is a metric that aims to standardize the quantification of AI model vulnerability against parameter corruptions.
PVF can provide pivotal insights to AI hardware designers in balancing the tradeoff between fault protection and performance/efficiency.
We present several use cases of applying PVF to three types of tasks/models during inference: recommendation (DLRM), vision classification (CNN), and text classification (BERT).
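The gist can be sketched as a Monte Carlo estimate; the paper's exact PVF definition and fault model may differ, and `forward` plus the single-bit-flip model below are illustrative assumptions:

```python
# Hypothetical PVF-style estimate: flip one random bit in one random parameter, rerun
# inference, and count how often the output changes (the published PVF formulation may differ).
import numpy as np

def pvf_estimate(weights, forward, inputs, n_trials=1000, seed=0):
    """`weights` is assumed to be a contiguous float32 array; `forward(weights, inputs)` runs the model."""
    rng = np.random.default_rng(seed)
    baseline = forward(weights, inputs)
    flat = weights.ravel()                           # view into the parameters
    mismatches = 0
    for _ in range(n_trials):
        idx = int(rng.integers(flat.size))
        saved = flat[idx]
        bits = flat[idx:idx + 1].view(np.uint32)     # reinterpret the float32 bits
        bits ^= np.uint32(1 << int(rng.integers(32)))  # inject a single bit flip (an SDC stand-in)
        if not np.array_equal(forward(weights, inputs), baseline):
            mismatches += 1
        flat[idx] = saved                            # restore the corrupted parameter
    return mismatches / n_trials                     # higher value = more vulnerable parameters
```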
arXiv Detail & Related papers (2024-05-02T21:23:34Z) - Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning [9.100418852199082]
We propose a novel framework based on unsupervised machine learning for early anomaly detection in AMS circuits.
The proposed approach involves injecting anomalies at various circuit locations and individual components to create a diverse and comprehensive anomaly dataset.
By monitoring the system behavior under these anomalous conditions, we capture the propagation of anomalies and their effects at different abstraction levels.
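A minimal sketch of this flavor of approach, assuming scikit-learn and made-up rail-voltage features (the paper's feature extraction and model are not reproduced here):

```python
# Illustrative sketch only: fit an unsupervised detector on nominal AMS signal features
# and flag deviations observed under injected anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
nominal = rng.normal(loc=[1.2, 3.3, 0.5], scale=0.02, size=(500, 3))   # e.g. rail voltages
detector = IsolationForest(contamination=0.01, random_state=1).fit(nominal)

window = rng.normal(loc=[1.2, 3.1, 0.5], scale=0.02, size=(50, 3))     # injected supply droop
alarms = detector.predict(window) == -1                                # -1 marks anomalies
print(f"{alarms.mean():.0%} of samples in the window flagged as anomalous")
```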
arXiv Detail & Related papers (2024-04-02T04:33:03Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial perturbations can affect the safety of a given DRL system.
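A hedged, sampling-based stand-in for the idea (the paper computes this via formal verification rather than sampling; `policy`, `eps`, and the perturbation model are illustrative assumptions):

```python
# Estimate an "adversarial rate"-style quantity: the fraction of bounded input
# perturbations that flip the policy's chosen action.
import numpy as np

def adversarial_rate(policy, state, eps=0.05, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    nominal_action = np.argmax(policy(state))
    flips = 0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=state.shape)   # L_inf-bounded perturbation
        if np.argmax(policy(state + delta)) != nominal_action:
            flips += 1
    return flips / n_samples
```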
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Supporting Early-Safety Analysis of IoT Systems by Exploiting Testing
Techniques [9.095386349136717]
Failure Logic Analysis (FLA) is a technique that helps predict potential failure scenarios.
Manually specifying FLA rules can be arduous and error-prone, leading to incomplete or inaccurate specifications.
We propose adopting testing methodologies to improve the completeness and correctness of these rules.
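As a loose illustration only (the FLA notation is simplified and the component names are invented), such rules can be expressed as input-failure to output-failure mappings and checked against test traces:

```python
# Simplified, hypothetical FLA-style rules: each maps a failure mode on a component's
# input to a failure mode on its output; every rule is then checked against observed traces.
rules = {
    ("temperature_sensor", "omission"): ("edge_gateway", "late_delivery"),
    ("edge_gateway", "late_delivery"): ("cloud_dashboard", "stale_value"),
}

def check_rule(rule_in, rule_out, traces):
    """A rule is supported if every trace showing the input failure also shows the output one."""
    relevant = [t for t in traces if rule_in in t["observed_failures"]]
    return all(rule_out in t["observed_failures"] for t in relevant), len(relevant)

traces = [{"observed_failures": {("temperature_sensor", "omission"),
                                 ("edge_gateway", "late_delivery")}}]
for rin, rout in rules.items():
    ok, n = check_rule(rin, rout, traces)
    print(rin, "->", rout, "supported" if ok else "refuted", f"({n} relevant traces)")
```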
arXiv Detail & Related papers (2023-09-06T13:32:39Z) - Representing Timed Automata and Timing Anomalies of Cyber-Physical
Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph in order to allow operators an easier interpretation of the model and the detected anomalies.
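A minimal sketch with rdflib and an invented vocabulary (the paper's actual ontology is not reproduced), showing how a learned transition's timing bound and a detected anomaly can live in one graph:

```python
# Store a learned timed-automaton transition and a detected timing anomaly as triples
# so both can be queried alongside other plant knowledge.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/cpps#")
g = Graph()

g.add((EX.FillValve_Open, RDF.type, EX.AutomatonState))
g.add((EX.t1, RDF.type, EX.Transition))
g.add((EX.t1, EX.fromState, EX.FillValve_Open))
g.add((EX.t1, EX.toState, EX.FillValve_Closed))
g.add((EX.t1, EX.maxDwellSeconds, Literal(12.5)))        # timing bound learned from data

g.add((EX.anomaly42, RDF.type, EX.TimingAnomaly))
g.add((EX.anomaly42, EX.violatesTransition, EX.t1))
g.add((EX.anomaly42, EX.observedDwellSeconds, Literal(19.8)))
```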
arXiv Detail & Related papers (2023-08-25T15:25:57Z) - Fast and Accurate Error Simulation for CNNs against Soft Errors [64.54260986994163]
We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine.
These error models are defined based on the corruption patterns of the output of the CNN operators induced by faults.
We show that our methodology achieves about 99% accuracy of the fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. FI, which only implements a limited set of error models.
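One hypothetical corruption pattern of this kind, applied to a convolutional layer's output tensor (the paper's validated error models and its simulation engine are not reproduced here):

```python
# Illustrative operator-output error model: corrupt one row of one channel of a conv
# output, mimicking a memory-line fault, then continue inference on the faulty tensor.
import numpy as np

def inject_row_corruption(feature_map, rng):
    corrupted = feature_map.copy()
    c = rng.integers(feature_map.shape[0])               # random channel
    r = rng.integers(feature_map.shape[1])               # random row
    corrupted[c, r, :] = rng.uniform(-1e3, 1e3, size=feature_map.shape[2])
    return corrupted

rng = np.random.default_rng(0)
clean = rng.standard_normal((16, 32, 32)).astype(np.float32)   # (channels, H, W)
faulty = inject_row_corruption(clean, rng)
# Downstream layers are then run on `faulty` to estimate the end-to-end fault effect.
```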
arXiv Detail & Related papers (2022-06-04T19:45:02Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
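In simplified notation (an assumption-level sketch, not the paper's exact formulation), the idea is to replace the unknown model-error term in the CBF condition with a GP-based high-probability bound:

```latex
% Simplified sketch; not the paper's exact formulation. For control-affine dynamics
% \dot{x} = f(x) + g(x)u + d(x) with an unknown residual d, a CBF h must satisfy,
% for some class-K function \alpha,
\[
\sup_{u}\Big[\, L_f h(x) + L_g h(x)\,u + \nabla h(x)^{\top} d(x) \,\Big] \;\ge\; -\alpha\big(h(x)\big).
\]
% With a GP posterior on d giving mean \mu(x) and standard deviation \sigma(x), the
% unknown term is lower-bounded with high probability (e.g., via Cauchy--Schwarz):
\[
L_f h(x) + L_g h(x)\,u + \nabla h(x)^{\top}\mu(x)
- \beta\,\big\|\nabla h(x)\big\|\,\big\|\sigma(x)\big\| \;\ge\; -\alpha\big(h(x)\big),
\]
% and the question studied is when this robust constraint (and its CLF analogue)
% remains pointwise feasible in u.
```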
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z)
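As a generic illustration of adversarial fine-tuning in the FGSM style (the architecture, input shape, and hyperparameters below are placeholders, not those used in the paper):

```python
# Sketch of one FGSM-style adversarial training step for an I/Q modulation classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 256), nn.ReLU(), nn.Linear(256, 11))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 0.01                                   # perturbation budget on the I/Q samples

def adversarial_step(x, y):
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    with torch.no_grad():                    # FGSM: one signed-gradient step on the input
        x_adv = x + eps * x_adv.grad.sign()
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)          # fine-tune the model on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.randn(32, 2, 128)                  # dummy batch: 32 frames of 128 I/Q samples
y = torch.randint(0, 11, (32,))
print(adversarial_step(x, y))
```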