Modular Assurance of Complex Systems Using Contract-Based Design Principles
- URL: http://arxiv.org/abs/2402.12804v2
- Date: Thu, 8 Aug 2024 19:21:55 GMT
- Title: Modular Assurance of Complex Systems Using Contract-Based Design Principles
- Authors: Dag McGeorge, Jon Arne Glomsrud
- Abstract summary: A growing number of safety-critical industries agree that building confidence in complex systems can be achieved through evidence and structured argumentation framed in assurance cases.
We propose to use contract-based development (CBD), a method to manage complexity originally developed in computer science, to simplify assurance cases by modularizing them.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A growing number of safety-critical industries agree that building confidence in complex systems can be achieved through evidence and structured argumentation framed in assurance cases. Nevertheless, according to practical industry experience, assurance cases can easily become too rigorous and difficult to develop and maintain when applied to complex systems. Therefore, we propose to use contract-based development (CBD), a method to manage complexity originally developed in computer science, to simplify assurance cases by modularizing them. This paper not only summarizes relevant previous work, such as constructing consistent modular assurance cases using CBD, but also proposes a novel approach to integrate CBD with the argumentation in assurance case modules. This approach allows subject-matter and domain experts to build assurance case modules together without having to know CBD. It can help broaden the application of these methods in industry, because subject-matter experts outside computer science can contribute to cross-disciplinary co-development of assurance cases without having to learn CBD. Industry experience has shown four rules of thumb to be helpful for developing high-quality assurance cases. This article illustrates their usefulness and explains how modular assurance enables assurance that accounts for the interdependency of different concerns such as safety, security, and performance.
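For readers unfamiliar with CBD, the following is a minimal sketch of the assume-guarantee contract structure that underlies contract-based design; the module names, state representation, and thresholds are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an assume-guarantee contract as used in contract-based
# design (CBD). All names and values below are hypothetical examples.
from dataclasses import dataclass
from typing import Any, Callable, Dict

State = Dict[str, Any]  # a simple snapshot of system/environment conditions


@dataclass
class Contract:
    assumptions: Callable[[State], bool]  # what the module expects of its environment
    guarantees: Callable[[State], bool]   # what the module promises if the assumptions hold

    def satisfied_by(self, state: State) -> bool:
        # A contract is vacuously satisfied when its assumptions do not hold;
        # otherwise the guarantees must hold.
        return (not self.assumptions(state)) or self.guarantees(state)


# Hypothetical example: a navigation module assumes GNSS is available and
# guarantees a bounded position error (threshold chosen only for illustration).
navigation = Contract(
    assumptions=lambda s: s.get("gnss_ok", False),
    guarantees=lambda s: s.get("position_error_m", float("inf")) < 5.0,
)

print(navigation.satisfied_by({"gnss_ok": True, "position_error_m": 2.3}))  # True
```

In a modular assurance case built this way, each module's claim would be supported by such a contract, and composing modules amounts to checking that one module's guarantees satisfy the assumptions of the modules that depend on it.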
Related papers
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - Automating Semantic Analysis of System Assurance Cases using Goal-directed ASP [1.2189422792863451]
We present our approach to enhancing Assurance 2.0 with semantic rule-based analysis capabilities.
We examine the unique semantic aspects of assurance cases, such as logical consistency, adequacy, indefeasibility, etc.
arXiv Detail & Related papers (2024-08-21T15:22:43Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration [51.36387171207314]
Insurance involves a wide variety of data forms in its operational processes, including text, images, and videos.
GPT-4V exhibits remarkable abilities in insurance-related tasks, demonstrating a robust understanding of multimodal content.
However, GPT-4V struggles with detailed risk rating and loss assessment, suffers from hallucination in image understanding, and shows variable support for different languages.
arXiv Detail & Related papers (2024-04-15T11:45:30Z) - ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z) - Towards Continuous Assurance Case Creation for ADS with the Evidential Tool Bus [0.4194295877935868]
An assurance case has become an integral component of the certification of safety-critical systems.
We report on our preliminary experience leveraging the tool integration framework Evidential Tool Bus (ETB) for the construction and continuous maintenance of an assurance case.
arXiv Detail & Related papers (2024-03-04T10:32:48Z) - The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z) - Cooperative Multi-Agent Actor-Critic for Privacy-Preserving Load Scheduling in a Residential Microgrid [71.17179010567123]
We propose a privacy-preserving multi-agent actor-critic framework where the decentralized actors are trained with distributed critics.
The proposed framework can preserve the privacy of the households while simultaneously learning the multi-agent credit assignment mechanism implicitly.
arXiv Detail & Related papers (2021-10-06T14:05:26Z) - Quantifying Assurance in Learning-enabled Systems [3.0938904602244355]
Dependability assurance of systems embedding machine learning components is a key step for their use in safety-critical applications.
This paper develops a quantitative notion of assurance that an LES is dependable, as a core component of its assurance case.
We illustrate the utility of assurance measures by application to a real world autonomous aviation system.
arXiv Detail & Related papers (2020-06-18T08:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.