Safety Case Templates for Autonomous Systems
- URL: http://arxiv.org/abs/2102.02625v2
- Date: Thu, 11 Mar 2021 12:50:15 GMT
- Title: Safety Case Templates for Autonomous Systems
- Authors: Robin Bloomfield, Gareth Fletcher, Heidy Khlaaf, Luke Hinde, Philippa
Ryan
- Abstract summary: This report documents safety assurance argument templates to support the deployment and operation of autonomous systems that include machine learning (ML) components.
The report also presents generic templates for argument defeaters and evidence confidence that can be used to strengthen, review, and adapt the templates as necessary.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report documents safety assurance argument templates to support the
deployment and operation of autonomous systems that include machine learning
(ML) components. The document presents example safety argument templates
covering: the development of safety requirements, hazard analysis, a safety
monitor architecture for an autonomous system including at least one ML
element, a component with ML and the adaptation and change of the system over
time. The report also presents generic templates for argument defeaters and
evidence confidence that can be used to strengthen, review, and adapt the
templates as necessary. This report is made available to get feedback on the
approach and on the templates. This work was sponsored by the UK Dstl under the
R-cloud framework.
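To make the template structure concrete, the following is a minimal sketch of how a goal-structured argument with defeaters and evidence confidence, as described above, might be represented in code. It is an illustrative data model only, not the report's notation; every class, field, and threshold below is an assumption.

```python
from dataclasses import dataclass, field

# Illustrative GSN-style data model (assumed names, not the report's
# notation): a claim is supported by evidence, may be challenged by
# defeaters, and may decompose into subclaims.

@dataclass
class Evidence:
    description: str
    confidence: float  # assumed scale: 0.0 (none) to 1.0 (full)

@dataclass
class Defeater:
    description: str
    resolved: bool = False  # True once the defeater has been countered

@dataclass
class Claim:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)
    defeaters: list[Defeater] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self, threshold: float = 0.8) -> bool:
        """A claim holds if all defeaters are resolved, all subclaims
        hold, and direct evidence (where required) meets the bar."""
        if any(not d.resolved for d in self.defeaters):
            return False
        if not all(c.supported(threshold) for c in self.subclaims):
            return False
        return bool(self.subclaims) or any(
            e.confidence >= threshold for e in self.evidence)

# Hypothetical fragment of a safety-monitor argument.
top = Claim(
    statement="ML component outputs are adequately monitored at runtime.",
    evidence=[Evidence("monitor coverage analysis", 0.9)],
    defeaters=[Defeater("monitor may share failure modes with the ML "
                        "component", resolved=True)],
)
print(top.supported())  # True: defeater resolved, evidence above bar
```

Reviewing such a structure mirrors how the report's defeater templates are meant to be used: each unresolved defeater is an explicit reason the argument could fail, to be discharged before the claim is accepted.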
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- Automatic Instantiation of Assurance Cases from Patterns Using Large Language Models [6.314768437420443]
Large Language Models (LLMs) can generate assurance cases that comply with specific patterns.
LLMs exhibit potential in the automatic generation of assurance cases, but their capabilities still fall short compared to human experts.
arXiv Detail & Related papers (2024-10-07T20:58:29Z)
- Security Matrix for Multimodal Agents on Mobile Devices: A Systematic and Proof of Concept Study [16.559272781032632]
The rapid progress in the reasoning capability of Multi-modal Large Language Models (MLLMs) has triggered the development of autonomous agent systems on mobile devices.
Despite the increased human-machine interaction efficiency, the security risks of MLLM-based mobile agent systems have not been systematically studied.
This paper highlights the need for security awareness in the design of MLLM-based systems and paves the way for future research on attacks and defense methods.
arXiv Detail & Related papers (2024-07-12T14:30:05Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting (AdaShield), which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
- A SysML Profile for the Standardized Description of Processes during System Development [40.539768677361735]
The VDI/VDE 3682 standard for Formalised Process Description (FPD) provides a simple and easily understandable representation of processes.
This contribution focuses on the development of a Domain-Specific Modeling Language (DSML) that facilitates the integration of VDI/VDE 3682 into the Systems Modeling Language (SysML).
arXiv Detail & Related papers (2024-03-11T13:44:38Z)
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
- Monitoring ROS2: from Requirements to Autonomous Robots [58.720142291102135]
This paper provides an overview of a formal approach to generating runtime monitors for autonomous robots from requirements written in a structured natural language.
Our approach integrates the Formal Requirement Elicitation Tool (FRET) with Copilot, a runtime verification framework, through the Ogma integration tool (a minimal sketch of this monitoring idea follows after the paper list below).
arXiv Detail & Related papers (2022-09-28T12:19:13Z)
- Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z)
- The missing link: Developing a safety case for perception components in automated driving [10.43163823170716]
Perception is a key aspect of automated driving (AD) systems that relies heavily on Machine Learning (ML).
Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components.
We propose a generic template for such a linking argument specifically tailored for perception components.
arXiv Detail & Related papers (2021-08-30T15:12:27Z)
- SMT-Based Safety Verification of Data-Aware Processes under Ontologies (Extended Version) [71.12474112166767]
We introduce a variant of one of the most investigated models in this spectrum, namely simple artifact systems (SASs).
The underlying description logic (DL), enjoying suitable model-theoretic properties, allows us to define SASs to which backward reachability can still be applied, leading to decidability in PSPACE of the corresponding safety problems.
arXiv Detail & Related papers (2021-08-27T15:04:11Z)
- SMT-based Safety Verification of Parameterised Multi-Agent Systems [78.04236259129524]
We study the verification of parameterised multi-agent systems (MASs).
In particular, we study whether unwanted states, characterised as a given state formula, are reachable in a given MAS.
arXiv Detail & Related papers (2020-08-11T15:24:05Z)
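As flagged in the Monitoring ROS2 entry above, here is a minimal sketch of the runtime-monitoring idea, which also echoes the safety monitor architecture in the main report: a structured requirement such as "the commanded speed shall never exceed the limit" becomes an executable check over a stream of samples. This is a hand-written Python analogue under assumed names, limits, and units, not output of the FRET/Copilot/Ogma toolchain (which compiles stream specifications into C monitors).

```python
from typing import Callable, Iterable

# Hand-written analogue of a generated runtime monitor (illustrative
# only; not FRET/Copilot/Ogma output). Requirement under check:
# "the commanded speed shall never exceed SPEED_LIMIT".

SPEED_LIMIT = 2.0  # assumed limit, m/s

def monitor(speeds: Iterable[float],
            on_violation: Callable[[int, float], None]) -> bool:
    """Check each sample against the invariant, reporting violations.

    Returns True iff the whole trace satisfies the requirement."""
    ok = True
    for step, speed in enumerate(speeds):
        if speed > SPEED_LIMIT:
            ok = False
            on_violation(step, speed)
    return ok

# Example trace with a single violation at step 2.
trace = [0.5, 1.9, 2.4, 1.0]
monitor(trace, lambda i, v: print(f"violation at step {i}: speed={v}"))
```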
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.