Architecting Safer Autonomous Aviation Systems
- URL: http://arxiv.org/abs/2301.08138v1
- Date: Mon, 9 Jan 2023 21:02:18 GMT
- Title: Architecting Safer Autonomous Aviation Systems
- Authors: Jane Fenn, Mark Nicholson, Ganesh Pai, and Michael Wilkinson
- Abstract summary: This paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications.
Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle.
- Score: 1.2599533416395767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aviation literature gives relatively little guidance to practitioners
about the specifics of architecting systems for safety, particularly the impact
of architecture on allocating safety requirements, or the relative ease of
system assurance resulting from system or subsystem level architectural
choices. As an exemplar, this paper considers common architectural patterns
used within traditional aviation systems and explores their safety and safety
assurance implications when applied in the context of integrating artificial
intelligence (AI) and machine learning (ML) based functionality. Considering
safety as an architectural property, we discuss both the allocation of safety
requirements and the architectural trade-offs involved early in the design
lifecycle. This approach could be extended to other assured properties, similar
to safety, such as security. We conclude with a discussion of the safety
considerations that emerge in the context of candidate architectural patterns
that have been proposed in the recent literature for enabling autonomy
capabilities by integrating AI and ML. A recommendation is made for the
generation of a property-driven architectural pattern catalogue.
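The paper's closing recommendation of a property-driven architectural pattern catalogue could, under one reading, be sketched as a simple structured record per pattern. The field names below are illustrative assumptions, not taken from the paper itself:

```python
# Hypothetical sketch of one entry in a property-driven architectural
# pattern catalogue. Field names are illustrative assumptions; the paper
# recommends such a catalogue but does not prescribe its schema.
from dataclasses import dataclass, field

@dataclass
class PatternCatalogueEntry:
    name: str                     # e.g. "runtime monitor", "dissimilar redundancy"
    assured_property: str         # e.g. "safety", "security"
    allocated_requirements: list[str] = field(default_factory=list)
    assurance_tradeoffs: list[str] = field(default_factory=list)

# Example entry, reflecting the abstract's pairing of architectural
# patterns with allocated safety requirements and design trade-offs.
entry = PatternCatalogueEntry(
    name="runtime monitor over an ML-based primary channel",
    assured_property="safety",
    allocated_requirements=["monitor detects out-of-bounds ML output"],
    assurance_tradeoffs=["monitor must be independently assurable"],
)
```

Extending the same record with a different `assured_property` (e.g. "security") mirrors the abstract's suggestion that the approach generalizes to other assured properties.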
Related papers
- A quantitative framework for evaluating architectural patterns in ML systems [49.1574468325115]
This study proposes a framework for quantitative assessment of architectural patterns in ML systems.
We focus on scalability and performance metrics for cost-effective CPU-based inference.
arXiv Detail & Related papers (2025-01-20T15:30:09Z)
- Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Swiss Cheese Model for AI Safety: A Taxonomy and Reference Architecture for Multi-Layered Guardrails of Foundation Model Based Agents [12.593620173835415]
Foundation Model (FM)-based agents are revolutionizing application development across various domains.
We present a comprehensive taxonomy of runtime guardrails for FM-based agents to identify the key quality attributes for guardrails and design dimensions.
Inspired by the Swiss Cheese Model, we also propose a reference architecture for designing multi-layered runtime guardrails for FM-based agents.
arXiv Detail & Related papers (2024-08-05T03:08:51Z)
- Model-Driven Security Analysis of Self-Sovereign Identity Systems [2.5475486924467075]
We propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems.
Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic.
We present typical vulnerable patterns verified by SecureSSI.
arXiv Detail & Related papers (2024-06-02T05:44:32Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- An AIC-based approach for articulating unpredictable problems in open complex environments [0.0]
By adopting a systems approach, we aim to improve architects' predictive capabilities in designing dependable systems.
An aerospace case study is used to illustrate the approach.
arXiv Detail & Related papers (2024-03-15T20:30:02Z)
- Enhancing Architecture Frameworks by Including Modern Stakeholders and their Views/Viewpoints [48.87872564630711]
The stakeholders with data science and Machine Learning related concerns, such as data scientists and data engineers, are yet to be included in existing architecture frameworks.
We surveyed 61 subject matter experts from over 25 organizations in 10 countries.
arXiv Detail & Related papers (2023-08-09T21:54:34Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- STARdom: an architecture for trusted and secure human-centered manufacturing systems [4.093985503448998]
We propose an architecture that integrates forecasts, Explainable Artificial Intelligence, supports collecting users' feedback, and uses Active Learning and Simulated Reality to enhance forecasts.
We tailor it for the domain of demand forecasting and validate it on a real-world case study.
arXiv Detail & Related papers (2021-04-02T11:00:20Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.