A Structured Approach to Trustworthy Autonomous/Cognitive Systems
- URL: http://arxiv.org/abs/2002.08210v1
- Date: Wed, 19 Feb 2020 14:36:27 GMT
- Title: A Structured Approach to Trustworthy Autonomous/Cognitive Systems
- Authors: Henrik J. Putzer and Ernest Wozniak
- Abstract summary: There is no generally accepted approach to ensure trustworthiness.
This paper presents a framework to fill exactly this gap.
It proposes a reference lifecycle as a structured approach that is based on current safety standards.
- Score: 4.56877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous systems with cognitive features are on their way into the market.
Within complex environments, they promise to implement complex, goal-oriented
behavior even in a safety-related context. This behavior is based on a
certain level of situational awareness (perception) and advanced decision
making (deliberation). These systems in many cases are driven by artificial
intelligence (e.g. neural networks). The problem with such complex systems and
with using AI technology is that there is no generally accepted approach to
ensure trustworthiness. This paper presents a framework to fill exactly this
gap. It proposes a reference lifecycle as a structured approach that is based
on current safety standards and enhanced to meet the requirements of
autonomous/cognitive systems and trustworthiness.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Assurance of AI Systems From a Dependability Perspective [0.0]
We outline the principles of classical assurance for computer-based systems that pose significant risks.
We then consider application of these principles to systems that employ Artificial Intelligence (AI) and Machine Learning (ML)
arXiv Detail & Related papers (2024-07-18T23:55:43Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Towards Formal Fault Injection for Safety Assessment of Automated Systems [0.0]
This paper introduces formal fault injection, a fusion of these two techniques throughout the development lifecycle.
We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection.
arXiv Detail & Related papers (2023-11-16T11:34:18Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Safe AI -- How is this Possible? [0.45687771576879593]
Traditional safety engineering is at a turning point, moving from deterministic, non-evolving systems operating in well-defined contexts to increasingly autonomous, learning-enabled AI systems acting in largely unpredictable operating contexts.
We outline some of the underlying challenges of safe AI and suggest a rigorous engineering framework for minimizing uncertainty, thereby increasing confidence in the safe behavior of AI systems up to tolerable levels.
arXiv Detail & Related papers (2022-01-25T16:32:35Z)
- Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and that it must assure the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z)
- On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS) [87.3520234553785]
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence.
This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences.
arXiv Detail & Related papers (2021-02-11T05:44:25Z)
- Conceptualization and Framework of Hybrid Intelligence Systems [0.0]
This article provides a precise definition of hybrid intelligence systems and explains its relation with other similar concepts.
We argue that all AI systems are hybrid intelligence systems, so human factors need to be examined at every stage of such systems' lifecycle.
arXiv Detail & Related papers (2020-12-11T06:42:06Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally developed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.