Collective Reasoning for Safe Autonomous Systems
- URL: http://arxiv.org/abs/2305.11295v1
- Date: Thu, 18 May 2023 20:37:32 GMT
- Title: Collective Reasoning for Safe Autonomous Systems
- Authors: Selma Saidi (TU Dortmund University, Dortmund, Germany)
- Abstract summary: We introduce the idea of increasing the reliability of autonomous systems by relying on collective intelligence.
We define and formalize design rules for collective reasoning to achieve collaboratively increased safety, trustworthiness and good decision making.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaboration in multi-agent autonomous systems is critical to increase
performance while ensuring safety. However, due to the heterogeneity of their
features, e.g., in perception quality, some autonomous systems have to be
considered more trustworthy than others when contributing to collaboratively
building a common environmental model, especially under uncertainty. In this
paper, we introduce the idea of increasing the reliability of autonomous
systems by relying on collective intelligence. We borrow concepts from social
epistemology to exploit individual characteristics of autonomous systems, and
define and formalize design rules for collective reasoning to achieve
collaboratively increased safety, trustworthiness, and good decision making.
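The core idea above is that agents with heterogeneous perception quality should not contribute equally when fusing observations into a common environmental model. A minimal sketch of trust-weighted collective reasoning, with entirely hypothetical agent names, trust values, and a simple weighted vote (not the paper's formalization), might look like:

```python
# Hypothetical sketch: agents vote on a binary environmental fact
# (e.g. "obstacle present"), and each vote is weighted by an assumed
# trustworthiness score reflecting that agent's perception quality.

def collective_decision(observations, trust):
    """Fuse per-agent boolean observations into one collective decision.

    observations: dict mapping agent id -> bool (agent's local belief)
    trust: dict mapping agent id -> float in (0, 1] (assumed reliability)
    """
    weight_for = sum(trust[a] for a, obs in observations.items() if obs)
    weight_against = sum(trust[a] for a, obs in observations.items() if not obs)
    return weight_for > weight_against

# Example: one highly trusted, sensor-rich vehicle outvotes two less
# trustworthy ones under uncertainty.
obs = {"v1": True, "v2": False, "v3": False}
trust = {"v1": 0.9, "v2": 0.3, "v3": 0.4}
print(collective_decision(obs, trust))  # True: 0.9 outweighs 0.3 + 0.4
```

A majority vote would reject v1's observation here; weighting by trust lets the more reliable perceiver prevail, which is the behavior the abstract motivates.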
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - A Measure for Level of Autonomy Based on Observable System Behavior [0.0]
We present a potential measure for predicting level of autonomy using observable actions.
We also present an algorithm incorporating the proposed measure.
The measure and algorithm have significance to researchers and practitioners interested in a method to blindly compare autonomous systems at runtime.
arXiv Detail & Related papers (2024-07-20T20:34:20Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z) - Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and is challenged with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z) - On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS) [87.3520234553785]
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence.
This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences.
arXiv Detail & Related papers (2021-02-11T05:44:25Z) - Improving Competence for Reliable Autonomy [0.0]
We propose a method for improving the competence of a system over the course of its deployment.
We specifically focus on a class of semi-autonomous systems known as competence-aware systems.
Our method exploits such feedback to identify important state features missing from the system's initial model.
arXiv Detail & Related papers (2020-07-23T01:31:28Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Learning to Optimize Autonomy in Competence-Aware Systems [32.3596917475882]
We propose an introspective model of autonomy that is learned and updated online through experience.
We define a competence-aware system (CAS) that explicitly models its own proficiency at different levels of autonomy and the available human feedback.
We analyze the convergence properties of CAS and provide experimental results for robot delivery and autonomous driving domains.
arXiv Detail & Related papers (2020-03-17T14:31:45Z) - A Structured Approach to Trustworthy Autonomous/Cognitive Systems [4.56877715768796]
There is no generally accepted approach to ensure trustworthiness.
This paper presents a framework to fill exactly this gap.
It proposes a reference lifecycle as a structured approach that is based on current safety standards.
arXiv Detail & Related papers (2020-02-19T14:36:27Z)
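Several of the related papers above describe competence-aware systems that model their own proficiency at different levels of autonomy and adjust using human feedback. A minimal sketch of that idea, with entirely hypothetical autonomy levels, success counts, and threshold (not the cited papers' models), might look like:

```python
# Hypothetical sketch of a competence-aware system (CAS): track an
# estimated success rate per autonomy level from human feedback, then
# operate at the highest level whose estimate clears a safety threshold,
# defaulting to the lowest level when no estimate qualifies.

LEVELS = ["no_autonomy", "supervised", "unsupervised"]  # low -> high

def update(stats, level, success):
    """Record one outcome (from human feedback) at the given level."""
    successes, trials = stats[level]
    stats[level] = (successes + int(success), trials + 1)

def choose_level(stats, threshold=0.8):
    """Return the highest autonomy level whose estimated competence
    (successes / trials) meets the threshold."""
    best = LEVELS[0]
    for level in LEVELS:
        successes, trials = stats[level]
        if trials > 0 and successes / trials >= threshold:
            best = level
    return best

stats = {lvl: (0, 0) for lvl in LEVELS}
for outcome in [True, True, True, True, False]:
    update(stats, "supervised", outcome)
print(choose_level(stats))  # supervised: 4/5 = 0.8 meets the threshold
```

The design choice here is deliberately conservative: untried levels (zero trials) never qualify, so the system must accumulate evidence before being granted more autonomy.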
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.