Collaborative Trustworthiness for Good Decision Making in Autonomous Systems
- URL: http://arxiv.org/abs/2507.11135v1
- Date: Tue, 15 Jul 2025 09:37:28 GMT
- Title: Collaborative Trustworthiness for Good Decision Making in Autonomous Systems
- Authors: Selma Saidi, Omar Laimona, Christoph Schmickler, Dirk Ziegenbein,
- Abstract summary: We propose a general collaborative approach for increasing the level of trustworthiness in the environment of operation. In the presence of conflicting information, aggregation becomes a major issue for trustworthy decision making based on collaborative data sharing. We use Binary Decision Diagrams (BDDs) as formal models for belief aggregation and propagation, and formulate reduction rules to reduce the size of the BDDs.
- Score: 0.26999000177990923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous systems are becoming an integral part of many application domains, such as the mobility sector. However, ensuring their safe and correct behaviour in dynamic and complex environments remains a significant challenge, since systems must autonomously make decisions, e.g., about manoeuvring. In this paper we propose a general collaborative approach for increasing the level of trustworthiness in the environment of operation and improving reliability and decision making in autonomous systems. In the presence of conflicting information, aggregation becomes a major issue for trustworthy decision making based on collaborative data sharing. Unlike classical approaches in the literature that rely on consensus or majority as the aggregation rule, we exploit the fact that autonomous systems have different quality attributes, such as perception quality. We use these attributes to determine which autonomous systems are trustworthy and borrow concepts from social epistemology to define aggregation and propagation rules used for automated decision making. We use Binary Decision Diagrams (BDDs) as formal models for belief aggregation and propagation, and formulate reduction rules that reduce the size of the BDDs and allow efficient computation structures for collaborative automated reasoning.
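The core idea of the abstract, preferring reports from systems with better quality attributes over a plain majority vote, can be illustrated with a minimal sketch. This is not the paper's BDD machinery; the `Report` structure, the quality threshold, and the highest-quality-wins conflict rule are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Report:
    agent: str
    belief: bool    # e.g. "the lane ahead is clear"
    quality: float  # perception quality in [0, 1] (assumed scale)

def aggregate(reports, threshold=0.5):
    """Quality-based aggregation: only reports from agents whose
    perception quality reaches the threshold count as trustworthy;
    conflicts among trusted agents are resolved in favour of the
    highest-quality report, not the majority."""
    trusted = [r for r in reports if r.quality >= threshold]
    if not trusted:
        return None  # no trustworthy evidence: abstain
    return max(trusted, key=lambda r: r.quality).belief

reports = [
    Report("car_A", True, 0.9),   # well-calibrated sensor set
    Report("car_B", False, 0.4),  # degraded perception (e.g. fog)
    Report("car_C", False, 0.6),
]
print(aggregate(reports))  # True: car_A outranks the conflicting majority
```

Note how a majority rule would conclude `False` here (two reports against one), while the quality-based rule follows the single high-quality observer, which is the kind of behaviour the paper argues for.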
Related papers
- Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse. We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Achieving Unanimous Consensus in Decision Making Using Multi-Agents [0.0]
This paper introduces a novel deliberation-based consensus mechanism where Large Language Models (LLMs) act as rational agents engaging in structured discussions to reach a unanimous consensus. By leveraging graded consensus and a multi-round deliberation process, our approach ensures both unanimous consensus for definitive problems and graded confidence for prioritized decisions and policies. We also address key challenges with this novel approach such as degeneration of thoughts, hallucinations, malicious models and nodes, resource consumption, and scalability.
arXiv Detail & Related papers (2025-04-02T21:02:54Z) - Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML. We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z) - Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms [8.177915265718703]
We argue that AI agents should be empowered to adjust their objectives dynamically. We call for a shift toward the emergent, self-organizing, and context-aware nature of these multi-agentic AI systems.
arXiv Detail & Related papers (2025-02-05T22:20:15Z) - A Trust-Centric Approach To Quantifying Maturity and Security in Internet Voting Protocols [0.9831489366502298]
This paper introduces a trust-centric maturity scoring framework to quantify the security and maturity of internet voting systems. A comprehensive trust model analysis is conducted for selected internet voting protocols. The framework is general enough to be applied to other systems where the aspects of decentralization, trust, and security are crucial.
arXiv Detail & Related papers (2024-12-13T23:33:38Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - A hierarchical control framework for autonomous decision-making systems: Integrating HMDP and MPC [9.74561942059487]
This paper proposes a comprehensive hierarchical control framework for autonomous decision-making arising in robotics and autonomous systems.
It addresses the intricate interplay between continuous system dynamics, used at the low level for control design, and discrete Markov decision processes (MDPs) that facilitate high-level decision making.
The proposed framework is applied to develop an autonomous lane changing system for intelligent vehicles.
arXiv Detail & Related papers (2024-01-12T15:25:51Z) - Collective Reasoning for Safe Autonomous Systems [0.0]
We introduce the idea of increasing the reliability of autonomous systems by relying on collective intelligence.
We define and formalize design rules for collective reasoning to collaboratively achieve increased safety, trustworthiness, and good decision making.
arXiv Detail & Related papers (2023-05-18T20:37:32Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.