Diffusion of Responsibility in Collective Decision Making
- URL: http://arxiv.org/abs/2506.07935v1
- Date: Mon, 09 Jun 2025 16:54:56 GMT
- Title: Diffusion of Responsibility in Collective Decision Making
- Authors: Pavel Naumov, Jia Tao
- Abstract summary: "Diffusion of responsibility" refers to situations in which multiple agents share responsibility for an outcome, obscuring individual accountability. This paper examines this frequently undesirable phenomenon in the context of collective decision-making mechanisms.
- Score: 26.831475621780577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The term "diffusion of responsibility" refers to situations in which multiple agents share responsibility for an outcome, obscuring individual accountability. This paper examines this frequently undesirable phenomenon in the context of collective decision-making mechanisms. The work shows that if a decision is made by two agents, then the only way to avoid diffusion of responsibility is for one agent to act as a "dictator", making the decision unilaterally. In scenarios with more than two agents, any diffusion-free mechanism is an "elected dictatorship" in which the agents elect a single agent to make a unilateral decision. The technical results are obtained by defining a bisimulation of decision-making mechanisms, proving that bisimulation preserves responsibility-related properties, and establishing the results for a smallest bisimilar mechanism.
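The two-agent result above can be illustrated with a small sketch (hypothetical code, not from the paper), using counterfactual pivotality as a crude stand-in for responsibility: under a disjunctive "either agent can approve" mechanism there is a vote profile where neither agent could have averted the outcome alone, whereas a dictatorship leaves a pivotal agent at every profile.

```python
# Hypothetical illustration: counterfactual "pivotality" as a proxy for
# responsibility in binary collective decision-making mechanisms.
from itertools import product

def pivotal_agents(mechanism, profile):
    """Agents who could change the outcome by switching their own vote."""
    outcome = mechanism(*profile)
    pivotal = []
    for i in range(len(profile)):
        flipped = list(profile)
        flipped[i] = 1 - flipped[i]          # counterfactual: agent i switches
        if mechanism(*flipped) != outcome:
            pivotal.append(i)
    return pivotal

def has_diffusion(mechanism, n_agents=2):
    """True if some profile yields an outcome no single agent could avert."""
    return any(not pivotal_agents(mechanism, p)
               for p in product([0, 1], repeat=n_agents))

disjunction = lambda a, b: a or b   # approve if either agent approves
dictator    = lambda a, b: a        # agent 0 decides unilaterally

print(has_diffusion(disjunction))   # True: at (1, 1) neither agent is pivotal
print(has_diffusion(dictator))      # False: agent 0 is pivotal at every profile
```

The sketch brute-forces all vote profiles; the paper's actual notions of responsibility and bisimulation are richer than this pivotality check, but the contrast between the two mechanisms mirrors the dictatorship result.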
Related papers
- Toward a Theory of Agents as Tool-Use Decision-Makers [89.26889709510242]
We argue that true autonomy requires agents to be grounded in a coherent epistemic framework that governs what they know, what they need to know, and how to acquire that knowledge efficiently. We propose a unified theory that treats internal reasoning and external actions as equivalent epistemic tools, enabling agents to systematically coordinate introspection and interaction. This perspective shifts the design of agents from mere action executors to knowledge-driven intelligence systems, offering a principled path toward building foundation agents capable of adaptive, efficient, and goal-directed behavior.
arXiv Detail & Related papers (2025-06-01T07:52:16Z) - Responsibility Gap in Collective Decision Making [26.831475621780577]
The paper proposes a concept of an elected dictatorship. It shows that, in a perfect information setting, the gap is empty if and only if the mechanism is an elected dictatorship.
arXiv Detail & Related papers (2025-05-08T14:19:59Z) - Agency Is Frame-Dependent [94.91580596320331]
Agency is a system's capacity to steer outcomes toward a goal. We argue that agency is fundamentally frame-dependent. We conclude that any basic science of agency requires frame-dependence.
arXiv Detail & Related papers (2025-02-06T08:34:57Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Responsibility in Extensive Form Games [1.4104545468525629]
Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for such settings that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in every possible situation, this gap disappears if higher-order responsibility is taken into account.
arXiv Detail & Related papers (2023-12-12T10:41:17Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z) - Actual Causality and Responsibility Attribution in Decentralized Partially Observable Markov Decision Processes [22.408657774650358]
We study these concepts under a widely used framework for multi-agent sequential decision making under uncertainty.
Actual causality focuses on specific outcomes and aims to identify decisions (actions) that were critical in realizing an outcome of interest.
Responsibility attribution is complementary and aims to identify the extent to which decision makers (agents) are responsible for this outcome.
arXiv Detail & Related papers (2022-04-01T09:22:58Z) - Consent as a Foundation for Responsible Autonomy [15.45515784064555]
This paper focuses on a dynamic aspect of responsible autonomy, namely, to make intelligent agents be responsible at run time.
It considers settings where decision making by agents impinges upon the outcomes perceived by other agents.
arXiv Detail & Related papers (2022-03-22T02:25:27Z) - On Blame Attribution for Accountable Multi-Agent Sequential Decision Making [29.431349181232203]
We study blame attribution in the context of cooperative multi-agent sequential decision making.
We show that some of the well-known blame attribution methods, such as the Shapley value, are not performance-incentivizing.
We introduce a novel blame attribution method, unique in the set of properties it satisfies.
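As a concrete reference point for the Shapley-value method mentioned above, here is a minimal sketch (function and game names hypothetical, not from the paper) that computes exact Shapley values for a toy cooperative game by averaging each agent's marginal contribution over all orderings:

```python
# Illustrative sketch: exact Shapley values for a small cooperative game,
# one candidate blame-attribution method discussed in the literature.
from itertools import permutations

def shapley_values(agents, value):
    """Average marginal contribution of each agent over all orderings."""
    totals = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        for agent in order:
            before = value(frozenset(coalition))
            coalition.add(agent)
            totals[agent] += value(frozenset(coalition)) - before
    return {a: t / len(orders) for a, t in totals.items()}

# Toy team game: the task succeeds (value 1) only if agent "a" plus at
# least one other agent participates.
def team_value(coalition):
    return 1.0 if "a" in coalition and len(coalition) >= 2 else 0.0

print(shapley_values(["a", "b", "c"], team_value))
# {'a': 0.666..., 'b': 0.166..., 'c': 0.166...}
```

Exact enumeration over all `n!` orderings is only feasible for a handful of agents; this sketch is meant to make the attribution method concrete, not to reproduce the paper's analysis of its incentive properties.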
arXiv Detail & Related papers (2021-07-26T02:22:23Z) - VCG Mechanism Design with Unknown Agent Values under Stochastic Bandit Feedback [104.06766271716774]
We study a multi-round welfare-maximising mechanism design problem in instances where agents do not know their values.
We first define three notions of regret for the welfare, the individual utilities of each agent and that of the mechanism.
Our framework also provides flexibility to control the pricing scheme so as to trade-off between the agent and seller regrets.
arXiv Detail & Related papers (2020-04-19T18:00:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.