Responsibility Gap and Diffusion in Sequential Decision-Making Mechanisms
- URL: http://arxiv.org/abs/2507.02582v1
- Date: Thu, 03 Jul 2025 12:43:38 GMT
- Title: Responsibility Gap and Diffusion in Sequential Decision-Making Mechanisms
- Authors: Junli Jiang, Pavel Naumov
- Abstract summary: The article investigates the computational complexity of two important properties of responsibility in collective decision-making: diffusion and gap. It shows that the sets of diffusion-free and gap-free decision-making mechanisms are $\Pi_2$-complete and $\Pi_3$-complete, respectively.
- Score: 26.93342141713236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responsibility has long been a subject of study in law and philosophy. More recently, it became a focus of AI literature. The article investigates the computational complexity of two important properties of responsibility in collective decision-making: diffusion and gap. It shows that the sets of diffusion-free and gap-free decision-making mechanisms are $\Pi_2$-complete and $\Pi_3$-complete, respectively. At the same time, the intersection of these classes is $\Pi_2$-complete.
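As a rough orienting note (not taken from the paper's abstract): assuming, as is standard in this line of work, that $\Pi_2$ and $\Pi_3$ denote the second and third levels of the polynomial hierarchy, membership in these classes corresponds to the quantifier pattern of the defining property:
$$\Pi_2:\ \forall x\,\exists y\; R(x,y), \qquad \Pi_3:\ \forall x\,\exists y\,\forall z\; R(x,y,z),$$
with $R$ decidable in polynomial time and all quantified objects polynomially bounded. Under this reading, the completeness results say that checking gap-freeness of a mechanism inherently needs one more quantifier alternation than checking diffusion-freeness.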
Related papers
- Diffusion of Responsibility in Collective Decision Making [26.831475621780577]
"Diffusion of responsibility" refers to situations in which multiple agents share responsibility for an outcome, obscuring individual accountability.<n>This paper examines this frequently undesirable phenomenon in the context of collective decision-making mechanisms.
arXiv Detail & Related papers (2025-06-09T16:54:56Z) - Higher-Order Responsibility [26.93342141713236]
The paper considers the problem of deciding if higher-order responsibility up to degree $d$ is enough to close the responsibility gap. The main technical result is that this problem is $\Pi_{2d+1}$-complete.
arXiv Detail & Related papers (2025-06-01T13:22:05Z) - Explaining Decisions in ML Models: a Parameterized Complexity Analysis [26.444020729887782]
This paper presents a theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models.
Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms.
arXiv Detail & Related papers (2024-07-22T16:37:48Z) - Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain of thought finetuning (cot-finetuning) aims to endow small language models (SLM) with reasoning ability to improve their performance towards specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in rationale.
We propose a robust post-thinking mechanism to generate answers before rationale.
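A schematic illustration of the pre-thinking versus post-thinking output formats described in this entry (a minimal sketch; the prompt templates are assumptions for illustration, not taken from the paper):
```python
# Schematic sketch: training-target formats for chain-of-thought finetuning.
# The exact templates are illustrative assumptions, not the paper's own format.

def pre_thinking_target(rationale: str, answer: str) -> str:
    # Pre-thinking: the model emits the rationale first, so an early
    # mistake in the rationale can derail the final answer.
    return f"Rationale: {rationale}\nAnswer: {answer}"

def post_thinking_target(rationale: str, answer: str) -> str:
    # Post-thinking: the answer comes first and the rationale follows,
    # making answer correctness less sensitive to minor rationale errors.
    return f"Answer: {answer}\nRationale: {rationale}"

if __name__ == "__main__":
    print(pre_thinking_target("2 apples + 3 apples = 5 apples", "5"))
    print(post_thinking_target("2 apples + 3 apples = 5 apples", "5"))
```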
arXiv Detail & Related papers (2024-04-14T07:19:27Z) - Principal-Agent Reward Shaping in MDPs [50.914110302917756]
Principal-agent problems arise when one party acts on behalf of another, leading to conflicts of interest.
We study a two-player Stackelberg game where the principal and the agent have different reward functions, and the agent chooses an MDP policy for both players.
Our results cover trees and deterministic decision processes with a finite horizon.
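An informal illustration of the principal-agent reward-shaping setting (a minimal sketch under assumed simplifications: a one-step decision, bonuses paid on top of the agent's reward, and an agent that best-responds; none of these details come from the paper itself):
```python
from itertools import product

# Minimal sketch of a one-step principal-agent (Stackelberg) reward-shaping game.
# Numbers and structure are illustrative assumptions, not the paper's model.

actions = ["a", "b", "c"]
agent_reward = {"a": 3.0, "b": 1.0, "c": 0.0}       # agent's intrinsic reward
principal_reward = {"a": 0.0, "b": 4.0, "c": 5.0}   # principal's reward per action

def agent_best_response(bonus):
    # The agent picks the action maximizing its own reward plus the offered bonus.
    return max(actions, key=lambda a: agent_reward[a] + bonus.get(a, 0.0))

def principal_utility(bonus):
    # The principal commits to the bonus scheme first (Stackelberg leader),
    # then pays the bonus only for the action the agent actually takes.
    a = agent_best_response(bonus)
    return principal_reward[a] - bonus.get(a, 0.0)

# Brute-force search over a small grid of bonus schemes.
grid = [0.0, 1.0, 2.0, 3.0, 4.0]
best = max(
    (dict(zip(actions, b)) for b in product(grid, repeat=len(actions))),
    key=principal_utility,
)
print("best bonus scheme:", best, "principal utility:", principal_utility(best))
```
In the full MDP setting the same leader-follower structure applies to policies rather than single actions, which is where the tree and finite-horizon restrictions mentioned in the entry become relevant.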
arXiv Detail & Related papers (2023-12-30T18:30:44Z) - A Semantic Approach to Decidability in Epistemic Planning (Extended
Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
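A plausible reading of the "(knowledge) commutativity" interaction axiom mentioned above (an assumption for illustration; the paper's exact formulation may differ) is that distinct agents' knowledge operators commute,
$$K_i K_j \varphi \rightarrow K_j K_i \varphi \quad \text{for all agents } i, j,$$
added on top of the standard S5 axioms for each $K_i$.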
arXiv Detail & Related papers (2023-07-28T11:26:26Z) - On the Complexity of Multi-Agent Decision Making: From Learning in Games
to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z) - Towards Computationally Efficient Responsibility Attribution in
Decentralized Partially Observable MDPs [5.825190876052148]
Responsibility attribution is a key concept of accountable multi-agent decision making.
We introduce a Monte Carlo Tree Search (MCTS) type of method which efficiently approximates the agents' degrees of responsibility.
We experimentally evaluate the efficacy of our algorithm through a simulation-based test-bed.
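As a much simpler stand-in for the idea of approximating degrees of responsibility by sampling (a generic Monte Carlo sketch of counterfactual "pivotality", explicitly not the MCTS algorithm from the paper), one can estimate how often a single agent's action change would have flipped a joint outcome:
```python
import random

# Generic Monte Carlo sketch of a counterfactual responsibility score:
# how often changing one agent's action flips the collective outcome.
# Illustrative stand-in only; the decision rule below is an assumption.

def outcome(actions):
    # Toy decision rule: the outcome is 1 if a majority of agents vote 1.
    return int(sum(actions) > len(actions) / 2)

def responsibility(agent, n_agents=5, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    pivotal = 0
    for _ in range(n_samples):
        actions = [rng.randint(0, 1) for _ in range(n_agents)]
        flipped = list(actions)
        flipped[agent] = 1 - flipped[agent]
        # The agent is pivotal in this sample if its unilateral change flips the outcome.
        pivotal += outcome(actions) != outcome(flipped)
    return pivotal / n_samples

print("estimated degree of responsibility of agent 0:", responsibility(0))
```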
arXiv Detail & Related papers (2023-02-24T14:56:25Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.