Higher-Order Responsibility
- URL: http://arxiv.org/abs/2506.01003v1
- Date: Sun, 01 Jun 2025 13:22:05 GMT
- Title: Higher-Order Responsibility
- Authors: Junli Jiang, Pavel Naumov
- Abstract summary: The paper considers the problem of deciding if higher-order responsibility up to degree $d$ is enough to close the responsibility gap. The main technical result is that this problem is $\Pi_{2d+1}$-complete.
- Score: 26.93342141713236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In ethics, individual responsibility is often defined through Frankfurt's principle of alternative possibilities. This definition is not adequate in a group decision-making setting because it often results in the lack of a responsible party or "responsibility gap''. One of the existing approaches to address this problem is to consider group responsibility. Another, recently proposed, approach is "higher-order'' responsibility. The paper considers the problem of deciding if higher-order responsibility up to degree $d$ is enough to close the responsibility gap. The main technical result is that this problem is $\Pi_{2d+1}$-complete.
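The responsibility gap described in the abstract can be made concrete with a toy model. The sketch below is purely illustrative (the mechanism, names, and voting rule are assumptions, not the paper's formal machinery): each of two agents votes, the outcome is bad iff at least one votes "yes", and an agent is counterfactually responsible under the principle of alternative possibilities only if a unilateral deviation of its own would have avoided the bad outcome.

```python
# Toy model of a responsibility gap (illustrative only; not the paper's formalism).
# Two agents each choose "yes" or "no"; the outcome is bad iff anyone votes "yes".

def bad_outcome(profile):
    return "yes" in profile

def counterfactually_responsible(agent, profile, actions=("yes", "no")):
    """Principle of alternative possibilities: the agent is responsible for a
    bad outcome iff some alternative action of theirs ALONE would avoid it."""
    if not bad_outcome(profile):
        return False
    for alt in actions:
        if alt == profile[agent]:
            continue
        deviated = list(profile)
        deviated[agent] = alt
        if not bad_outcome(tuple(deviated)):
            return True
    return False

# Both agents vote "yes": the outcome is bad, yet neither unilateral deviation
# avoids it, so no individual is responsible -- a responsibility gap.
profile = ("yes", "yes")
print(bad_outcome(profile))                                         # True
print([counterfactually_responsible(i, profile) for i in (0, 1)])   # [False, False]
```

By contrast, in the profile `("yes", "no")` agent 0 is responsible, since switching its own vote to "no" avoids the bad outcome; the gap arises only when every deviation is individually ineffective.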
Related papers
- Responsibility Gap and Diffusion in Sequential Decision-Making Mechanisms [26.93342141713236]
The article investigates the computational complexity of two important properties of responsibility in collective decision-making: diffusion and gap. It shows that the sets of diffusion-free and gap-free decision-making mechanisms are $\Pi_2$-complete and $\Pi_3$-complete, respectively.
arXiv Detail & Related papers (2025-07-03T12:43:38Z) - Responsibility in a Multi-Value Strategic Setting [12.143925288392166]
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
arXiv Detail & Related papers (2024-10-22T17:51:13Z) - From decision aiding to the massive use of algorithms: where does the responsibility stand? [0.0]
We show how the fact that they cannot embrace the full range of situations of use and their consequences leads to an unreachable limit.
On the other hand, using technology is never free of responsibility, even if there are also limits to be characterised.
The article is structured so as to show how these limits have gradually evolved, leaving issues unconsidered and responsibility unshared.
arXiv Detail & Related papers (2024-06-19T01:10:34Z) - Principal-Agent Reward Shaping in MDPs [50.914110302917756]
Principal-agent problems arise when one party acts on behalf of another, leading to conflicts of interest.
We study a two-player Stackelberg game where the principal and the agent have different reward functions, and the agent chooses an MDP policy for both players.
Our results cover trees and deterministic decision processes with a finite horizon.
arXiv Detail & Related papers (2023-12-30T18:30:44Z) - Responsibility in Extensive Form Games [1.4104545468525629]
Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for such settings that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in each possible situation, this gap does not exist if higher-order responsibility is taken into account.
arXiv Detail & Related papers (2023-12-12T10:41:17Z) - Unravelling Responsibility for AI [0.8472029664133528]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This paper presents a conceptual framework of responsibility, accompanied by a graphical notation and a general methodology. It unravels the concept of responsibility to clarify the different possibilities of who is responsible for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z) - Online Learning under Budget and ROI Constraints via Weak Adaptivity [57.097119428915796]
Existing primal-dual algorithms for constrained online learning problems rely on two fundamental assumptions.
We show how such assumptions can be circumvented by endowing standard primal-dual templates with weakly adaptive regret minimizers.
We prove the first best-of-both-worlds no-regret guarantees which hold in absence of the two aforementioned assumptions.
arXiv Detail & Related papers (2023-02-02T16:30:33Z) - A Unifying Framework for Online Optimization with Long-Term Constraints [62.35194099438855]
We study online learning problems in which a decision maker has to take a sequence of decisions subject to $m$ long-term constraints.
The goal is to maximize their total reward, while at the same time achieving small cumulative violation across the $T$ rounds.
We present the first best-of-both-worlds type algorithm for this general class of problems, with no-regret guarantees both in the case in which rewards and constraints are selected according to an unknown model, and in the case in which they are selected at each round by an adversary.
arXiv Detail & Related papers (2022-09-15T16:59:19Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Dare not to Ask: Problem-Dependent Guarantees for Budgeted Bandits [66.02233330016435]
We provide problem-dependent guarantees on both the regret and the amount of feedback requested.
We present a new algorithm called BuFALU for which we derive problem-dependent regret and cumulative feedback bounds.
arXiv Detail & Related papers (2021-10-12T03:24:57Z) - Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems [0.0]
We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms others has yet to be found.
arXiv Detail & Related papers (2020-07-09T13:19:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.