Responsibility in Extensive Form Games
- URL: http://arxiv.org/abs/2312.07637v1
- Date: Tue, 12 Dec 2023 10:41:17 GMT
- Title: Responsibility in Extensive Form Games
- Authors: Qi Shi
- Abstract summary: Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for such settings that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in every possible situation, this gap does not exist if higher-order responsibility is taken into account.
- Score: 1.4104545468525629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two different forms of responsibility, counterfactual and seeing-to-it, have
been extensively discussed in philosophy and AI in the context of a single
agent or multiple agents acting simultaneously. Although the generalisation of
counterfactual responsibility to a setting where multiple agents act in some
order is relatively straightforward, the same cannot be said about seeing-to-it
responsibility. Two versions of seeing-to-it modality applicable to such
settings have been proposed in the literature. Neither of them perfectly
captures the intuition of responsibility. This paper proposes a definition of
seeing-to-it responsibility for such settings that amalgamates the two
modalities.
This paper shows that the newly proposed notion of responsibility and
counterfactual responsibility are not definable through each other and studies
the responsibility gap for these two forms of responsibility. It shows that
although these two forms of responsibility are not enough to ascribe
responsibility in every possible situation, this gap does not exist if
higher-order responsibility is taken into account.
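The paper's definitions are stated formally in a logical framework; as a rough illustrative sketch only (not the paper's actual definitions, and with hypothetical names), counterfactual responsibility in a finite extensive-form game can be read as: the bad outcome occurred, yet at some node on the actual play path where the agent moved, the agent could still have guaranteed that the outcome was avoided, whatever the other agents did later.

```python
# Illustrative sketch, NOT the paper's formalism: a node is either an
# outcome (a string) or a tuple (mover, {action: subtree}).

def can_avoid(node, player, bad):
    """True if `player` has a strategy from `node` that guarantees the
    outcome `bad` is never reached, no matter what others do."""
    if isinstance(node, str):
        return node != bad
    mover, children = node
    if mover == player:
        # the player may pick whichever action avoids `bad`
        return any(can_avoid(c, player, bad) for c in children.values())
    # other players might pick anything, so avoidance must hold everywhere
    return all(can_avoid(c, player, bad) for c in children.values())

def counterfactually_responsible(node, play, player, bad):
    """`player` is counterfactually responsible for `bad` along the
    actual `play` (a list of actions ending in `bad`) if, at some node
    on the path where it was their turn, they could still have
    guaranteed avoiding `bad`, but their chosen action gave that up."""
    for action in play:
        if isinstance(node, str):
            break
        mover, children = node
        if (mover == player
                and can_avoid(node, player, bad)
                and not can_avoid(children[action], player, bad)):
            return True
        node = children[action]
    return False

# Toy game: P1 moves first; choosing "R" ends badly outright, while
# after "L" it is P2 who decides between "safe" and "bad".
game = ("P1", {
    "L": ("P2", {"l": "safe", "r": "bad"}),
    "R": "bad",
})

# Actual play: P1 plays "L", P2 plays "r", reaching "bad".
# P2 is responsible (they could have guaranteed "safe"); P1 is not,
# since no action of P1 guaranteed avoiding "bad" against all of P2's choices.
```

Note that this only captures the counterfactual side; the seeing-to-it form discussed in the paper, and their amalgamation, require the modal machinery developed there.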
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Measuring Responsibility in Multi-Agent Systems [1.5883812630616518]
We introduce a family of quantitative measures of responsibility in multi-agent planning.
We ascribe responsibility to agents for a given outcome by probabilistically linking their behaviours to responsibility through three metrics.
An entropy-based measurement of responsibility is the first to capture the causal responsibility properties of outcomes over time.
arXiv Detail & Related papers (2024-10-31T18:45:34Z)
- Responsibility in a Multi-Value Strategic Setting [12.143925288392166]
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
arXiv Detail & Related papers (2024-10-22T17:51:13Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires context-related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, how they act, and the effects they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
- Consent as a Foundation for Responsible Autonomy [15.45515784064555]
This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time.
It considers settings where decision making by agents impinges upon the outcomes perceived by other agents.
arXiv Detail & Related papers (2022-03-22T02:25:27Z)
- Asking It All: Generating Contextualized Questions for any Semantic Role [56.724302729493594]
We introduce the task of role question generation: given a predicate mention and a passage, generate questions for each semantic role.
We develop a two-stage model for this task, which first produces a context-independent question prototype for each role.
Our evaluation demonstrates that we generate diverse and well-formed questions for a large, broad-coverage set of predicates and roles.
arXiv Detail & Related papers (2021-09-10T12:31:14Z)
- Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems [0.0]
We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms others has yet to be found.
arXiv Detail & Related papers (2020-07-09T13:19:13Z)
- Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement of model interpretations to be faithful is vague and incomplete.
We identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.