Responsibility in Extensive Form Games
- URL: http://arxiv.org/abs/2312.07637v1
- Date: Tue, 12 Dec 2023 10:41:17 GMT
- Title: Responsibility in Extensive Form Games
- Authors: Qi Shi
- Abstract summary: Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for such settings that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in every possible situation, this gap does not exist if higher-order responsibility is taken into account.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two different forms of responsibility, counterfactual and seeing-to-it, have
been extensively discussed in philosophy and AI in the context of a single
agent or multiple agents acting simultaneously. Although the generalisation of
counterfactual responsibility to a setting where multiple agents act in some
order is relatively straightforward, the same cannot be said about seeing-to-it
responsibility. Two versions of seeing-to-it modality applicable to such
settings have been proposed in the literature. Neither of them perfectly
captures the intuition of responsibility. This paper proposes a definition of
seeing-to-it responsibility for such settings that amalgamates the two
modalities.
This paper shows that the newly proposed notion of responsibility and
counterfactual responsibility are not definable through each other and studies
the responsibility gap for these two forms of responsibility. It shows that
although these two forms of responsibility are not enough to ascribe
responsibility in every possible situation, this gap does not exist if
higher-order responsibility is taken into account.
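The counterfactual form of responsibility discussed in the abstract can be illustrated on a toy extensive-form game. The sketch below is not the paper's formal definitions; the tree, the `can_avoid` test, and the `harm` proposition are illustrative assumptions. An agent is treated as counterfactually responsible for an outcome only if it had a strategy that would have guaranteed avoiding that outcome.

```python
# Illustrative sketch (not the paper's formal definitions): a toy
# extensive-form game as a tree, plus a naive counterfactual test.

class Node:
    def __init__(self, player=None, children=None, outcome=None):
        self.player = player            # agent to move (None at a leaf)
        self.children = children or {}  # action label -> child Node
        self.outcome = outcome          # set of propositions true at a leaf

def can_avoid(node, agent, bad):
    """True iff `agent` has a strategy from `node` guaranteeing that the
    proposition `bad` never holds, however the other agents act."""
    if node.outcome is not None:                     # leaf: check the outcome
        return bad not in node.outcome
    if node.player == agent:                         # agent picks the best action
        return any(can_avoid(c, agent, bad) for c in node.children.values())
    return all(can_avoid(c, agent, bad)              # others may act arbitrarily
               for c in node.children.values())

# Agent 1 moves first, then agent 2; 'harm' occurs unless agent 1 plays 'safe'.
game = Node(player=1, children={
    'safe':  Node(outcome=set()),
    'risky': Node(player=2, children={
        'l': Node(outcome={'harm'}),
        'r': Node(outcome={'harm'}),
    }),
})

print(can_avoid(game, 1, 'harm'))                     # True
print(can_avoid(game.children['risky'], 2, 'harm'))   # False
```

In this toy tree, agent 1 could have prevented `harm` (by playing `safe`), so it passes the counterfactual test, while agent 2, moving after `risky`, could not. This is exactly the kind of sequential setting where, as the abstract notes, generalising seeing-to-it responsibility is less straightforward.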
Related papers
- Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z) - Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires the context related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z) - Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, act, and the effect they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z) - Actual Causality and Responsibility Attribution in Decentralized
Partially Observable Markov Decision Processes [22.408657774650358]
We study these concepts under a widely used framework for multi-agent sequential decision making under uncertainty.
Actual causality focuses on specific outcomes and aims to identify decisions (actions) that were critical in realizing an outcome of interest.
Responsibility attribution is complementary and aims to identify the extent to which decision makers (agents) are responsible for this outcome.
arXiv Detail & Related papers (2022-04-01T09:22:58Z) - Consent as a Foundation for Responsible Autonomy [15.45515784064555]
This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time.
It considers settings where decision making by agents impinges upon the outcomes perceived by other agents.
arXiv Detail & Related papers (2022-03-22T02:25:27Z) - Asking It All: Generating Contextualized Questions for any Semantic Role [56.724302729493594]
We introduce the task of role question generation, whose input is a predicate mention and a passage.
We develop a two-stage model for this task, which first produces a context-independent question prototype for each role.
Our evaluation demonstrates that we generate diverse and well-formed questions for a large, broad-coverage set of predicates and roles.
arXiv Detail & Related papers (2021-09-10T12:31:14Z) - Responsibility Management through Responsibility Networks [3.1291878216258064]
We deploy the Internet of Responsibilities (IoR) for responsibility management.
Through the building of the IoR framework, hierarchical responsibility management, automated responsibility evaluation at all levels, and efficient responsibility perception are achieved.
arXiv Detail & Related papers (2021-02-14T21:06:33Z) - Degrees of individual and groupwise backward and forward responsibility
in extensive-form games with ambiguity, and their application to social
choice problems [0.0]
We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms others has yet to be found.
arXiv Detail & Related papers (2020-07-09T13:19:13Z) - On the Relationship Between Active Inference and Control as Inference [62.997667081978825]
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem.
arXiv Detail & Related papers (2020-06-23T13:03:58Z) - Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement of model interpretations to be faithful is vague and incomplete.
We identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.