Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems
- URL: http://arxiv.org/abs/2007.07352v1
- Date: Thu, 9 Jul 2020 13:19:13 GMT
- Title: Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems
- Authors: Jobst Heitzig and Sarah Hiller
- Abstract summary: We present several different quantitative responsibility metrics that assess responsibility degrees in units of probability.
We use a framework based on an adapted version of extensive-form game trees and an axiomatic approach.
We find that while most properties one might desire of such responsibility metrics can be fulfilled by some variant, an optimal metric that clearly outperforms others has yet to be found.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world situations of ethical relevance, in particular those of
large-scale social choice such as mitigating climate change, involve not only
many agents whose decisions interact in complicated ways, but also various
forms of uncertainty, including quantifiable risk and unquantifiable ambiguity.
In such problems, an assessment of individual and groupwise moral
responsibility for ethically undesired outcomes or their responsibility to
avoid such is challenging and prone to the risk of under- or overdetermination
of responsibility. In contrast to existing approaches based on strict causation
or certain deontic logics that focus on a binary classification of
`responsible' vs `not responsible', we here present several different
quantitative responsibility metrics that assess responsibility degrees in units
of probability. For this, we use a framework based on an adapted version of
extensive-form game trees and an axiomatic approach that specifies a number of
potentially desirable properties of such metrics, and then test the developed
candidate metrics by their application to a number of paradigmatic social
choice situations. We find that while most properties one might desire of such
responsibility metrics can be fulfilled by some variant, an optimal metric that
clearly outperforms others has yet to be found.
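
As a rough illustration of what "responsibility degrees in units of probability" can mean on a game tree, the following is a minimal sketch, not the paper's actual metrics: it scores an agent's backward-looking responsibility as the amount by which the realized choice raised the probability of the ethically undesired outcome relative to the agent's best available alternative. All names (`Node`, `outcome_probability`, `backward_responsibility`) are illustrative assumptions, and ambiguity (imprecise probabilities) is not modeled here.

```python
# Illustrative sketch only: a toy backward-looking responsibility degree on a
# small extensive-form game tree, measured in units of probability.  This is
# NOT the paper's exact metric; names and the aggregation rule are assumptions.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Node:
    player: str | None = None                        # None for chance/terminal nodes
    children: dict[str, "Node"] = field(default_factory=dict)   # action -> child
    chance: dict[str, float] | None = None            # action -> probability (chance node)
    undesired: bool = False                            # terminal flag: undesired outcome


def outcome_probability(node: Node, strategy: dict[str, str]) -> float:
    """Probability of reaching an undesired outcome, given each player's action choice.
    For simplicity, each player is assumed to move at exactly one decision node."""
    if not node.children:                              # terminal node
        return 1.0 if node.undesired else 0.0
    if node.chance is not None:                        # chance node: average over nature's moves
        return sum(p * outcome_probability(node.children[a], strategy)
                   for a, p in node.chance.items())
    return outcome_probability(node.children[strategy[node.player]], strategy)


def backward_responsibility(root: Node, strategy: dict[str, str], player: str) -> float:
    """Toy responsibility degree: how much higher the probability of the undesired
    outcome is under the player's actual choice than under their best available
    alternative (0 if the actual choice was already optimal)."""
    actual = outcome_probability(root, strategy)
    best = min(outcome_probability(root, {**strategy, player: a})
               for a in _actions_of(root, player))
    return max(0.0, actual - best)


def _actions_of(node: Node, player: str) -> set[str]:
    """All action labels available to the player anywhere in the tree."""
    acts = set(node.children) if node.player == player else set()
    for child in node.children.values():
        acts |= _actions_of(child, player)
    return acts
```

A groupwise variant might, for instance, minimize over joint deviations of all members of a group, and forward-looking variants would evaluate strategies prospectively rather than after the fact; the paper develops and axiomatically compares several such refinements, which this toy sketch does not attempt to reproduce.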
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Measuring Responsibility in Multi-Agent Systems [1.5883812630616518]
We introduce a family of quantitative measures of responsibility in multi-agent planning.
We ascribe responsibility to agents for a given outcome via three metrics that link the probabilities of agents' behaviours to their responsibility.
An entropy-based measurement of responsibility is the first to capture the causal responsibility properties of outcomes over time.
arXiv Detail & Related papers (2024-10-31T18:45:34Z) - Responsibility in a Multi-Value Strategic Setting [12.143925288392166]
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI.
We present a model for responsibility attribution in a multi-agent, multi-value setting.
We show how considerations of responsibility can help an agent to select strategies that are in line with its values.
arXiv Detail & Related papers (2024-10-22T17:51:13Z) - Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
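
Read literally, the quoted definition of true criticality admits a simple formalization; the notation below is my own and not necessarily the paper's:

```latex
% Plausible reading of the summary above: true criticality at state s is the
% expected return G when following the policy \pi from s, minus the expected
% return when the next n actions are uniformly random before control reverts to \pi.
C_n(s) \;=\; \mathbb{E}\big[G \mid s_0 = s,\ \text{follow } \pi\big]
       \;-\; \mathbb{E}\big[G \mid s_0 = s,\ n\ \text{uniformly random actions, then } \pi\big]
```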
arXiv Detail & Related papers (2024-09-26T21:00:45Z) - Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z) - On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and develops a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Catastrophe, Compounding & Consistency in Choice [4.974890682815778]
Conditional value-at-risk (CVaR) precisely characterizes the influence that rare, catastrophic events can exert over decisions.
These examples can ground future experiments with the broader aim of characterizing risk attitudes.
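
For reference, since both this entry and the next rely on conditional value-at-risk, the standard (non-sequential) definition for a loss L at level alpha is given below; the cited papers study sequential and distributional refinements of this basic notion.

```latex
% Standard definitions for a loss L with a continuous distribution.
\mathrm{VaR}_\alpha(L) = \inf\{\ell \in \mathbb{R} : \Pr(L \le \ell) \ge \alpha\},
\qquad
\mathrm{CVaR}_\alpha(L) = \mathbb{E}\big[\, L \mid L \ge \mathrm{VaR}_\alpha(L) \,\big]
```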
arXiv Detail & Related papers (2021-11-12T16:33:06Z) - Two steps to risk sensitivity [4.974890682815778]
Conditional value-at-risk (CVaR) is a risk measure for modeling human and animal planning.
We adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers.
We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR.
arXiv Detail & Related papers (2021-11-12T16:27:47Z) - Robust Allocations with Diversity Constraints [65.3799850959513]
We show that the Nash Welfare rule that maximizes product of agent values is uniquely positioned to be robust when diversity constraints are introduced.
We also show that the guarantees achieved by Nash Welfare are nearly optimal within a widely studied class of allocation rules.
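
As a quick illustration of the Nash Welfare rule referenced above, here is a generic brute-force sketch of my own; it assumes additive valuations and omits the diversity constraints the cited paper actually studies.

```python
# Generic illustration of the Nash Welfare rule: among all assignments of items
# to agents, pick the one maximizing the product of agents' bundle values.
from itertools import product as cartesian
from math import prod


def nash_welfare_allocation(items, utilities):
    """Brute-force Nash Welfare.  `utilities[agent][item]` is the agent's value
    for the item; bundle values are additive."""
    agents = list(utilities)
    best_assignment, best_nw = None, float("-inf")
    for owners in cartesian(agents, repeat=len(items)):
        value = {a: 0.0 for a in agents}
        for item, owner in zip(items, owners):
            value[owner] += utilities[owner][item]
        nw = prod(value[a] for a in agents)
        if nw > best_nw:
            best_assignment, best_nw = dict(zip(items, owners)), nw
    return best_assignment, best_nw
```

For example, with items ["a", "b"] and utilities {"1": {"a": 2, "b": 1}, "2": {"a": 1, "b": 3}}, the rule gives "a" to agent "1" and "b" to agent "2" (Nash welfare 6), whereas giving both items to a single agent yields welfare 0.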
arXiv Detail & Related papers (2021-09-30T11:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.