A Study on Fairness and Trust Perceptions in Automated Decision Making
- URL: http://arxiv.org/abs/2103.04757v1
- Date: Mon, 8 Mar 2021 13:57:31 GMT
- Title: A Study on Fairness and Trust Perceptions in Automated Decision Making
- Authors: Jakob Schoeffer, Yvette Machowski, Niklas Kuehl
- Abstract summary: We evaluate different attempts of explaining automated decision systems with respect to their effect on people's perceptions of fairness and trustworthiness towards the underlying mechanisms.
A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended and thoroughly discussed in the larger main study.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated decision systems are increasingly used for consequential decision making -- for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no understanding of how or why a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their sanity is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different approaches to explaining such systems with respect to their effect on people's perceptions of fairness and trustworthiness towards the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended, and thoroughly discussed in the larger main study.
Related papers
- Understanding Fairness in Recommender Systems: A Healthcare Perspective [0.18416014644193066]
This paper explores the public's comprehension of fairness in healthcare recommendations.
We conducted a survey where participants selected from four fairness metrics.
Results suggest that a one-size-fits-all approach to fairness may be insufficient.
arXiv Detail & Related papers (2024-09-05T19:59:42Z)
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here can be used to assess the fairness of automated decision systems as part of a more extensive accountability assessment.
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
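As a rough illustration of the idea behind such group-based confusion checks (not the paper's exact test or metric), one can compare each group's normalized confusion matrix against the pooled one. The function name, the total-variation-style aggregation, and the toy data below are assumptions made purely for this sketch.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_parity_gap(y_true, y_pred, groups):
    """Illustrative sketch: average total-variation-style distance between each
    group's normalized confusion matrix and the pooled confusion matrix.
    (An assumption for illustration, not the paper's exact definition.)"""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    pooled = confusion_matrix(y_true, y_pred, labels=labels, normalize="all")
    gaps = []
    for g in np.unique(groups):
        mask = groups == g
        cm_g = confusion_matrix(y_true[mask], y_pred[mask], labels=labels, normalize="all")
        gaps.append(np.abs(cm_g - pooled).sum() / 2)  # 0 = identical error profile
    return float(np.mean(gaps))

# Toy decisions for two demographic groups (hypothetical data)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(confusion_parity_gap(y_true, y_pred, groups))  # larger value = larger group disparity
```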
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
Non-transparent systems are prone to yield unfair outcomes because their sanity is challenging to assess and calibrate.
I aim to make the following three main contributions through my doctoral thesis.
arXiv Detail & Related papers (2022-04-29T18:31:04Z)
- Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems [0.0]
We argue that for an effective explanation, perceptions of fairness should increase if and only if the underlying ADS is fair.
In this in-progress work, we introduce the desideratum of appropriate fairness perceptions, propose a novel study design for evaluating it, and outline next steps towards a comprehensive experiment.
arXiv Detail & Related papers (2021-08-14T09:39:59Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
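The review above does not prescribe a single estimation method; as one minimal, hedged sketch of how prediction uncertainty can be surfaced and then used to defer or communicate a decision, the snippet below uses ensemble disagreement. The model choice, deferral threshold, and toy data are assumptions made only for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for a consequential decision task (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# One simple proxy for predictive uncertainty: disagreement among the trees.
cases = X[:5]
tree_votes = np.stack([tree.predict(cases) for tree in model.estimators_])
vote_share = tree_votes.mean(axis=0)            # fraction of trees voting "1"
uncertainty = 1.0 - np.abs(2 * vote_share - 1)  # 0 = unanimous, 1 = evenly split

for i, u in enumerate(uncertainty):
    decision = int(vote_share[i] > 0.5)
    action = "defer to human review" if u > 0.5 else "communicate automated decision"
    print(f"case {i}: decision={decision}, uncertainty={u:.2f} -> {action}")
```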
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.