Appropriate Fairness Perceptions? On the Effectiveness of Explanations
in Enabling People to Assess the Fairness of Automated Decision Systems
- URL: http://arxiv.org/abs/2108.06500v1
- Date: Sat, 14 Aug 2021 09:39:59 GMT
- Title: Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems
- Authors: Jakob Schoeffer and Niklas Kuehl
- Abstract summary: We argue that for an effective explanation, perceptions of fairness should increase if and only if the underlying ADS is fair.
In this in-progress work, we introduce the desideratum of appropriate fairness perceptions, propose a novel study design for evaluating it, and outline next steps towards a comprehensive experiment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is often argued that one goal of explaining automated decision systems
(ADS) is to facilitate positive perceptions (e.g., fairness or trustworthiness)
of users towards such systems. This viewpoint, however, makes the implicit
assumption that a given ADS is fair and trustworthy to begin with. If the ADS
issues unfair outcomes, then one might expect that explanations regarding the
system's workings will reveal its shortcomings and, hence, lead to a decrease
in fairness perceptions. Consequently, we suggest that it is more meaningful to
evaluate explanations against their effectiveness in enabling people to
appropriately assess the quality (e.g., fairness) of an associated ADS. We
argue that for an effective explanation, perceptions of fairness should
increase if and only if the underlying ADS is fair. In this in-progress work,
we introduce the desideratum of appropriate fairness perceptions, propose a
novel study design for evaluating it, and outline next steps towards a
comprehensive experiment.
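As a concrete illustration, the "if and only if" desideratum could be operationalized along the following lines: label the ADS fair or unfair by some group-fairness check, then test whether the perception shift after seeing an explanation points in the matching direction. This is a minimal, hypothetical sketch; the statistical-parity check, the threshold `tau`, and all function names are assumptions for illustration, not the authors' study design.

```python
def statistical_parity_gap(decisions, group):
    # Absolute difference in positive-decision rates between two groups;
    # a small gap is one (contestable) operationalization of a "fair" ADS.
    rates = {}
    for g in (0, 1):
        members = [d for d, gg in zip(decisions, group) if gg == g]
        rates[g] = sum(members) / len(members)
    return abs(rates[0] - rates[1])

def perceptions_appropriate(gap, perception_shift, tau=0.05):
    # Desideratum: fairness perceptions should rise iff the ADS is fair,
    # i.e., a positive average shift after explanation exactly when the
    # parity gap falls below the (assumed) tolerance tau.
    is_fair = gap <= tau
    return (perception_shift > 0) == is_fair
```

For example, a system approving group 0 at a 0.5 rate and group 1 at a 1.0 rate has a gap of 0.5; if explanations nonetheless raised perceived fairness, the perceptions would be deemed inappropriate under this check.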
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Understanding Fairness in Recommender Systems: A Healthcare Perspective [0.18416014644193066]
This paper explores the public's comprehension of fairness in healthcare recommendations.
We conducted a survey where participants selected from four fairness metrics.
Results suggest that a one-size-fits-all approach to fairness may be insufficient.
arXiv Detail & Related papers (2024-09-05T19:59:42Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system that uses the enforced fairness to measure each feature's influence on the bias.
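In one dimension, the Wasserstein-2 barycenter used for this kind of fair-score repair reduces to averaging the groups' quantile functions pointwise. The following is a minimal illustration of that one-dimensional, equal-weight special case; the function names and discretization are assumptions, not the paper's implementation.

```python
def quantile(sorted_xs, q):
    # Linear-interpolation quantile of a pre-sorted sample, q in [0, 1].
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    frac = pos - lo
    return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

def barycenter_1d(groups, n=101):
    # In 1-D, the Wasserstein-2 barycenter of equally weighted
    # distributions is the pointwise average of their quantile functions;
    # repaired "fair" scores map each group onto this common distribution.
    sorted_groups = [sorted(g) for g in groups]
    qs = [i / (n - 1) for i in range(n)]
    return [sum(quantile(g, q) for g in sorted_groups) / len(groups)
            for q in qs]
```

For two degenerate score distributions at 0 and 1, the barycenter is the constant 0.5, i.e., both groups are mapped to the midpoint.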
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
arXiv Detail & Related papers (2022-05-11T20:06:03Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
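A producer-side group-exposure metric of this flavor can be sketched as follows, assuming the standard logarithmic position discount from browsing models; the function names and normalization are illustrative and not the paper's exact formalization.

```python
import math

def group_exposure(rankings, item_group, n_groups):
    # Exposure of an item decays with its rank position (log discount,
    # as in position-based browsing models); aggregate per producer
    # group across all served rankings, then normalize to shares.
    exposure = [0.0] * n_groups
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            exposure[item_group[item]] += 1.0 / math.log2(pos + 2)
    total = sum(exposure)
    return [e / total for e in exposure]
```

Comparing these exposure shares against, e.g., the groups' relevance or merit shares would flag a systemic producer-side disparity rather than harm to any individual item.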
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion [1.116812194101501]
Decision-support systems have been found to be discriminatory in the context of many practical deployments.
We propose a new fairness notion based on the principle of non-comparative justice.
We show that the proposed fairness notion also provides guarantees in terms of comparative fairness notions.
arXiv Detail & Related papers (2020-09-09T16:04:41Z)
- Exploring User Opinions of Fairness in Recommender Systems [13.749884072907163]
We ask users what their ideas of fair treatment in recommendation might be.
We analyze what might cause discrepancies or changes in users' opinions towards fairness.
arXiv Detail & Related papers (2020-03-13T19:44:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.