Who Gets What, According to Whom? An Analysis of Fairness Perceptions in
Service Allocation
- URL: http://arxiv.org/abs/2105.04452v1
- Date: Mon, 10 May 2021 15:31:22 GMT
- Authors: Jacqueline Hannan, Huei-Yen Winnie Chen, Kenneth Joseph
- Abstract summary: We experimentally explore five novel research questions at the intersection of the "Who," "What," and "How" of fairness perceptions.
Our results suggest that the "Who" and "What," at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness research has traditionally been linked to the
disciplines of philosophy, ethics, and economics, where notions of fairness are
prescriptive and seek objectivity. Increasingly, however, scholars are turning
to the study of what different people perceive to be fair, and how these
perceptions can or should help to shape the design of machine learning,
particularly in the policy realm. The present work experimentally explores five
novel research questions at the intersection of the "Who," "What," and "How" of
fairness perceptions. Specifically, we present the results of a multi-factor
conjoint analysis study that quantifies the effects of the specific context in
which a question is asked, the framing of the given question, and who is
answering it. Our results broadly suggest that the "Who" and "What," at least,
matter in ways that 1) are not easily explained by any one theoretical
perspective and 2) have critical implications for how perceptions of fairness
should be measured and/or integrated into algorithmic decision-making systems.
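A multi-factor conjoint design of the kind the abstract describes is typically analyzed by regressing fairness ratings on dummy-coded profile attributes to estimate average marginal component effects. The sketch below illustrates this with Python; the attribute names, effect sizes, and data are all invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical binary conjoint attributes (names illustrative only):
# context    -- the allocation setting being judged (the "What")
# framing    -- how the fairness question is phrased (the "How")
# respondent -- a respondent characteristic (the "Who")
context = rng.integers(0, 2, n)
framing = rng.integers(0, 2, n)
respondent = rng.integers(0, 2, n)

# Simulated fairness ratings with assumed true effects of 0.4, -0.1, 0.3
rating = (3.0 + 0.4 * context - 0.1 * framing + 0.3 * respondent
          + rng.normal(0, 0.5, n))

# OLS regression recovers the average marginal effect of each attribute
X = np.column_stack([np.ones(n), context, framing, respondent])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(dict(zip(["intercept", "context", "framing", "respondent"],
               np.round(beta, 2))))
```

With randomized attribute assignment, each coefficient can be read as the average change in perceived fairness when that attribute flips, holding the others fixed.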
Related papers
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people better understand the principles of the real world.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and provide a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z) - (Unfair) Norms in Fairness Research: A Meta-Analysis [6.395584220342517]
We conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics.
Our investigation reveals two concerning trends: first, a US-centric perspective dominates throughout fairness research.
Second, fairness studies exhibit a widespread reliance on binary codifications of human identity.
arXiv Detail & Related papers (2024-06-17T17:14:47Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
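The two parity notions that paper reconciles can be made concrete: statistical parity compares positive-prediction rates across groups, while predictive parity compares precision (the accuracy of positive predictions). A minimal Python sketch, using toy data invented here rather than anything from the paper:

```python
import numpy as np

# Toy predictions and labels for two groups (illustrative data only)
group = np.array([0] * 6 + [1] * 6)
y_true = np.array([1, 0, 1, 1, 0, 0,  1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1,  1, 1, 0, 0, 0, 0])

def statistical_parity_gap(y_pred, group):
    # Difference in positive-prediction rates, P(Yhat=1 | A=0) - P(Yhat=1 | A=1)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def predictive_parity_gap(y_true, y_pred, group):
    # Difference in precision, P(Y=1 | Yhat=1, A=0) - P(Y=1 | Yhat=1, A=1)
    def precision(g):
        sel = (group == g) & (y_pred == 1)
        return y_true[sel].mean()
    return precision(0) - precision(1)

print(statistical_parity_gap(y_pred, group))        # rate gap between groups
print(predictive_parity_gap(y_true, y_pred, group)) # precision gap between groups
```

On this toy data the two gaps differ in sign, illustrating why the two criteria measure distinct aspects of fairness even though, as the paper argues, they are not mutually exclusive.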
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Developing a Philosophical Framework for Fair Machine Learning: Lessons
From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning which develops metrics and definitions of fairness cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z) - What-is and How-to for Fairness in Machine Learning: A Survey,
Reflection, and Perspective [13.124434298120494]
We review and reflect on various fairness notions previously proposed in machine learning literature.
We also consider the long-term impact induced by current predictions and decisions.
This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (which spectrum of fairness analysis is of interest) to fulfill the intended purpose.
arXiv Detail & Related papers (2022-06-08T18:05:46Z) - Fairness in Recommender Systems: Research Landscape and Future
Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - The FairCeptron: A Framework for Measuring Human Perceptions of
Algorithmic Fairness [1.4449464910072918]
The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making such as in ranking or classification.
The framework includes fairness scenario generation, fairness perception elicitation and fairness perception analysis.
An implementation of the FairCeptron framework is openly available, and it can easily be adapted to study perceptions of algorithmic fairness in other application contexts.
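The three stages the FairCeptron abstract names — scenario generation, perception elicitation, and perception analysis — can be pictured as a small pipeline. The Python sketch below is purely illustrative: the scenario attributes and the stand-in rating heuristic are invented here and do not reflect the actual FairCeptron implementation, which elicits ratings from human participants.

```python
import random
import statistics

def generate_scenarios(n, seed=0):
    # Stage 1: generate allocation scenarios (attributes are hypothetical)
    rng = random.Random(seed)
    return [{"applicant_score": rng.randint(1, 10),
             "decision": rng.choice(["accept", "reject"])}
            for _ in range(n)]

def elicit_perception(scenario):
    # Stage 2: stand-in for asking a human rater; here a toy heuristic rates
    # a scenario fair (5) when the decision matches the score, else unfair (1)
    fair = (scenario["decision"] == "accept") == (scenario["applicant_score"] >= 5)
    return 5 if fair else 1

def analyze(ratings):
    # Stage 3: aggregate fairness perception ratings
    return {"mean": statistics.mean(ratings), "n": len(ratings)}

ratings = [elicit_perception(s) for s in generate_scenarios(100)]
print(analyze(ratings))
```

In an actual study, stage 2 would present each generated scenario to participants and record their responses; only the pipeline shape is meant to carry over.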
arXiv Detail & Related papers (2021-02-08T10:47:24Z) - On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.