Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review
of the Empirical Literature
- URL: http://arxiv.org/abs/2103.12016v1
- Date: Mon, 22 Mar 2021 17:12:45 GMT
- Title: Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review
of the Empirical Literature
- Authors: Christopher Starke, Janine Baleis, Birte Keller, Frank Marcinkowski
- Abstract summary: Algorithmic decision-making (ADM) increasingly shapes people's daily lives.
A human-centric approach demanded by scholars and policymakers requires taking people's fairness perceptions into account.
We provide a comprehensive, systematic literature review of the existing empirical insights on perceptions of algorithmic fairness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic decision-making (ADM) increasingly shapes people's daily lives.
Given that such autonomous systems can cause severe harm to individuals and
social groups, fairness concerns have arisen. A human-centric approach demanded
by scholars and policymakers requires taking people's fairness perceptions into
account when designing and implementing ADM. We provide a comprehensive,
systematic literature review synthesizing the existing empirical insights on
perceptions of algorithmic fairness from 39 empirical studies spanning multiple
domains and scientific disciplines. Through thorough coding, we systemize the
current empirical literature along four dimensions: (a) algorithmic predictors,
(b) human predictors, (c) comparative effects (human decision-making vs.
algorithmic decision-making), and (d) consequences of ADM. While we identify
much heterogeneity around the theoretical concepts and empirical measurements
of algorithmic fairness, the insights come almost exclusively from
Western-democratic contexts. By advocating for more interdisciplinary research
adopting a society-in-the-loop framework, we hope our work will contribute to
fairer and more responsible ADM.
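For illustration only, a minimal sketch of how a single empirical study might be coded along the review's four dimensions (algorithmic predictors, human predictors, comparative effects, and consequences of ADM). The record structure, field names, and example values are hypothetical assumptions, not data or code from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical coding record for one empirical study, organized along the
# review's four dimensions. All fields and example values are illustrative.
@dataclass
class CodedStudy:
    citation: str
    algorithmic_predictors: List[str] = field(default_factory=list)  # (a) properties of the ADM system
    human_predictors: List[str] = field(default_factory=list)        # (b) characteristics of the perceiver
    comparative_effects: List[str] = field(default_factory=list)     # (c) human vs. algorithmic decision-making
    consequences: List[str] = field(default_factory=list)            # (d) downstream outcomes of ADM

# Hypothetical example entry (not taken from the reviewed studies)
example = CodedStudy(
    citation="Author et al. (20XX)",
    algorithmic_predictors=["explanation style"],
    human_predictors=["prior experience with ADM"],
    comparative_effects=["perceived fairness: human vs. algorithm"],
    consequences=["acceptance of the decision"],
)
```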
Related papers
- Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation [1.0470286407954037]
Concerns have been raised that machine learning (ML) models may be biased and perpetuate or exacerbate inequality.
We present a four-stage model of developing ML assessments and applying bias mitigation methods.
arXiv Detail & Related papers (2024-10-21T02:32:14Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z)
- The Fairness Fair: Bringing Human Perception into Collective Decision-Making [16.300744216179545]
We argue that fair solutions should not only be deemed desirable by social planners (designers) but also be governed by human and societal cognition.
We discuss how achieving this goal requires a broad transdisciplinary approach ranging from computing and AI to behavioral economics and human-AI interaction.
arXiv Detail & Related papers (2023-12-22T03:06:24Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that have been put forward in this area in recent years.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- A Framework of High-Stakes Algorithmic Decision-Making for the Public Sector Developed through a Case Study of Child-Welfare [3.739243122393041]
We develop a cohesive framework of algorithmic decision-making adapted for the public sector.
We conduct a case study of the algorithms in daily use within a child-welfare agency.
We propose guidelines for the design of high-stakes algorithmic decision-making tools in the public sector.
arXiv Detail & Related papers (2021-07-07T21:24:35Z)
- Impact Remediation: Optimal Interventions to Reduce Inequality [10.806517393212491]
We develop a novel algorithmic framework for tackling pre-existing real-world disparities.
The purpose of our framework is to measure real-world disparities and discover optimal intervention policies.
In contrast to most work on optimal policy learning, we explore disparity reduction itself as an objective.
arXiv Detail & Related papers (2021-07-01T16:35:12Z)