The Fairness Fair: Bringing Human Perception into Collective
Decision-Making
- URL: http://arxiv.org/abs/2312.14402v1
- Date: Fri, 22 Dec 2023 03:06:24 GMT
- Title: The Fairness Fair: Bringing Human Perception into Collective
Decision-Making
- Authors: Hadi Hosseini
- Abstract summary: We argue that fair solutions should not only be deemed desirable by social planners (designers) but should also be governed by human and societal cognition.
We discuss how achieving this goal requires a broad transdisciplinary approach ranging from computing and AI to behavioral economics and human-AI interaction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness is one of the most desirable societal principles in collective
decision-making. It has been extensively studied in the past decades for its
axiomatic properties and has received substantial attention from the multiagent
systems community in recent years for its theoretical and computational aspects
in algorithmic decision-making. However, these studies are often not rich
enough to capture the intricacies of human perception of fairness given the
ambivalent nature of real-world problems. We argue that fair solutions should
not only be deemed desirable by social planners (designers) but should also be
governed by human and societal cognition, consider perceived outcomes based on
human judgement, and be verifiable. We discuss how achieving this goal
requires a broad transdisciplinary approach ranging from computing and AI to
behavioral economics and human-AI interaction. In doing so, we identify
shortcomings and long-term challenges of the current literature of fair
division, describe recent efforts in addressing them, and more importantly,
highlight a series of open research directions.
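The abstract grounds its argument in the fair-division literature, whose central axiomatic criteria include envy-freeness. As an illustrative sketch only (not code from the paper, and assuming additive valuations), an envy-freeness check can be written as:

```python
# Illustrative sketch of envy-freeness, a standard fair-division criterion:
# an allocation is envy-free if no agent values another agent's bundle
# strictly more than their own (here, under additive valuations).

def is_envy_free(valuations, allocation):
    """valuations[i][g]: agent i's value for item g (additive).
    allocation[i]: list of item indices assigned to agent i."""
    def bundle_value(i, bundle):
        return sum(valuations[i][g] for g in bundle)
    n = len(allocation)
    return all(
        bundle_value(i, allocation[i]) >= bundle_value(i, allocation[j])
        for i in range(n)
        for j in range(n)
    )

# Two agents, three items: agent 0 holds items 0 and 2, agent 1 holds item 1.
vals = [[5, 3, 2], [1, 6, 1]]
alloc = [[0, 2], [1]]
print(is_envy_free(vals, alloc))  # True: each agent prefers its own bundle
```

The paper's point is precisely that such axiomatic checks, while computationally clean, do not by themselves capture how humans perceive the resulting allocation as fair.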
Related papers
- (Unfair) Norms in Fairness Research: A Meta-Analysis [6.395584220342517]
We conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics.
Our investigation reveals two concerning trends: first, a US-centric perspective dominates fairness research.
Second, fairness studies exhibit a widespread reliance on binary codifications of human identity.
arXiv Detail & Related papers (2024-06-17T17:14:47Z) - The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs)
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical role of addressing biases in the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - A Survey on Intersectional Fairness in Machine Learning: Notions,
Mitigation, and Challenges [11.885166133818819]
Adoption of Machine Learning systems has led to increased concerns about fairness implications.
We present a taxonomy for intersectional notions of fairness and mitigation.
We identify the key challenges and provide researchers with guidelines for future directions.
arXiv Detail & Related papers (2023-05-11T16:49:22Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Fairness in Recommender Systems: Research Landscape and Future
Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review
of the Empirical Literature [0.0]
Algorithmic decision-making (ADM) increasingly shapes people's daily lives.
A human-centric approach demanded by scholars and policymakers requires taking people's fairness perceptions into account.
We provide a comprehensive, systematic literature review of the existing empirical insights on perceptions of algorithmic fairness.
arXiv Detail & Related papers (2021-03-22T17:12:45Z) - Joint Optimization of AI Fairness and Utility: A Human-Centered Approach [45.04980664450894]
We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives.
We propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
arXiv Detail & Related papers (2020-02-05T03:31:48Z) - On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
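Several of the entries above (notably the joint fairness-utility optimization work) frame fairness as an explicit tradeoff governed by elicited stakeholder preferences. A minimal illustrative sketch of that idea, with all names, weights, and numbers hypothetical rather than taken from any cited paper:

```python
# Hypothetical sketch: scoring candidate decision vectors by a
# policymaker-chosen tradeoff between raw utility and a fairness penalty
# (here, the gap in positive-outcome rates between two groups).

def parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups 0 and 1."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

def tradeoff_score(decisions, groups, utility, weight):
    """Higher is better; `weight` encodes the elicited fairness preference."""
    return utility - weight * parity_gap(decisions, groups)

groups = [0, 0, 1, 1]
fair = [1, 0, 1, 0]      # equal positive rates across groups
unfair = [1, 1, 0, 0]    # all positive decisions go to group 0

# With weight 1.0, the fairer option wins despite lower raw utility.
print(tradeoff_score(fair, groups, utility=0.8, weight=1.0))
print(tradeoff_score(unfair, groups, utility=0.9, weight=1.0))
```

The weight here stands in for the elicited preference that the cited framework argues should come from human policymakers rather than be fixed by the system designer.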
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.