Rethinking Fairness: An Interdisciplinary Survey of Critiques of
Hegemonic ML Fairness Approaches
- URL: http://arxiv.org/abs/2205.04460v1
- Date: Fri, 6 May 2022 14:27:57 GMT
- Title: Rethinking Fairness: An Interdisciplinary Survey of Critiques of
Hegemonic ML Fairness Approaches
- Authors: Lindsay Weinberg
- Abstract summary: This survey article assesses and compares critiques of current fairness-enhancing technical interventions into machine learning (ML).
It draws from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies.
The article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This survey article assesses and compares existing critiques of current
fairness-enhancing technical interventions into machine learning (ML) that draw
from a range of non-computing disciplines, including philosophy, feminist
studies, critical race and ethnic studies, legal studies, anthropology, and
science and technology studies. It bridges epistemic divides in order to offer
an interdisciplinary understanding of the possibilities and limits of hegemonic
computational approaches to ML fairness for producing just outcomes for
society's most marginalized. The article is organized according to nine major
themes of critique wherein these different fields intersect: 1) how "fairness"
in AI fairness research gets defined; 2) how problems for AI systems to address
get formulated; 3) the impacts of abstraction on how AI tools function and its
propensity to lead to technological solutionism; 4) how racial classification
operates within AI fairness research; 5) the use of AI fairness measures to
avoid regulation and engage in ethics washing; 6) an absence of participatory
design and democratic deliberation in AI fairness considerations; 7) data
collection practices that entrench "bias," are non-consensual, and lack
transparency; 8) the predatory inclusion of marginalized groups into AI
systems; and 9) a lack of engagement with AI's long-term social and ethical
outcomes. Drawing from these critiques, the article concludes by imagining
future ML fairness research directions that actively disrupt entrenched power
dynamics and structural injustices in society.
Related papers
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor any commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a toolbox that helps practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.