Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness
- URL: http://arxiv.org/abs/2303.17555v2
- Date: Fri, 21 Jul 2023 02:20:39 GMT
- Title: Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness
- Authors: Anaelia Ovalle, Arjun Subramonian, Vagrant Gautam, Gilbert Gee,
Kai-Wei Chang
- Abstract summary: Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
- Score: 55.037030060643126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intersectionality is a critical framework that, through inquiry and praxis,
allows us to examine how social inequalities persist through domains of
structure and discipline. Given AI fairness' raison d'être of "fairness", we
argue that adopting intersectionality as an analytical framework is pivotal to
effectively operationalizing fairness. Through a critical review of how
intersectionality is discussed in 30 papers from the AI fairness literature, we
deductively and inductively: 1) map how intersectionality tenets operate within
the AI fairness paradigm and 2) uncover gaps between the conceptualization and
operationalization of intersectionality. We find that researchers
overwhelmingly reduce intersectionality to optimizing for fairness metrics over
demographic subgroups. They also fail to discuss their social context, and
when mentioning power, they mostly situate it only within the AI pipeline. We: 3)
outline and assess the implications of these gaps for critical inquiry and
praxis, and 4) provide actionable recommendations for AI fairness researchers
to engage with intersectionality in their work by grounding it in AI
epistemology.
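To make the critique above concrete, here is a minimal sketch (not from the paper; all column names and data are hypothetical) of the narrow operationalization the authors find dominant: computing a demographic-parity gap over enumerated intersectional subgroups.

```python
# Minimal sketch of "fairness metrics over demographic subgroups":
# the reductive operationalization of intersectionality the review critiques.
# Column names and data are hypothetical, for illustration only.
import pandas as pd

def subgroup_parity_gap(df: pd.DataFrame, pred_col: str, attrs: list) -> float:
    """Largest gap in positive-prediction rates across the intersectional
    subgroups defined by the Cartesian product of the attribute columns."""
    rates = df.groupby(attrs)[pred_col].mean()  # P(y_hat = 1 | subgroup)
    return float(rates.max() - rates.min())

# Toy predictions with two protected attributes (hypothetical).
df = pd.DataFrame({
    "y_hat":  [1, 0, 1, 1, 0, 1, 0, 0],
    "race":   ["a", "a", "b", "b", "a", "b", "a", "b"],
    "gender": ["f", "f", "f", "m", "m", "m", "f", "m"],
})
print(subgroup_parity_gap(df, "y_hat", ["race", "gender"]))  # 1.0 on this toy data
```

As the review argues, such a metric treats intersectionality as a subgroup-enumeration problem and says nothing about the social context or structures of power in which the system operates.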
Related papers
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Does Explainable AI Have Moral Value? [0.0]
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z)
- Intersectional Inquiry, on the Ground and in the Algorithm [1.0923877073891446]
We argue that methods in this field must account for intersections of social difference, such as race, class, ethnicity, culture, and disability.
We consider the complexities of bringing together computational and qualitative methods in an intersectional methodological approach.
arXiv Detail & Related papers (2023-08-29T23:43:58Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the cases of three controversial territories: Crimea, the West Bank, and Transnistria, comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches [0.0]
This survey article assesses and compares critiques of current fairness-enhancing technical interventions in machine learning (ML).
It draws from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies.
The article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
arXiv Detail & Related papers (2022-05-06T14:27:57Z)
- Causal intersectionality for fair ranking [14.570546164100618]
We make the application of intersectionality in fair machine learning explicit, connected to important real-world effects and domain knowledge, and transparent about technical limitations.
We experimentally evaluate our approach on real and synthetic datasets, exploring its behaviour under different structural assumptions.
arXiv Detail & Related papers (2020-06-15T18:57:46Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.