A Feminist Account of Intersectional Algorithmic Fairness
- URL: http://arxiv.org/abs/2508.17944v1
- Date: Mon, 25 Aug 2025 12:09:04 GMT
- Title: A Feminist Account of Intersectional Algorithmic Fairness
- Authors: Marie Mirsch, Laila Wegner, Jonas Strube, Carmen Leicht-Scholten
- Abstract summary: We propose Substantive Intersectional Algorithmic Fairness, extending Green's notion of substantive algorithmic fairness with insights from intersectional feminist theory. We introduce ten desiderata within the ROOF methodology to guide the design, assessment, and deployment of algorithmic systems. By bridging computational and social science perspectives, we provide actionable guidance for more equitable, inclusive, and context-sensitive intersectional algorithmic practices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Intersectionality has profoundly influenced research and political action by revealing how interconnected systems of privilege and oppression shape lived experiences, yet its integration into algorithmic fairness research remains limited. Existing approaches often rely on single-axis or formal subgroup frameworks that risk oversimplifying social realities and neglecting structural inequalities. We propose Substantive Intersectional Algorithmic Fairness, extending Green's (2022) notion of substantive algorithmic fairness with insights from intersectional feminist theory. Building on this foundation, we introduce ten desiderata within the ROOF methodology to guide the design, assessment, and deployment of algorithmic systems in ways that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata encourage reflection on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and ways of enhancing algorithmic systems' potential. Our approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary. By bridging computational and social science perspectives, we provide actionable guidance for more equitable, inclusive, and context-sensitive intersectional algorithmic practices.
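As an illustrative contrast to the single-axis and formal subgroup framings discussed in the abstract, the minimal sketch below (hypothetical data, column names, and groups; not an implementation of the ROOF methodology or its desiderata) compares per-attribute selection rates with rates computed at each intersection of two protected attributes.

```python
# Minimal sketch: single-axis vs. intersectional subgroup selection rates.
# Hypothetical data and column names; not the ROOF methodology.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["woman", "woman", "man", "man", "woman", "man"],
    "ethnicity": ["A", "B", "A", "B", "B", "A"],
    "selected":  [1, 0, 1, 1, 0, 1],
})

# Single-axis view: one rate per gender, one per ethnicity. This can mask
# disparities that only appear at the intersections.
print(df.groupby("gender")["selected"].mean())
print(df.groupby("ethnicity")["selected"].mean())

# Intersectional subgroup view: one rate (and group size) per combination.
intersectional = df.groupby(["gender", "ethnicity"])["selected"].agg(["mean", "size"])
print(intersectional)
```

Such a table only surfaces measured disparities; the desiderata proposed in the paper address the contextual and structural questions that a formal subgroup audit of this kind cannot answer on its own.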
Related papers
- Secondary Bounded Rationality: A Theory of How Algorithms Reproduce Structural Inequality in AI Hiring [0.174048653626208]
The article argues that AI systems inherit and amplify human cognitive and structural biases through technical and sociopolitical constraints.
We show how algorithmic processes transform historical inequalities, such as elite credential privileging and network homophily, into ostensibly meritocratic outcomes.
We propose mitigation strategies, including counterfactual fairness testing, capital-aware auditing, and regulatory interventions, to disrupt this self-reinforcing inequality.
arXiv Detail & Related papers (2025-07-12T10:03:20Z)
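One of the mitigation strategies named in the entry above, counterfactual fairness testing, can be sketched at its simplest as flipping a protected attribute while holding the remaining inputs fixed and measuring how much the model's score moves. The model and feature layout below are hypothetical, not the authors' auditing pipeline.

```python
# Minimal counterfactual fairness probe: score each input, then score the same
# input with the binary protected attribute flipped, and compare. Hypothetical
# model and feature layout; only checks direct dependence on the attribute.
import numpy as np

def counterfactual_gap(model, X, protected_col):
    """Mean absolute change in score when the binary protected column is flipped."""
    X_cf = X.copy()
    X_cf[:, protected_col] = 1 - X_cf[:, protected_col]  # flip 0 <-> 1
    return float(np.mean(np.abs(model(X) - model(X_cf))))

# Toy usage with a linear "model" whose weights are known.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 4)).astype(float)
weights = np.array([0.2, 0.1, 0.5, 0.0])              # last column is protected
model = lambda X: X @ weights
print(counterfactual_gap(model, X, protected_col=3))  # 0.0: no direct reliance
```

A genuine counterfactual fairness analysis would also propagate the flip through a causal model of the remaining features; the probe above detects only direct use of the attribute.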
- Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property [4.2894701473635966]
We argue that fairness cannot be reduced to purely technical constraints on models.
We examine the limitations of existing fairness measures through conceptual analysis and empirical illustrations.
We believe these findings will help bridge the gap between technical formalisation and social realities.
arXiv Detail & Related papers (2025-06-14T15:54:45Z)
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [91.86718720024825]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z)
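A minimal, purely illustrative reading of the tolerance notion in the entry above is a threshold on how far group-wise fairness outcomes may spread; the group names, rates, and tolerances below are hypothetical, and the survey's taxonomy is considerably richer.

```python
# Minimal sketch of a tolerance-style check: flag a metric when the spread of
# group-wise fairness outcomes exceeds an application-specific tolerance.
# Group names, values, and thresholds are hypothetical.
group_selection_rates = {"group_a": 0.41, "group_b": 0.35, "group_c": 0.28}

def within_tolerance(rates, tolerance):
    """True if the max-min spread of group outcomes is within tolerance."""
    spread = max(rates.values()) - min(rates.values())
    return spread <= tolerance

print(within_tolerance(group_selection_rates, tolerance=0.05))  # strict: False
print(within_tolerance(group_selection_rates, tolerance=0.15))  # lenient: True
```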
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from the qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verifying that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where the learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
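When the learning dynamics are known in closed form, the sampling idea mentioned in the entry above can be illustrated with a crude numerical probe: sample points on the boundary of a candidate set and check that the dynamics never point outward. The dynamics and box below are hypothetical toys, and this heuristic is not the paper's verified binary partitioning procedure.

```python
# Heuristic check that a candidate box is a trapping region for known dynamics
# dx/dt = f(x): on every boundary face the flow must not point outward.
# The dynamics f and the box are hypothetical stand-ins.
import numpy as np

def f(x):
    """Hypothetical contracting dynamics with an attractor at the origin."""
    return -0.5 * x

def box_seems_trapping(f, lo, hi, samples_per_face=200, seed=0):
    rng = np.random.default_rng(seed)
    d = len(lo)
    for i in range(d):
        for bound, sign in ((hi, +1), (lo, -1)):
            # Sample points on the face x_i = bound[i].
            pts = rng.uniform(lo, hi, size=(samples_per_face, d))
            pts[:, i] = bound[i]
            # Outward normal on this face is sign * e_i, so require sign * f_i <= 0.
            if np.any(sign * np.array([f(p)[i] for p in pts]) > 1e-12):
                return False
    return True

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print(box_seems_trapping(f, lo, hi))  # True for this contracting toy dynamics
```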
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in these settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
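A toy version of the linear-programming idea in the entry above might look like the following: choose selection probabilities (the marginals of a distribution over allocations) that maximize expected merit subject to an individual-fairness constraint tying similar merit estimates to similar probabilities. The merit values, Lipschitz constant, and selection budget are hypothetical, and this is not the paper's axiomatized framework.

```python
# Toy LP sketch: maximize expected merit of a randomized selection subject to
# an individual-fairness constraint that similar candidates get similar
# selection probabilities. Merits, lipschitz, and k are hypothetical.
import numpy as np
from scipy.optimize import linprog

merit = np.array([0.9, 0.85, 0.6, 0.3])  # noisy merit estimates
k, lipschitz = 2, 2.0                    # expected selections; fairness slack
n = len(merit)

# Pairwise constraints |p_i - p_j| <= lipschitz * |merit_i - merit_j|.
A_ub, b_ub = [], []
for i in range(n):
    for j in range(i + 1, n):
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        bound = lipschitz * abs(merit[i] - merit[j])
        A_ub.extend([row, -row])
        b_ub.extend([bound, bound])

res = linprog(c=-merit,                        # linprog minimizes, so negate
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, n)), b_eq=[k],  # k selections in expectation
              bounds=[(0.0, 1.0)] * n)
print(res.x)  # similarly-merited candidates get similar selection probabilities
```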
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- A Sociotechnical View of Algorithmic Fairness [16.184328505946763]
Algorithmic fairness has been framed as a newly emerging technology that mitigates systemic discrimination in automated decision-making.
We argue that fairness is an inherently social concept and that technologies for algorithmic fairness should therefore be approached through a sociotechnical lens.
arXiv Detail & Related papers (2021-09-27T21:17:16Z)
- Impossibility of What? Formal and Substantive Equality in Algorithmic Fairness [3.42658286826597]
I argue that the dominant, "formal" approach to algorithmic fairness is ill-equipped as a framework for pursuing equality.
I propose an alternative: a "substantive" approach to algorithmic fairness that centers opposition to social hierarchies.
The distinction between formal and substantive algorithmic fairness is exemplified by each approach's responses to the "impossibility of fairness".
arXiv Detail & Related papers (2021-07-09T19:29:57Z)
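The "impossibility of fairness" invoked in the entry above can be made concrete with the standard confusion-matrix identity FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR): when two groups have different base rates p, equal PPV (calibration) and equal FNR force unequal FPR except in degenerate cases. The base rates and error rates in this small check are illustrative only.

```python
# Numeric illustration of the "impossibility of fairness": with unequal base
# rates, equal PPV and equal FNR imply unequal false-positive rates.
def implied_fpr(base_rate, ppv, fnr):
    """FPR implied by FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2                # identical for both groups
print(implied_fpr(0.3, ppv, fnr))  # group with base rate 0.3 -> ~0.147
print(implied_fpr(0.5, ppv, fnr))  # group with base rate 0.5 -> ~0.343
```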
- Impact Remediation: Optimal Interventions to Reduce Inequality [10.806517393212491]
We develop a novel algorithmic framework for tackling pre-existing real-world disparities.
The purpose of our framework is to measure real-world disparities and discover optimal intervention policies.
In contrast to most work on optimal policy learning, we explore disparity reduction itself as an objective.
arXiv Detail & Related papers (2021-07-01T16:35:12Z)
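Treating disparity reduction itself as the objective, as the entry above describes, can be sketched with a simple budgeted allocation that always helps the currently worst-off group. This greedy heuristic, with hypothetical outcome levels and effect sizes, is only a stand-in for the paper's optimization framework.

```python
# Minimal sketch of disparity reduction as the objective: allocate a fixed
# budget of intervention units to groups so as to shrink the gap between the
# best- and worst-off groups. All numbers are hypothetical.
def allocate(outcomes, effect_per_unit, budget):
    """Greedily give each unit of intervention to the currently worst-off group."""
    outcomes = dict(outcomes)
    plan = {g: 0 for g in outcomes}
    for _ in range(budget):
        worst = min(outcomes, key=outcomes.get)
        outcomes[worst] += effect_per_unit[worst]
        plan[worst] += 1
    disparity = max(outcomes.values()) - min(outcomes.values())
    return plan, disparity

baseline = {"group_a": 0.62, "group_b": 0.48, "group_c": 0.55}
effect = {"group_a": 0.01, "group_b": 0.02, "group_c": 0.015}
print(allocate(baseline, effect, budget=5))  # most units go to group_b
```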
- Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features [72.72840552588134]
We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor in the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes.
arXiv Detail & Related papers (2020-06-10T18:34:45Z)
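The separation idea in the entry above can be illustrated with a plain penalty on the distance between class-conditional feature means; the sketch below omits the paper's attention mechanism entirely and uses hypothetical features and labels.

```python
# Minimal sketch of a class-separation penalty: encourage the mean latent
# features of different classes to stay far apart. Toy stand-in only; the
# paper's regularizer is attention-based and operates inside the network.
import numpy as np

def separation_penalty(features, labels):
    """Negative mean pairwise distance between class-mean feature vectors
    (adding this to a training loss pushes class means apart)."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = [np.linalg.norm(means[i] - means[j])
             for i in range(len(classes)) for j in range(i + 1, len(classes))]
    return -float(np.mean(dists))

rng = np.random.default_rng(0)
feats = rng.normal(size=(60, 8))          # hypothetical latent features
labels = rng.integers(0, 3, size=60)      # three fine-grained classes
print(separation_penalty(feats, labels))  # more negative = better separated
```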
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.