Algorithmic Fairness and Structural Injustice: Insights from Feminist
Political Philosophy
- URL: http://arxiv.org/abs/2206.00945v1
- Date: Thu, 2 Jun 2022 09:18:03 GMT
- Title: Algorithmic Fairness and Structural Injustice: Insights from Feminist
Political Philosophy
- Authors: Atoosa Kasirzadeh
- Abstract summary: 'Algorithmic fairness' aims to mitigate harmful biases in data-driven algorithms.
The perspectives of feminist political philosophers on social justice have been largely neglected.
This paper brings some key insights of feminist political philosophy to algorithmic fairness.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data-driven predictive algorithms are widely used to automate and guide
high-stakes decision-making such as bail and parole recommendations, medical
resource distribution, and mortgage allocation. Nevertheless, harmful outcomes
biased against vulnerable groups have been reported. The growing research field
known as 'algorithmic fairness' aims to mitigate these harmful biases. Its
primary methodology consists in proposing mathematical metrics to address the
social harms resulting from an algorithm's biased outputs. The metrics are
typically motivated by -- or substantively rooted in -- ideals of distributive
justice, as formulated by political and legal philosophers. The perspectives of
feminist political philosophers on social justice, by contrast, have been
largely neglected. Some feminist philosophers have criticized the paradigm of
distributive justice and have proposed corrective amendments to surmount its
limitations. The present paper brings some key insights of feminist political
philosophy to algorithmic fairness. The paper has three goals. First, I show
that algorithmic fairness does not accommodate structural injustices in its
current scope. Second, I defend the relevance of structural injustices -- as
pioneered in the contemporary philosophical literature by Iris Marion Young --
to algorithmic fairness. Third, I take some steps in developing the paradigm of
'responsible algorithmic fairness' to correct for errors in the current scope
and implementation of algorithmic fairness.
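To make the abstract's point concrete: the "mathematical metrics" it refers to typically formalize a distributive ideal, such as equalizing the rate of favorable outcomes across groups. The sketch below is a hypothetical illustration of one such metric (statistical parity difference), not code or data from the paper itself.

```python
def statistical_parity_difference(y_pred, group):
    """Difference in positive-outcome rates between two groups (labeled 0 and 1).

    A value of 0 means the algorithm grants favorable outcomes
    (e.g. parole, a mortgage) at equal rates to both groups.
    """
    rate = {}
    for g in (0, 1):
        # Collect predictions for members of group g and compute their approval rate.
        outcomes = [y for y, gr in zip(y_pred, group) if gr == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return rate[1] - rate[0]

# Hypothetical loan decisions (1 = approved) for two demographic groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # -0.5: group 1 approved less often
```

Metrics of this kind encode a purely distributive ideal (equalizing allocation rates among individuals), which is exactly the scope the paper argues leaves structural injustices unaddressed.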
Related papers
- What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z) - What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning [14.428360876120333]
We argue that fairness measures are particularly sensitive to Goodhart's law.
We present a framework for moral reasoning about the justification of fairness metrics.
arXiv Detail & Related papers (2022-02-17T09:26:39Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Impossibility of What? Formal and Substantive Equality in Algorithmic
Fairness [3.42658286826597]
I argue that the dominant, "formal" approach to algorithmic fairness is ill-equipped as a framework for pursuing equality.
I propose an alternative: a "substantive" approach to algorithmic fairness that centers opposition to social hierarchies.
The distinction between formal and substantive algorithmic fairness is exemplified by each approach's responses to the "impossibility of fairness".
arXiv Detail & Related papers (2021-07-09T19:29:57Z) - Distributive Justice and Fairness Metrics in Automated Decision-making:
How Much Overlap Is There? [0.0]
We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
arXiv Detail & Related papers (2021-05-04T12:09:26Z) - Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action"
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z) - Fairness Through Robustness: Investigating Robustness Disparity in Deep
Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.