The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
- URL: http://arxiv.org/abs/2302.02404v1
- Date: Sun, 5 Feb 2023 15:22:43 GMT
- Title: The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
- Authors: Brent Mittelstadt, Sandra Wachter, Chris Russell
- Abstract summary: This paper examines the causes and prevalence of levelling down across fairML.
We propose a first step towards substantive equality in fairML by design through enforcement of minimum acceptable harm thresholds.
- Score: 10.281644134255576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years fairness in machine learning (ML) has emerged as a highly
active area of research and development. Most define fairness in simple terms,
where fairness means reducing gaps in performance or outcomes between
demographic groups while preserving as much of the accuracy of the original
system as possible. This oversimplification of equality through fairness
measures is troubling. Many current fairness measures suffer from both fairness
and performance degradation, or "levelling down," where fairness is achieved by
making every group worse off, or by bringing better performing groups down to
the level of the worst off. When fairness can only be achieved by making
everyone worse off in material or relational terms through injuries of stigma,
loss of solidarity, unequal concern, and missed opportunities for substantive
equality, something would appear to have gone wrong in translating the vague
concept of 'fairness' into practice. This paper examines the causes and
prevalence of levelling down across fairML, and explores possible justifications
and criticisms based on philosophical and legal theories of equality and
distributive justice, as well as equality law jurisprudence. We find that
fairML does not currently engage in the type of measurement, reporting, or
analysis necessary to justify levelling down in practice. We propose a first
step towards substantive equality in fairML: "levelling up" systems by design
through enforcement of minimum acceptable harm thresholds, or "minimum rate
constraints," as fairness constraints. We likewise propose an alternative
harms-based framework to counter the oversimplified egalitarian framing
currently dominant in the field and push future discussion more towards
substantive equality opportunities and away from strict egalitarianism by
default. N.B. Shortened abstract, see paper for full abstract.
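To make the proposal concrete, here is a minimal sketch of what a minimum rate constraint could look like as a post-hoc check; the per-group true positive rate as the metric, the 0.80 floor, the function names, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_true_positive_rates(y_true, y_pred, groups):
    """True positive rate computed separately for each demographic group."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue  # no ground-truth positives in this group; TPR undefined
        rates[g] = float((y_pred[positives] == 1).mean())
    return rates

def meets_minimum_rate_constraint(y_true, y_pred, groups, floor=0.80):
    """Levelling up: every group must clear a minimum acceptable TPR,
    rather than merely matching the other groups' (possibly low) rate."""
    rates = group_true_positive_rates(y_true, y_pred, groups)
    return all(r >= floor for r in rates.values()), rates

# Toy data: the classifier serves group B far worse than group A.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

ok, rates = meets_minimum_rate_constraint(y_true, y_pred, groups)
print(rates)  # {'A': 1.0, 'B': 0.25}
print(ok)     # False -- group B falls below the 0.80 floor
```

An equalised-rates constraint would also be satisfied by the degenerate fix of dragging group A down to 0.25; the floor rules that solution out and can only be met by improving outcomes for group B, which is the levelling-up behaviour the abstract argues for.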
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package FairDream to detect inequalities and then to correct for them.
Our experiments show that FairDream fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds [8.223468651994352]
A growing body of literature in fairness-aware machine learning (fairML) aims to mitigate machine learning (ML)-related unfairness in automated decision-making (ADM).
However, the underlying concept of fairness is rarely discussed, leaving a significant gap between centuries of philosophical discussion and the recent adoption of the concept in the ML community.
We try to bridge this gap by formalizing a consistent concept of fairness and by translating the philosophical considerations into a formal framework for the training and evaluation of ML models in ADM systems.
arXiv Detail & Related papers (2022-05-19T15:37:26Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning [14.428360876120333]
We argue that fairness measures are particularly sensitive to Goodhart's law.
We present a framework for moral reasoning about the justification of fairness metrics.
arXiv Detail & Related papers (2022-02-17T09:26:39Z)
- Parity-based Cumulative Fairness-aware Boosting [7.824964622317634]
Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
We propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round.
Our experiments show that our approach achieves parity with respect to statistical parity, equal opportunity, and disparate mistreatment.
arXiv Detail & Related papers (2022-01-04T14:16:36Z)
- Impossibility of What? Formal and Substantive Equality in Algorithmic Fairness [3.42658286826597]
I argue that the dominant, "formal" approach to algorithmic fairness is ill-equipped as a framework for pursuing equality.
I propose an alternative: a "substantive" approach to algorithmic fairness that centers opposition to social hierarchies.
The distinction between formal and substantive algorithmic fairness is exemplified by each approach's response to the "impossibility of fairness".
arXiv Detail & Related papers (2021-07-09T19:29:57Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
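The entry above, like several others in this list, turns on the contrast between group fairness metrics such as statistical parity and individual fairness. As a point of reference, the following is a minimal sketch of both checks under their standard textbook definitions (Dwork et al.'s Lipschitz formulation for individual fairness); the Lipschitz constant, the Euclidean distance, and the toy data are illustrative assumptions, not the method of any paper listed here.

```python
import numpy as np

def statistical_parity_difference(y_pred, groups, a="A", b="B"):
    """Group fairness: absolute gap in positive-prediction rates between two groups."""
    return abs(float((y_pred[groups == a] == 1).mean())
               - float((y_pred[groups == b] == 1).mean()))

def individual_fairness_violations(scores, X, lipschitz=1.0):
    """Individual fairness in the Dwork et al. sense: similar individuals
    should get similar scores, i.e. |f(x_i) - f(x_j)| <= L * d(x_i, x_j)."""
    violations = []
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            distance = float(np.linalg.norm(X[i] - X[j]))
            if abs(scores[i] - scores[j]) > lipschitz * distance:
                violations.append((i, j))
    return violations

# Toy data: individuals 0 and 1 are near-identical but scored very differently.
X = np.array([[0.0, 1.0], [0.1, 1.0], [2.0, 0.0]])
scores = np.array([0.9, 0.2, 0.5])
y_pred = (scores > 0.5).astype(int)
groups = np.array(["A", "B", "B"])

print(statistical_parity_difference(y_pred, groups))  # 1.0
print(individual_fairness_violations(scores, X))      # [(0, 1)]
```

The two notions can disagree: a model can close the group-level gap while still treating near-identical individuals very differently, which is why the lack of a widely accepted similarity metric is flagged above as the main barrier to broader adoption of individual fairness.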