Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning
- URL: http://arxiv.org/abs/2202.08536v3
- Date: Tue, 2 Jul 2024 10:53:59 GMT
- Title: Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning
- Authors: Hilde Weerts, Lambèr Royakkers, Mykola Pechenizkiy
- Abstract summary: We argue that fairness measures are particularly sensitive to Goodhart's law.
We present a framework for moral reasoning about the justification of fairness metrics.
- Score: 14.428360876120333
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fairness-aware machine learning (fair-ml) techniques are algorithmic interventions designed to ensure that individuals who are affected by the predictions of a machine learning model are treated fairly. The problem is often posed as an optimization problem, where the objective is to achieve high predictive performance under a quantitative fairness constraint. However, any attempt to design a fair-ml algorithm must assume a world where Goodhart's law has an exception: when a fairness measure becomes an optimization constraint, it does not cease to be a good measure. In this paper, we argue that fairness measures are particularly sensitive to Goodhart's law. Our main contributions are as follows. First, we present a framework for moral reasoning about the justification of fairness metrics. In contrast to existing work, our framework incorporates the belief that whether a distribution of outcomes is fair depends not only on the cause of inequalities but also on what moral claims decision subjects have to receive a particular benefit or avoid a burden. We use the framework to distil moral and empirical assumptions under which particular fairness metrics correspond to a fair distribution of outcomes. Second, we explore the extent to which employing fairness metrics as a constraint in a fair-ml algorithm is morally justifiable, exemplified by the fair-ml algorithm introduced by Hardt et al. (2016). We illustrate that enforcing a fairness metric through a fair-ml algorithm often does not result in the fair distribution of outcomes that motivated its use and can even harm the individuals the intervention was intended to protect.
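To make the role of fairness metrics as optimization constraints concrete, the sketch below computes two common group fairness measures from binary predictions: the demographic parity gap (difference in selection rates) and the true-positive-rate gap, which is one component of the equalized odds constraint of Hardt et al. (2016). This is an illustrative sketch only, not code from the paper; the function names and toy data are hypothetical.

```python
# Illustrative sketch (not the paper's code): two group fairness metrics
# computed from binary predictions. All names and data are hypothetical.

def selection_rate(y_pred, group, g):
    """Fraction of group g that receives the positive prediction."""
    members = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(members) / len(members)

def true_positive_rate(y_true, y_pred, group, g):
    """TPR within group g: P(y_pred = 1 | y_true = 1, group = g)."""
    pos = [p for t, p, grp in zip(y_true, y_pred, group)
           if grp == g and t == 1]
    return sum(pos) / len(pos)

# Toy data: two groups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity gap: difference in selection rates between groups.
dp_gap = abs(selection_rate(y_pred, group, "a")
             - selection_rate(y_pred, group, "b"))

# Equalized odds (Hardt et al., 2016) compares error rates across groups;
# shown here is only its true-positive-rate component.
tpr_gap = abs(true_positive_rate(y_true, y_pred, group, "a")
              - true_positive_rate(y_true, y_pred, group, "b"))

print(f"demographic parity gap: {dp_gap:.2f}")   # 0.00 on this toy data
print(f"TPR gap: {tpr_gap:.2f}")                 # 0.33 on this toy data
```

On this toy data the demographic parity gap is zero while the TPR gap is not, which illustrates the paper's broader point: satisfying one fairness metric as a constraint says nothing about other metrics, let alone about whether the resulting distribution of outcomes is morally justified.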
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package FairDream to detect inequalities and then to correct for them.
Our experiments show that FairDream, by design, fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z) - What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Navigating Fairness Measures and Trade-Offs [0.0]
I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off.
This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature.
arXiv Detail & Related papers (2023-07-17T13:45:47Z) - Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default [10.281644134255576]
This paper examines the causes and prevalence of levelling down across fairML.
We propose a first step towards substantive equality in fairML by design through enforcement of minimum acceptable harm thresholds.
arXiv Detail & Related papers (2023-02-05T15:22:43Z) - Counterfactual Fairness Is Basically Demographic Parity [0.0]
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
We show that an algorithm which satisfies counterfactual fairness also satisfies demographic parity.
We formalize a concrete fairness goal: to preserve the order of individuals within protected groups.
arXiv Detail & Related papers (2022-08-07T23:38:59Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
arXiv Detail & Related papers (2022-05-27T13:40:50Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.