Navigating Fairness Measures and Trade-Offs
- URL: http://arxiv.org/abs/2307.08484v1
- Date: Mon, 17 Jul 2023 13:45:47 GMT
- Title: Navigating Fairness Measures and Trade-Offs
- Authors: Stefan Buijsman
- Abstract summary: I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off.
This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to monitor and prevent bias in AI systems we can use a wide range of
(statistical) fairness measures. However, it is mathematically impossible to
optimize for all of these measures at the same time. In addition, optimizing a
fairness measure often greatly reduces the accuracy of the system (Kozodoi et
al., 2022). As a result, we need a substantive theory that informs us how to
make these decisions and for what reasons. I show that by using Rawls' notion
of justice as fairness, we can create a basis for navigating fairness measures
and the accuracy trade-off. In particular, this leads to a principled choice
focusing on both the most vulnerable groups and the type of fairness measure
that has the biggest impact on that group. This also helps to close part of the
gap between philosophical accounts of distributive justice and the fairness
literature that has been observed (Kuppler et al., 2021) and to operationalise
the value of fairness.
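As a rough illustration of two claims in the abstract (that the common statistical fairness measures cannot in general all be satisfied at once, and that a Rawlsian reading points to the worst-off group and the measure that matters most for it), the sketch below computes three standard group measures on a toy example. The confusion counts, the group labels, and the maximin-style selection rule at the end are assumptions made for illustration only, not the paper's method.

```python
# Minimal sketch (toy numbers, not from the paper): three common group
# fairness measures for a binary classifier, followed by a crude
# maximin-style choice of which measure to prioritise.

# Assumed confusion counts per group:
# tp = true positives, fp = false positives, fn = false negatives, tn = true negatives.
groups = {
    "A": {"tp": 40, "fp": 10, "fn": 20, "tn": 30},
    "B": {"tp": 15, "fp": 5,  "fn": 35, "tn": 45},
}

def rates(c):
    total = sum(c.values())
    return {
        "selection_rate": (c["tp"] + c["fp"]) / total,   # statistical parity
        "tpr": c["tp"] / (c["tp"] + c["fn"]),            # equal opportunity
        "ppv": c["tp"] / (c["tp"] + c["fp"]),            # predictive parity
        "accuracy": (c["tp"] + c["tn"]) / total,
    }

per_group = {g: rates(c) for g, c in groups.items()}
for g, r in per_group.items():
    print(g, {k: round(v, 3) for k, v in r.items()})

# Gaps between groups for each measure: when base rates differ, these gaps
# cannot in general all be driven to zero at once (the impossibility results
# the abstract refers to).
measures = ["selection_rate", "tpr", "ppv"]
gaps = {m: abs(per_group["A"][m] - per_group["B"][m]) for m in measures}
print("gaps between groups:", {m: round(v, 3) for m, v in gaps.items()})

# A rough maximin-flavoured heuristic (an assumption for this sketch, not the
# paper's rule): take the group that is worst off on accuracy and prioritise
# the measure on which it trails the other group the most.
worst = min(per_group, key=lambda g: per_group[g]["accuracy"])
shortfall = {
    m: max(per_group[g][m] for g in per_group) - per_group[worst][m]
    for m in measures
}
focus = max(shortfall, key=shortfall.get)
print(f"worst-off group: {worst}; measure with the biggest impact on it: {focus}")
```

On the toy counts above, the worse-off group trails most on true positive rate, so the heuristic would prioritise equal opportunity; with different counts it could just as well point to selection rate or predictive parity.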
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - The Unfairness of $\varepsilon$-Fairness [0.0]
We show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in real-world contexts.
We illustrate our findings with two real-world examples: college admissions and credit risk assessment.
arXiv Detail & Related papers (2024-05-15T14:13:35Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Accurate Fairness: Improving Individual Fairness without Trading
Accuracy [4.0415037006237595]
We propose a new fairness criterion, accurate fairness, to align individual fairness with accuracy.
We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations.
To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation.
arXiv Detail & Related papers (2022-05-18T03:24:16Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
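A toy numerical sketch of this sensitivity-to-parity link is given after the related papers list below.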
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning [14.428360876120333]
We argue that fairness measures are particularly sensitive to Goodhart's law.
We present a framework for moral reasoning about the justification of fairness metrics.
arXiv Detail & Related papers (2022-02-17T09:26:39Z) - Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research [2.6397379133308214]
We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions.
While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness.
arXiv Detail & Related papers (2021-02-01T22:02:14Z) - Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
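The entry above on "Measuring Fairness of Text Classifiers via Prediction Sensitivity" relates a sensitivity-based metric to statistical parity. The minimal numpy sketch below compares a crude finite-difference sensitivity (flip the protected attribute and measure the average change in the predicted score) against the statistical parity gap of a toy linear classifier. The data and classifier weights are invented for the example, and the sensitivity proxy is an assumption, not the paper's ACCUMULATED PREDICTION SENSITIVITY definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a binary protected attribute a plus two ordinary features that
# are mildly correlated with it (all invented for this sketch).
n = 2000
a = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 2)) + 0.5 * a[:, None]
X = np.column_stack([x, a])

# A fixed toy linear classifier; the weights are assumptions for the example
# (the last weight acts directly on the protected attribute).
w = np.array([1.0, -0.5, 0.8])
b = -0.2

def predict_proba(features):
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

p = predict_proba(X)
yhat = (p >= 0.5).astype(int)

# Statistical parity gap: difference in positive prediction rates between groups.
parity_gap = abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# A crude sensitivity proxy: average change in the predicted score when the
# protected attribute is flipped (a finite difference, not the paper's metric).
X_flipped = X.copy()
X_flipped[:, 2] = 1 - X_flipped[:, 2]
sensitivity = np.mean(np.abs(predict_proba(X_flipped) - p))

print(f"statistical parity gap: {parity_gap:.3f}")
print(f"mean sensitivity to flipping the protected attribute: {sensitivity:.3f}")
```

In this toy setting a classifier that places weight directly on the protected attribute shows both a larger parity gap and a larger flip sensitivity, which is the kind of qualitative connection the entry describes.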