Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification
- URL: http://arxiv.org/abs/2110.00813v1
- Date: Sat, 2 Oct 2021 14:32:51 GMT
- Title: Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification
- Authors: Guy N. Rothblum and Gal Yona
- Abstract summary: In many machine learning settings there is an inherent tension between fairness and accuracy desiderata.
We introduce and study $\gamma$-disqualification, a new framework for reasoning about fairness-accuracy tradeoffs.
We show $\gamma$-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy.
- Score: 7.9649015115693444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many machine learning settings there is an inherent tension between
fairness and accuracy desiderata. How should one proceed in light of such
trade-offs? In this work we introduce and study $\gamma$-disqualification, a
new framework for reasoning about fairness-accuracy tradeoffs with respect to a benchmark
class $H$ in the context of supervised learning. Our requirement stipulates
that a classifier should be disqualified if it is possible to improve its
fairness by switching to another classifier from $H$ without paying "too much"
in accuracy. The notion of "too much" is quantified via a parameter $\gamma$
that serves as a vehicle for specifying acceptable tradeoffs between accuracy
and fairness, in a way that is independent from the specific metrics used to
quantify fairness and accuracy in a given task. Towards this objective, we
establish principled translations between units of accuracy and units of
(un)fairness for different accuracy measures. We show $\gamma$-disqualification
can be used to easily compare different learning strategies in terms of how
they trade-off fairness and accuracy, and we give an efficient reduction from
the problem of finding the optimal classifier that satisfies our requirement to
the problem of approximating the Pareto frontier of $H$.
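To make the requirement concrete, here is a minimal Python sketch of one plausible instantiation of $\gamma$-disqualification. The finite benchmark class `H`, the demographic-parity-style unfairness measure, and the rule "accuracy loss at most $\gamma$ times the fairness gain" are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def accuracy(h, X, y):
    """Empirical accuracy of classifier h (a callable X -> {0, 1}) on (X, y)."""
    return np.mean(h(X) == y)

def demographic_parity_gap(h, X, group):
    """Illustrative unfairness measure: |P(h = 1 | group 0) - P(h = 1 | group 1)|."""
    preds = h(X)
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def is_disqualified(h, H, X, y, group, gamma):
    """One plausible reading of gamma-disqualification (an assumption, not the
    paper's exact definition): h is disqualified if some alternative h' in the
    benchmark class H is strictly fairer and the accuracy it gives up is at
    most gamma times the fairness it gains."""
    acc_h = accuracy(h, X, y)
    unf_h = demographic_parity_gap(h, X, group)
    for h_alt in H:
        fairness_gain = unf_h - demographic_parity_gap(h_alt, X, group)
        accuracy_loss = acc_h - accuracy(h_alt, X, y)
        if fairness_gain > 0 and accuracy_loss <= gamma * fairness_gain:
            return True  # a "not too costly" fairer alternative exists in H
    return False
```

Under this reading, a larger $\gamma$ tolerates a larger accuracy sacrifice per unit of fairness gained, so more classifiers become disqualified; comparing learning strategies then amounts to checking which of their outputs survive disqualification at a given $\gamma$.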
Related papers
- You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time [131.96508834627832]
Deep neural networks are prone to various bias issues, jeopardizing their application to high-stakes decision-making.
We propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at inference time.
YODO achieves flexible trade-offs between model accuracy and fairness with ultra-low overhead.
arXiv Detail & Related papers (2025-03-10T08:50:55Z)
- Rethinking Early Stopping: Refine, Then Calibrate [49.966899634962374]
We show that calibration error and refinement error are not minimized simultaneously during training.
We introduce a new metric for early stopping and hyperparameter tuning that makes it possible to minimize refinement error during training.
Our method integrates seamlessly with any architecture and consistently improves performance across diverse classification tasks.
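Since the entry above rests on splitting a proper scoring rule into calibration and refinement terms, a short illustration may help; the sketch below uses the standard binned calibration/refinement decomposition of the Brier score, with the bin count and the refinement-based stopping heuristic as assumptions rather than the paper's actual metric.

```python
import numpy as np

def brier_decomposition(probs, labels, n_bins=15):
    """Binned calibration/refinement decomposition of the Brier score.
    probs: predicted probabilities of the positive class; labels: 0/1.
    Returns (calibration_error, refinement_error); their sum approximates
    the Brier score."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    calibration, refinement = 0.0, 0.0
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        weight = mask.mean()        # fraction of examples in the bin
        conf = probs[mask].mean()   # average confidence in the bin
        freq = labels[mask].mean()  # empirical positive rate in the bin
        calibration += weight * (conf - freq) ** 2
        refinement += weight * freq * (1.0 - freq)
    return calibration, refinement

# Illustrative use for early stopping (an assumption, not the paper's metric):
# track the validation refinement term per epoch, stop when it stops improving,
# and address calibration afterwards (e.g., with temperature scaling).
```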
arXiv Detail & Related papers (2025-01-31T15:03:54Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Uncertainty in Language Models: Assessment through Rank-Calibration [65.10149293133846]
Language Models (LMs) have shown promising performance in natural language generation.
It is crucial to correctly quantify their uncertainty in responding to given inputs.
We develop a novel and practical framework, termed $Rank$-$Calibration$, to assess uncertainty and confidence measures for LMs.
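As a rough illustration of what a rank-based assessment of an uncertainty measure can look like (a generic sketch, not the paper's Rank-Calibration metric; per-example quality scores and uncertainty values are assumed to be given):

```python
import numpy as np
from scipy.stats import spearmanr

def rank_based_uncertainty_check(uncertainty, quality, n_bins=10):
    """Generic rank-based sanity check (not the paper's exact Rank-Calibration
    metric): sort responses by the uncertainty measure, bin them, and ask
    whether bins of higher uncertainty indeed have lower average quality.
    Returns the per-bin mean quality and the Spearman correlation between
    bin rank and mean quality; a useful uncertainty measure should give a
    strongly negative correlation."""
    uncertainty = np.asarray(uncertainty, dtype=float)
    quality = np.asarray(quality, dtype=float)
    order = np.argsort(uncertainty)                  # low -> high uncertainty
    bins = np.array_split(order, n_bins)
    bin_quality = np.array([quality[idx].mean() for idx in bins])
    corr, _ = spearmanr(np.arange(n_bins), bin_quality)
    return bin_quality, corr
```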
arXiv Detail & Related papers (2024-04-04T02:31:05Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Confidence-aware Training of Smoothed Classifiers for Certified Robustness [75.95332266383417]
We use "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for an input.
Our experiments show that the proposed method consistently exhibits improved certified robustness upon state-of-the-art training methods.
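The "accuracy under Gaussian noise" proxy is straightforward to compute for a single input; in the minimal sketch below, the model interface `predict`, the noise level `sigma`, and the sample count are assumptions for illustration.

```python
import numpy as np

def accuracy_under_gaussian_noise(predict, x, label, sigma=0.25, n_samples=100,
                                  rng=None):
    """Fraction of Gaussian-perturbed copies of a single input x that the
    classifier still labels correctly. `predict` maps a batch of inputs to
    predicted class indices; `x` is a single numpy input and `label` its true
    class. Higher values suggest the smoothed classifier is more stable
    around x."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    preds = predict(x[None, ...] + noise)
    return float(np.mean(np.asarray(preds) == label))
```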
arXiv Detail & Related papers (2022-12-18T03:57:12Z)
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show better accuracy vs. fairness trade-offs than prior work.
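A generic differentiable group-fairness penalty of this flavor can be added directly to the training loss, as in the PyTorch sketch below; the demographic-parity-style gap and the weight `lam` are illustrative assumptions, not the paper's specific measure.

```python
import torch
import torch.nn.functional as F

def fairness_aware_loss(logits, labels, group, lam=1.0):
    """Cross-entropy plus a differentiable demographic-parity-style penalty:
    the squared gap between the two groups' mean predicted positive
    probability. A generic surrogate for illustration, not the paper's
    specific group-fairness measure. `logits` has shape [N, 2], `labels` is a
    long tensor of 0/1 targets, and `group` is a 0/1 tensor of group ids."""
    ce = F.cross_entropy(logits, labels)
    p_pos = torch.softmax(logits, dim=1)[:, 1]       # P(y = 1 | x)
    gap = p_pos[group == 0].mean() - p_pos[group == 1].mean()
    return ce + lam * gap.pow(2)
```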
arXiv Detail & Related papers (2022-04-15T22:11:25Z)
- The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification [0.0]
Adversarial training tends to result in models that are less accurate on natural (unperturbed) examples compared to standard models.
This can be attributed to either an algorithmic shortcoming or a fundamental property of the training data distribution.
In this work, we focus on the latter case under a binary Gaussian mixture classification problem.
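For intuition, in this model ($x \sim \mathcal{N}(y\mu, \sigma^2 I)$ with $y$ uniform on $\{\pm 1\}$) the standard and $\ell_\infty$-robust errors of a linear classifier $\mathrm{sign}(w \cdot x)$ have simple closed forms; the sketch below evaluates them for the illustrative choice $w = \mu$ (the closed forms are standard for this model, but the parameter values are assumptions).

```python
import numpy as np
from scipy.stats import norm

def linear_errors_gaussian_mixture(w, mu, sigma, eps):
    """Standard and l_inf-robust error of sign(w . x) when x ~ N(y*mu, sigma^2 I)
    and y is uniform on {-1, +1}. A worst-case l_inf perturbation of radius eps
    shifts the decision margin by eps * ||w||_1."""
    margin = w @ mu
    scale = sigma * np.linalg.norm(w)
    standard_error = norm.cdf(-margin / scale)
    robust_error = norm.cdf(-(margin - eps * np.abs(w).sum()) / scale)
    return standard_error, robust_error

# Illustrative parameters (assumptions): dimension, noise level, attack radius.
d, sigma, eps = 20, 1.0, 0.1
mu = np.full(d, 2.0 / np.sqrt(d))
print(linear_errors_gaussian_mixture(mu, mu, sigma, eps))
```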
arXiv Detail & Related papers (2021-07-01T06:57:50Z)
- Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective [26.704446184314506]
We study the problem of measuring the fairness of a machine learning model under noisy information.
We present a theoretical analysis that aims to characterize weaker conditions under which accurate fairness evaluation is possible.
arXiv Detail & Related papers (2021-05-20T18:36:28Z)
- Promoting Fairness through Hyperparameter Optimization [4.479834103607383]
This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development.
We propose and evaluate fairness-aware variants of three popular hyperparameter optimization (HO) algorithms: Fair Random Search, Fair TPE, and Fairband.
We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature.
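A minimal sketch in the spirit of fairness-aware random search: sample configurations at random and select by a joint accuracy-fairness criterion. The sampling and evaluation callbacks and the simple scalarized selection rule are assumptions for illustration; the paper's Fair Random Search, Fair TPE, and Fairband differ in their details.

```python
import random

def fair_random_search(sample_config, evaluate, n_trials=50, alpha=0.5, seed=0):
    """Fairness-aware random search sketch. `sample_config(rng)` returns a
    hyperparameter dict; `evaluate(config)` returns (accuracy, fairness), both
    in [0, 1] with higher being better. Picks the configuration maximizing an
    alpha-weighted combination of the two objectives."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(rng)
        acc, fair = evaluate(config)
        score = alpha * acc + (1.0 - alpha) * fair
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```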
arXiv Detail & Related papers (2021-03-23T17:36:22Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method achieves fair semi-supervised learning and reaches a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
- Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? [11.435833538081557]
Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution.
We examine the ability of fairness-constrained ERM to correct this problem.
We also consider other recovery methods including reweighting the training data, Equalized Odds, and Demographic Parity.
arXiv Detail & Related papers (2019-12-02T22:00:14Z)
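One of the recovery methods mentioned in the last entry, reweighting the training data, can be sketched as follows; the inverse-frequency weighting over (group, label) cells is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def group_label_reweighting(group, labels):
    """Per-example weights that equalize the total weight of every
    (group, label) cell, a common reweighting heuristic when training data
    over- or under-represents some cells (illustrative, not necessarily the
    paper's exact scheme). Pass the result as sample_weight to an ERM learner."""
    group = np.asarray(group)
    labels = np.asarray(labels)
    weights = np.empty(len(labels), dtype=float)
    cells = set(zip(group.tolist(), labels.tolist()))
    for g, y in cells:
        mask = (group == g) & (labels == y)
        weights[mask] = len(labels) / (len(cells) * mask.sum())
    return weights
```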
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.