You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time
- URL: http://arxiv.org/abs/2503.07066v1
- Date: Mon, 10 Mar 2025 08:50:55 GMT
- Title: You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time
- Authors: Xiaotian Han, Tianlong Chen, Kaixiong Zhou, Zhimeng Jiang, Zhangyang Wang, Xia Hu
- Abstract summary: Deep neural networks are prone to various bias issues, jeopardizing their applications for high-stakes decision-making. We propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at inference time. YODO achieves flexible trade-offs between model accuracy and fairness at ultra-low overheads.
- Score: 131.96508834627832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are prone to various bias issues, jeopardizing their applications for high-stakes decision-making. Existing fairness methods typically offer a fixed accuracy-fairness trade-off, since the weight of the well-trained model is a fixed point (fairness-optimum) in the weight space. Nevertheless, more flexible accuracy-fairness trade-offs at inference time are practically desired since: 1) stakes of the same downstream task can vary for different individuals, and 2) different regions have diverse laws or regulations for fairness. With previous fairness methods, we would have to train multiple models, each offering a specific level of accuracy-fairness trade-off. This is often computationally expensive, time-consuming, and difficult to deploy, making it less practical for real-world applications. To address this problem, we propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at inference time, using a single model that is trained only once. Instead of pursuing one individual fixed point (fairness-optimum) in the weight space, we aim to find a "line" in the weight space that connects the accuracy-optimum and fairness-optimum points using a single model. Points (models) on this line implement varying levels of accuracy-fairness trade-offs. At inference time, by manually selecting the specific position on the learned "line", our proposed method can achieve arbitrary accuracy-fairness trade-offs for different end-users and scenarios. Experimental results on tabular and image datasets show that YODO achieves flexible trade-offs between model accuracy and fairness at ultra-low overheads. For example, if we need 100 levels of trade-off on the ACS-E dataset, YODO takes 3.53 seconds while training 100 fixed models consumes 425 seconds. The code is available at https://github.com/ahxt/yodo.
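The mechanism described above reduces, at inference time, to choosing a point on the learned segment between the accuracy-optimum and fairness-optimum weights. Below is a minimal sketch of that selection step, assuming the single YODO training run yields two endpoint state dicts; the names `w_acc`, `w_fair`, and `lam` are illustrative and not the repository's actual API.

```python
# Hedged sketch: pick a point on the learned "line" in weight space at inference time.
import torch

def interpolate_weights(w_acc: dict, w_fair: dict, lam: float) -> dict:
    """Weights at position lam on the line: lam=0.0 favors accuracy, lam=1.0 favors fairness."""
    return {k: (1.0 - lam) * w_acc[k] + lam * w_fair[k] for k in w_acc}

def predict_with_tradeoff(model: torch.nn.Module, w_acc: dict, w_fair: dict,
                          lam: float, x: torch.Tensor) -> torch.Tensor:
    """Load the interpolated weights and run a forward pass; no retraining is needed."""
    model.load_state_dict(interpolate_weights(w_acc, w_fair, lam))
    model.eval()
    with torch.no_grad():
        return model(x)
```

Because every trade-off level reuses the same two trained endpoints, serving, say, 100 levels costs little more than 100 cheap weight mixes, which is consistent with the reported 3.53 seconds versus 425 seconds for 100 separately trained models.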
Related papers
- On Comparing Fair Classifiers under Data Bias [42.43344286660331]
We study the effect of varying data biases on the accuracy and fairness of fair classifiers.
Our experiments show how to integrate a measure of data bias risk in the existing fairness dashboards for real-world deployments.
arXiv Detail & Related papers (2023-02-12T13:04:46Z)
- Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
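As a rough illustration of the ranking step sketched above (the choice of feature units and the scoring rule are assumptions for illustration, not the paper's exact procedure):

```python
# Hedged sketch: rank a class's images by activation of presumed spurious feature units.
import numpy as np

def rank_by_spuriosity(features: np.ndarray, spurious_units: list) -> np.ndarray:
    """Return image indices ordered from least to most spurious-feature activation.

    features: (n_images, n_features) deep features for one class.
    spurious_units: indices of feature units assumed to encode the spurious cue.
    """
    spuriosity = features[:, spurious_units].mean(axis=1)  # one score per image
    return np.argsort(spuriosity)
```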
arXiv Detail & Related papers (2022-12-05T23:15:43Z)
- A Differentiable Distance Approximation for Fairer Image Classification [31.471917430653626]
We propose a differentiable approximation of the variance of demographics, a metric that can be used to measure the bias, or unfairness, in an AI model.
Our approximation can be optimised alongside the regular training objective, which eliminates the need for any extra models during training.
We demonstrate that our approach improves the fairness of AI models in varied task and dataset scenarios.
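A rough sketch of how such a differentiable penalty can ride alongside the task loss (illustrative only; the paper's exact approximation may differ):

```python
# Hedged sketch: variance of group-wise positive prediction rates as a fairness penalty.
import torch

def demographic_variance(probs: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Variance of the mean predicted positive rate across demographic groups."""
    rates = [probs[groups == g].mean() for g in torch.unique(groups)]
    return torch.stack(rates).var()

# Example training objective (lambda_fair is a tunable weight, not from the paper):
# loss = task_loss + lambda_fair * demographic_variance(torch.sigmoid(logits), groups)
```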
arXiv Detail & Related papers (2022-10-09T23:02:18Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Multivariate Probabilistic Forecasting of Intraday Electricity Prices using Normalizing Flows [62.997667081978825]
In Germany, the intraday electricity price typically fluctuates around the day-ahead price of the EPEX spot markets in a distinct hourly pattern.
This work proposes a probabilistic modeling approach that models the intraday price difference to the day-ahead contracts.
arXiv Detail & Related papers (2022-05-27T08:38:20Z)
- Linear Speedup in Personalized Collaborative Learning [69.45124829480106]
Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias against its variance.
We formalize the personalized collaborative learning problem as optimization of a user's objective.
We explore conditions under which we can optimally trade off this bias for a reduction in variance.
arXiv Detail & Related papers (2021-11-10T22:12:52Z)
- Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification [7.9649015115693444]
In many machine learning settings there is an inherent tension between fairness and accuracy desiderata.
We introduce and study γ-disqualification, a new framework for reasoning about fairness-accuracy tradeoffs.
We show that γ-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy.
arXiv Detail & Related papers (2021-10-02T14:32:51Z)
- FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes [0.0]
Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
arXiv Detail & Related papers (2021-09-01T03:56:43Z)
- OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning [11.762484210143773]
We propose a declarative system, OmniFair, for supporting group fairness in machine learning (ML).
OmniFair features a declarative interface for users to specify desired group fairness constraints.
We show that OmniFair is more versatile than existing algorithmic fairness approaches in terms of both supported fairness constraints and downstream ML models.
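To make the declarative idea concrete, a toy constraint specification and check might look like the following; the field names and helper are hypothetical and do not reflect OmniFair's real interface.

```python
# Hedged sketch of a declarative-style group-fairness constraint; illustrative only.
constraint = {
    "metric": "demographic_parity_difference",  # which group-fairness metric to bound
    "sensitive_attribute": "gender",            # attribute that defines the groups
    "max_violation": 0.05,                      # allowed gap between group positive rates
}

def satisfies(group_positive_rates: dict, spec: dict) -> bool:
    """Check whether per-group positive prediction rates meet the declared bound."""
    gap = max(group_positive_rates.values()) - min(group_positive_rates.values())
    return gap <= spec["max_violation"]
```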
arXiv Detail & Related papers (2021-03-13T02:44:10Z)
- The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit".
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
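The early-exit idea can be sketched as follows; the per-layer classifiers and the confidence threshold are assumptions used for illustration, not the paper's exact mechanism.

```python
# Hedged sketch: stop the forward pass once an intermediate classifier is confident enough.
import torch

def early_exit_predict(layers, exit_heads, x: torch.Tensor, threshold: float = 0.9):
    """Run layers in order (single example); return the first confident prediction."""
    h = x
    pred = None
    for layer, head in zip(layers, exit_heads):
        h = layer(h)
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:  # confident enough: exit early and save compute
            return pred
    return pred  # otherwise fall back to the last layer's prediction
```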
arXiv Detail & Related papers (2020-04-16T04:28:08Z)
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint [31.86959207229775]
In this paper, we propose a framework for learning an individually fair classifier.
We define the probability of individual unfairness (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
Experimental results show that our method can learn an individually fair classifier at a slight cost of accuracy.
arXiv Detail & Related papers (2020-02-17T02:46:17Z)
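A generic penalty-method reading of the constrained problem above, where the PIU upper-bound estimator itself is the paper's contribution and is only stubbed by name here:

```python
# Hedged sketch: keep an estimated upper bound on PIU close to zero via a penalty term.
import torch

def fair_objective(task_loss: torch.Tensor, piu_upper_bound: torch.Tensor,
                   tol: float = 1e-3, penalty: float = 10.0) -> torch.Tensor:
    """Task loss plus a penalty whenever the data-estimated PIU bound exceeds the tolerance."""
    return task_loss + penalty * torch.relu(piu_upper_bound - tol)
```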