On The Impact of Machine Learning Randomness on Group Fairness
- URL: http://arxiv.org/abs/2307.04138v1
- Date: Sun, 9 Jul 2023 09:36:31 GMT
- Title: On The Impact of Machine Learning Randomness on Group Fairness
- Authors: Prakhar Ganesh, Hongyan Chang, Martin Strobel, Reza Shokri
- Abstract summary: We investigate the impact on group fairness of different sources of randomness in training neural networks.
We show that the variance in group fairness measures is rooted in the high volatility of the learning process on under-represented groups.
We show how one can control group-level accuracy, with high efficiency and negligible impact on the model's overall performance, by simply changing the data order for a single epoch.
- Score: 11.747264308336012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical measures for group fairness in machine learning reflect the gap
in performance of algorithms across different groups. These measures, however,
exhibit a high variance between different training instances, which makes them
unreliable for empirical evaluation of fairness. What causes this high
variance? We investigate the impact on group fairness of different sources of
randomness in training neural networks. We show that the variance in group
fairness measures is rooted in the high volatility of the learning process on
under-represented groups. Further, we recognize the dominant source of
randomness as the stochasticity of data order during training. Based on these
findings, we show how one can control group-level accuracy (i.e., model
fairness), with high efficiency and negligible impact on the model's overall
performance, by simply changing the data order for a single epoch.
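To make the abstract's claim concrete, below is a minimal, hypothetical sketch (not the authors' code) on synthetic data: it holds the model initialization fixed so that the only source of randomness across runs is the seed shuffling the training data order, then reports per-group accuracy for each run. The helper names (`make_groups`, `group_accuracy`, `train`) and the synthetic setup are illustrative assumptions, not the paper's experimental protocol.

```python
# Illustrative sketch (NOT the authors' code): fixed weight init, synthetic
# data with an under-represented group; only the data shuffle order varies.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def make_groups(n_major=900, n_minor=100, d=20, seed=0):
    """Synthetic binary task with an under-represented group (group id 1)."""
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n_major + n_minor, d, generator=g)
    group = torch.cat([torch.zeros(n_major), torch.ones(n_minor)]).long()
    w = torch.randn(d, generator=g)
    y = ((X @ w) + 0.5 * group.float() > 0).long()  # groups differ slightly
    return TensorDataset(X, y, group)


def group_accuracy(model, dataset, gid):
    """Accuracy restricted to one demographic group."""
    X, y, group = dataset.tensors
    mask = group == gid
    model.eval()
    with torch.no_grad():
        pred = model(X[mask]).argmax(dim=1)
    return (pred == y[mask]).float().mean().item()


def train(dataset, order_seed, epochs=5):
    torch.manual_seed(0)  # fix weight init; only the data order varies below
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loader = DataLoader(dataset, batch_size=32, shuffle=True,
                        generator=torch.Generator().manual_seed(order_seed))
    for _ in range(epochs):
        for X, y, _ in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(model(X), y).backward()
            opt.step()
    return model


data = make_groups()
for seed in range(5):  # same init, same data -- different shuffle order only
    m = train(data, order_seed=seed)
    acc0, acc1 = group_accuracy(m, data, 0), group_accuracy(m, data, 1)
    print(f"order seed {seed}: majority acc {acc0:.3f}, "
          f"minority acc {acc1:.3f}, gap {acc0 - acc1:+.3f}")
```

In a setup like this one would expect the majority-group accuracy to stay comparatively stable across seeds while the minority-group accuracy, and hence the fairness gap, swings with the shuffle order, which is the volatility the abstract describes. The control knob the paper reports is exactly this one: changing the data order for a single epoch to steer group-level accuracy with negligible impact on overall performance.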
Related papers
- Fairness-enhancing mixed effects deep learning improves fairness on in- and out-of-distribution clustered (non-iid) data [7.413980562174725]
We present a mixed effects deep learning (MEDL) framework.
MEDL quantifies cluster-invariant fixed effects (FE) and cluster-specific random effects (RE).
We combine MEDL with adversarial debiasing, which promotes equality-of-odds fairness across the FE, RE, and ME predictions for fairness-sensitive variables.
Our framework notably enhances fairness across all sensitive variables, increasing fairness by up to 82% for age, 43% for race, 86% for sex, and 27% for marital status.
arXiv Detail & Related papers (2023-10-04T20:18:45Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [0.0]
We present a simple, yet effective method based on normalisation (FaiReg) to minimise the impact of unfairness in regression problems.
We compare this method with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing, without deteriorating performance on the original problem.
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- On Adversarial Bias and the Robustness of Fair Machine Learning [11.584571002297217]
We show that giving the same importance to groups of different sizes and distributions, to counteract the effect of bias in training data, can be in conflict with robustness.
An adversary who can control sampling or labeling for a fraction of the training data can reduce the test accuracy significantly beyond what they could achieve on unconstrained models.
We analyze the robustness of fair machine learning through an empirical evaluation of attacks on multiple algorithms and benchmark datasets.
arXiv Detail & Related papers (2020-06-15T18:17:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.