Fair Group-Shared Representations with Normalizing Flows
- URL: http://arxiv.org/abs/2201.06336v1
- Date: Mon, 17 Jan 2022 10:49:49 GMT
- Title: Fair Group-Shared Representations with Normalizing Flows
- Authors: Mattia Cerrato, Marius Köppel, Alexander Segner, and Stefan Kramer
- Abstract summary: We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
- Score: 68.29997072804537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The issue of fairness in machine learning stems from the fact that historical data often displays biases against specific groups of people who have been underprivileged in the recent past, or still are. In this context, one possible approach is to employ fair representation learning algorithms, which are able to remove biases from data, making groups statistically indistinguishable. In this paper, we instead develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group. This is made possible by training a pair of Normalizing Flow models and constraining them not to remove information about the ground truth, by training a ranking or classification model on top of them. The overall "chained" model is invertible and has a tractable Jacobian, which makes it possible to relate the probability densities of the different groups and to "translate" individuals from one group to another. We show experimentally that our methodology is competitive with other fair representation learning algorithms. Furthermore, our algorithm achieves stronger invariance with respect to the sensitive attribute.
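To make the construction concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the chained architecture the abstract describes: two RealNVP-style normalizing flows, one per group, map inputs into a shared representation; a task head on top preserves label information; and translation from one group to the other composes the first flow with the inverse of the second. All names here (AffineCoupling, GroupFlow, flow_a, flow_b, clf) are illustrative assumptions, and the training losses (task loss plus the fairness constraint) are omitted.

```python
# Hypothetical sketch of the "chained" flow idea, not the authors' implementation.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: invertible, with a tractable log-Jacobian."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                        # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                  # log|det J| of this layer
        return torch.cat([x1, z2], dim=-1), log_det

    def inverse(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=-1)


class GroupFlow(nn.Module):
    """Stack of coupling layers acting as the flow for one group."""

    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))

    def forward(self, x):
        log_det = torch.zeros(x.shape[0], device=x.device)
        for layer in self.layers:
            x, ld = layer(x)
            x = torch.flip(x, dims=[-1])         # permute features between couplings
            log_det = log_det + ld
        return x, log_det

    def inverse(self, z):
        for layer in reversed(self.layers):
            z = torch.flip(z, dims=[-1])
            z = layer.inverse(z)
        return z


dim = 8
flow_a, flow_b = GroupFlow(dim), GroupFlow(dim)  # one flow per protected group
clf = nn.Linear(dim, 2)                          # task head keeps label information

x_a = torch.randn(16, dim)                       # toy batch from group A
z_a, log_det_a = flow_a(x_a)                     # shared representation of group A
logits = clf(z_a)                                # classify/rank on the shared space
x_a_as_b = flow_b.inverse(z_a)                   # "translate" group A into group B
```

Because each coupling is invertible with a triangular Jacobian, the log-determinant accumulated in forward() stays tractable, and the change-of-variables formula then relates the density of x_a under group A's model to the density of its translation x_a_as_b under group B's model, which is the mechanism the abstract refers to.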
Related papers
- Dataset Representativeness and Downstream Task Fairness [24.570493924073524]
We demonstrate that there is a natural tension between dataset representativeness and group-fairness of classifiers trained on that dataset.
We also find that over-sampling underrepresented groups can result in classifiers which exhibit greater bias to those groups.
arXiv Detail & Related papers (2024-06-28T18:11:16Z)
- Bias Propagation in Federated Learning [22.954608704251118]
We show that the bias of a few parties against under-represented groups can propagate through the network to all the parties in the network.
We analyze and explain bias propagation in federated learning on naturally partitioned real-world datasets.
arXiv Detail & Related papers (2023-09-05T11:55:03Z)
- On The Impact of Machine Learning Randomness on Group Fairness [11.747264308336012]
We investigate the impact on group fairness of different sources of randomness in training neural networks.
We show that the variance in group fairness measures is rooted in the high volatility of the learning process on under-represented groups.
We show how one can control group-level accuracy, with high efficiency and negligible impact on the model's overall performance, by simply changing the data order for a single epoch.
arXiv Detail & Related papers (2023-07-09T09:36:31Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Contrastive Examples for Addressing the Tyranny of the Majority [83.93825214500131]
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set, as sketched below.
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
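The Disparate Impact index referenced in the survey entry above has a standard definition: the ratio of positive-outcome rates between the unprivileged and the privileged group. Below is a minimal, self-contained sketch; the helper name disparate_impact and the toy data are assumptions, not taken from that paper.

```python
# Minimal sketch of the standard Disparate Impact index; names and data are illustrative.
import numpy as np


def disparate_impact(y_pred, sensitive):
    """DI = P(y_pred = 1 | s = 0) / P(y_pred = 1 | s = 1)."""
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive)
    rate_unprivileged = y_pred[s == 0].mean()   # positive-outcome rate in group s = 0
    rate_privileged = y_pred[s == 1].mean()     # positive-outcome rate in group s = 1
    return rate_unprivileged / rate_privileged


# Toy usage: 1 = favorable decision, s encodes the protected group.
print(disparate_impact([1, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 1]))  # ~0.33 here
```

Values close to 1 indicate parity in positive-outcome rates; the commonly cited four-fifths rule flags values below 0.8.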