Fair Normalizing Flows
- URL: http://arxiv.org/abs/2106.05937v1
- Date: Thu, 10 Jun 2021 17:35:59 GMT
- Title: Fair Normalizing Flows
- Authors: Mislav Balunović, Anian Ruoss, Martin Vechev
- Abstract summary: We present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations.
The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor.
We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
- Score: 10.484851004093919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair representation learning is an attractive approach that promises fairness
of downstream predictors by encoding sensitive data. Unfortunately, recent work
has shown that strong adversarial predictors can still exhibit unfairness by
recovering sensitive attributes from these representations. In this work, we
present Fair Normalizing Flows (FNF), a new approach offering more rigorous
fairness guarantees for learned representations. Specifically, we consider a
practical setting where we can estimate the probability density for sensitive
groups. The key idea is to model the encoder as a normalizing flow trained to
minimize the statistical distance between the latent representations of
different groups. The main advantage of FNF is that its exact likelihood
computation allows us to obtain guarantees on the maximum unfairness of any
potentially adversarial downstream predictor. We experimentally demonstrate the
effectiveness of FNF in enforcing various group fairness notions, as well as
other attractive properties such as interpretability and transfer learning, on
a variety of challenging real-world datasets.
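To make the abstract concrete, here is a minimal sketch of the core FNF idea under simplifying assumptions (element-wise affine flows, assumed known Gaussian group densities, no task loss): each sensitive group gets its own invertible encoder, exact latent log-densities follow from the change-of-variables formula, and training minimizes a Monte Carlo estimate of the symmetrized KL divergence between the two latent distributions. This is not the authors' implementation; the paper uses more expressive flows, balances fairness against downstream utility, and converts the resulting statistical distance into a bound on the unfairness of any downstream predictor.

```python
# Minimal FNF-style sketch (assumed setup, not the authors' implementation):
# one element-wise affine flow per sensitive group, exact latent log-densities
# via the change-of-variables formula, and minimization of a Monte Carlo
# estimate of the symmetrized KL divergence between the two latent distributions.
import torch

torch.manual_seed(0)
dim = 4

# Assumed known group-conditional input densities p(x | a), here Gaussians.
mu = {0: torch.zeros(dim), 1: 0.8 * torch.ones(dim)}
std = {0: torch.ones(dim), 1: 1.2 * torch.ones(dim)}

def log_p_x(x, a):
    return torch.distributions.Normal(mu[a], std[a]).log_prob(x).sum(dim=-1)

# One invertible encoder per group: z = x * exp(s_a) + t_a.
params = {a: (torch.zeros(dim, requires_grad=True),   # log-scale s_a
              torch.zeros(dim, requires_grad=True))   # shift t_a
          for a in (0, 1)}

def encode(x, a):
    s, t = params[a]
    return x * torch.exp(s) + t

def log_p_z(z, a):
    # Change of variables: log p_Z(z) = log p_X(f_a^{-1}(z)) + log|det J_{f_a^{-1}}(z)|.
    s, t = params[a]
    x = (z - t) * torch.exp(-s)
    return log_p_x(x, a) - s.sum()

opt = torch.optim.Adam([p for pair in params.values() for p in pair], lr=1e-2)
for step in range(2000):
    x0 = torch.distributions.Normal(mu[0], std[0]).sample((256,))
    x1 = torch.distributions.Normal(mu[1], std[1]).sample((256,))
    z0, z1 = encode(x0, 0), encode(x1, 1)
    # Monte Carlo estimate of KL(p_{Z|0} || p_{Z|1}) + KL(p_{Z|1} || p_{Z|0}).
    # The paper additionally trades this off against a downstream task loss.
    kl = ((log_p_z(z0, 0) - log_p_z(z0, 1)).mean()
          + (log_p_z(z1, 1) - log_p_z(z1, 0)).mean())
    opt.zero_grad()
    kl.backward()
    opt.step()

print("symmetrized KL estimate after training:", kl.item())
```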
Related papers
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Counterfactual Fairness for Predictions using Generative Adversarial Networks [28.65556399421874]
We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to ensure the notion of counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness (a generic min-max divergence-regularization sketch appears after this list).
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework (a sketch of the underlying split conformal step appears after this list).
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets (a kernel-dependence sketch in a similar spirit appears after this list).
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Conformalized Fairness via Quantile Regression [8.180169144038345]
We propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity.
We establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles.
Our results show the model's ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
arXiv Detail & Related papers (2022-10-05T04:04:15Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
- Fair Regression with Wasserstein Barycenters [39.818025466204055]
We study the problem of learning a real-valued function that satisfies the Demographic Parity constraint.
It demands the distribution of the predicted output to be independent of the sensitive attribute.
We establish a connection between fair regression and optimal transport theory, based on which we derive a closed-form expression for the optimal fair predictor (a rough post-processing sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-06-12T16:10:41Z)
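For the min-max F-divergence entry above, the following sketch illustrates the general pattern of adversarially estimated divergence regularization, not that paper's specific construction: a critic maximizes a Donsker-Varadhan lower bound on the KL divergence (one member of the f-divergence family) between the classifier's score distributions for the two sensitive groups, while the classifier minimizes cross-entropy plus a multiple of that estimate. The synthetic data, network sizes, and choice of divergence are all assumptions.

```python
# Generic min-max divergence-regularization sketch (synthetic data; not the
# paper's estimator): a critic maximizes a Donsker-Varadhan lower bound on the
# KL divergence between the two groups' score distributions, and the classifier
# minimizes cross-entropy plus lambda times that estimate.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, lam = 512, 8, 1.0
x = torch.randn(n, d)
a = (torch.rand(n) < 0.5).long()                                     # sensitive attribute
y = ((x[:, 0] + 0.5 * a.float() + 0.1 * torch.randn(n)) > 0).long()  # biased labels

clf = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 2))
critic = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)

def dv_kl_estimate(probs, a):
    # Donsker-Varadhan bound: KL(P0 || P1) >= E_P0[T] - log E_P1[exp(T)].
    t0 = critic(probs[a == 0]).squeeze(-1)
    t1 = critic(probs[a == 1]).squeeze(-1)
    return t0.mean() - (torch.logsumexp(t1, dim=0) - torch.log(torch.tensor(float(t1.numel()))))

for step in range(500):
    probs = torch.softmax(clf(x), dim=-1)
    # (1) Critic step: tighten the divergence estimate on detached scores.
    opt_critic.zero_grad()
    (-dv_kl_estimate(probs.detach(), a)).backward()
    opt_critic.step()
    # (2) Classifier step: task loss plus fairness regularizer.
    opt_clf.zero_grad()
    logits = clf(x)
    loss = (nn.functional.cross_entropy(logits, y)
            + lam * dv_kl_estimate(torch.softmax(logits, dim=-1), a))
    loss.backward()
    opt_clf.step()
```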
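For the federated conformal prediction entry, the sketch below shows only the standard single-machine split conformal step that the paper builds on; the federated aggregation and the partial-exchangeability analysis are not reproduced here. The regressor and calibration data are stand-ins.

```python
# Split conformal prediction sketch (single-machine building block; the paper's
# federated extension and partial-exchangeability analysis are not shown here).
import numpy as np

rng = np.random.default_rng(0)

def fitted_model(x):
    # Stand-in for any pre-trained regressor (assumed here to be y ~ 2x).
    return 2.0 * x

# Calibration data, assumed exchangeable with future test points.
x_cal = rng.normal(size=500)
y_cal = 2.0 * x_cal + rng.normal(scale=0.5, size=500)

alpha = 0.1                                      # target miscoverage
scores = np.abs(y_cal - fitted_model(x_cal))     # nonconformity scores
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n     # finite-sample correction
qhat = np.quantile(scores, q_level, method="higher")

def prediction_interval(x_new):
    # Marginal coverage >= 1 - alpha under exchangeability.
    pred = fitted_model(x_new)
    return pred - qhat, pred + qhat

print(prediction_interval(1.0))
```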
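For the FairCOCCO entry, the sketch below shows a simpler relative of the same RKHS idea: an empirical HSIC estimate of the dependence between model outputs and a sensitive attribute, usable as a fairness penalty. The actual FairCOCCO measure is based on normalized (conditional) cross-covariance operators and is not implemented here.

```python
# Kernel-dependence fairness penalty sketch: a biased empirical HSIC estimate
# between model outputs and a sensitive attribute. FairCOCCO itself uses
# normalized (conditional) cross-covariance operators; this is only the simpler
# HSIC flavour of the same RKHS idea.
import torch

def rbf_kernel(x, sigma=1.0):
    # x: (n, d) -> (n, n) Gaussian kernel matrix.
    sq = torch.cdist(x, x) ** 2
    return torch.exp(-sq / (2 * sigma ** 2))

def hsic(preds, sens):
    # Biased HSIC estimator: trace(K H L H) / (n - 1)^2, with H = I - 11^T / n.
    n = preds.shape[0]
    K = rbf_kernel(preds)
    L = rbf_kernel(sens)
    H = torch.eye(n) - torch.ones(n, n) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

# Usage idea: add lam * hsic(model(x), a.float().unsqueeze(1)) to the training
# loss to penalize statistical dependence between predictions and the attribute.
torch.manual_seed(0)
preds = torch.randn(128, 1)
sens = (torch.rand(128, 1) < 0.5).float()
print(hsic(preds, sens))
```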
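For the Wasserstein-barycenter entry, the closed-form optimal fair predictor under Demographic Parity has the structure "push a base prediction through its group's CDF, then through the barycenter quantile function (a group-weighted average of the group quantile functions)". The sketch below is a rough empirical post-processing version of that structure on synthetic predictions; the precise estimator and guarantees are in the paper.

```python
# Rough post-processing sketch of the Wasserstein-barycenter fair predictor:
# map a base prediction through its group's empirical CDF, then through the
# weighted average of the groups' empirical quantile functions (the barycenter
# quantile function). The synthetic predictions are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Held-out base-model predictions for two sensitive groups (synthetic).
preds = {0: rng.normal(0.0, 1.0, size=2000), 1: rng.normal(1.0, 1.5, size=2000)}
weights = {s: len(p) / sum(len(q) for q in preds.values()) for s, p in preds.items()}

def empirical_cdf(sample, value):
    return np.searchsorted(np.sort(sample), value, side="right") / len(sample)

def barycenter_quantile(u):
    # Barycenter quantile function = weighted average of group quantile functions.
    return sum(w * np.quantile(preds[s], u) for s, w in weights.items())

def fair_prediction(base_pred, s):
    # Closed-form structure: Q_bar(F_s(base_pred)).
    u = np.clip(empirical_cdf(preds[s], base_pred), 1e-6, 1 - 1e-6)
    return barycenter_quantile(u)

# After post-processing, the prediction distribution is approximately the same
# for both groups, i.e. Demographic Parity holds approximately.
print(fair_prediction(0.0, 0), fair_prediction(0.0, 1))
```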
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.