Counterfactual Fairness for Predictions using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2310.17687v1
- Date: Thu, 26 Oct 2023 17:58:39 GMT
- Title: Counterfactual Fairness for Predictions using Generative Adversarial Networks
- Authors: Yuchen Ma, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
- Abstract summary: We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to ensure the notion of counterfactual fairness.
- Score: 28.65556399421874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in predictions is of direct importance in practice due to legal,
ethical, and societal reasons. It is often achieved through counterfactual
fairness, which ensures that the prediction for an individual is the same as
that in a counterfactual world under a different sensitive attribute. However,
achieving counterfactual fairness is challenging as counterfactuals are
unobservable. In this paper, we develop a novel deep neural network called
Generative Counterfactual Fairness Network (GCFN) for making predictions under
counterfactual fairness. Specifically, we leverage a tailored generative
adversarial network to directly learn the counterfactual distribution of the
descendants of the sensitive attribute, which we then use to enforce fair
predictions through a novel counterfactual mediator regularization. If the
counterfactual distribution is learned sufficiently well, our method is
mathematically guaranteed to ensure the notion of counterfactual fairness.
Thereby, our GCFN addresses key shortcomings of existing baselines that are
based on inferring latent variables, yet which (a) are potentially correlated
with the sensitive attributes and thus lead to bias, and (b) have weak
capability in constructing latent representations and thus low prediction
performance. Across various experiments, our method achieves state-of-the-art
performance. Using a real-world case study from recidivism prediction, we
further demonstrate that our method makes meaningful predictions in practice.
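For intuition, counterfactual fairness requires that P(Ŷ_{A←a}(U) = y | X = x, A = a) = P(Ŷ_{A←a'}(U) = y | X = x, A = a) for every outcome y and every counterfactual attribute value a'. Below is a minimal sketch, not the authors' implementation, of how the two-step idea described in the abstract could look in code. It assumes a binary sensitive attribute A, mediators M (the descendants of A), covariates X, and a binary label Y; the names (Generator, Predictor, gcfn_style_loss) and the exact form of the regularizer are illustrative assumptions.

```python
# A minimal sketch, assuming: binary sensitive attribute A, mediators M
# (descendants of A), covariates X, binary label Y. Step 1 (the GAN that learns
# the counterfactual mediator distribution) is only stubbed by `Generator`;
# Step 2 trains the predictor with a counterfactual mediator regularizer.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Stub for the (pretrained) GAN generator that outputs counterfactual mediators."""
    def __init__(self, dim_m: int, dim_x: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_m + dim_x + 1, 64),
                                 nn.ReLU(), nn.Linear(64, dim_m))

    def forward(self, m, x, a_cf):
        # a_cf: counterfactual value of the sensitive attribute, shape (batch, 1)
        return self.net(torch.cat([m, x, a_cf], dim=-1))

class Predictor(nn.Module):
    """Prediction head that sees only mediators and covariates."""
    def __init__(self, dim_m: int, dim_x: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_m + dim_x, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, m, x):
        return self.net(torch.cat([m, x], dim=-1))

def gcfn_style_loss(f, G, m, x, a, y, lam=1.0):
    """Prediction loss plus a counterfactual mediator regularizer (illustrative form)."""
    logits = f(m, x)
    bce = nn.functional.binary_cross_entropy_with_logits(logits, y)
    with torch.no_grad():                    # generator assumed already trained (GAN step)
        m_cf = G(m, x, 1.0 - a)              # counterfactual mediators under A <- 1 - a
    logits_cf = f(m_cf, x)
    # Penalize any gap between factual and counterfactual predictions.
    reg = ((torch.sigmoid(logits) - torch.sigmoid(logits_cf)) ** 2).mean()
    return bce + lam * reg

# Toy usage with random tensors (shapes only; no claim about the real training setup).
m, x = torch.randn(32, 4), torch.randn(32, 6)
a = torch.randint(0, 2, (32, 1)).float()
y = torch.randint(0, 2, (32, 1)).float()
f, G = Predictor(4, 6), Generator(4, 6)
loss = gcfn_style_loss(f, G, m, x, a, y)
loss.backward()
```

The design choice mirrored here is that fairness is enforced only through the mediators: the generator supplies counterfactual mediators, and the regularizer penalizes any gap between predictions on factual and counterfactual mediators.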
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search [8.278129731168127]
Deep neural networks (DNNs) often face challenges due to their vulnerability to various adversarial perturbations.
This paper introduces a novel approach, RobustFair, to evaluate the accurate fairness of DNNs when subjected to false or biased perturbations.
arXiv Detail & Related papers (2023-05-18T12:07:29Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691]
We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
arXiv Detail & Related papers (2022-06-01T15:06:11Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers [2.0625936401496237]
Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment.
We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers.
Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness; a minimal sketch of the general perturbation-based idea appears after this list.
arXiv Detail & Related papers (2022-02-09T15:06:45Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases arising from both a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Fair Normalizing Flows [10.484851004093919]
We present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations.
The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor.
We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
arXiv Detail & Related papers (2021-06-10T17:35:59Z)
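The two prediction-sensitivity papers above both build on the idea of probing how strongly a model's output reacts to perturbations of (protected) input features. The snippet below is a minimal, hypothetical illustration of that general idea using a finite-difference check; it is not the metric defined in either paper, and the toy logistic model and feature index are made up for the example.

```python
# A hypothetical finite-difference sensitivity check: how much does the predicted
# probability change when one (protected) input feature is nudged? This illustrates
# the general idea only, not the metric from either paper above.
import numpy as np

def prediction_sensitivity(predict_proba, x, protected_idx, eps=1e-3):
    """Central-difference sensitivity of the predicted probability w.r.t. one feature."""
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[protected_idx] += eps
    x_minus[protected_idx] -= eps
    return abs(predict_proba(x_plus) - predict_proba(x_minus)) / (2 * eps)

# Toy usage with a hand-coded logistic model (weights are made up for the example).
w, b = np.array([0.8, -0.5, 1.2]), 0.1
predict = lambda x: 1.0 / (1.0 + np.exp(-(w @ x + b)))
x = np.array([0.3, 1.0, -0.7])   # pretend feature index 1 encodes the sensitive attribute
print(prediction_sensitivity(predict, x, protected_idx=1))
```

A large sensitivity for the protected feature flags a potential fairness violation, which is the kind of signal a continual audit would monitor after deployment.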