FairPFN: Transformers Can do Counterfactual Fairness
- URL: http://arxiv.org/abs/2407.05732v1
- Date: Mon, 8 Jul 2024 08:36:44 GMT
- Title: FairPFN: Transformers Can do Counterfactual Fairness
- Authors: Jake Robertson, Noah Hollmann, Noor Awad, Frank Hutter
- Abstract summary: Causal and counterfactual fairness provides an intuitive way to define fairness that closely aligns with legal standards.
This study builds upon recent work in in-context learning (ICL) and prior-fitted networks (PFNs) to learn a transformer called FairPFN.
This model is pretrained using synthetic fairness data to eliminate the causal effects of protected attributes directly from observational data.
- Score: 41.052676173417574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning systems are increasingly prevalent across healthcare, law enforcement, and finance, but often operate on historical data, which may carry biases against certain demographic groups. Causal and counterfactual fairness provides an intuitive way to define fairness that closely aligns with legal standards. Despite its theoretical benefits, counterfactual fairness comes with several practical limitations, largely related to the reliance on domain knowledge and approximate causal discovery techniques in constructing a causal model. In this study, we take a fresh perspective on counterfactually fair prediction, building upon recent work in in-context learning (ICL) and prior-fitted networks (PFNs) to learn a transformer called FairPFN. This model is pretrained using synthetic fairness data to eliminate the causal effects of protected attributes directly from observational data, removing the requirement of access to the correct causal model in practice. In our experiments, we thoroughly assess the effectiveness of FairPFN in eliminating the causal impact of protected attributes on a series of synthetic case studies and real-world datasets. Our findings pave the way for a new and promising research area: transformers for causal and counterfactual fairness.
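The PFN recipe the abstract describes can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' released code: a toy transformer is pretrained on datasets drawn from random linear SCMs in which a protected attribute shifts the features while the target depends only on the exogenous noise, and is then queried in-context. The SCM prior, `TinyPFN`, and all hyperparameters are hypothetical stand-ins for the paper's actual fairness prior and architecture.
```python
import torch
import torch.nn as nn

def sample_synthetic_task(n=64, d=4):
    """Draw one toy dataset from a random linear SCM in which a binary
    protected attribute A shifts the observed features X, while the 'fair'
    target depends only on the exogenous noise U (A's causal effect is
    absent from the label)."""
    a = torch.randint(0, 2, (n, 1)).float()   # protected attribute
    u = torch.randn(n, d)                     # exogenous noise
    x = u + a @ torch.randn(1, d)             # A causally shifts the features
    y_fair = u @ torch.randn(d, 1)            # label untouched by A
    return torch.cat([x, a], dim=1), y_fair

class TinyPFN(nn.Module):
    """Minimal PFN-style regressor: context rows carry their labels, query
    rows carry a zero placeholder, and the transformer predicts the query
    labels in-context. (A real PFN also masks query-to-query attention;
    that detail is omitted here.)"""
    def __init__(self, d_in=5, d_model=64):
        super().__init__()
        self.embed = nn.Linear(d_in + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x_ctx, y_ctx, x_qry):
        ctx = self.embed(torch.cat([x_ctx, y_ctx], dim=-1))
        qry = self.embed(torch.cat([x_qry, torch.zeros(len(x_qry), 1)], dim=-1))
        h = self.encoder(torch.cat([ctx, qry]).unsqueeze(0)).squeeze(0)
        return self.head(h[len(x_ctx):])      # predictions for the query rows

model = TinyPFN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1_000):                     # pretraining on the synthetic prior
    xy, y = sample_synthetic_task()
    loss = nn.functional.mse_loss(model(xy[:48], y[:48], xy[48:]), y[48:])
    opt.zero_grad(); loss.backward(); opt.step()
# After pretraining, the frozen model is applied in-context to a real table;
# the user never has to supply a causal graph.
```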
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - AIM: Attributing, Interpreting, Mitigating Data Unfairness [40.351282126410545]
Existing fair machine learning (FairML) research has predominantly focused on mitigating discriminative bias in the model prediction.
We investigate a novel research problem: discovering samples that reflect biases/prejudices from the training data.
We propose practical algorithms for measuring and countering sample bias.
arXiv Detail & Related papers (2024-06-13T05:21:10Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
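Of the two strategies, reweighing is the easier to sketch. Below is a generic Kamiran-Calders-style reweighing pass, shown only to make the idea concrete; the paper's drift-driven weighting may differ in detail, and all names here are illustrative.
```python
import numpy as np

def reweigh(a, y):
    """Generic reweighing sketch: weight each (group, label) cell so that
    the protected attribute A and label Y look statistically independent,
    i.e. w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    a, y = np.asarray(a), np.asarray(y)
    w = np.empty(len(a))
    for av in np.unique(a):
        for yv in np.unique(y):
            cell = (a == av) & (y == yv)
            p_joint = cell.mean()
            if p_joint > 0:
                w[cell] = (a == av).mean() * (y == yv).mean() / p_joint
    return w

# Usage: weights = reweigh(A_train, y_train);
# clf.fit(X_train, y_train, sample_weight=weights)  # any sklearn-style model
```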
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
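One plausible instantiation of that recipe, sketched below purely as an assumption-laden illustration (not the authors' code): a demographic-parity-style regularizer is added to the task loss, and the combined objective is minimized at the worst-case weight perturbation within a small L2 ball, in the spirit of sharpness-aware minimization. The batch is assumed to contain both protected groups.
```python
import torch
import torch.nn as nn

def regularized_loss(model, x, y, a, lam=1.0):
    """Task loss plus a group mean-prediction gap (a is a 0/1 tensor)."""
    logits = model(x).squeeze(-1)
    task = nn.functional.binary_cross_entropy_with_logits(logits, y)
    p = torch.sigmoid(logits)
    gap = (p[a == 1].mean() - p[a == 0].mean()).abs()
    return task + lam * gap

def rfr_step(model, opt, x, y, a, rho=0.05):
    # First pass: gradient of the regularized loss at the current weights.
    opt.zero_grad()
    regularized_loss(model, x, y, a).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                       # ascend to worst-case weights
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)
    # Second pass: gradient at the perturbed weights.
    opt.zero_grad()
    loss = regularized_loss(model, x, y, a)
    loss.backward()
    with torch.no_grad():                       # restore the original weights
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)
    opt.step()                                  # descend with the robust gradient
    return loss.item()
```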
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks [71.6879432974126]
We introduce DECAF: a GAN-based fair synthetic data generator for tabular data.
We show that DECAF successfully removes undesired bias and is capable of generating high-quality synthetic data.
We provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
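The causally-aware part can be sketched compactly. The toy snippet below is an illustration, not DECAF's released code: each column is generated in topological order of an assumed DAG, and fairness is enforced at sampling time by severing the edge from the protected attribute into the outcome. DECAF's adversarial (GAN) training of the column generators is omitted, and the DAG here is a made-up example.
```python
import torch
import torch.nn as nn

# Assumed toy DAG over columns (A = protected, X1, X2, Y = outcome):
# A -> X1, X1 -> X2, and (A, X2) -> Y.
DAG = {"A": [], "X1": ["A"], "X2": ["X1"], "Y": ["A", "X2"]}
ORDER = ["A", "X1", "X2", "Y"]

class ColumnGenerator(nn.Module):
    """Small net producing one column from its DAG parents plus noise."""
    def __init__(self, n_parents):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_parents + 1, 16),
                                 nn.ReLU(), nn.Linear(16, 1))

    def forward(self, parents, noise):
        return self.net(torch.cat([parents, noise], dim=-1))

gens = {c: ColumnGenerator(len(ps)) for c, ps in DAG.items()}

def sample(n, drop_edges=()):
    """Generate rows column by column; drop_edges severs causal paths at
    sampling time by zeroing the corresponding parent input."""
    cols = {}
    for c in ORDER:
        parents = [cols[p] if (p, c) not in drop_edges else torch.zeros(n, 1)
                   for p in DAG[c]]
        parent_mat = torch.cat(parents, dim=-1) if parents else torch.empty(n, 0)
        cols[c] = gens[c](parent_mat, torch.randn(n, 1))
    return cols

fair_data = sample(1000, drop_edges={("A", "Y")})  # Y no longer sees A directly
```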
arXiv Detail & Related papers (2021-10-25T12:39:56Z) - Fair Normalizing Flows [10.484851004093919]
We present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations.
The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor.
We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
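A minimal sketch of the flavor of FNF, under strong simplifying assumptions (diagonal affine flows, toy Gaussian groups; not the authors' code): one invertible flow per protected group maps inputs to a shared latent space, and the flows' exact log-likelihoods make the two latent densities auditable.
```python
import math
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Per-dimension affine bijection z = x * exp(s) + t with exact log-det."""
    def __init__(self, d):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(d))
        self.t = nn.Parameter(torch.zeros(d))

    def log_prob(self, x):
        z = x * torch.exp(self.s) + self.t
        log_det = self.s.sum()                  # |det dz/dx| = prod exp(s)
        log_pz = -0.5 * (z ** 2).sum(-1) \
                 - 0.5 * x.shape[-1] * math.log(2 * math.pi)
        return log_pz + log_det                 # exact density of x

d = 2
f0, f1 = AffineFlow(d), AffineFlow(d)
x0 = torch.randn(512, d) + 1.0                  # toy group-0 inputs
x1 = torch.randn(512, d) - 1.0                  # toy group-1 inputs
opt = torch.optim.Adam(list(f0.parameters()) + list(f1.parameters()), lr=1e-2)
for _ in range(500):
    # Exact maximum likelihood pushes both groups' latents toward the same
    # N(0, I) prior, making the two latent distributions hard to tell apart.
    loss = -(f0.log_prob(x0).mean() + f1.log_prob(x1).mean())
    opt.zero_grad(); loss.backward(); opt.step()
# FNF proper minimizes the exact statistical distance between the two latent
# distributions, which upper-bounds the unfairness of any downstream predictor.
```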
arXiv Detail & Related papers (2021-06-10T17:35:59Z) - FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
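One way to read that combination, sketched under assumptions (the networks, objective, and weight normalization below are illustrative choices, not the paper's specification): a positive weighting net and a classifier are trained jointly against an adversary that tries to recover the protected attribute from the classifier's scores on the weighted data.
```python
import torch
import torch.nn as nn

d = 8
clf = nn.Linear(d, 1)                                      # main predictor
weigher = nn.Sequential(nn.Linear(d, 1), nn.Softplus())    # positive weights
adv = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # A from score
opt_main = torch.optim.Adam(list(clf.parameters())
                            + list(weigher.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.functional.binary_cross_entropy_with_logits

def train_step(x, y, a):
    """One alternating update; y and a are float 0/1 tensors."""
    score = clf(x).squeeze(-1)
    w = weigher(x).squeeze(-1)
    w = w / w.mean()                                       # keep mean weight at 1
    # 1) Adversary: recover A from the prediction scores on the weighted data.
    adv_in = score.detach().unsqueeze(-1)
    adv_loss = (w.detach() * bce(adv(adv_in).squeeze(-1), a,
                                 reduction="none")).mean()
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Predictor and weigher: fit the labels on the weighted data while
    #    choosing weights (and scores) under which the adversary fails.
    task = (w * bce(score, y, reduction="none")).mean()
    fool = (w * bce(adv(score.unsqueeze(-1)).squeeze(-1), a,
                    reduction="none")).mean()
    loss = task - fool
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return w.detach()   # per-instance weights as an interpretable fairness signal
```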
arXiv Detail & Related papers (2020-11-15T10:48:56Z) - Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z)