Achieving Counterfactual Fairness with Imperfect Structural Causal Model
- URL: http://arxiv.org/abs/2303.14665v1
- Date: Sun, 26 Mar 2023 09:37:29 GMT
- Title: Achieving Counterfactual Fairness with Imperfect Structural Causal Model
- Authors: Tri Dung Duong, Qian Li, Guandong Xu
- Abstract summary: We propose a novel minimax game-theoretic model for counterfactual fairness.
We also theoretically prove an error bound for the proposed minimax model.
Empirical experiments on multiple real-world datasets demonstrate superior performance in both accuracy and fairness.
- Score: 11.108866104714627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual fairness mitigates the discrepancy between a model's
prediction for an individual in the actual world (observational data) and its
prediction in the counterfactual world (i.e., had the individual belonged to a
different sensitive group). Existing studies must pre-define a structural
causal model that captures the relationships among variables for counterfactual
inference; however, the underlying causal model is usually unknown and
difficult to validate in real-world scenarios. Moreover, misspecification of
the causal model potentially degrades predictive performance and thus leads to
unfair decisions. In this research, we propose a novel minimax game-theoretic
model for counterfactual fairness that produces accurate predictions while
achieving counterfactually fair decisions, relaxing the strong assumptions on
the structural causal model. In addition, we theoretically prove an error bound
for the proposed minimax model. Empirical experiments on multiple real-world
datasets demonstrate superior performance in both accuracy and fairness. Source
code is available at
https://github.com/tridungduong16/counterfactual_fairness_game_theoretic.
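For reference, the standard criterion the abstract builds on (following Kusner et al., 2017) requires the prediction to be invariant under counterfactual interventions on the sensitive attribute A, conditioned on the observed individual:

$$ P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big) \quad \text{for all } y \text{ and all } a'. $$

The abstract does not spell out the minimax objective itself, so the following is only a minimal sketch of a generic adversarial (minimax) fairness training loop, under the assumption that a predictor f plays against an adversary g that tries to recover the sensitive attribute from f's output. The class names, the loss, and the weight lam are illustrative, not the authors' implementation (see their repository for that).

```python
# Hedged sketch of a generic minimax fairness objective:
#   min_f max_g  L_task(f) - lam * L_adv(g)
# Names and losses are assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """f_theta: maps non-sensitive features to a task logit."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Adversary(nn.Module):
    """g_phi: tries to recover the sensitive attribute A from f's output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, y_logit):
        return self.net(y_logit.unsqueeze(-1)).squeeze(-1)

def train_step(f, g, opt_f, opt_g, x, y, a, lam=1.0):
    """One alternating minimax step; y and a are float tensors in {0, 1}."""
    bce = nn.BCEWithLogitsLoss()
    # 1) Adversary ascends: predict A from the (detached) prediction.
    opt_g.zero_grad()
    adv_loss = bce(g(f(x).detach()), a)
    adv_loss.backward()
    opt_g.step()
    # 2) Predictor descends: task loss minus the adversary's success.
    opt_f.zero_grad()
    y_logit = f(x)
    pred_loss = bce(y_logit, y) - lam * bce(g(y_logit), a)
    pred_loss.backward()
    opt_f.step()
    return pred_loss.item(), adv_loss.item()
```

At the equilibrium of this game, g can no longer infer A from the prediction, which approximates counterfactual invariance without committing to one fully specified structural causal model.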
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers biases induced by a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Transport-based Counterfactual Models [0.0]
State-of-the-art models to compute counterfactuals are either unrealistic or infeasible.
We address the problem of designing realistic and feasible counterfactuals in the absence of a causal model.
We argue that optimal transport theory defines relevant transport-based counterfactual models (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-08-30T07:28:19Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We point out an apparently unnoticed limitation of the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Explainability for fair machine learning [10.227479910430866]
We present a new approach to explaining fairness in machine learning, based on the Shapley value paradigm.
Our fairness explanations attribute a model's overall unfairness to individual input features, even in cases where the model does not operate on sensitive attributes directly.
We propose a meta algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model, rather than a new model entirely.
arXiv Detail & Related papers (2020-10-14T20:21:01Z)
- Convex Fairness Constrained Model Using Causal Effect Estimators [6.414055487487486]
We devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias.
We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks.
arXiv Detail & Related papers (2020-02-16T03:40:04Z)
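To make the transport-based entry above concrete: in the simplest one-dimensional case, the optimal transport map between two continuous distributions is the monotone quantile-to-quantile map, so a counterfactual value for an individual in group a is obtained by mapping their within-group quantile onto the other group's distribution. The following is a minimal sketch under that assumption; the function name and the synthetic data are illustrative, not taken from the cited paper.

```python
# Hedged sketch of a transport-based counterfactual in 1-D, assuming the
# monotone (quantile-to-quantile) optimal transport map between groups.
import numpy as np

def transport_counterfactual(x, source, target):
    """Map value x from the source group's distribution to the target group's
    distribution via empirical quantiles (1-D optimal transport)."""
    # Rank of x within the source sample -> quantile level in [0, 1].
    q = np.searchsorted(np.sort(source), x, side="right") / len(source)
    # Evaluate the target group's empirical quantile function at that level.
    return np.quantile(target, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
income_a = rng.lognormal(mean=10.0, sigma=0.5, size=5000)  # group A = a
income_b = rng.lognormal(mean=10.3, sigma=0.4, size=5000)  # group A = a'

x = 30000.0  # an individual observed in group a
x_cf = transport_counterfactual(x, income_a, income_b)
print(f"observed: {x:.0f}, counterfactual in group a': {x_cf:.0f}")
```

A counterfactually fair predictor would then be required to return (approximately) the same decision for x and its transported counterpart x_cf, without any structural causal model being specified.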