Interventional Fairness on Partially Known Causal Graphs: A Constrained
Optimization Approach
- URL: http://arxiv.org/abs/2401.10632v2
- Date: Fri, 8 Mar 2024 10:51:42 GMT
- Title: Interventional Fairness on Partially Known Causal Graphs: A Constrained
Optimization Approach
- Authors: Aoqi Zuo, Yiqing Li, Susan Wei, Mingming Gong
- Abstract summary: We propose a framework for achieving causal fairness based on the notion of interventions when the true causal graph is partially known.
The proposed approach involves modeling fair prediction using a class of causal DAGs that can be learned from observational data combined with domain knowledge.
Results on both simulated and real-world datasets demonstrate the effectiveness of this method.
- Score: 44.48385991344273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair machine learning aims to prevent discrimination against individuals or
sub-populations based on sensitive attributes such as gender and race. In
recent years, causal inference methods have been increasingly used in fair
machine learning to measure unfairness by causal effects. However, current
methods assume that the true causal graph is given, which is often not true in
real-world applications. To address this limitation, this paper proposes a
framework for achieving causal fairness based on the notion of interventions
when the true causal graph is partially known. The proposed approach involves
modeling fair prediction using a Partially Directed Acyclic Graph (PDAG),
specifically, a class of causal DAGs that can be learned from observational
data combined with domain knowledge. The PDAG is used to measure causal
fairness, and a constrained optimization problem is formulated to balance
between fairness and accuracy. Results on both simulated and real-world
datasets demonstrate the effectiveness of this method.
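The abstract's constrained optimization between fairness and accuracy can be sketched as a penalized objective: minimize prediction loss plus a weighted term for the causal-fairness violation. The sketch below is a toy Lagrangian-penalty version under strong simplifying assumptions — the data-generating process, the logistic model, and the group-gap proxy standing in for the interventional-fairness measure are all illustrative, not the paper's actual PDAG-based formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: sensitive attribute a influences feature x, which drives label y.
n = 2000
a = rng.integers(0, 2, n)                     # sensitive attribute
x = a + rng.normal(0, 1, n)                   # proxy feature correlated with a
y = (x + rng.normal(0, 1, n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, lam):
    """Prediction loss plus lam times a fairness-violation term (here a crude
    group-gap proxy, standing in for an interventional-fairness measure)."""
    p = sigmoid(w[0] + w[1] * x)
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    return bce + lam * gap

def fit(lam):
    # Crude grid search over intercept and slope, for illustration only.
    grid = np.linspace(-3, 3, 61)
    best = min((loss(np.array([b, w]), lam), b, w) for b in grid for w in grid)
    return np.array(best[1:])

def gap(w):
    p = sigmoid(w[0] + w[1] * x)
    return abs(p[a == 1].mean() - p[a == 0].mean())

w_plain = fit(lam=0.0)   # accuracy only
w_fair = fit(lam=5.0)    # accuracy traded against the fairness penalty
print(gap(w_plain), gap(w_fair))  # the penalized model's gap should be smaller
```

Raising `lam` tightens the fairness constraint at some cost in accuracy, which is the trade-off the paper's constrained formulation makes explicit.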
Related papers
- Fair GLASSO: Estimating Fair Graphical Models with Unbiased Statistical Behavior [31.92791228859847]
Many real-world models exhibit unfair discriminatory behavior due to biases in data.
We introduce fairness for graphical models in the form of two bias metrics to promote balance in statistical similarities.
We present Fair GLASSO, a regularized graphical lasso approach to obtain sparse Gaussian precision matrices.
arXiv Detail & Related papers (2024-06-13T18:07:04Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
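In one dimension, the Wasserstein barycenter of several score distributions is simply the average of their quantile functions, so "fair" predictions can be obtained by mapping each group's scores onto that common distribution. The sketch below implements this classic repair recipe for illustration; it is a generic post-processing scheme, not the paper's full method.

```python
import numpy as np

def barycenter_repair(scores, groups):
    """Map each group's scores onto the 1-D Wasserstein barycenter of the
    per-group score distributions (the pointwise average of their quantile
    functions), equalizing the score distributions across groups."""
    qs = np.linspace(0, 1, 101)
    labels = np.unique(groups)
    # Quantile function of each group's empirical score distribution.
    quantiles = {g: np.quantile(scores[groups == g], qs) for g in labels}
    # Barycenter quantile function = average of the group quantile functions.
    bary = np.mean([quantiles[g] for g in labels], axis=0)
    repaired = np.empty_like(scores, dtype=float)
    for g in labels:
        s = scores[groups == g]
        # Each score's rank within its group (empirical CDF value) ...
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # ... pushed through the barycenter quantile function.
        repaired[groups == g] = np.interp(ranks, qs, bary)
    return repaired
```

After repair, every group's scores follow the same barycenter distribution, so distribution-level disparities (e.g. in mean score) vanish while within-group rankings are preserved — which is what makes the enforced fairness usable for attributing bias to individual features.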
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Causal Fair Machine Learning via Rank-Preserving Interventional Distributions [0.5062312533373299]
We define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world.
We propose rank-preserving interventional distributions to define a specific FiND world in which this holds.
We show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.
arXiv Detail & Related papers (2023-07-24T13:46:50Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
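The simulation step described above — deleting a biased causal edge and regenerating a debiased dataset — can be sketched for a linear structural equation model: refit the child variable on its remaining parents and resample it from that reduced model. Everything here (the toy gender/education/salary network, the linear refit, the noise model) is an illustrative assumption, not D-BIAS's novel simulation method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy causal network (linear SEM): gender -> salary <- education.
n = 5000
gender = rng.integers(0, 2, n).astype(float)
education = rng.normal(0, 1, n)
salary = 2.0 * gender + 1.5 * education + rng.normal(0, 1, n)

def simulate_edge_deletion(child, parents, deleted):
    """Regenerate `child` after deleting the edge from `deleted`:
    fit the full model to estimate the noise scale, refit on the remaining
    parents, then resample the child from the reduced model plus noise."""
    X = np.column_stack([np.ones(len(child))] + parents)
    coef, *_ = np.linalg.lstsq(X, child, rcond=None)
    resid = child - X @ coef                      # original noise estimate
    kept = [p for p in parents if p is not deleted]
    Xk = np.column_stack([np.ones(len(child))] + kept)
    coef_k, *_ = np.linalg.lstsq(Xk, child, rcond=None)
    # Reduced-model fit plus resampled noise at the original residual scale,
    # severing the deleted parent's influence on the child.
    return Xk @ coef_k + rng.normal(0, resid.std(), len(child))

new_salary = simulate_edge_deletion(salary, [gender, education], deleted=gender)
print(abs(np.corrcoef(gender, new_salary)[0, 1]))  # near zero after deletion
```

In the original data, gender and salary are strongly correlated; in the simulated dataset the deleted edge no longer transmits that influence.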
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
arXiv Detail & Related papers (2022-05-27T13:40:50Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.