Counterfactual Fairness with Partially Known Causal Graph
- URL: http://arxiv.org/abs/2205.13972v1
- Date: Fri, 27 May 2022 13:40:50 GMT
- Title: Counterfactual Fairness with Partially Known Causal Graph
- Authors: Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong
- Abstract summary: This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
- Score: 85.15766086381352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair machine learning aims to avoid treating individuals or sub-populations
unfavourably based on \textit{sensitive attributes}, such as gender and race.
Methods in fair machine learning that are built on causal inference ascertain
discrimination and bias through causal effects. Though
causality-based fair learning is attracting increasing attention, current
methods assume the true causal graph is fully known. This paper proposes a
general method to achieve the notion of counterfactual fairness when the true
causal graph is unknown. To be able to select features that lead to
counterfactual fairness, we derive the conditions and algorithms to identify
ancestral relations between variables on a \textit{Partially Directed Acyclic
Graph (PDAG)}, specifically, a class of causal DAGs that can be learned from
observational data combined with domain knowledge. Interestingly, we find that
counterfactual fairness can be achieved as if the true causal graph were fully
known, when specific background knowledge is provided: the sensitive attributes
do not have ancestors in the causal graph. Results on both simulated and
real-world datasets demonstrate the effectiveness of our method.
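The feature-selection step described in the abstract — keeping only variables that cannot be descendants of a sensitive attribute on a PDAG — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the edge-list representation, the toy graph, and the function name `possible_descendants` are all assumptions made for the example. A node is conservatively treated as unsafe if it is reachable from the sensitive attribute via a possibly-directed path (directed edges followed forwards, undirected edges in either direction).

```python
# Hedged sketch: finding "definite non-descendants" of a sensitive attribute
# on a PDAG. The edge-list encoding and the toy graph are illustrative only.
from collections import defaultdict

def possible_descendants(directed, undirected, source):
    """Nodes reachable from `source` via a possibly-directed path:
    directed edges are followed forwards, undirected edges both ways."""
    adj = defaultdict(set)
    for u, v in directed:
        adj[u].add(v)                   # X -> Y: follow forwards only
    for u, v in undirected:
        adj[u].add(v)                   # X - Y: orientation unknown,
        adj[v].add(u)                   # so traverse in both directions
    seen, stack = {source}, [source]
    while stack:
        node = stack.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen - {source}

# Toy PDAG: A (sensitive) -> X1, X1 - X2 (undirected), X3 -> X1
directed = [("A", "X1"), ("X3", "X1")]
undirected = [("X1", "X2")]

unsafe = possible_descendants(directed, undirected, "A")  # {"X1", "X2"}
safe = {"X1", "X2", "X3"} - unsafe                        # {"X3"}
```

Here `X2` is excluded even though the edge `X1 - X2` might be oriented `X2 -> X1` in the true DAG: since the orientation is unknown, only definite non-descendants such as `X3` are safe inputs for a counterfactually fair predictor.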
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Graph Fairness Learning under Distribution Shifts [33.9878682279549]
Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator to produce numerous graphs with significant bias under different distances.
arXiv Detail & Related papers (2024-01-30T06:51:24Z)
- Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach [44.48385991344273]
We propose a framework for achieving causal fairness based on the notion of interventions when the true causal graph is partially known.
The proposed approach involves modeling fair prediction using a class of causal DAGs that can be learned from observational data combined with domain knowledge.
Results on both simulated and real-world datasets demonstrate the effectiveness of this method.
arXiv Detail & Related papers (2024-01-19T11:20:31Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fair Attribute Completion on Graph with Missing Attributes [14.950261239035882]
We propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes.
We show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning.
arXiv Detail & Related papers (2023-02-25T04:12:30Z)
- Impact Of Missing Data Imputation On The Fairness And Accuracy Of Graph Node Classifiers [0.19573380763700707]
We analyze the effect on fairness in the context of graph data (node attributes) imputation using different embedding and neural network methods.
Our results provide valuable insights into graph data fairness and how to handle missingness in graphs efficiently.
arXiv Detail & Related papers (2022-11-01T23:16:36Z)
- Identifiability of Causal-based Fairness Notions: A State of the Art [4.157415305926584]
Machine learning algorithms can produce biased outcomes/predictions, typically against minorities and under-represented sub-populations.
This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.
arXiv Detail & Related papers (2022-03-11T13:10:32Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for the biases induced by these factors.
We generate counterfactuals corresponding to perturbations on each node's and their neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.