Learning Fair Node Representations with Graph Counterfactual Fairness
- URL: http://arxiv.org/abs/2201.03662v1
- Date: Mon, 10 Jan 2022 21:43:44 GMT
- Title: Learning Fair Node Representations with Graph Counterfactual Fairness
- Authors: Jing Ma, Ruocheng Guo, Mengting Wan, Longqi Yang, Aidong Zhang,
Jundong Li
- Abstract summary: We propose graph counterfactual fairness, a notion that accounts for biases induced by a node's own and its neighbors' sensitive attributes.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
- Score: 56.32231787113689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair machine learning aims to mitigate the biases of model predictions
against certain subpopulations regarding sensitive attributes such as race and
gender. Among the many existing fairness notions, counterfactual fairness
measures the model fairness from a causal perspective by comparing the
predictions of each individual from the original data and from counterfactuals in
which the individual's sensitive attribute values have been modified. Recently, a
few works have extended counterfactual fairness to graph data,
but most of them neglect the following facts that can lead to biases: 1) the
sensitive attributes of each node's neighbors may causally affect the
prediction w.r.t. this node; 2) the sensitive attributes may causally affect
other features and the graph structure. To tackle these issues, in this paper,
we propose a novel fairness notion - graph counterfactual fairness, which
accounts for the biases induced by the above facts. To learn node representations
towards graph counterfactual fairness, we propose a novel framework based on
counterfactual data augmentation. In this framework, we generate
counterfactuals corresponding to perturbations on each node's and its
neighbors' sensitive attributes. Then we enforce fairness by minimizing the
discrepancy between the representations learned from the original graph and the
counterfactuals for each node. Experiments on both synthetic and real-world
graphs show that our framework outperforms the state-of-the-art baselines in
graph counterfactual fairness, and also achieves comparable prediction
performance.
Related papers
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose a novel adapter-tuning framework, called GraphPAR, that endows pre-trained graph models with provable fairness.
Specifically, we design a sensitive semantic augmenter on node representations, to extend the node representations with different sensitive attribute semantics for each node.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
- Graph Fairness Learning under Distribution Shifts [33.9878682279549]
Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator that produces numerous graphs with significant bias and at different distribution distances.
arXiv Detail & Related papers (2024-01-30T06:51:24Z)
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fair Attribute Completion on Graph with Missing Attributes [14.950261239035882]
We propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes.
We show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning.
arXiv Detail & Related papers (2023-02-25T04:12:30Z)
- Graph Learning with Localized Neighborhood Fairness [32.301270877134]
We introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings.
We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings.
arXiv Detail & Related papers (2022-12-22T21:20:43Z)
- Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
arXiv Detail & Related papers (2022-05-27T13:40:50Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counter-act homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm improves the fairness of all models at the cost of only a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.