Towards Fair Graph Neural Networks via Graph Counterfactual
- URL: http://arxiv.org/abs/2307.04937v2
- Date: Mon, 21 Aug 2023 14:05:05 GMT
- Title: Towards Fair Graph Neural Networks via Graph Counterfactual
- Authors: Zhimeng Guo, Jialiang Li, Teng Xiao, Yao Ma, Suhang Wang
- Abstract summary: Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
- Score: 38.721295940809135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have shown great ability in
representation learning on graphs, facilitating various tasks. Despite their
great performance in modeling graphs, recent works show that GNNs tend to
inherit and amplify the bias from training data, raising concerns about the
adoption of GNNs in high-stakes scenarios. Hence, many efforts have been
devoted to fairness-aware GNNs. However, most existing fair GNNs learn fair
node representations by adopting statistical fairness notions, which may fail
to alleviate bias in the presence of statistical anomalies. Motivated by causal
theory, several attempts utilize graph counterfactual fairness to mitigate the
root causes of unfairness. However, these methods suffer from non-realistic
counterfactuals obtained by perturbation or generation. In this paper, we take
a causal view on the fair graph learning problem. Guided by the causal
analysis, we propose a novel framework, CAF, which can select counterfactuals
from training data to avoid non-realistic counterfactuals and adopt the
selected counterfactuals to learn fair node representations for the node
classification task. Extensive experiments on synthetic and real-world datasets
show the effectiveness of CAF. Our code is available at
https://github.com/TimeLovercc/CAF-GNN.
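The core idea of the abstract, selecting observed counterfactuals from the training data instead of synthesizing them, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration rather than the paper's exact algorithm: the function names, the cosine-similarity selection rule, and the L2 alignment penalty are assumptions made for exposition. For each node it picks the most similar training nodes that share the node's label but carry a different sensitive attribute value, then penalizes the distance between the node's representation and those of its selected counterfactuals.

```python
import numpy as np

def select_counterfactuals(reps, labels, sens, k=1):
    """Pick, for each node, the k most similar *observed* training nodes that
    share its label but carry a different sensitive attribute value.  Using
    real nodes as counterfactual candidates avoids generating unrealistic
    perturbed graphs.

    reps:   (N, d) node representations (e.g., from a GNN encoder)
    labels: (N,)   class labels
    sens:   (N,)   binary sensitive attribute
    Returns an (N, k) index array of selected counterfactual nodes.
    """
    # Cosine similarity between all pairs of node representations.
    normed = reps / (np.linalg.norm(reps, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T

    cf_idx = np.zeros((len(reps), k), dtype=int)
    for i in range(len(reps)):
        # Candidates: same label, different sensitive attribute.
        cand = np.where((labels == labels[i]) & (sens != sens[i]))[0]
        if len(cand) == 0:            # no candidate: fall back to the node itself
            cf_idx[i] = i
            continue
        chosen = cand[np.argsort(-sim[i, cand])][:k]
        # Pad by repeating the last pick if fewer than k candidates exist.
        cf_idx[i] = np.pad(chosen, (0, k - len(chosen)), mode="edge")
    return cf_idx

def counterfactual_alignment_loss(reps, cf_idx):
    """Simple L2 invariance penalty: keep a node's representation close to the
    mean representation of its selected counterfactuals, so the representation
    becomes insensitive to the sensitive attribute."""
    cf_reps = reps[cf_idx].mean(axis=1)          # (N, d) average over k picks
    return np.mean(np.sum((reps - cf_reps) ** 2, axis=1))

# Toy usage: 6 nodes with 2-d representations, binary labels and attribute.
reps = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                 [0.1, 0.9], [0.8, 0.2], [0.2, 0.8]])
labels = np.array([0, 0, 1, 1, 0, 1])
sens = np.array([0, 1, 0, 1, 0, 1])
cf = select_counterfactuals(reps, labels, sens, k=1)
print(counterfactual_alignment_loss(reps, cf))
```

Because the counterfactual candidates are real observed nodes, no perturbed or generated graphs are involved, which is the property the abstract emphasizes; in a full model such a penalty would typically be combined with the node classification loss.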
Related papers
- Fair Graph Neural Network with Supervised Contrastive Regularization [12.666235467177131]
We propose a novel model for training fairness-aware Graph Neural Networks (GNNs).
Our approach integrates Supervised Contrastive Loss and Environmental Loss to enhance both accuracy and fairness.
arXiv Detail & Related papers (2024-04-09T07:49:05Z)
- The Devil is in the Data: Learning Fair Graph Neural Networks via Partial Knowledge Distillation [35.17007613884196]
Graph neural networks (GNNs) are being increasingly used in many high-stakes tasks.
GNNs have been shown to be unfair as they tend to make discriminatory decisions toward certain demographic groups.
We present a demographic-agnostic method to learn fair GNNs via knowledge distillation, namely FairGKD.
arXiv Detail & Related papers (2023-11-29T05:54:58Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have attracted a surge of attention as they can reduce prediction bias against particular demographic groups.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Interpreting Unfairness in Graph Neural Networks via Training Node Attribution [46.384034587689136]
We study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs.
We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets.
arXiv Detail & Related papers (2022-11-25T21:52:30Z)
- EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information [37.90997236795843]
Graph neural networks (GNNs) have shown great power in modeling graph structured data.
However, GNNs may make predictions biased by protected sensitive attributes, e.g., skin color and gender.
We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy.
arXiv Detail & Related papers (2020-09-03T05:17:30Z)
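Several of the works above, like the "statistical fairness notions" referenced in the CAF abstract, concern group fairness metrics such as statistical parity and equal opportunity. A minimal NumPy sketch of these two standard metrics is given below; the variable names are illustrative, and predictions and sensitive attributes are assumed to be binary.

```python
import numpy as np

def statistical_parity_difference(pred, sens):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)| for binary predictions and attribute."""
    return abs(pred[sens == 0].mean() - pred[sens == 1].mean())

def equal_opportunity_difference(pred, label, sens):
    """True-positive-rate gap |P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|.
    Assumes each (y=1, s) group is non-empty."""
    pos = label == 1
    return abs(pred[pos & (sens == 0)].mean() - pred[pos & (sens == 1)].mean())

# Toy check with random binary predictions over 100 nodes.
rng = np.random.default_rng(0)
pred, label, sens = (rng.integers(0, 2, 100) for _ in range(3))
print(statistical_parity_difference(pred, sens))
print(equal_opportunity_difference(pred, label, sens))
```

Values closer to zero indicate more equal treatment of the two demographic groups under either metric.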
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.