Interpreting Unfairness in Graph Neural Networks via Training Node Attribution
- URL: http://arxiv.org/abs/2211.14383v1
- Date: Fri, 25 Nov 2022 21:52:30 GMT
- Title: Interpreting Unfairness in Graph Neural Networks via Training Node Attribution
- Authors: Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
- Abstract summary: We study a novel problem of interpreting GNN unfairness by attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs.
We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets.
- Score: 46.384034587689136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as the leading paradigm for solving
graph analytical problems in various real-world applications. Nevertheless,
GNNs could potentially render biased predictions towards certain demographic
subgroups. Understanding how the bias in predictions arises is critical, as it
guides the design of GNN debiasing mechanisms. However, most existing works
overwhelmingly focus on GNN debiasing, but fall short on explaining how such
bias is induced. In this paper, we study a novel problem of interpreting GNN
unfairness by attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution
Disparity (PDD) to measure the bias exhibited in GNNs, and develop an algorithm
to efficiently estimate the influence of each training node on such bias. We
verify the validity of PDD and the effectiveness of influence estimation
through experiments on real-world datasets. Finally, we also demonstrate how
the proposed framework could be used for debiasing GNNs. Open-source code can
be found at https://github.com/yushundong/BIND.
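The abstract specifies the framework only at a high level: a distribution-level bias measure (PDD) plus a per-node estimate of each training node's influence on that measure. Below is a minimal sketch of the idea, assuming PDD compares the predicted-probability distributions of two demographic subgroups; the abstract does not give PDD's exact form, so the 1-Wasserstein distance is an illustrative stand-in, and the leave-one-out loop is the slow conceptual baseline that BIND's estimator is designed to avoid.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def distribution_disparity(probs, sensitive):
    """Distance between the predicted-probability distributions of two
    demographic subgroups. `probs` holds each node's predicted probability
    of the positive class; `sensitive` is a 0/1 group label. The abstract
    does not give PDD's exact form; the 1-Wasserstein distance here is an
    illustrative stand-in."""
    return wasserstein_distance(probs[sensitive == 0],
                                probs[sensitive == 1])

def loo_influence(train_ids, retrain_fn, probs_full, sensitive):
    """Brute-force leave-one-out influence of each training node on the
    bias measure. `retrain_fn` is a hypothetical hook that retrains the
    GNN without one node and returns new predictions. BIND's point is to
    estimate this efficiently instead of running this O(n) retraining loop."""
    base = distribution_disparity(probs_full, sensitive)
    influence = {}
    for v in train_ids:
        probs_without_v = retrain_fn(exclude=v)
        # Positive influence: removing v would reduce the measured bias.
        influence[v] = base - distribution_disparity(probs_without_v, sensitive)
    return influence
```

Reading off the nodes with the largest positive influence then identifies the main contributors to the measured bias, which is also how the framework can be used for debiasing: retrain after dropping those nodes.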
Related papers
- ComFairGNN: Community Fair Graph Neural Network [6.946292440025013]
We introduce a novel framework designed to mitigate community-level bias in Graph Neural Networks (GNNs).
Our approach employs a learnable coreset-based debiasing function that addresses bias arising from diverse local neighborhood distributions during GNN neighborhood aggregation.
arXiv Detail & Related papers (2024-11-07T02:04:34Z) - ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z) - Editable Graph Neural Network for Node Classifications [43.39295712456175]
We propose Editable Graph Neural Networks (EGNN) to correct the model prediction on misclassified nodes.
EGNN stitches an MLP to the underlying GNN, whose weights are frozen during model editing.
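A minimal sketch of this stitching idea, assuming a PyTorch-style GNN with a (features, edge_index) forward signature; the correction module, its sizes, and the additive combination are illustrative assumptions, not the paper's exact architecture:

```python
import torch.nn as nn

class StitchedGNN(nn.Module):
    """A frozen base GNN plus a small trainable MLP whose output is added
    to the GNN's logits. Only the MLP is updated during editing; the
    hidden size and additive combination are illustrative choices."""
    def __init__(self, gnn, in_dim, num_classes, hidden=64):
        super().__init__()
        self.gnn = gnn
        for p in self.gnn.parameters():   # freeze the base model
            p.requires_grad_(False)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x, edge_index):
        # The correction term reads node features only, so editing one
        # node's prediction involves no neighbor propagation.
        return self.gnn(x, edge_index) + self.mlp(x)
```

Editing then amounts to a few gradient steps on the misclassified node's cross-entropy loss, updating only model.mlp.parameters().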
arXiv Detail & Related papers (2023-05-24T19:35:42Z) - On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfactory performance in various graph analytical problems.
GNNs could yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
arXiv Detail & Related papers (2022-06-24T06:49:21Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features (a rationale) that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
Our proposed model outperforms state-of-the-art methods, and DGNN is a flexible framework for enhancing existing GNNs.
arXiv Detail & Related papers (2022-01-19T16:50:29Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most existing Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
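Because EDITS debiases the data before any specific GNN is trained, the model-agnostic claim is easy to illustrate. A minimal sketch, assuming attribute debiasing only and substituting a simple linear projection for EDITS' actual Wasserstein-based objective; the projection removes each feature's linear dependence on the sensitive attribute:

```python
import numpy as np

def debias_attributes(X, s):
    """Model-agnostic pre-processing in the spirit of EDITS: debias node
    attributes before any GNN sees them. As a simple stand-in for the
    paper's objective, remove from each feature column its linear
    component explained by the sensitive attribute s, so the residual
    features are uncorrelated with s."""
    s_c = (s - s.mean()).reshape(-1, 1)   # centered sensitive attribute
    beta = (s_c.T @ X) / (s_c.T @ s_c)    # per-feature least-squares slope
    return X - s_c @ beta                 # residual is uncorrelated with s
```

Any downstream GNN then trains on the debiased attribute matrix unchanged, which is what model-agnostic pre-processing buys you.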
arXiv Detail & Related papers (2021-08-11T14:07:01Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
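A minimal sketch of the instance-reweighting side of this idea, assuming node embeddings are available for both the biased training sample and an unbiased sample from the inference distribution; the domain-classifier density-ratio estimator below is a common stand-in, not SR-GNN's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def shift_correction_weights(train_emb, target_emb):
    """Importance weights that re-balance a biased training set toward the
    target (inference) distribution. A domain classifier distinguishes
    training from target embeddings, and each training node gets weight
    w(x) = P(target | x) / P(train | x)."""
    X = np.vstack([train_emb, target_emb])
    y = np.r_[np.zeros(len(train_emb)), np.ones(len(target_emb))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_target = clf.predict_proba(train_emb)[:, 1]
    w = p_target / np.clip(1.0 - p_target, 1e-6, None)
    return w / w.mean()   # normalize so the average weight is 1
```

The returned weights would multiply each training node's loss term, up-weighting nodes that resemble the inference distribution and down-weighting over-represented ones.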