FairNorm: Fair and Fast Graph Neural Network Training
- URL: http://arxiv.org/abs/2205.09977v1
- Date: Fri, 20 May 2022 06:10:27 GMT
- Title: FairNorm: Fair and Fast Graph Neural Network Training
- Authors: O. Deniz Kose, Yanning Shen
- Abstract summary: Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks.
It has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups.
This work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning.
- Score: 9.492903649862761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been demonstrated to achieve
state-of-the-art performance on a number of graph-based learning tasks, which leads to a
rise in their employment in various domains. However, it has been shown that
GNNs may inherit and even amplify bias within training data, which leads to
unfair results towards certain sensitive groups. Meanwhile, training of GNNs
introduces additional challenges, such as slow convergence and possible
instability. Faced with these limitations, this work proposes FairNorm, a
unified normalization framework that reduces the bias in GNN-based learning
while also providing provably faster convergence. Specifically, FairNorm
employs fairness-aware normalization operators over different sensitive groups
with learnable parameters to reduce the bias in GNNs. The design of FairNorm is
built upon analyses that illuminate the sources of bias in graph-based
learning. Experiments on node classification over real-world networks
demonstrate the efficiency of the proposed scheme in improving fairness in
terms of statistical parity and equal opportunity compared to fairness-aware
baselines. In addition, it is empirically shown that the proposed framework
leads to faster convergence compared to the naive baseline where no
normalization is employed.
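The abstract names two concrete ingredients: normalization applied separately over each sensitive group with learnable parameters, and fairness evaluated via statistical parity and equal opportunity. The sketch below illustrates both ideas under stated assumptions — the function names, dict-based per-group parameters, and the exact normalization operator are illustrative guesses, not the paper's actual FairNorm implementation:

```python
import numpy as np

def groupwise_normalize(h, sens, gamma, beta, eps=1e-5):
    """Normalize node representations h (nodes x features) separately within
    each sensitive group, then rescale with per-group parameters gamma/beta
    (stand-ins for the learnable parameters mentioned in the abstract).
    A rough sketch of fairness-aware normalization, not the paper's operator."""
    out = np.empty_like(h, dtype=float)
    for g in np.unique(sens):
        mask = sens == g
        mu = h[mask].mean(axis=0)           # per-group feature mean
        var = h[mask].var(axis=0)           # per-group feature variance
        out[mask] = gamma[g] * (h[mask] - mu) / np.sqrt(var + eps) + beta[g]
    return out

def statistical_parity_diff(y_pred, sens):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| for binary predictions; lower is fairer."""
    y_pred, sens = np.asarray(y_pred), np.asarray(sens)
    return abs(y_pred[sens == 0].mean() - y_pred[sens == 1].mean())

def equal_opportunity_diff(y_pred, y_true, sens):
    """|P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|; gap in true positive
    rates between sensitive groups, lower is fairer."""
    y_pred, y_true, sens = map(np.asarray, (y_pred, y_true, sens))
    pos = y_true == 1
    return abs(y_pred[pos & (sens == 0)].mean() - y_pred[pos & (sens == 1)].mean())
```

With identity gamma/beta, group-wise normalization drives the per-group feature means to zero, which gives an intuition for why aligning group statistics can shrink the parity gaps the metrics above measure.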
Related papers
- Towards Fair Graph Representation Learning in Social Networks [20.823461673845756]
We introduce constraints for fair representation learning based on three principles: sufficiency, independence, and separation.
We theoretically demonstrate that our EAGNN method can effectively achieve group fairness.
arXiv Detail & Related papers (2024-10-15T10:57:02Z)
- Disentangling, Amplifying, and Debiasing: Learning Disentangled Representations for Fair Graph Neural Networks [22.5976413484192]
We propose a novel GNN framework, DAB-GNN, that Disentangles, Amplifies, and deBiases attribute, structure, and potential biases in the GNN mechanism.
DAB-GNN significantly outperforms ten state-of-the-art competitors in terms of achieving an optimal balance between accuracy and fairness.
arXiv Detail & Related papers (2024-08-23T07:14:56Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.8238848494579714]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING can achieve better trade-offs among utility, fairness, and the privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs)
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly-designed reward function that introduces some degree of bias designed to reduce variance and avoid unstable, possibly-unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.