FairSample: Training Fair and Accurate Graph Convolutional Neural
Networks Efficiently
- URL: http://arxiv.org/abs/2401.14702v1
- Date: Fri, 26 Jan 2024 08:17:12 GMT
- Title: FairSample: Training Fair and Accurate Graph Convolutional Neural
Networks Efficiently
- Authors: Zicun Cong, Baoxu Shi, Shan Li, Jaewon Yang, Qi He, Jian Pei
- Abstract summary: Societal biases against sensitive groups may exist in many real-world graphs.
We present an in-depth analysis of how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
- Score: 29.457338893912656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness in Graph Convolutional Neural Networks (GCNs) is an
increasingly important concern as GCNs are adopted in many crucial
applications. Societal biases against sensitive groups may exist in many
real-world graphs, and GCNs trained on those graphs are vulnerable to
inheriting such biases. In this paper, we adopt the well-known fairness notion
of demographic parity and tackle the challenge of training fair and accurate
GCNs efficiently.
We present an in-depth analysis of how graph structure bias, node attribute
bias, and model parameters may affect the demographic parity of GCNs. Our
insights lead to FairSample, a framework that jointly mitigates the three types
of biases. We employ two intuitive strategies to rectify graph structures.
First, we inject edges across nodes that are in different sensitive groups but
similar in node features. Second, to enhance model fairness and retain model
quality, we develop a learnable neighbor sampling policy using reinforcement
learning. To address the bias in node features and model parameters, FairSample
is complemented by a regularization objective to optimize fairness.
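
The abstract describes two mechanisms concretely enough to sketch: injecting edges between feature-similar nodes from different sensitive groups, and adding a demographic parity term to the training objective. The PyTorch fragment below is a minimal illustration only; the function names, the cosine-similarity criterion, the surrogate parity penalty, and the weight `alpha` are all hypothetical, and the learned neighbor sampling policy is omitted.

```python
import torch
import torch.nn.functional as F

def inject_cross_group_edges(x, sensitive, k=2):
    """Hypothetical rectification step: link each node to its k most
    feature-similar nodes from the *other* sensitive group.
    x: (N, d) node features; sensitive: (N,) 0/1 group labels."""
    z = F.normalize(x, dim=1)
    sim = z @ z.t()                                   # cosine similarities
    cross = sensitive.unsqueeze(0) != sensitive.unsqueeze(1)
    sim = sim.masked_fill(~cross, float("-inf"))      # exclude same-group pairs
    nbrs = sim.topk(k, dim=1).indices                 # best cross-group matches
    src = torch.arange(x.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])       # (2, N*k) new edges

def demographic_parity_penalty(logits, sensitive):
    """Differentiable surrogate for demographic parity: the gap between the
    groups' mean predicted positive rates."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[sensitive == 0].mean() - p[sensitive == 1].mean()).abs()

def fair_loss(logits, labels, sensitive, alpha=0.5):
    """Task loss plus a fairness regularizer; alpha is an assumed trade-off."""
    task = F.binary_cross_entropy_with_logits(logits.squeeze(-1), labels.float())
    return task + alpha * demographic_parity_penalty(logits, sensitive)
```

In FairSample itself, the neighbor sampling policy is learned with reinforcement learning rather than fixed top-k matching; this sketch only conveys the overall shape of the objective.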
Related papers
- Rethinking Fair Graph Neural Networks from Re-balancing [26.70771023446706]
We find that simple re-balancing methods can easily match or surpass existing fair GNN methods.
We propose FairGB, Fair Graph Neural Network via re-Balancing, which mitigates the unfairness of GNNs by group balancing.
arXiv Detail & Related papers (2024-07-16T11:39:27Z)
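
FairGB's exact balancing procedure is not detailed in this snippet; as a rough, hypothetical illustration of group re-balancing, one can weight each node inversely to the size of its (class, sensitive-group) cell so that every subgroup contributes equally to the loss:

```python
import torch

def group_balanced_weights(labels, sensitive):
    """Per-node weights inversely proportional to (class, group) cell size.
    Assumes binary labels and a binary sensitive attribute."""
    cell = labels * 2 + sensitive                 # 4 possible (label, group) cells
    counts = torch.bincount(cell, minlength=4).clamp(min=1).float()
    w = 1.0 / counts[cell]                        # inverse subgroup frequency
    return w * len(w) / w.sum()                   # rescale to mean weight 1
```

These weights would then multiply the per-node cross-entropy terms during training.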
- Fair Graph Neural Network with Supervised Contrastive Regularization [12.666235467177131]
We propose a novel model for training fairness-aware Graph Neural Networks (GNNs).
Our approach integrates Supervised Contrastive Loss and Environmental Loss to enhance both accuracy and fairness.
arXiv Detail & Related papers (2024-04-09T07:49:05Z)
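
The supervised contrastive loss named above is a standard objective (Khosla et al., 2020); the sketch below applies it to node embeddings. The temperature value and this exact formulation are assumptions, and the paper's Environmental Loss is not shown.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.5):
    """Supervised contrastive loss on node embeddings z (N, d): nodes sharing
    a label are pulled together, all others pushed apart."""
    z = F.normalize(z, dim=1)                        # cosine-similarity space
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)           # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    # mean log-probability of each anchor's positives, averaged over anchors
    return -((log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()
```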
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes the demographic groups' node representation centers together (see the sketch after this entry).
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
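
FMP derives its bias-mitigation update inside a unified optimization framework; the toy penalty below is only a stand-in for the idea of pushing the demographic groups' representation centers together after aggregation, not FMP's actual update rule.

```python
import torch

def group_center_gap(h, sensitive):
    """Squared distance between the two groups' mean representations.
    h: (N, d) aggregated node representations; sensitive: (N,) 0/1 labels.
    Minimizing this term pulls the group centers together."""
    c0 = h[sensitive == 0].mean(dim=0)
    c1 = h[sensitive == 1].mean(dim=0)
    return (c0 - c1).pow(2).sum()
```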
- Marginal Nodes Matter: Towards Structure Fairness in Graphs [77.25149739933596]
We propose Structural Fair Graph Neural Network (SFairGNN) to achieve structure fairness.
Our experiments show SFairGNN can significantly improve structure fairness while maintaining overall performance in the downstream tasks.
arXiv Detail & Related papers (2023-10-23T03:20:32Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models at the cost of at most a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
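
A minimal sketch of biased edge dropout, assuming a binary sensitive attribute and an edge_index tensor in COO layout; the 0.5 ± delta scheme and all names here are illustrative assumptions, not a reproduction of FairDrop's published procedure.

```python
import torch

def fair_dropout_edges(edge_index, sensitive, delta=0.25):
    """Drop homophilous edges (same sensitive attribute at both endpoints)
    with probability 0.5 + delta and heterophilous edges with 0.5 - delta,
    counteracting sensitive-attribute homophily.
    edge_index: (2, E) source/target indices; sensitive: (N,) 0/1 labels."""
    src, dst = edge_index
    same = (sensitive[src] == sensitive[dst]).float()
    drop_prob = 0.5 + delta * (2.0 * same - 1.0)   # 0.5+delta if same group
    keep = torch.rand(drop_prob.shape) >= drop_prob
    return edge_index[:, keep]
```

Resampling the dropout each training epoch yields a different debiased graph per pass, which is what makes such a scheme easy to plug into existing pipelines.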
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.