FMP: Toward Fair Graph Message Passing against Topology Bias
- URL: http://arxiv.org/abs/2202.04187v1
- Date: Tue, 8 Feb 2022 23:00:26 GMT
- Title: FMP: Toward Fair Graph Message Passing against Topology Bias
- Authors: Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali
Mostafavi, and Xia Hu
- Abstract summary: A Fair Message Passing (FMP) scheme is proposed to aggregate useful information from neighbors but minimize the effect of topology bias.
The proposed FMP is effective, transparent, and compatible with back-propagation training.
- Score: 43.70672256020857
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Despite recent advances in achieving fair representations and predictions
through regularization, adversarial debiasing, and contrastive learning in
graph neural networks (GNNs), how the working mechanism behind GNNs (i.e.,
message passing) induces unfairness remains unknown. In this work, we
theoretically and experimentally demonstrate that representative aggregation in
message-passing schemes accumulates bias in node representations due to the
topology bias induced by the graph structure. Thus, a Fair Message Passing
(FMP) scheme is proposed to aggregate useful information from neighbors while
minimizing the effect of topology bias, within a unified framework that
considers both graph smoothness and fairness objectives. The proposed FMP is
effective, transparent, and compatible with back-propagation training. An
acceleration approach for gradient calculation is also adopted to improve
algorithm efficiency. Experiments on node classification tasks demonstrate that
the proposed FMP outperforms the state-of-the-art baselines in effectively and
efficiently mitigating bias on three real-world datasets.
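For intuition on the unified-framework view (a minimal sketch, not the authors' implementation): standard message passing can be read as a gradient step on a graph smoothness objective tr(H^T L H), and FMP additionally folds a fairness objective into the same optimization. The NumPy sketch below shows only the smoothness side; the function name `smoothness_propagation`, the step size `gamma`, and the symmetric normalization are assumptions made for illustration.

```python
import numpy as np

def smoothness_propagation(H, A, gamma=0.5):
    """One message-passing step read as a gradient step on the graph
    smoothness objective tr(H^T L H), with L = I - A_hat the normalized
    Laplacian. Illustrative sketch; gamma and the symmetric normalization
    are assumptions, not the paper's exact FMP update."""
    deg = np.maximum(A.sum(axis=1), 1e-12)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    A_hat = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]  # D^(-1/2) A D^(-1/2)
    # Gradient step on tr(H^T (I - A_hat) H): mix self features with neighbor average.
    return (1.0 - gamma) * H + gamma * (A_hat @ H)

# Tiny usage example: a 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(smoothness_propagation(H, A))
```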
Related papers
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Back-propagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, the aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
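For intuition only, the bias-mitigation step (pushing demographic group representation centers together) can be pictured as shifting each group's representations toward the shared global center; the NumPy sketch below is a hypothetical illustration, with the shrinkage factor `lam` and the center-matching rule assumed rather than taken from the paper.

```python
import numpy as np

def push_group_centers(H, sensitive, lam=0.5):
    """Move each demographic group's representation center a fraction `lam`
    of the way toward the global center (illustrative sketch only)."""
    H = H.copy()
    global_center = H.mean(axis=0)
    for g in np.unique(sensitive):
        mask = sensitive == g
        group_center = H[mask].mean(axis=0)
        H[mask] += lam * (global_center - group_center)  # shrink the center gap
    return H

# Usage: two groups whose representation centers start apart and are pulled closer.
H = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
sensitive = np.array([0, 0, 1, 1])
print(push_group_centers(H, sensitive))
```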
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs)
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z) - A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index to evaluate feature information within a graph.
With the goal of preserving graph entropy, we propose an effective strategy to generate training data using a perturbed mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
arXiv Detail & Related papers (2021-07-13T12:58:32Z) - Anisotropic Graph Convolutional Network for Semi-supervised Learning [7.843067454030999]
Graph convolutional networks learn effective node embeddings that have proven to be useful in achieving high-accuracy prediction results.
These networks suffer from over-smoothing and a shrinking effect on the graph, largely because they diffuse features across the edges of the graph using a linear Laplacian flow.
We propose an anisotropic graph convolutional network for semi-supervised node classification by introducing a nonlinear function that captures informative features from nodes, while preventing oversmoothing.
arXiv Detail & Related papers (2020-10-20T13:56:03Z)