Chasing Fairness in Graphs: A GNN Architecture Perspective
- URL: http://arxiv.org/abs/2312.12369v1
- Date: Tue, 19 Dec 2023 18:00:15 GMT
- Title: Chasing Fairness in Graphs: A GNN Architecture Perspective
- Authors: Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali
Mostafavi, Xia Hu
- Abstract summary: We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first applied to utilize neighbors' information, and then a bias mitigation step explicitly pushes the representation centers of demographic groups together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been significant progress in improving the performance of graph
neural networks (GNNs) through enhancements in graph data, model architecture
design, and training strategies. For fairness in graphs, recent studies achieve
fair representations and predictions through either graph data pre-processing
(e.g., node feature masking, and topology rewiring) or fair training strategies
(e.g., regularization, adversarial debiasing, and fair contrastive learning).
How to achieve fairness in graphs from the model architecture perspective is
less explored. More importantly, GNNs exhibit worse fairness performance
compared to multilayer perceptrons (MLPs), since their model architecture (i.e.,
neighbor aggregation) amplifies biases. To this end, we aim to achieve fairness
via a new GNN architecture. We propose \textsf{F}air \textsf{M}essage
\textsf{P}assing (FMP) designed within a unified optimization framework for
GNNs. Notably, FMP \textit{explicitly} renders sensitive attribute usage in
\textit{forward propagation} for the node classification task, using cross-entropy
loss without data pre-processing. In FMP, the aggregation is first adopted to
utilize neighbors' information and then the bias mitigation step explicitly
pushes the representation centers of demographic groups together. In this way,
the FMP scheme can aggregate useful information from neighbors while mitigating
bias, achieving a better fairness-prediction tradeoff. Experiments on
node classification tasks demonstrate that the proposed FMP outperforms several
baselines in terms of fairness and accuracy on three real-world datasets. The
code is available in {\url{https://github.com/zhimengj0326/FMP}}.
Related papers
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention mechanism aggregation component and a neighborhood aggregation component.
To assess the effectiveness of our proposed model, we have applied it to citation sentiment prediction, a novel task previously unexplored in the GNN field.
arXiv Detail & Related papers (2024-11-23T05:31:25Z) - Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC)
arXiv Detail & Related papers (2024-06-27T08:26:20Z) - A Unified Graph Selective Prompt Learning for Graph Neural Networks [20.595782116049428]
Graph Prompt Feature (GPF) has achieved remarkable success in adapting pre-trained models for Graph Neural Networks (GNNs)
We propose a new unified Graph Selective Prompt Feature learning (GSPF) for GNN fine-tuning.
arXiv Detail & Related papers (2024-06-15T04:36:40Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN)
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the fairness promotion process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - FMP: Toward Fair Graph Message Passing against Topology Bias [43.70672256020857]
A Fair Message Passing (FMP) scheme is proposed to aggregate useful information from neighbors while minimizing the effect of topology bias.
The proposed FMP is effective, transparent, and compatible with back-propagation training.
arXiv Detail & Related papers (2022-02-08T23:00:26Z) - Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce the local augmentation, which enhances node features by its local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)