FairGT: A Fairness-aware Graph Transformer
- URL: http://arxiv.org/abs/2404.17169v1
- Date: Fri, 26 Apr 2024 05:48:59 GMT
- Title: FairGT: A Fairness-aware Graph Transformer
- Authors: Renqiang Luo, Huafei Huang, Shuo Yu, Xiuzhen Zhang, Feng Xia
- Abstract summary: We propose FairGT, a Fairness-aware Graph Transformer crafted to mitigate fairness concerns inherent in GTs.
FairGT incorporates a meticulous structural feature selection strategy and a multi-hop node feature integration method.
Empirical evaluations conducted across five real-world datasets demonstrate FairGT's superiority in fairness metrics over existing graph transformers, graph neural networks, and state-of-the-art fairness-aware graph learning approaches.
- Score: 10.338939339111912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of Graph Transformers (GTs) generally neglects considerations for fairness, resulting in biased outcomes against certain sensitive subgroups. Since GTs encode graph information without relying on message-passing mechanisms, conventional fairness-aware graph learning methods are not directly applicable to address these issues. To tackle this challenge, we propose FairGT, a Fairness-aware Graph Transformer explicitly crafted to mitigate fairness concerns inherent in GTs. FairGT incorporates a meticulous structural feature selection strategy and a multi-hop node feature integration method, ensuring independence from sensitive features and bolstering fairness considerations. These fairness-aware graph information encodings seamlessly integrate into the Transformer framework for downstream tasks. We also prove that the proposed fair structural topology encoding, based on adjacency-matrix eigenvector selection, and the multi-hop integration are theoretically effective. Empirical evaluations conducted across five real-world datasets demonstrate FairGT's superiority in fairness metrics over existing graph transformers, graph neural networks, and state-of-the-art fairness-aware graph learning approaches.
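The abstract names two concrete ingredients: a structural encoding built from selected eigenvectors of the adjacency matrix, and a multi-hop integration of node features. The NumPy sketch below illustrates how such encodings could look, assuming a correlation-based eigenvector filter and row-normalized powers of the adjacency; the selection rule, hop count, and normalization are illustrative choices, not the paper's exact procedure.

```python
# Hypothetical sketch of the two fairness-aware encodings described in the
# abstract: (1) keep adjacency eigenvectors weakly correlated with the
# sensitive attribute, (2) integrate multi-hop node features via powers of A.
import numpy as np

def fair_structural_encoding(A, s, k=2):
    """Keep the k eigenvectors of A least correlated with the sensitive attribute s."""
    _, eigvecs = np.linalg.eigh(A)                 # A assumed symmetric
    corrs = []
    for i in range(eigvecs.shape[1]):
        c = np.corrcoef(eigvecs[:, i], s)[0, 1]
        corrs.append(abs(np.nan_to_num(c, nan=1.0)))
    keep = np.argsort(corrs)[:k]                   # lowest |correlation| first
    return eigvecs[:, keep]                        # (n, k) structural encoding

def multi_hop_features(A, X, hops=3):
    """Concatenate X, A_norm X, A_norm^2 X, ... as a simple multi-hop integration."""
    A_norm = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-normalized adjacency
    feats, cur = [X], X
    for _ in range(hops):
        cur = A_norm @ cur
        feats.append(cur)
    return np.concatenate(feats, axis=1)

# Toy usage: 5-node graph, 3-dim features, binary sensitive attribute.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(5, 3))
s = np.array([0, 0, 1, 1, 1], dtype=float)
H = np.concatenate([fair_structural_encoding(A, s), multi_hop_features(A, X)], axis=1)
print(H.shape)   # (5, 14): per-node tokens for a standard Transformer encoder downstream
```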
Related papers
- FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning [15.383535436798065]
Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models.
GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs.
We propose FairGP, a fair Graph Transformer that uses graph partitioning to minimize the negative impact of higher-order nodes.
arXiv Detail & Related papers (2024-12-14T04:15:32Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual without Sensitive Attributes [4.980930265721185]
We propose a framework named Fairwos (improving Fairness without sensitive attributes).
We first propose a mechanism to generate pseudo-sensitive attributes to remedy the problem of missing sensitive attributes.
We then design a strategy for finding graph counterfactuals from the real dataset.
arXiv Detail & Related papers (2024-12-13T08:11:40Z)
- SGFormer: Single-Layer Graph Transformers with Approximation-Free Linear Complexity [74.51827323742506]
We evaluate the necessity of adopting multi-layer attention in Transformers on graphs.
We show that multi-layer propagation can be reduced to one-layer propagation with the same capability for representation learning.
This suggests a new technical path for building powerful and efficient Transformers on graphs.
arXiv Detail & Related papers (2024-09-13T17:37:34Z)
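The SGFormer summary above argues that multi-layer propagation collapses to a single layer. The sketch below shows a toy single-layer pass that mixes one global attention step with one local propagation hop; it uses ordinary quadratic softmax attention for readability, whereas the paper's contribution is an approximation-free linear-complexity form that this sketch does not reproduce, and the mixing weight alpha is an assumption.

```python
# Toy single-layer pass: one global attention step plus one local hop.
# Standard softmax attention is used here for clarity only.
import numpy as np

def one_layer_graph_attention(X, A_norm, alpha=0.5, rng=None):
    """One global attention pass mixed with one local propagation step."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                       # (n, n) all-pair scores
    attn = np.exp(scores - scores.max(1, keepdims=True))
    attn /= attn.sum(1, keepdims=True)                  # row-wise softmax
    global_term = attn @ V                              # single attention layer
    local_term = A_norm @ X                             # single GNN-style hop
    return alpha * global_term + (1 - alpha) * local_term

# Toy usage with a row-normalized adjacency matrix and random node features.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A_norm = A / np.maximum(A.sum(1, keepdims=True), 1)
X = np.random.default_rng(1).normal(size=(3, 4))
print(one_layer_graph_attention(X, A_norm).shape)       # (3, 4)
```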
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
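The GSPT summary above mentions sampling node contexts through random walks before feeding them to a plain Transformer. Below is a minimal sketch of that sampling step only; the walk length, walks per node, and adjacency-list format are assumptions, and the downstream pretraining objective is not shown.

```python
# Minimal sketch of random-walk context sampling for a node-token Transformer.
import numpy as np

def sample_walk_contexts(adj_list, walk_length=6, walks_per_node=2, seed=0):
    """Return, for every node, a list of random-walk token sequences."""
    rng = np.random.default_rng(seed)
    contexts = {}
    for start in adj_list:
        walks = []
        for _ in range(walks_per_node):
            walk, cur = [start], start
            for _ in range(walk_length - 1):
                neighbors = adj_list[cur]
                if not neighbors:               # dead end: stop the walk early
                    break
                cur = int(rng.choice(neighbors))
                walk.append(cur)
            walks.append(walk)
        contexts[start] = walks
    return contexts

# Toy usage: each walk becomes a token sequence for a standard Transformer.
adj_list = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
for node, walks in sample_walk_contexts(adj_list).items():
    print(node, walks)
```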
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose GraphPAR, a novel adapter-tuning framework that endows pre-trained graph models with provable fairness.
Specifically, we design a sensitive semantic augmenter on node representations to extend each node's representation with different sensitive attribute semantics.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
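The GraphPAR summary above describes augmenting node representations with different sensitive-attribute semantics and checking whether predictions stay fixed within a range. The sketch below is a hypothetical simplification: it estimates a sensitive direction as a difference of group means and tests prediction stability empirically, which is only a randomized check rather than the provable certificate the paper derives.

```python
# Illustrative sketch: shift a node representation along an estimated
# sensitive-attribute direction and test whether the prediction is unchanged.
import numpy as np

def sensitive_direction(H, s):
    """Difference of group means as a crude sensitive-semantics direction."""
    d = H[s == 1].mean(0) - H[s == 0].mean(0)
    return d / np.linalg.norm(d)

def prediction_is_stable(classifier, h, direction, radius=1.0, n_samples=50):
    """Empirically check that the label is constant within [-radius, radius]."""
    base = classifier(h)
    return all(classifier(h + t * direction) == base
               for t in np.linspace(-radius, radius, n_samples))

# Toy usage with pretrained-style embeddings and a hypothetical linear classifier.
rng = np.random.default_rng(0)
H = rng.normal(size=(20, 8))                 # node embeddings from a pretrained model
s = np.array([0, 1] * 10)                    # binary sensitive attribute
w = rng.normal(size=8)

def classifier(h):
    return int(h @ w > 0)                    # hypothetical downstream classifier

d = sensitive_direction(H, s)
stable = [prediction_is_stable(classifier, h, d) for h in H]
print(f"{sum(stable)}/{len(H)} nodes empirically stable")
```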
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
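The FMP summary above describes two steps: neighbor aggregation followed by a bias-mitigation step that pushes demographic group representation centers together. The sketch below illustrates that idea with an explicit centering update; the step size and the centering rule are assumptions rather than the closed-form update the paper derives from its unified optimization framework.

```python
# Simplified illustration: aggregate, then nudge each group's representation
# center toward the global center.
import numpy as np

def fair_message_passing_step(H, A_norm, s, lam=0.5):
    H = A_norm @ H                                   # aggregation step
    global_center = H.mean(0)
    H_out = H.copy()
    for g in np.unique(s):                           # bias-mitigation step
        mask = (s == g)
        group_center = H[mask].mean(0)
        H_out[mask] += lam * (global_center - group_center)
    return H_out

# Toy usage: after the update, the group centers are closer together.
rng = np.random.default_rng(0)
A = (rng.random((10, 10)) > 0.6).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
A_norm = A / np.maximum(A.sum(1, keepdims=True), 1)
H = rng.normal(size=(10, 4))
s = np.array([0, 1] * 5)

H_agg = A_norm @ H                                   # aggregation only, for comparison
H_new = fair_message_passing_step(H, A_norm, s)
gap = lambda Z: np.linalg.norm(Z[s == 0].mean(0) - Z[s == 1].mean(0))
print(gap(H_agg), gap(H_new))                        # the second gap is smaller
```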
- Fairness-aware Optimal Graph Filter Design [25.145533328758614]
Graphs are mathematical tools that can be used to represent complex real-world interconnected systems.
Machine learning (ML) over graphs has attracted significant attention recently.
We take a fresh look at the problem of bias mitigation in graph-based learning by borrowing insights from graph signal processing.
arXiv Detail & Related papers (2023-10-22T22:40:40Z)
- Fairness-aware Message Passing for Graph Neural Networks [35.36630284275523]
We propose GMMD, a novel fairness-aware message passing framework.
GMMD can be intuitively interpreted as encouraging a node to aggregate representations of other nodes from different sensitive groups.
We show that our proposed framework can significantly improve the fairness of various backbone GNN models while maintaining high accuracy.
arXiv Detail & Related papers (2023-06-19T19:31:35Z)
- FairGen: Towards Fair Graph Generation [76.34239875010381]
We propose a fairness-aware graph generative model named FairGen.
Our model jointly trains a label-informed graph generation module and a fair representation learning module.
Experimental results on seven real-world data sets, including web-based graphs, demonstrate that FairGen obtains performance on par with state-of-the-art graph generative models.
arXiv Detail & Related papers (2023-03-30T23:30:42Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be easily plugged into many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models, with only a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
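The FairDrop summary above describes a dropout that counteracts homophily with respect to the sensitive attribute: edges whose endpoints share the sensitive attribute are dropped more aggressively than edges across groups. The sketch below shows that core idea with fixed drop probabilities p_same and p_diff, which are illustrative assumptions rather than the paper's exact biasing scheme.

```python
# Minimal sketch of biased edge dropout: intra-group edges are dropped with a
# higher probability than inter-group edges.
import numpy as np

def biased_edge_dropout(edge_index, s, p_same=0.6, p_diff=0.2, rng=None):
    """edge_index: (2, E) array of edges; s: node-level sensitive attribute."""
    rng = rng or np.random.default_rng(0)
    src, dst = edge_index
    same_group = (s[src] == s[dst])
    drop_prob = np.where(same_group, p_same, p_diff)
    keep = rng.random(edge_index.shape[1]) >= drop_prob
    return edge_index[:, keep]

# Toy usage on a small edge list with a binary sensitive attribute.
edge_index = np.array([[0, 0, 1, 2, 3, 4],
                       [1, 2, 2, 3, 4, 5]])
s = np.array([0, 0, 0, 1, 1, 1])
print(biased_edge_dropout(edge_index, s))   # surviving edges; same-group edges are dropped more often
```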
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.