Mitigating Degree Biases in Message Passing Mechanism by Utilizing
Community Structures
- URL: http://arxiv.org/abs/2312.16788v1
- Date: Thu, 28 Dec 2023 02:30:13 GMT
- Title: Mitigating Degree Biases in Message Passing Mechanism by Utilizing
Community Structures
- Authors: Van Thuy Hoang and O-Joun Lee
- Abstract summary: We propose Community-aware Graph Transformers (CGT) to learn degree-unbiased representations based on learnable augmentations and graph transformers.
We first design a learnable graph augmentation to generate more within-community edges connecting low-degree nodes through edge perturbations.
Second, we propose an improved self-attention to learn underlying proximity and the roles of nodes within the community.
- Score: 2.5252594834159643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study utilizes community structures to address node degree biases in
message-passing (MP) via learnable graph augmentations and novel graph
transformers. Recent augmentation-based methods showed that MP neural networks
often perform poorly on low-degree nodes, leading to degree biases due to a
lack of messages reaching low-degree nodes. Despite their success, most methods
use heuristic or uniform random augmentations, which are non-differentiable and
may not always generate valuable edges for learning representations. In this
paper, we propose Community-aware Graph Transformers, named CGT, to learn
degree-unbiased representations based on learnable augmentations and graph
transformers that extract within-community structures. We first design a
learnable graph augmentation that generates more within-community edges
connecting low-degree nodes through edge perturbation. Second, we propose an
improved self-attention mechanism that learns the underlying proximity and the
roles of nodes within the
community. Third, we propose a self-supervised learning task that learns
representations preserving the global graph structure and regularizes the
graph augmentations. Extensive experiments on various benchmark datasets show
that CGT outperforms state-of-the-art baselines and significantly mitigates
node degree bias. The source code is available at
https://github.com/NSLab-CUK/Community-aware-Graph-Transformer.
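To ground the components above, the following is a minimal PyTorch sketch, not the authors' implementation: it pairs a differentiable within-community edge sampler (the first component) with a self-attention layer carrying a learnable same-community bias (the second). The candidate-edge construction, the Gumbel-Sigmoid sampling, and the scalar bias term are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableCommunityAugmentation(nn.Module):
    """Scores candidate within-community edges and samples them with a
    straight-through Gumbel-Sigmoid, keeping the sampling differentiable."""

    def __init__(self, dim, tau=1.0):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        self.tau = tau

    def forward(self, x, candidates):
        # candidates: (2, C) index pairs of non-adjacent same-community nodes,
        # with at least one low-degree endpoint (built by the caller).
        h = torch.cat([x[candidates[0]], x[candidates[1]]], dim=-1)
        logits = self.scorer(h).squeeze(-1)                        # (C,)
        gumbel = -torch.log(-torch.log(torch.rand_like(logits)))   # Gumbel noise
        soft = torch.sigmoid((logits + gumbel) / self.tau)
        hard = (soft > 0.5).float()
        weights = hard + soft - soft.detach()  # straight-through estimator
        # Downstream message passing would weight messages over these candidate
        # edges by `weights`, so gradients flow back into the scorer.
        return candidates, weights


class CommunityBiasedAttention(nn.Module):
    """Single-head self-attention whose logits receive a learnable additive
    bias for node pairs that share a community."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.bias = nn.Parameter(torch.zeros(1))  # same-community bias

    def forward(self, x, community):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        att = (q @ k.t()) / x.size(-1) ** 0.5
        same = (community.unsqueeze(0) == community.unsqueeze(1)).float()
        return F.softmax(att + self.bias * same, dim=-1) @ v


# Toy usage: 6 nodes in 2 communities, random 16-dim features.
x = torch.randn(6, 16)
community = torch.tensor([0, 0, 0, 1, 1, 1])
candidates = torch.tensor([[0, 1], [2, 2]])  # two same-community candidates
edges, weights = LearnableCommunityAugmentation(16)(x, candidates)
out = CommunityBiasedAttention(16)(x, community)
```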
Related papers
- Mitigating Degree Bias in Graph Representation Learning with Learnable Structural Augmentation and Structural Self-Attention [1.9019250262578853]
In real-world graphs, high-degree nodes dominate message passing, causing a degree bias where low-degree nodes remain under-represented.
We propose a novel Degree Fairness Graph Transformer, named DegFairGT, to mitigate degree bias.
Our key idea is to exploit non-adjacent nodes with similar roles in the same community to generate informative edges through our augmentation.
arXiv Detail & Related papers (2025-04-21T13:03:40Z) - Synergistic Deep Graph Clustering Network [14.569867830074292]
We propose a graph clustering framework named Synergistic Deep Graph Clustering Network (SynC).
In our approach, we design a Transform Input Graph Auto-Encoder (TIGAE) to obtain high-quality embeddings for guiding structure augmentation.
Notably, representation learning and structure augmentation share weights, significantly reducing the number of model parameters.
arXiv Detail & Related papers (2024-06-22T09:40:34Z) - Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
The self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly those related to structure.
This paper presents Gradformer, a method that integrates GTs with these intrinsic inductive biases.
Gradformer consistently outperforms Graph Neural Network (GNN) and GT baselines on various graph classification and regression tasks.
arXiv Detail & Related papers (2024-04-24T08:37:13Z) - Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
arXiv Detail & Related papers (2024-02-20T12:01:30Z) - Transitivity-Preserving Graph Representation Learning for Bridging Local
Connectivity and Role-based Similarity [2.5252594834159643]
We propose Unified Graph Transformer Networks (UGT) that integrate local and global structural information into fixed-length vector representations.
First, UGT learns local structure by identifying local substructures and aggregating features of the $k$-hop neighborhoods of each node.
UGT then learns unified representations through self-attention, encoding the structural distance and $p$-step transition probability between node pairs.
arXiv Detail & Related papers (2023-08-18T12:49:57Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods already exhibit less degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z) - Representing Long-Range Context for Graph Neural Networks with Global
Attention [37.212747564546156]
We propose the use of Transformer-based self-attention to learn long-range pairwise relationships.
Our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module; a minimal sketch of this layout follows after this list.
Our results suggest that purely learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs.
arXiv Detail & Related papers (2022-01-21T18:16:21Z) - Augmentation-Free Self-Supervised Learning on Graphs [7.146027549101716]
We propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL.
Specifically, we generate an alternative view of a graph by discovering nodes that share local structural information and global semantics with the graph.
arXiv Detail & Related papers (2021-12-05T04:20:44Z) - Uniting Heterogeneity, Inductiveness, and Efficiency for Graph
Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
A majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
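To make the GNN-then-Transformer layout from the GraphTrans entry above concrete, here is a minimal, self-contained PyTorch sketch. It is not the authors' code: the mean-aggregation GNN layer, the learned readout token, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MeanGNNLayer(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin((adj @ x) / deg + x))


class GraphTransSketch(nn.Module):
    """A GNN stack followed by a Transformer with no positional encodings,
    so the attention stage stays permutation-invariant over nodes."""

    def __init__(self, dim, num_classes, gnn_layers=2, heads=4):
        super().__init__()
        self.gnn = nn.ModuleList(MeanGNNLayer(dim) for _ in range(gnn_layers))
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # readout token
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, adj):
        for layer in self.gnn:
            x = layer(x, adj)  # local structure via message passing
        tokens = torch.cat([self.cls, x.unsqueeze(0)], dim=1)
        tokens = self.transformer(tokens)  # long-range pairwise attention
        return self.head(tokens[:, 0])     # predict from the readout token


# Toy usage: a 5-node graph with random features, 3-way classification.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
logits = GraphTransSketch(16, 3)(x, adj)
```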