Transitivity-Preserving Graph Representation Learning for Bridging Local
Connectivity and Role-based Similarity
- URL: http://arxiv.org/abs/2308.09517v1
- Date: Fri, 18 Aug 2023 12:49:57 GMT
- Title: Transitivity-Preserving Graph Representation Learning for Bridging Local
Connectivity and Role-based Similarity
- Authors: Van Thuy Hoang and O-Joun Lee
- Abstract summary: We propose Unified Graph Transformer Networks (UGT) that integrate local and global structural information into fixed-length vector representations.
First, UGT learns local structure by identifying the local substructures and aggregating features of the $k$-hop neighborhoods of each node.
Second, UGT constructs virtual edges that bridge distant but structurally similar nodes to capture long-range dependencies.
Third, UGT learns unified representations through self-attention, encoding structural distance and $p$-step transition probability between node pairs.
- Score: 2.5252594834159643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning (GRL) methods, such as graph neural networks
and graph transformer models, have been successfully used to analyze
graph-structured data, mainly focusing on node classification and link
prediction tasks. However, the existing studies mostly only consider local
connectivity while ignoring long-range connectivity and the roles of nodes. In
this paper, we propose Unified Graph Transformer Networks (UGT) that
effectively integrate local and global structural information into fixed-length
vector representations. First, UGT learns local structure by identifying the
local substructures and aggregating features of the $k$-hop neighborhoods of
each node. Second, we construct virtual edges, bridging distant nodes with
structural similarity to capture the long-range dependencies. Third, UGT learns
unified representations through self-attention, encoding structural distance
and $p$-step transition probability between node pairs. Furthermore, we propose
a self-supervised learning task that effectively learns transition probability
to fuse local and global structural features, which could then be transferred
to other downstream tasks. Experimental results on real-world benchmark
datasets over various downstream tasks showed that UGT significantly
outperformed baselines that consist of state-of-the-art models. In addition,
UGT reaches the expressive power of the third-order Weisfeiler-Lehman
isomorphism test (3d-WL) in distinguishing non-isomorphic graph pairs. The
source code is available at
https://github.com/NSLab-CUK/Unified-Graph-Transformer.
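A minimal sketch (not the authors' implementation) of two structural signals the abstract describes: $p$-step transition probabilities between node pairs and virtual edges that connect distant nodes with similar local structure. The degree-sequence similarity and the function names used here are illustrative assumptions; UGT's actual structural-similarity measure and attention encoding are defined in the paper.

```python
# Sketch of two structural signals described in the UGT abstract:
# (1) p-step transition probabilities and (2) virtual edges between
# distant nodes whose local structure looks similar.  The similarity
# measure (sorted neighbour-degree sequences) is an illustrative
# assumption, not the paper's exact definition.
import numpy as np

def transition_probabilities(adj: np.ndarray, p: int) -> np.ndarray:
    """Return the p-step random-walk transition matrix T^p."""
    deg = adj.sum(axis=1, keepdims=True)
    T = adj / np.maximum(deg, 1)          # row-normalised one-step transitions
    return np.linalg.matrix_power(T, p)

def virtual_edges(adj: np.ndarray, k_top: int = 2) -> list[tuple[int, int]]:
    """Connect each node to its k_top most structurally similar non-neighbours."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # crude structural signature: the sorted degrees of a node's neighbours
    sig = [np.sort(deg[adj[i] > 0]) for i in range(n)]
    edges = []
    for i in range(n):
        scores = []
        for j in range(n):
            if j == i or adj[i, j]:
                continue
            a, b = sig[i], sig[j]
            m = min(len(a), len(b))
            dist = abs(len(a) - len(b)) + (np.abs(a[:m] - b[:m]).sum() if m else 0)
            scores.append((dist, j))
        for _, j in sorted(scores)[:k_top]:
            edges.append((i, j))
    return edges

if __name__ == "__main__":
    # two disconnected triangles: nodes 0-2 and 3-5
    A = np.zeros((6, 6))
    for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
        A[u, v] = A[v, u] = 1
    print(transition_probabilities(A, p=2))
    print(virtual_edges(A))   # links structurally identical nodes across components
```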
Related papers
- Improving Graph Neural Networks by Learning Continuous Edge Directions
Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism that resembles diffusion over undirected graphs.
Our key insight is to assign fuzzy edge directions to the edges of a graph so that features can preferentially flow in one direction between nodes.
We propose a general framework, called Continuous Edge Direction (CoED) GNN, for learning on graphs with fuzzy edges.
arXiv Detail & Related papers (2024-10-18T01:34:35Z)
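A toy reading of the continuous edge-direction idea summarised in the CoED entry above, under the assumption that each undirected edge carries a coefficient in [0, 1] that splits message flow between its two directions; the actual CoED GNN layer is defined in the paper.

```python
# Illustrative message passing with "fuzzy" edge directions: each undirected
# edge (u, v) gets a coefficient a in [0, 1]; a fraction a of the message
# flows u -> v and (1 - a) flows v -> u.  This is an assumption made for the
# sketch, not the CoED layer from the paper.
import numpy as np

def fuzzy_direction_propagation(x, edges, alphas):
    """x: (n, d) node features; edges: list of (u, v); alphas: per-edge values in [0, 1]."""
    out = np.zeros_like(x)
    for (u, v), a in zip(edges, alphas):
        out[v] += a * x[u]          # forward share of the message
        out[u] += (1.0 - a) * x[v]  # backward share of the message
    return out

x = np.eye(3)                        # three nodes, one-hot features
edges = [(0, 1), (1, 2)]
print(fuzzy_direction_propagation(x, edges, alphas=[0.9, 0.5]))
```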
- A Pure Transformer Pretraining Framework on Text-attributed Graphs
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
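A generic random-walk context sampler of the kind the GSPT entry above mentions. The walk length and the adjacency-list input format are assumptions made for this sketch; how the sampled contexts are fed to the transformer is the paper's detail.

```python
# Generic random-walk context sampling, as referenced in the GSPT summary.
import random

def sample_context(adj: dict[int, list[int]], start: int, walk_len: int = 8) -> list[int]:
    """Return a node-context sequence produced by a simple random walk."""
    walk = [start]
    for _ in range(walk_len - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:                 # dead end: stop the walk early
            break
        walk.append(random.choice(nbrs))
    return walk

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_context(graph, start=0))
```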
- Node-like as a Whole: Structure-aware Searching and Coarsening for Graph Classification
Graph Transformers (GTs) have made remarkable achievements in graph-level tasks.
Can we treat graph structures as node-like wholes in order to learn high-level features?
We propose a novel multi-view graph representation learning model via structure-aware searching and coarsening.
arXiv Detail & Related papers (2024-04-18T03:03:37Z)
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
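A toy combination of graph convolution with global self-attention, in the spirit of the GCN-SA entry above: one normalised-adjacency propagation step followed by scaled dot-product attention over all nodes, which lets information skip across the graph. The exact GCN-SA architecture differs; this is only a sketch.

```python
# Local GCN aggregation followed by global self-attention (toy version).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gcn_sa_layer(adj, x):
    a_hat = adj + np.eye(adj.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = norm @ x                                         # local aggregation (GCN step)
    attn = softmax(h @ h.T / np.sqrt(h.shape[1]))        # global self-attention weights
    return attn @ h                                      # long-range mixing

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(gcn_sa_layer(adj, x))
```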
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Graph Representation Learning via Contrasting Cluster Assignments
We propose a novel unsupervised graph representation model that contrasts cluster assignments, called GRCCA.
It is motivated by jointly exploiting local and global information through a combination of clustering algorithms and contrastive learning.
GRCCA is strongly competitive on most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- Edge-augmented Graph Transformers: Global Self-attention is Enough for Graphs
We propose a simple yet powerful extension to the transformer - residual edge channels.
The resultant framework, which we call Edge-augmented Graph Transformer (EGT), can directly accept, process and output structural information as well as node information.
Our framework, which relies on global node feature aggregation, achieves better performance than Graph Convolutional Networks (GCNs).
arXiv Detail & Related papers (2021-08-07T02:18:11Z)
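A sketch of one common way to inject an edge channel into self-attention, loosely matching the EGT entry above: pairwise edge features add a bias to the attention logits. EGT also updates the edge channel itself, which this toy version omits, so treat it as an assumption-laden illustration rather than the paper's architecture.

```python
# Attention with an edge-channel bias: structural (per-pair) information
# enters the attention logits as an additive term.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def edge_biased_attention(x, edge_bias):
    """x: (n, d) node features; edge_bias: (n, n) scalar bias per node pair."""
    logits = x @ x.T / np.sqrt(x.shape[1]) + edge_bias
    return softmax(logits) @ x

x = np.random.default_rng(0).normal(size=(4, 8))
bias = np.zeros((4, 4))
bias[0, 1] = bias[1, 0] = 2.0        # strengthen attention along a known edge
print(edge_biased_attention(x, bias))
```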
- Node Similarity Preserving Graph Convolutional Networks
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets, including three assortative and four disassortative graphs.
arXiv Detail & Related papers (2020-11-19T04:18:01Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient because they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
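A minimal distance-encoding style feature, following the DE entry above: each node gets its shortest-path distance to a chosen target node set, computed by BFS. Shortest-path distance is only one of the measures the DE framework admits, so this is an illustration rather than the paper's full definition.

```python
# Shortest-path distances to a target set, usable as extra node features.
from collections import deque

def distance_encoding(adj: dict[int, list[int]], targets: set[int]) -> dict[int, int]:
    """Return, for every reachable node, its hop distance to the nearest target node."""
    dist = {t: 0 for t in targets}
    queue = deque(targets)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(distance_encoding(graph, targets={0}))   # {0: 0, 1: 1, 2: 2, 3: 3}
```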
- Deep graph learning for semi-supervised classification
Graph learning (GL) can dynamically capture the distribution structure (graph structure) of data based on graph convolutional networks (GCN).
Existing methods mostly combine the computational layer and the related losses into GCN for exploring the global graph or local graph.
Deep graph learning (DGL) is proposed to find better graph representations for semi-supervised classification.
arXiv Detail & Related papers (2020-05-29T05:59:45Z)
- Graph Highway Networks
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to nearly identical vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
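A highway-style gate in the spirit of the GHNet entry above: a per-node gate mixes the node's own representation with its neighbourhood-aggregated one, so stacking layers does not force all representations toward the same vector. The sigmoid-of-a-linear-map gate is an assumption made for this sketch, not the paper's exact parametrisation.

```python
# Gated combination of a node's own features with its aggregated features,
# illustrating the homogeneity/heterogeneity trade-off the summary mentions.
import numpy as np

def highway_gcn_layer(adj, x, w_gate):
    a_hat = adj + np.eye(adj.shape[0])
    d_inv = 1.0 / a_hat.sum(axis=1)
    h = (a_hat * d_inv[:, None]) @ x           # mean aggregation over the neighbourhood
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate))) # per-node, per-feature gate in (0, 1)
    return gate * h + (1.0 - gate) * x         # balance aggregated vs. original features

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
x = rng.normal(size=(3, 4))
print(highway_gcn_layer(adj, x, w_gate=rng.normal(size=(4, 4))))
```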
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.