Graph External Attention Enhanced Transformer
- URL: http://arxiv.org/abs/2405.21061v2
- Date: Mon, 3 Jun 2024 14:20:27 GMT
- Title: Graph External Attention Enhanced Transformer
- Authors: Jianqing Liang, Min Chen, Jiye Liang
- Abstract summary: We propose Graph External Attention (GEA) -- a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly.
On this basis, we design an effective architecture called Graph External Attention Enhanced Transformer (GEAET).
Experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance.
- Score: 20.44782028691701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Transformer architecture has recently gained considerable attention in the field of graph representation learning, as it naturally overcomes several limitations of Graph Neural Networks (GNNs) with customized attention mechanisms or positional and structural encodings. Despite making some progress, existing works tend to overlook external information of graphs, specifically the correlation between graphs. Intuitively, graphs with similar structures should have similar representations. Therefore, we propose Graph External Attention (GEA) -- a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly. On this basis, we design an effective architecture called Graph External Attention Enhanced Transformer (GEAET), which integrates local structure and global interaction information for more comprehensive graph representations. Extensive experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance. The source code is available for reproducibility at: https://github.com/icm1018/GEAET.
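The core mechanism can be sketched compactly. Below is a minimal, illustrative PyTorch implementation of external attention over the node features of one graph: the learnable matrices `mk` and `mv` stand in for the paper's external key-value units, which are shared across all graphs and can therefore capture inter-graph correlations implicitly. The unit count, the double normalization, and the omission of edge units are assumptions borrowed from the external-attention literature, not GEAET's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphExternalAttention(nn.Module):
    """Minimal sketch of external attention over node features.

    The external key/value units (mk, mv) are learned parameters shared
    across all graphs, which is how inter-graph correlations can be
    captured implicitly. The unit count S and the double normalization
    follow the external-attention literature; the paper's exact GEA
    formulation (e.g. its edge units) may differ.
    """
    def __init__(self, dim: int, units: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, units, bias=False)   # external keys
        self.mv = nn.Linear(units, dim, bias=False)   # external values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features of one graph
        attn = self.mk(x)                     # (N, S) affinity to memory slots
        attn = F.softmax(attn, dim=0)         # normalize over nodes
        attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-9)  # l1 norm over slots
        return self.mv(attn)                  # (N, dim) updated node features

x = torch.randn(10, 32)                       # 10 nodes, 32-d features
print(GraphExternalAttention(32)(x).shape)    # torch.Size([10, 32])
```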
Related papers
- RAGraph: A General Retrieval-Augmented Graph Learning Framework [35.25522856244149]
We introduce a novel framework called General Retrieval-Augmented Graph Learning (RAGraph).
RAGraph brings external graph data into a general graph foundation model to improve generalization in unseen scenarios.
During inference, RAGraph retrieves similar toy graphs based on key similarities in downstream tasks.
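As a rough illustration of the retrieval step, the sketch below ranks a library of pooled toy-graph embeddings by cosine similarity to a query graph. The function name, the pooled-embedding keys, and the similarity measure are hypothetical stand-ins, not RAGraph's actual retrieval pipeline.

```python
import torch
import torch.nn.functional as F

def retrieve_similar_graphs(query_emb, library_embs, k=3):
    """Hypothetical nearest-neighbour retrieval over pooled graph embeddings.

    query_emb: (d,) embedding of the downstream graph
    library_embs: (M, d) embeddings of M stored "toy" graphs
    Returns indices of the k most similar library graphs by cosine similarity.
    """
    sims = F.cosine_similarity(query_emb.unsqueeze(0), library_embs, dim=1)
    return sims.topk(k).indices

library = torch.randn(100, 64)   # 100 stored toy-graph embeddings
query = torch.randn(64)
print(retrieve_similar_graphs(query, library))  # indices of 3 nearest graphs
```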
arXiv Detail & Related papers (2024-10-31T12:05:21Z)
- GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction [6.817416560637197]
Graph autoencoders (GAEs) reconstruct graph structures from node embeddings.
We introduce a cross-correlation mechanism that significantly enhances the representational capabilities of GAEs.
We also propose GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks.
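The contrast with a standard GAE decoder can be shown in a few lines: a conventional decoder self-correlates one embedding set, while a cross-correlation decoder in the spirit of GraphCroc correlates two separately produced sets. This is an illustrative reading; the paper's decoder may differ in detail.

```python
import torch

def self_correlation_decode(z):
    # Standard GAE decoder: one embedding set correlated with itself.
    return torch.sigmoid(z @ z.T)

def cross_correlation_decode(u, v):
    # Cross-correlation decoder in the spirit of GraphCroc: two separately
    # produced embedding sets, which relaxes the symmetry bias of the
    # self-correlation decoder. (Illustrative; details may differ.)
    return torch.sigmoid(u @ v.T)

u, v = torch.randn(5, 16), torch.randn(5, 16)
print(cross_correlation_decode(u, v).shape)   # torch.Size([5, 5])
```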
arXiv Detail & Related papers (2024-10-04T12:59:45Z)
- Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention.
GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation.
It also employs a novel additive attention mechanism tailored for graph-structured data.
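The rewiring idea admits a small sketch: overlay a random regular graph on the input graph's nodes so that every node gains a few long-range shortcut edges. The networkx-based helper below is illustrative only; the overlay degree and the handling of duplicate edges are assumptions.

```python
import networkx as nx

def superimpose_random_regular(g: nx.Graph, degree: int = 3, seed: int = 0) -> nx.Graph:
    """Illustrative rewiring in the spirit of GRASS: superimpose a random
    regular graph on the input graph's nodes, adding shortcut edges that
    enable long-range information propagation."""
    n = g.number_of_nodes()
    rr = nx.random_regular_graph(degree, n, seed=seed)  # n * degree must be even
    rewired = g.copy()
    mapping = dict(enumerate(g.nodes()))                # align node labels
    rewired.add_edges_from((mapping[u], mapping[v]) for u, v in rr.edges())
    return rewired

g = nx.path_graph(10)                                   # a long, sparse graph
print(superimpose_random_regular(g).number_of_edges())  # > 9: shortcuts added
```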
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
- GraphEdit: Large Language Models for Graph Structure Learning [62.618818029177355]
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
arXiv Detail & Related papers (2024-02-23T08:29:42Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority on node classification tasks.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attention in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
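A minimal sketch of such a combined encoder block appears below: one GCN-style propagation captures local interactions while multi-head self-attention captures global ones, fused here by simple summation. The fusion and normalization choices are assumptions for illustration, not GTGAN's exact design.

```python
import torch
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    """Sketch of an encoder block combining a graph convolution (local
    interactions) with multi-head self-attention (global interactions),
    in the spirit of GTGAN's Transformer encoder. The residual summation
    is an assumption for illustration."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.lin = nn.Linear(dim, dim)   # weight of a simple GCN step
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) normalized adjacency
        local = adj @ self.lin(x)        # one GCN-style propagation
        global_, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        return self.norm(local + global_.squeeze(0) + x)

x, adj = torch.randn(6, 32), torch.eye(6)
print(ConvAttnBlock(32)(x, adj).shape)   # torch.Size([6, 32])
```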
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method [8.989218350080844]
This study introduces an innovative method known as the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC).
By melding the Graph Auto-Encoder with the Graph Transformer, GTAGC is adept at capturing global dependencies between nodes.
The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component.
arXiv Detail & Related papers (2023-06-20T06:04:03Z)
- Diffusing Graph Attention [15.013509382069046]
We develop Graph Diffuser (GD), a new Graph Transformer model that integrates arbitrary graph structure into the architecture.
GD learns to extract structural and positional relationships between distant nodes in the graph, which it then uses to direct the Transformer's attention and node representation.
Experiments on eight benchmarks show Graph Diffuser to be a highly competitive model, outperforming the state-of-the-art in a diverse set of domains.
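One plausible reading of diffusion-directed attention is an additive bias on the attention logits, built from a learned combination of powers of the normalized adjacency, so that multi-hop structure steers where the Transformer attends. The sketch below illustrates this reading and is not the paper's exact operator.

```python
import torch

def diffusion_bias(adj: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Illustrative diffusion bias: a learned combination of powers of the
    normalized adjacency, usable as an additive bias on attention logits
    so that attention is directed by multi-hop graph structure."""
    bias, power = torch.zeros_like(adj), torch.eye(adj.shape[0])
    for w in weights:                 # weights[k] scales the k-hop term
        bias = bias + w * power
        power = power @ adj           # next power of the adjacency
    return bias

adj = torch.rand(5, 5)
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize a toy adjacency
print(diffusion_bias(adj, torch.tensor([0.5, 0.3, 0.2])).shape)  # (5, 5)
```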
arXiv Detail & Related papers (2023-03-01T16:11:05Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
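The low-pass intuition can be sketched with a truncated Taylor expansion of the heat kernel e^{-sL} on the graph Laplacian, which smooths node features toward cluster-consistent values. The truncation order and scale below are assumptions, not Heatts' exact parameterization.

```python
import math
import torch

def heat_kernel_propagate(lap: torch.Tensor, x: torch.Tensor,
                          s: float = 1.0, order: int = 3) -> torch.Tensor:
    """Low-pass propagation via a truncated Taylor expansion of the heat
    kernel e^{-sL} on the graph Laplacian L. A sketch of the low-pass
    idea behind Heatts; order and scale s are illustrative choices."""
    out, term = torch.zeros_like(x), x.clone()
    for k in range(order + 1):
        out = out + ((-s) ** k / math.factorial(k)) * term
        term = lap @ term             # next power of L applied to x
    return out

lap = torch.eye(4) - torch.full((4, 4), 0.25)   # Laplacian of a toy graph
x = torch.randn(4, 8)
print(heat_kernel_propagate(lap, x).shape)       # torch.Size([4, 8])
```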
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
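The sequential-decision view admits a toy sketch: grow a graph edge by edge, with a policy that greedily picks the candidate edge maximizing a score function. Here a caller-supplied score stands in for the objective GraphOpt would recover via inverse reinforcement learning; the greedy policy is an illustrative simplification.

```python
import random
import networkx as nx

def grow_graph(n: int, steps: int, score, seed: int = 0) -> nx.Graph:
    """Sketch of link formation as a sequential decision process: at each
    step a policy picks the candidate edge with the highest score. The
    score function stands in for the objective GraphOpt would learn."""
    random.seed(seed)
    g = nx.empty_graph(n)
    for _ in range(steps):
        candidates = [(u, v) for u in range(n) for v in range(u + 1, n)
                      if not g.has_edge(u, v)]
        g.add_edge(*max(candidates, key=lambda e: score(g, e)))
    return g

# Toy objective: prefer attaching to already well-connected nodes.
pref_attach = lambda g, e: g.degree(e[0]) + g.degree(e[1]) + random.random()
print(grow_graph(6, 8, pref_attach).number_of_edges())   # 8
```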
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-protein interaction networks.
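One way to read the tensor formulation is a layer that propagates features over each relation's adjacency slice and then mixes the slices with learned weights, as in the sketch below; the weighted-sum mixing is an assumption for illustration.

```python
import torch
import torch.nn as nn

class TensorGCNLayer(nn.Module):
    """Sketch of one tensor-GCN layer: propagate features over each
    relation's adjacency slice, then mix slices with learned weights.
    The weighted-sum mixing is illustrative, not TGCN's exact design."""
    def __init__(self, relations: int, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.mix = nn.Parameter(torch.ones(relations) / relations)

    def forward(self, adjs: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adjs: (R, N, N) one normalized adjacency per relation; x: (N, in_dim)
        per_relation = adjs @ self.lin(x)              # (R, N, out_dim)
        return torch.einsum("r,rnd->nd", self.mix, per_relation).relu()

adjs = torch.rand(3, 5, 5)                             # 3 relations, 5 nodes
print(TensorGCNLayer(3, 16, 8)(adjs, torch.randn(5, 16)).shape)  # (5, 8)
```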
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.