Diffusing Graph Attention
- URL: http://arxiv.org/abs/2303.00613v1
- Date: Wed, 1 Mar 2023 16:11:05 GMT
- Title: Diffusing Graph Attention
- Authors: Daniel Glickman, Eran Yahav
- Abstract summary: We develop Graph Diffuser (GD), a new Graph Transformer model that integrates the arbitrary graph structure into the architecture.
GD learns to extract structural and positional relationships between distant nodes in the graph, which it then uses to direct the Transformer's attention and node representation.
Experiments on eight benchmarks show Graph Diffuser to be a highly competitive model, outperforming the state-of-the-art in a diverse set of domains.
- Score: 15.013509382069046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dominant paradigm for machine learning on graphs uses Message Passing
Graph Neural Networks (MP-GNNs), in which node representations are updated by
aggregating information in their local neighborhood. Recently, there have been
increasingly more attempts to adapt the Transformer architecture to graphs in
an effort to solve some known limitations of MP-GNNs. A challenging aspect of
designing Graph Transformers is integrating the arbitrary graph structure into
the architecture. We propose Graph Diffuser (GD) to address this challenge. GD
learns to extract structural and positional relationships between distant nodes
in the graph, which it then uses to direct the Transformer's attention and node
representation. We demonstrate that existing GNNs and Graph Transformers
struggle to capture long-range interactions and how Graph Diffuser does so
while admitting intuitive visualizations. Experiments on eight benchmarks show
Graph Diffuser to be a highly competitive model, outperforming the
state-of-the-art in a diverse set of domains.
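To make the mechanism concrete, here is a minimal sketch of the general idea as described in the abstract: multi-step diffusion operators over the graph are mixed into a dense pairwise bias that steers attention toward structurally related distant nodes. The mixture weights, shapes, and function names below are illustrative assumptions, not the paper's implementation.
```python
import torch

def diffusion_attention_bias(adj, mix_weights):
    """Toy sketch: blend multi-step diffusion operators (powers of a
    row-normalized adjacency) into a dense pairwise bias that can steer
    attention toward distant but structurally related nodes.
    `mix_weights` stands in for learned per-step coefficients."""
    deg = adj.sum(dim=-1).clamp(min=1.0)
    walk = adj / deg.unsqueeze(-1)                 # random-walk operator
    bias = torch.zeros_like(adj)
    power = torch.eye(adj.size(0))
    for w in mix_weights:
        power = power @ walk                       # one more diffusion step
        bias = bias + w * power
    return bias

# usage: add the bias to raw attention logits before the softmax
n, d = 5, 8
adj = (torch.rand(n, n) < 0.4).float()
q, k = torch.randn(n, d), torch.randn(n, d)
logits = q @ k.t() / d ** 0.5 + diffusion_attention_bias(adj, torch.tensor([0.5, 0.3, 0.2]))
attn = torch.softmax(logits, dim=-1)
```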
Related papers
- InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z)
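As a rough illustration of the Graph-QFormer step described above, the sketch below has learned query tokens cross-attend to sampled-neighbor features to produce graph prompts; all names and sizes are illustrative assumptions, not the paper's code.
```python
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    """Loose QFormer-style sketch: learned query tokens cross-attend to
    neighbor-node features and return "graph prompts" that could
    condition a denoising model. Purely illustrative."""
    def __init__(self, dim=32, num_prompts=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_prompts, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, neighbor_feats):             # (num_neighbors, dim)
        q = self.queries.unsqueeze(0)              # (1, num_prompts, dim)
        kv = neighbor_feats.unsqueeze(0)
        prompts, _ = self.cross_attn(q, kv, kv)
        return prompts.squeeze(0)                  # (num_prompts, dim)

prompts = GraphPromptEncoder()(torch.randn(6, 32))  # 6 sampled neighbors
```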
- Graph External Attention Enhanced Transformer [20.44782028691701]
We propose Graph External Attention (GEA) -- a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly.
On this basis, we design an effective architecture called Graph External Attention Enhanced Transformer (GEAET).
Experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance.
arXiv Detail & Related papers (2024-05-31T17:50:27Z)
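One plausible minimal reading of the "external key-value units", sketched under the assumption that they form a small learned memory bank shared across graphs (unit count and scaling are guesses):
```python
import torch
import torch.nn as nn

class GraphExternalAttention(nn.Module):
    """Sketch: nodes attend to a learned, graph-independent bank of
    key/value units, so correlations across graphs can be captured
    implicitly through the shared memory."""
    def __init__(self, dim=32, num_units=16):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_units, dim))
        self.values = nn.Parameter(torch.randn(num_units, dim))

    def forward(self, x):                          # x: (num_nodes, dim)
        scores = x @ self.keys.t() / x.size(-1) ** 0.5
        return torch.softmax(scores, dim=-1) @ self.values

out = GraphExternalAttention()(torch.randn(10, 32))
```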
- Technical Report: The Graph Spectral Token -- Enhancing Graph Transformers with Spectral Information [0.8184895397419141]
Graph Transformers have emerged as a powerful alternative to Message-Passing Graph Neural Networks (MP-GNNs).
We propose the Graph Spectral Token, a novel approach to directly encode graph spectral information.
We benchmark the effectiveness of our approach by enhancing two existing graph transformers, GraphTrans and SubFormer.
arXiv Detail & Related papers (2024-04-08T15:24:20Z)
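A toy version of the idea, assuming the token simply carries the smallest Laplacian eigenvalues (the paper's encoding is richer; this only shows the shape of the approach):
```python
import torch

def graph_spectral_token(adj, token_dim=16):
    """Sketch: summarize the graph spectrum as one extra token by
    taking the smallest Laplacian eigenvalues, zero-padded."""
    lap = torch.diag(adj.sum(dim=-1)) - adj        # combinatorial Laplacian
    eigvals = torch.linalg.eigvalsh(lap)           # real, sorted ascending
    token = torch.zeros(token_dim)
    k = min(token_dim, eigvals.numel())
    token[:k] = eigvals[:k]
    return token                                   # prepend to the node-token sequence

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
tok = graph_spectral_token(adj)
```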
- SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention [1.4126245676224705]
Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs).
We propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module.
SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets.
arXiv Detail & Related papers (2024-03-21T03:11:53Z)
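The abstract gives no internals, so the following shows only the generic spiking-attention pattern (binary spike tensors and a softmax-free product), in the spirit of Spikformer-style designs rather than taken from this paper:
```python
import torch

def spiking_graph_attention(q, k, v, threshold=0.5):
    """Generic sketch, not this paper's SGA: activations are binarized
    into spikes, and attention becomes a softmax-free product of sparse
    binary tensors (surrogate gradients for training are omitted)."""
    sq, sk, sv = (t.gt(threshold).float() for t in (q, k, v))
    return (sq @ sk.t()) @ sv                      # integer-valued spike overlaps

out = spiking_graph_attention(*torch.rand(3, 10, 8))
```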
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
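A minimal sketch of the stated local/global combination, assuming the two branches are simply summed inside one Transformer-style block (illustrative, not the paper's encoder):
```python
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    """Sketch: a graph-convolution branch for local structure plus a
    self-attention branch for global interactions, summed per block."""
    def __init__(self, dim=32):
        super().__init__()
        self.local_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x, norm_adj):                # x: (n, dim), norm_adj: (n, n)
        local = self.local_proj(norm_adj @ x)      # one-hop neighborhood mix
        global_, _ = self.attn(x[None], x[None], x[None])
        return local + global_[0]

n = 6
out = LocalGlobalBlock()(torch.randn(n, 32), torch.eye(n))
```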
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
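A sketch of the unrolling pattern only: each layer is one truncated proximal-gradient step with a learned step size and threshold; the residual used below is a toy stand-in for the paper's actual data-fit gradient:
```python
import torch
import torch.nn as nn

class GraphDeconvolutionNet(nn.Module):
    """Sketch of unrolling: each layer is one truncated proximal-gradient
    step refining a latent adjacency from the observed one. The residual
    below is a toy surrogate, not the paper's objective."""
    def __init__(self, num_layers=5):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))
        self.lam = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, observed):
        a = observed.clone()
        for tau, lam in zip(self.step, self.lam):
            residual = a - observed                # toy gradient surrogate
            a = torch.relu(a - tau * residual - lam)  # soft-threshold + nonnegativity
        return a

latent = GraphDeconvolutionNet()(torch.rand(8, 8))
```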
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
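One lifting step on a node signal, sketched with fixed predict/update weights where the paper would learn adaptive operators:
```python
import torch

def lifting_step(x, adj, predict_w=0.5, update_w=0.25):
    """Sketch of one lifting step: split nodes into two groups, predict
    one group from the other's neighbors, keep the residual as wavelet
    (detail) coefficients, then update the approximation."""
    n = x.size(0)
    even, odd = torch.arange(0, n, 2), torch.arange(1, n, 2)
    pred = predict_w * (adj[odd][:, even] @ x[even])
    detail = x[odd] - pred                         # high-frequency coefficients
    approx = x[even] + update_w * (adj[even][:, odd] @ detail)
    return approx, detail

approx, detail = lifting_step(torch.randn(6, 4), torch.eye(6))
```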
- Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs [20.85042364993559]
We propose Graph Transformer Networks (GTNs), which generate new graph structures that include connections useful for the task at hand.
Fast Graph Transformer Networks (FastGTNs) are 230x faster and use 100x less memory.
We extend graph transformations to the semantic proximity of nodes, allowing non-local operations beyond meta-paths.
arXiv Detail & Related papers (2021-06-11T07:56:55Z)
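The meta-path mechanism can be sketched as a softmax selection over edge-type adjacencies per hop, composed by matrix multiplication (a simplified reading of GTNs):
```python
import torch
import torch.nn as nn

class MetaPathLayer(nn.Module):
    """Sketch: softly select one edge-type adjacency per hop with a
    softmax, then compose the hops by matrix multiplication to form a
    learned meta-path graph."""
    def __init__(self, num_edge_types, path_len=2):
        super().__init__()
        self.select = nn.Parameter(torch.zeros(path_len, num_edge_types))

    def forward(self, adjs):                       # adjs: (num_edge_types, n, n)
        out = None
        for hop in self.select:
            mix = torch.einsum('e,enm->nm', torch.softmax(hop, dim=0), adjs)
            out = mix if out is None else out @ mix
        return out                                 # meta-path adjacency

meta = MetaPathLayer(num_edge_types=3)(torch.rand(3, 5, 5))
```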
- Do Transformers Really Perform Bad for Graph Representation? [62.68420868623308]
We present Graphormer, which is built upon the standard Transformer architecture.
Our key insight for applying Transformers to graphs is the necessity of effectively encoding the structural information of the graph into the model.
arXiv Detail & Related papers (2021-06-09T17:18:52Z)
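Graphormer's two signature structural encodings are well documented; here is a compact sketch, with sizes chosen for illustration:
```python
import torch
import torch.nn as nn

class GraphormerStructuralEncoding(nn.Module):
    """Sketch: add a degree ("centrality") embedding to node features,
    and bias attention logits with a learned embedding of pairwise
    shortest-path distance (SPD assumed precomputed)."""
    def __init__(self, dim=32, max_degree=64, max_dist=32):
        super().__init__()
        self.deg_emb = nn.Embedding(max_degree, dim)
        self.spd_bias = nn.Embedding(max_dist, 1)

    def forward(self, x, degree, spd):             # degree: (n,), spd: (n, n) ints
        x = x + self.deg_emb(degree.clamp(max=self.deg_emb.num_embeddings - 1))
        bias = self.spd_bias(spd.clamp(max=self.spd_bias.num_embeddings - 1)).squeeze(-1)
        return x, bias                             # bias is added to attention logits

n = 5
enc = GraphormerStructuralEncoding()
x, bias = enc(torch.randn(n, 32), torch.randint(0, 4, (n,)), torch.randint(0, 6, (n, n)))
```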