SignGT: Signed Attention-based Graph Transformer for Graph
Representation Learning
- URL: http://arxiv.org/abs/2310.11025v1
- Date: Tue, 17 Oct 2023 06:42:11 GMT
- Title: SignGT: Signed Attention-based Graph Transformer for Graph
Representation Learning
- Authors: Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
- Abstract summary: We propose a Signed Attention-based Graph Transformer (SignGT) to adaptively capture various frequency information from graphs.
Specifically, SignGT develops a new signed self-attention mechanism (SignSA) that produces signed attention values according to the semantic relevance of node pairs.
- Score: 15.248591535696146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emerging graph Transformers have achieved impressive performance on
graph representation learning, surpassing graph neural networks (GNNs). In this work,
we regard the self-attention mechanism, the core module of graph Transformers,
as a two-step aggregation operation on a fully connected graph. Because it
produces only positive attention values, the self-attention mechanism is
equivalent to a smoothing operation over all nodes, preserving the
low-frequency information. However, capturing only the low-frequency
information is insufficient for learning the complex relations of nodes on diverse
graphs, such as heterophilic graphs where the high-frequency information is
crucial. To this end, we propose a Signed Attention-based Graph Transformer
(SignGT) that adaptively captures various frequency information from graphs.
Specifically, SignGT develops a new signed self-attention mechanism (SignSA)
that produces signed attention values according to the semantic relevance of
node pairs, so the diverse frequency information between different node
pairs can be carefully preserved. Besides, SignGT proposes a structure-aware
feed-forward network (SFFN) that introduces a neighborhood bias to preserve
the local topology information. In this way, SignGT learns informative
node representations from both long-range dependencies and local topology
information. Extensive empirical results on both node-level and graph-level
tasks demonstrate the superiority of SignGT over state-of-the-art graph
Transformers as well as advanced GNNs.
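To make the frequency argument concrete: a standard attention update y_i = sum_j a_ij x_j with a_ij >= 0 and sum_j a_ij = 1 is a convex average of node features, i.e., a low-pass (smoothing) filter; allowing negative weights lets the update take feature differences, which acts as a high-pass filter. Below is a minimal PyTorch sketch of this idea. The module names (SignedSelfAttention, StructureAwareFFN), the sign/magnitude split of the attention scores, and the additive adjacency bias are illustrative assumptions; the paper's exact SignSA and SFFN formulations may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignedSelfAttention(nn.Module):
    """Sketch of a signed self-attention layer (SignSA-style): attention
    magnitudes come from a softmax over |scores|, while the sign of each
    raw query-key score decides whether a node pair is aggregated
    (low-pass) or contrasted (high-pass). Illustrative only, not
    necessarily SignGT's exact formulation."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):  # x: (N, dim) node features, fully connected graph
        N, dim = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (heads, N, head_dim)
        q, k, v = (t.view(N, self.num_heads, self.head_dim).transpose(0, 1)
                   for t in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (H, N, N)
        sign = torch.sign(scores)               # +1 similar pair, -1 dissimilar
        attn = F.softmax(scores.abs(), dim=-1)  # positive magnitudes
        out = (sign * attn) @ v                 # signed aggregation
        return self.out(out.transpose(0, 1).reshape(N, dim))

class StructureAwareFFN(nn.Module):
    """Sketch of a structure-aware FFN (SFFN-style): a standard FFN plus a
    neighborhood-bias term aggregated over the normalized adjacency,
    reinjecting local topology. The additive form is an assumption."""

    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(),
                                 nn.Linear(hidden_dim, dim))

    def forward(self, x, adj_norm):  # adj_norm: (N, N) normalized adjacency
        return self.ffn(x) + adj_norm @ x
```

Under these assumptions, stacking SignedSelfAttention and StructureAwareFFN with residual connections would give a SignGT-style layer that combines long-range signed attention with local topology information.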
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework, the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
GTGAN learns these node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z) - Diffusing Graph Attention [15.013509382069046]
We develop a new model for Graph Transformers that integrates the arbitrary graph structure into the architecture.
Graph Diffuser (GD) learns to extract structural and positional relationships between distant nodes in the graph, which it then uses to direct the Transformer's attention and node representations.
Experiments on eight benchmarks show Graph Diffuser to be a highly competitive model, outperforming the state-of-the-art in a diverse set of domains.
arXiv Detail & Related papers (2023-03-01T16:11:05Z) - Adaptive Multi-Neighborhood Attention based Transformer for Graph
Representation Learning [11.407118196728943]
We propose an adaptive graph Transformer termed Multi-Neighborhood Attention based Graph Transformer (MNA-GT).
MNA-GT adaptively captures graph structural information for each node via a multi-neighborhood attention mechanism.
Experiments are conducted on a variety of graph benchmarks, and the empirical results show that MNA-GT outperforms many strong baselines.
arXiv Detail & Related papers (2022-11-15T08:12:44Z) - Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Graph Neural Networks with Learnable Structural and Positional
Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers (a minimal sketch appears after this list).
We observe a performance increase for molecular datasets, from 2.87% up to 64.14%, when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z) - Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph
Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based
Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)