A Survey on Structure-Preserving Graph Transformers
- URL: http://arxiv.org/abs/2401.16176v1
- Date: Mon, 29 Jan 2024 14:18:09 GMT
- Title: A Survey on Structure-Preserving Graph Transformers
- Authors: Van Thuy Hoang and O-Joun Lee
- Abstract summary: We provide a comprehensive overview of structure-preserving graph transformers and generalize these methods from the perspective of their design objective.
We also discuss challenges and future directions for graph transformer models to preserve the graph structure and understand the nature of graphs.
- Score: 2.5252594834159643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transformer architecture has shown remarkable success in various domains,
such as natural language processing and computer vision. When it comes to graph
learning, transformers are required not only to capture the interactions
between pairs of nodes but also to preserve the graph structures that connote
the underlying relations and proximity between them, i.e., to exhibit the
expressive power to distinguish different graph structures. Accordingly, various
structure-preserving graph transformers have been proposed and widely used for
various tasks, such as graph-level tasks in bioinformatics and
chemoinformatics. However, strategies related to graph structure preservation
have not been well organized and systematized in the literature. In this paper,
we provide a comprehensive overview of structure-preserving graph transformers
and generalize these methods from the perspective of their design objective.
First, we divide strategies into four main groups: node feature modulation,
context node sampling, graph rewriting, and transformer architecture
improvements. We then further divide the strategies according to the coverage
and goals of graph structure preservation. Finally, we discuss
challenges and future directions for graph transformer models to preserve the
graph structure and understand the nature of graphs.
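To make the first of these strategy groups concrete, the following is a minimal sketch of node feature modulation, assuming Laplacian eigenvector positional encodings as the structural signal; the helper name and the concatenation step are illustrative choices, not the survey's prescribed recipe.
```python
# Illustrative sketch only: node feature modulation, appending Laplacian
# positional encodings to raw node features before a standard Transformer
# encoder sees them.
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial eigenvectors of the normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)               # assumes no isolated nodes
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)              # eigenvalues ascend, so ...
    return eigvecs[:, 1:k + 1]                    # ... skip the trivial first one

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)       # toy 4-node graph
x = np.random.randn(4, 8)                         # raw node features
x_modulated = np.concatenate([x, laplacian_pe(adj, k=2)], axis=1)
```
In practice, x_modulated would replace the raw features as the token embeddings fed to the Transformer, so that attention operates on structure-aware inputs.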
Related papers
- Graph Transformers: A Survey [15.68583521879617]
Graph transformers are a recent advancement in machine learning, offering a new class of neural network models for graph-structured data.
This survey provides an in-depth review of recent progress and challenges in graph transformer research.
arXiv Detail & Related papers (2024-07-13T05:15:24Z)
- Topology-Informed Graph Transformer [7.857955053895979]
Topology-Informed Graph Transformer (TIGT) is a novel transformer that enhances both the discriminative power in detecting graph isomorphisms and the overall performance of Graph Transformers.
TIGT consists of four components, including a topological positional embedding layer that uses non-isomorphic universal covers based on cyclic subgraphs to ensure unique graph representations.
TIGT outperforms previous Graph Transformers in classifying a synthetic dataset aimed at distinguishing isomorphism classes of graphs.
arXiv Detail & Related papers (2024-02-03T03:17:44Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
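The local-plus-global combination described in the GTGAN entry above can be pictured with this rough sketch, which pairs one mean-aggregation graph convolution (local) with one plain self-attention pass (global); the additive fusion and all names are assumptions, not GTGAN's actual architecture.
```python
# Hypothetical sketch: one encoder block mixing a graph convolution
# (local neighborhoods) with self-attention (global interactions).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixed_block(x, adj, w):
    a_hat = adj + np.eye(len(adj))                               # add self-loops
    local = (a_hat / a_hat.sum(axis=1, keepdims=True)) @ x @ w   # mean aggregation
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))                # all-pairs attention
    global_ = attn @ x @ w
    return local + global_                                       # assumed fusion: a simple sum

x = np.random.randn(5, 16)
adj = (np.random.rand(5, 5) < 0.4).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                         # random undirected toy graph
out = mixed_block(x, adj, np.random.randn(16, 16))
```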
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
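As a generic stand-in for the augmentation bank in the entry above (the paper's own candidates are spectrally motivated, which this sketch does not reproduce), random edge dropping is one simple graph transformation for building two contrastive views:
```python
# Illustrative stand-in only: build two augmented views of a graph by
# independently dropping edges, as input to a contrastive objective.
import numpy as np

def drop_edges(adj, p, rng):
    """Keep each undirected edge with probability 1 - p."""
    upper = np.triu(adj, 1)                       # visit each edge once
    kept = upper * (rng.random(upper.shape) >= p)
    return kept + kept.T                          # re-symmetrize

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
view_a = drop_edges(adj, p=0.2, rng=rng)
view_b = drop_edges(adj, p=0.2, rng=rng)
# A contrastive loss would pull the encodings of view_a and view_b
# together while pushing apart views of other graphs in the batch.
```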
- Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning [11.407118196728943]
We propose an adaptive graph Transformer termed Multi-Neighborhood Attention based Graph Transformer (MNA-GT).
MNA-GT adaptively captures graph structural information for each node via its multi-neighborhood attention mechanism.
Experiments are conducted on a variety of graph benchmarks, and the empirical results show that MNA-GT outperforms many strong baselines.
arXiv Detail & Related papers (2022-11-15T08:12:44Z)
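A loose sketch of the multi-neighborhood idea from the MNA-GT entry above: attention is computed within several hop-neighborhoods, and each node adaptively weights the resulting views. Every detail below (the hop set, the gating, the masking) is an assumption for illustration rather than MNA-GT's actual design.
```python
# Hypothetical sketch: per-node adaptive mixing of attention views
# computed over 1-hop and 2-hop neighborhoods.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_hop_attention(x, adj, hops=(1, 2)):
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # shared raw attention scores
    views, reach = [], np.eye(n)
    for h in range(1, max(hops) + 1):
        reach = ((reach @ (adj + np.eye(n))) > 0).astype(float)  # h-hop reachability
        if h in hops:
            masked = np.where(reach > 0, scores, -1e9)           # restrict to h-hop ball
            views.append(softmax(masked) @ x)
    stacked = np.stack(views)                                    # (num_hops, n, d)
    gate = softmax(np.einsum("hnd,nd->nh", stacked, x))          # per-node hop weights
    return np.einsum("nh,hnd->nd", gate, stacked)

x = np.random.randn(6, 8)
adj = (np.random.rand(6, 6) < 0.3).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T
out = multi_hop_attention(x, adj)
```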
- Hierarchical Graph Transformer with Adaptive Node Sampling [19.45896788055167]
We identify the main deficiencies of current graph transformers.
Most sampling strategies only focus on local neighbors and neglect the long-range dependencies in the graph.
We propose a hierarchical attention scheme with graph coarsening to capture the long-range interactions.
arXiv Detail & Related papers (2022-10-08T05:53:25Z)
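The hierarchical scheme in the entry above can be caricatured as follows: pool nodes into super-nodes via a coarsening assignment, then let each fine node attend to the coarse summaries to pick up long-range signal. The hard clustering and residual fusion below are illustrative assumptions, not the paper's algorithm.
```python
# Illustrative sketch: fine-to-coarse attention over a coarsened graph.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def coarse_attention(x, assign):
    """x: (n, d) node features; assign: (n, c) hard cluster assignments."""
    counts = assign.sum(axis=0, keepdims=True)    # nodes per cluster
    coarse = (assign.T @ x) / counts.T            # mean-pool into super-nodes
    scores = x @ coarse.T / np.sqrt(x.shape[1])   # every node attends to super-nodes
    return x + softmax(scores) @ coarse           # residual long-range signal

x = np.random.randn(6, 8)
assign = np.zeros((6, 2))
assign[:3, 0] = 1                                 # toy clustering: first half ...
assign[3:, 1] = 1                                 # ... and second half
out = coarse_attention(x, assign)
```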
- Transformer for Graphs: An Overview from Architecture Perspective [86.3545861392215]
It is imperative to sort out the existing Transformer models for graphs and systematically investigate their effectiveness on various graph tasks.
We first disassemble the existing models and identify three typical ways to incorporate graph information into the vanilla Transformer.
Our experiments confirm the benefits of current graph-specific modules on Transformer and reveal their advantages on different kinds of graph tasks.
arXiv Detail & Related papers (2022-02-17T06:02:06Z)
- Do Transformers Really Perform Bad for Graph Representation? [62.68420868623308]
We present Graphormer, which is built upon the standard Transformer architecture.
Our key insight for utilizing Transformers on graphs is the necessity of effectively encoding the structural information of a graph into the model.
arXiv Detail & Related papers (2021-06-09T17:18:52Z)
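Graphormer's actual design uses centrality, spatial, and edge encodings; the sketch below shows only the general shape of that idea, biasing attention scores with negated shortest-path distances so that structurally distant pairs attend less. The specific bias is an assumption for illustration.
```python
# Rough sketch: structure-aware attention via an additive bias derived
# from shortest-path distances (simpler than Graphormer's encodings).
import numpy as np

def shortest_path_lengths(adj):
    n = len(adj)
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                            # Floyd-Warshall
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    return dist

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)       # a 4-node path graph
x = np.random.randn(4, 8)
bias = -shortest_path_lengths(adj)                # farther pairs get lower scores
attn = softmax(x @ x.T / np.sqrt(x.shape[1]) + bias)
out = attn @ x
```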
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Bridging Knowledge Graphs to Generate Scene Graphs [49.69377653925448]
We propose a novel graph-based neural network that iteratively propagates information between the two graphs, as well as within each of them.
Our Graph Bridging Network, GB-Net, successively infers edges and nodes, allowing it to simultaneously exploit and refine the rich, heterogeneous structure of the interconnected scene and commonsense graphs.
arXiv Detail & Related papers (2020-01-07T23:35:52Z)