A Scalable and Effective Alternative to Graph Transformers
- URL: http://arxiv.org/abs/2406.12059v1
- Date: Mon, 17 Jun 2024 19:57:34 GMT
- Title: A Scalable and Effective Alternative to Graph Transformers
- Authors: Kaan Sancak, Zhigang Hua, Jin Fang, Yan Xie, Andrey Malevich, Bo Long, Muhammed Fatih Balin, Ümit V. Çatalyürek
- Abstract summary: Graph Transformers (GTs) were introduced, utilizing the self-attention mechanism to model pairwise node relationships.
GTs suffer from quadratic complexity w.r.t. the number of nodes in the graph, hindering their applicability to large graphs.
We present Graph-Enhanced Contextual Operator (GECO), a scalable and effective alternative to GTs.
- Score: 19.018320937729264
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Neural Networks (GNNs) have shown impressive performance in graph representation learning, but they face challenges in capturing long-range dependencies due to their limited expressive power. To address this, Graph Transformers (GTs) were introduced, utilizing the self-attention mechanism to effectively model pairwise node relationships. Despite their advantages, GTs suffer from quadratic complexity w.r.t. the number of nodes in the graph, hindering their applicability to large graphs. In this work, we present Graph-Enhanced Contextual Operator (GECO), a scalable and effective alternative to GTs that leverages neighborhood propagation and global convolutions to effectively capture local and global dependencies in quasilinear time. Our study on synthetic datasets reveals that GECO achieves a 169x speedup over optimized attention on a graph with 2M nodes. Further evaluations on a diverse range of benchmarks show that GECO scales to large graphs where traditional GTs often face memory and time limitations. Notably, GECO consistently achieves quality comparable or superior to the baselines, improving the SOTA by up to 4.5%, and offers a scalable and effective solution for large-scale graph learning.
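The local-plus-global design the abstract describes can be conveyed with a short sketch. The layer below is an illustrative assumption, not the authors' GECO implementation: it combines one hop of sparse neighborhood propagation (O(E)) with a learned global convolution applied over the node sequence via FFT (O(N log N)), the general pattern behind quasilinear alternatives to full attention. The module name, kernel parameterization, and mixing scheme are all ours.

```python
import torch
import torch.nn as nn

class LocalGlobalLayer(nn.Module):
    """Illustrative sketch of a local + global operator in quasilinear time.
    NOT the authors' GECO implementation; the kernel parameterization and
    the concatenation-based mixing are assumptions for exposition."""

    def __init__(self, dim: int, max_nodes: int):
        super().__init__()
        self.local_proj = nn.Linear(dim, dim)                          # mixes one-hop propagated features
        self.kernel = nn.Parameter(0.02 * torch.randn(max_nodes, dim)) # learned global conv kernel
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) sparse normalized adjacency
        local = self.local_proj(torch.sparse.mm(adj, x))               # neighborhood propagation, O(E)
        k = self.kernel[: x.size(0)]                                   # kernel truncated to N nodes
        x_f = torch.fft.rfft(x, dim=0)
        k_f = torch.fft.rfft(k, dim=0)
        global_ctx = torch.fft.irfft(x_f * k_f, n=x.size(0), dim=0)    # circular conv via FFT, O(N log N)
        return self.out(torch.cat([local, global_ctx], dim=-1))
```

Stacking a few such layers interleaves local and global mixing without ever materializing the N x N attention matrix, which is where the quasilinear scaling comes from.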
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z)
- What Can We Learn from State Space Models for Machine Learning on Graphs? [11.38076877943004]
We propose Graph State Space Convolution (GSSC) as a principled extension of State Space Models (SSMs) to graph-structured data.
By leveraging global permutation-equivariant set aggregation and factorizable graph kernels, GSSC preserves all three advantages of SSMs.
Our findings highlight the potential of GSSC as a powerful and scalable model for graph machine learning.
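The global permutation-equivariant set aggregation mentioned here has a standard generic form that is easy to sketch. The snippet below shows that DeepSets-style pattern under our own naming, not the paper's actual GSSC layer: each node is updated from its own state plus an order-invariant pooled summary of all nodes, so permuting the nodes permutes the output identically.

```python
import torch
import torch.nn as nn

class EquivariantSetAggregation(nn.Module):
    """Generic permutation-equivariant set layer: h_i' = phi(h_i, sum_j psi(h_j)).
    A sketch of the aggregation principle only, not the GSSC layer itself."""

    def __init__(self, dim: int):
        super().__init__()
        self.psi = nn.Linear(dim, dim)
        self.phi = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, dim) node states
        pooled = self.psi(h).sum(dim=0, keepdim=True)  # global summary, order-invariant
        pooled = pooled.expand(h.size(0), -1)          # broadcast back to every node
        return self.phi(torch.cat([h, pooled], dim=-1))
```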
arXiv Detail & Related papers (2024-06-09T15:03:36Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Graph Condensation for Open-World Graph Learning [48.38802327346445]
Graph condensation (GC) has emerged as a promising acceleration solution for efficiently training graph neural networks (GNNs).
Existing GC methods are limited to aligning the condensed graph with merely the observed static graph distribution.
In real-world scenarios, however, graphs are dynamic and constantly evolving, with new nodes and edges being continually integrated.
We propose OpenGC, a robust GC framework that integrates structure-aware distribution shift to simulate evolving graph patterns.
arXiv Detail & Related papers (2024-05-27T09:47:09Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
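The "fast neighborhood sampling plus local attention" recipe can be sketched generically: draw a fixed number k of neighbors per node, then attend only over that small set, so the per-layer cost is O(N*k) rather than O(N^2). The uniform sampler and module below are illustrative assumptions, not LargeGT's actual pipeline.

```python
import torch
import torch.nn as nn

def sample_neighbors(neighbor_lists, k: int) -> torch.Tensor:
    # Uniformly sample k neighbors per node, with replacement for simplicity.
    # neighbor_lists: list of 1-D LongTensors; assumes every node has a neighbor.
    return torch.stack([
        nbrs[torch.randint(len(nbrs), (k,))] for nbrs in neighbor_lists
    ])  # (N, k) sampled neighbor indices

class LocalAttention(nn.Module):
    """Each node attends only to its k sampled neighbors: O(N*k) cost."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)

    def forward(self, x: torch.Tensor, nbr_idx: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; nbr_idx: (N, k) neighbor indices
        query = self.q(x).unsqueeze(1)                     # (N, 1, dim)
        keys, vals = self.kv(x[nbr_idx]).chunk(2, dim=-1)  # (N, k, dim) each
        scores = (query * keys).sum(-1) / keys.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)               # (N, k)
        return (attn.unsqueeze(-1) * vals).sum(dim=1)      # (N, dim)
```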
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) that learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
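The historical-embedding idea is easy to convey in code: keep a buffer of the last embedding computed for each node, pull (possibly stale) values for out-of-batch neighbors instead of recomputing their message-passing sub-trees, and push fresh values for in-batch nodes after each forward pass. A minimal sketch of that bookkeeping follows; the class and method names are hypothetical, and the authors' actual GAS framework is considerably more involved.

```python
import torch

class History:
    """Per-node buffer of the last computed embeddings (sketch of the
    historical-embedding idea; names here are illustrative)."""

    def __init__(self, num_nodes: int, dim: int):
        self.emb = torch.zeros(num_nodes, dim)  # stale embeddings, refreshed during training

    def pull(self, node_idx: torch.Tensor) -> torch.Tensor:
        # Reuse stale embeddings for out-of-batch neighbors instead of
        # recomputing their entire message-passing sub-trees.
        return self.emb[node_idx]

    @torch.no_grad()
    def push(self, node_idx: torch.Tensor, values: torch.Tensor) -> None:
        # Refresh the buffer with embeddings just computed for in-batch nodes.
        self.emb[node_idx] = values.detach()
```

A layer then combines fresh in-batch embeddings with pulled histories for the out-of-batch frontier, so each mini-batch touches only a bounded portion of the graph.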
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
- CoSimGNN: Towards Large-scale Graph Similarity Computation [5.17905821006887]
Graph Neural Networks (GNNs) provide a data-driven solution for graph similarity computation.
Existing GNN-based methods, which either embed the two graphs independently or deploy cross-graph interactions over whole graph pairs, still fail to achieve competitive results.
We propose the "embedding-coarsening-matching" framework CoSimGNN, which first embeds and coarsens large graphs with an adaptive pooling operation and then deploys fine-grained interactions on the coarsened graphs for final similarity scores.
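The embedding-coarsening-matching pipeline can be illustrated with a deliberately crude stand-in: embed each graph with any GNN, coarsen by keeping a few pooled node embeddings, and score similarity from fine-grained cross-graph interactions on the coarsened sets. The pooling and matching below are simplistic assumptions, not CoSimGNN's adaptive pooling or its learned matching.

```python
import torch

def coarsen(h: torch.Tensor, m: int) -> torch.Tensor:
    # Crude stand-in for adaptive pooling: keep the m highest-norm
    # node embeddings of a graph with features h: (N, dim).
    keep = h.norm(dim=-1).topk(min(m, h.size(0))).indices
    return h[keep]

def similarity(h1: torch.Tensor, h2: torch.Tensor, m: int = 32) -> torch.Tensor:
    # Fine-grained cross-graph interactions on the coarsened graphs,
    # pooled into a single similarity score.
    c1, c2 = coarsen(h1, m), coarsen(h2, m)
    return (c1 @ c2.t()).mean()
```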
arXiv Detail & Related papers (2020-05-14T16:33:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.