What Dense Graph Do You Need for Self-Attention?
- URL: http://arxiv.org/abs/2205.14014v2
- Date: Tue, 31 May 2022 02:36:34 GMT
- Title: What Dense Graph Do You Need for Self-Attention?
- Authors: Yuxing Wang, Chu-Tak Lee, Qipeng Guo, Zhangyue Yin, Yunhua Zhou,
Xuanjing Huang, Xipeng Qiu
- Abstract summary: We present Hypercube Transformer, a sparse Transformer that models token interactions in a hypercube and achieves comparable or even better results than the vanilla Transformer.
Experiments on tasks requiring various sequence lengths validate our graph scoring function.
- Score: 73.82686008622596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers have made progress on miscellaneous tasks but suffer from
quadratic computational and memory complexity. Recent works propose sparse
Transformers with attention on sparse graphs to reduce complexity while retaining
strong performance. While effective, the crucial question of how dense a graph
needs to be to perform well is not fully explored. In this paper, we propose
Normalized Information Payload (NIP), a graph scoring function measuring
information transfer on a graph, which provides an analysis tool for trade-offs
between performance and complexity. Guided by this theoretical analysis, we
present Hypercube Transformer, a sparse Transformer that models token
interactions in a hypercube and shows comparable or even better results than the
vanilla Transformer while yielding $O(N\log N)$ complexity with sequence length
$N$. Experiments on tasks requiring various sequence lengths validate our graph
scoring function.
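As a rough illustration of the graph behind the $O(N\log N)$ claim, the sketch below builds a hypercube-style attention mask in which token $i$ may attend to token $j$ when their binary indices differ in exactly one bit. This is a minimal sketch under assumptions not stated in the abstract (padding to a power of two, keeping self-attention); each token gets about $\log_2 N$ neighbours, so the mask has $O(N\log N)$ non-zeros.

```python
import numpy as np

def hypercube_attention_mask(seq_len: int) -> np.ndarray:
    """Boolean mask where position i may attend to position j iff their binary
    indices differ in exactly one bit, plus self-attention.

    Illustrative sketch only: assumes the sequence is padded to a power of two,
    which is an assumption, not a detail taken from the paper.
    """
    n_bits = max(1, int(np.ceil(np.log2(seq_len))))
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    np.fill_diagonal(mask, True)                  # keep self-attention
    for i in range(seq_len):
        for b in range(n_bits):
            j = i ^ (1 << b)                      # flip one bit of the index
            if j < seq_len:
                mask[i, j] = True
    return mask

mask = hypercube_attention_mask(16)
print(mask.sum(), "allowed edges for N=16")       # roughly N * (log2 N + 1) non-zeros
```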
Related papers
- Masked Graph Transformer for Large-Scale Recommendation [56.37903431721977]
We propose an efficient Masked Graph Transformer, named MGFormer, capable of capturing all-pair interactions among nodes with linear complexity.
Experimental results show the superior performance of our MGFormer, even with a single attention layer.
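The abstract does not spell out MGFormer's attention kernel, but a standard way to get all-pair interactions in linear time is to replace softmax with a feature map and reassociate the matrix product. The sketch below is a generic linearized attention under that assumption, not necessarily MGFormer's actual formulation.

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    """Generic kernelized attention: O(N * d^2) instead of O(N^2 * d).

    q, k, v: (batch, seq_len, dim). Uses elu(x)+1 as the feature map, a common
    choice in linear-attention work; MGFormer's mechanism may differ.
    """
    q = torch.nn.functional.elu(q) + 1.0
    k = torch.nn.functional.elu(k) + 1.0
    kv = torch.einsum("bnd,bne->bde", k, v)            # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)   # normalized all-pair mixing

q = k = v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)                        # cost grows linearly with seq_len
print(out.shape)                                       # torch.Size([2, 1024, 64])
```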
arXiv Detail & Related papers (2024-05-07T06:00:47Z)
- SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention [1.4126245676224705]
Graph Transformers have emerged as a promising solution to alleviate the inherent limitations of Graph Neural Networks (GNNs).
We propose a novel insight into integrating SNNs with Graph Transformers and design a Spiking Graph Attention (SGA) module.
SpikeGraphormer consistently outperforms existing state-of-the-art approaches across various datasets.
arXiv Detail & Related papers (2024-03-21T03:11:53Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
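As a hedged illustration of the freeze-and-prepend idea described above, the sketch below adds a small bank of learnable prompt tokens to a frozen encoder. The module name, shapes, and prompt count are assumptions for illustration, not the paper's actual deep graph prompt tuning implementation.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Freeze a pre-trained encoder; train only a few prepended prompt tokens.

    Illustrative sketch: `encoder` is any module mapping (B, T, D) -> (B, T, D);
    the real method injects prompts into graph transformer layers, which may differ.
    """
    def __init__(self, encoder: nn.Module, dim: int, n_prompts: int = 8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                       # frozen backbone
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, x):                                 # x: (B, T, D)
        prompts = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.encoder(torch.cat([prompts, x], dim=1))

enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(64, 4, batch_first=True), 2)
model = PromptTunedEncoder(enc, dim=64)
out = model(torch.randn(2, 10, 64))                       # (2, 8 + 10, 64)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable, "trainable parameters (prompt tokens only)")
```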
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations [75.71298846760303]
We show that a single attention layer can deliver surprisingly competitive performance across node property prediction benchmarks.
We frame the proposed scheme as Simplified Graph Transformers (SGFormer), which is empowered by a simple attention model.
We believe the proposed methodology alone opens a new technical path of independent interest for building Transformers on large graphs.
arXiv Detail & Related papers (2023-06-19T08:03:25Z)
- Pure Transformers are Powerful Graph Learners [51.36884247453605]
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
We prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers.
Our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results than GNN baselines and competitive results against Transformer variants with graph-specific inductive biases.
arXiv Detail & Related papers (2022-07-06T08:13:06Z)
- Transformers Generalize DeepSets and Can be Extended to Graphs and Hypergraphs [15.844680924751984]
We present a generalization of Transformers to any-order permutation-invariant data (sets, graphs, and hypergraphs).
In particular, we show that the sparse second-order Transformers with kernel attention are theoretically more expressive than message passing operations.
Our models achieve significant improvement over invariant graph networks and message-passing graph neural networks in large-scale graph regression and set-to-(hyper)graph prediction tasks.
arXiv Detail & Related papers (2021-10-27T13:20:05Z)
- Gophormer: Ego-Graph Transformer for Node Classification [27.491500255498845]
In this paper, we propose a novel Gophormer model which applies transformers on ego-graphs instead of full-graphs.
Specifically, a Node2Seq module is proposed to sample ego-graphs as transformer inputs, which alleviates the scalability challenge.
In order to handle the uncertainty introduced by the ego-graph sampling, we propose a consistency regularization and a multi-sample inference strategy.
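To make the ego-graph idea concrete, here is a hedged sketch that samples a bounded k-hop ego-graph around a center node and flattens it into a token sequence. It is a generic neighborhood sampler for illustration, not the actual Node2Seq module; the hop count and fanout are assumed parameters.

```python
import random

def sample_ego_graph(adj, center, hops=2, fanout=4, seed=0):
    """Sample a bounded k-hop ego-graph around `center` and return its nodes
    as a token sequence (center first). Generic sketch, not the paper's Node2Seq.

    adj: dict mapping node -> list of neighbor nodes.
    """
    rng = random.Random(seed)
    frontier, visited = [center], {center}
    for _ in range(hops):
        nxt = []
        for u in frontier:
            neigh = [v for v in adj.get(u, []) if v not in visited]
            for v in rng.sample(neigh, min(fanout, len(neigh))):
                visited.add(v)
                nxt.append(v)
        frontier = nxt
    return [center] + sorted(visited - {center})

adj = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5], 3: [0], 4: [1], 5: [2]}
print(sample_ego_graph(adj, center=0))   # e.g. [0, 1, 2, 3, 4, 5]
```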
arXiv Detail & Related papers (2021-10-25T16:43:32Z)
- $O(n)$ Connections are Expressive Enough: Universal Approximability of Sparse Transformers [71.31712741938837]
We show that sparse Transformers with only $O(n)$ connections per attention layer can approximate the same function class as the dense model with $n^2$ connections.
We also present experiments comparing different patterns/levels of sparsity on standard NLP tasks.
arXiv Detail & Related papers (2020-06-08T18:30:12Z)
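For intuition, the sketch below builds one family of sparse patterns with $O(n)$ connections per attention layer: a sliding window plus a single global token. This particular pattern is an assumption chosen for illustration and is not necessarily one of the patterns analyzed in the paper.

```python
import numpy as np

def window_plus_global_mask(seq_len: int, window: int = 2) -> np.ndarray:
    """Sparse attention mask with O(seq_len) allowed connections:
    each position attends to a local window, and position 0 acts as a
    global token that attends to, and is attended by, every position.
    Illustrative pattern only, not taken verbatim from the paper."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                    # local sliding window
    mask[0, :] = True                            # global token attends everywhere
    mask[:, 0] = True                            # everyone attends to the global token
    return mask

m = window_plus_global_mask(128)
print(m.sum(), "of", 128 * 128, "entries are non-zero")   # grows linearly with seq_len
```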