Transformers meet Stochastic Block Models: Attention with Data-Adaptive
Sparsity and Cost
- URL: http://arxiv.org/abs/2210.15541v1
- Date: Thu, 27 Oct 2022 15:30:52 GMT
- Title: Transformers meet Stochastic Block Models: Attention with Data-Adaptive
Sparsity and Cost
- Authors: Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee,
Seunghoon Hong
- Abstract summary: Recent works have proposed various sparse attention modules to overcome the quadratic cost of self-attention.
We propose SBM-Transformer, a model that resolves both problems by endowing each attention head with a mixed-membership Stochastic Block Model (SBM).
Our model outperforms previous efficient variants as well as the original Transformer with full attention.
- Score: 53.746169882193456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To overcome the quadratic cost of self-attention, recent works have proposed
various sparse attention modules, most of which fall under one of two groups:
1) sparse attention under hand-crafted patterns and 2) full attention
followed by a sparse variant of softmax such as $\alpha$-entmax. Unfortunately,
the first group lacks adaptability to data while the second still requires
quadratic cost in training. In this work, we propose SBM-Transformer, a model
that resolves both problems by endowing each attention head with a
mixed-membership Stochastic Block Model (SBM). Then, each attention head
data-adaptively samples a bipartite graph whose adjacency matrix serves as the
attention mask for each input. During backpropagation, a straight-through
estimator is used to flow gradients beyond the discrete sampling step and
adjust the probabilities of sampled edges based on the predictive loss. The
forward and backward costs are thus linear in the number of edges, which each
attention head can also choose flexibly based on the input. By assessing the
distribution of graphs, we theoretically show that SBM-Transformer is a
universal approximator for arbitrary sequence-to-sequence functions in
expectation. Empirical evaluations under the LRA and GLUE benchmarks
demonstrate that our model outperforms previous efficient variants as well as
the original Transformer with full attention. Our implementation can be found
at https://github.com/sc782/SBM-Transformer.
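To make the mechanism above concrete, the following PyTorch snippet is a minimal sketch of a single SBM-masked attention head. It is not the authors' edge-sparse implementation: the mask is applied densely for readability, and all parameter names (query_membership, key_membership, block_affinity) are illustrative assumptions rather than names from the paper's repository. It shows how mixed memberships and a block affinity matrix yield edge probabilities, how a sampled adjacency acts as the attention mask, and how a straight-through estimator lets the edge probabilities receive gradients from the predictive loss.

```python
import torch

def sbm_masked_attention(q, k, v, query_membership, key_membership, block_affinity):
    """One attention head with an SBM-sampled mask (dense illustration, hypothetical API).

    q, k, v:           (n, d) per-head queries, keys, values
    query_membership:  (n, c) mixed-membership logits of tokens-as-queries over c blocks
    key_membership:    (n, c) mixed-membership logits of tokens-as-keys over c blocks
    block_affinity:    (c, c) block-to-block connectivity logits
    """
    zq = query_membership.softmax(dim=-1)                   # (n, c) query memberships
    zk = key_membership.softmax(dim=-1)                     # (n, c) key memberships
    edge_prob = torch.sigmoid(zq @ block_affinity @ zk.T)   # (n, n) SBM edge probabilities

    # Sample a discrete bipartite adjacency between queries and keys; the number
    # of sampled edges (and hence the cost of a sparse implementation) adapts to the input.
    adj = torch.bernoulli(edge_prob.detach())

    # Straight-through estimator: the forward pass sees the hard 0/1 mask,
    # while the backward pass routes gradients into edge_prob.
    mask = adj + edge_prob - edge_prob.detach()

    # Dense stand-in for masked attention; the paper evaluates scores only on the
    # sampled edges, which is what makes the cost linear in the edge count.
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(adj == 0, -1e9)
    attn = scores.softmax(dim=-1) * mask                    # unsampled edges contribute (near) zero
    return attn @ v

# Toy usage with random tensors standing in for learned projections and memberships.
n, d, c = 8, 16, 4
q, k, v = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
out = sbm_masked_attention(
    q, k, v,
    torch.randn(n, c, requires_grad=True),
    torch.randn(n, c, requires_grad=True),
    torch.randn(c, c, requires_grad=True),
)
```

In the paper's actual implementation, attention is computed only over the sampled edges instead of materializing the full n x n score matrix; the dense masking here is purely for clarity.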
Related papers
- Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z)
- Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers [0.7704032792820767]
Deep neural networks are applied in more and more areas of everyday life.
However, they still lack essential abilities, such as robustly dealing with spatially transformed input signals.
We propose a novel technique to emulate such an inference process for neural nets.
arXiv Detail & Related papers (2024-05-06T09:47:29Z)
- Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection [24.11900895337062]
Unsupervised image Anomaly Detection (UAD) aims to learn robust and discriminative representations of normal samples.
This paper focuses on building a unified framework for multiple classes.
arXiv Detail & Related papers (2023-10-22T08:20:33Z)
- Transformers as Support Vector Machines [54.642793677472724]
We establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem.
We characterize the implicit bias of 1-layer transformers optimized with gradient descent.
We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
arXiv Detail & Related papers (2023-08-31T17:57:50Z)
- STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition [50.064502884594376]
We study the problem of human action recognition using motion capture (MoCap) sequences.
We propose a novel Spatial-Temporal Mesh Transformer (STMT) to directly model the mesh sequences.
The proposed method achieves state-of-the-art performance compared to skeleton-based and point-cloud-based models.
arXiv Detail & Related papers (2023-03-31T16:19:27Z)
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading-off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning [16.526326919313924]
We study an approach to learning pruning masks by optimizing the expected loss of pruning masks.
We analyze the training dynamics of the induced adaptive predictor in the setting of linear regression.
We show that a PAC-Bayes generalization error bound is controlled by the magnitude of the change in feature alignment between the 'prior' and 'posterior' data.
arXiv Detail & Related papers (2021-10-22T14:25:22Z)
- Predicting Attention Sparsity in Transformers [0.9786690381850356]
We propose Sparsefinder, a model trained to identify the sparsity pattern of entmax attention before computing it.
Our work provides a new angle to study model efficiency by doing extensive analysis of the tradeoff between the sparsity and recall of the predicted attention graph.
arXiv Detail & Related papers (2021-09-24T20:51:21Z)
- Combiner: Full Attention Transformer with Sparse Computation Cost [142.10203598824964]
We propose Combiner, which provides full attention capability in each attention head while maintaining low computation complexity.
We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention.
An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach.
arXiv Detail & Related papers (2021-07-12T22:43:11Z)