Momentum Transformer: Closing the Performance Gap Between Self-attention
and Its Linearization
- URL: http://arxiv.org/abs/2208.00579v1
- Date: Mon, 1 Aug 2022 02:37:49 GMT
- Title: Momentum Transformer: Closing the Performance Gap Between Self-attention
and Its Linearization
- Authors: Tan Nguyen and Richard G. Baraniuk and Robert M. Kirby and Stanley J.
Osher and Bao Wang
- Abstract summary: Efficient transformers, which leverage techniques such as sparse and linear attention and hashing tricks, have been proposed to reduce the quadratic complexity of transformers, but they significantly degrade accuracy.
We first interpret the linear attention and residual connections in computing the attention map as gradient descent steps.
We then introduce momentum into these components and propose the momentum transformer, which utilizes momentum to improve the accuracy of linear transformers while maintaining linear memory and computational complexities.
- Score: 31.28396970291575
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Transformers have achieved remarkable success in sequence modeling and beyond
but suffer from quadratic computational and memory complexities with respect to
the length of the input sequence. Efficient transformers, which leverage
techniques such as sparse and linear attention and hashing tricks, have been
proposed to reduce this quadratic complexity, but they significantly degrade
accuracy. In response, we first interpret the linear attention and residual
connections in computing the attention map as gradient descent steps. We then
introduce momentum into these components and propose the momentum transformer,
which utilizes momentum to improve the accuracy of linear
transformers while maintaining linear memory and computational complexities.
Furthermore, we develop an adaptive strategy to compute the momentum value for
our model based on the optimal momentum for quadratic optimization. This
adaptive momentum eliminates the need to search for the optimal momentum value
and further enhances the performance of the momentum transformer. A range of
experiments on both autoregressive and non-autoregressive tasks, including
image generation and machine translation, demonstrate that the momentum
transformer outperforms popular linear transformers in training efficiency and
accuracy.
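The abstract describes two concrete ideas: treating the state accumulation of causal linear attention as a gradient-descent step and adding heavy-ball momentum to it, and choosing the momentum value adaptively based on the optimal momentum for quadratic optimization. The sketch below is a minimal NumPy illustration of that idea only; the ELU feature map, the exact update rule, and the use of the classical closed-form heavy-ball momentum are assumptions for exposition, not the paper's specification.
```python
# Illustrative sketch only: causal linear attention whose state accumulation
# (interpreted as a gradient-descent step) is augmented with heavy-ball momentum.
import numpy as np

def elu_feature_map(x):
    # Positive feature map commonly used in linear attention (an assumption here).
    return np.where(x > 0, x + 1.0, np.exp(x))

def heavy_ball_momentum(kappa):
    # Classical optimal heavy-ball momentum for a quadratic with condition number
    # kappa; the paper's adaptive rule is motivated by this kind of closed form,
    # though its exact computation may differ.
    s = np.sqrt(kappa)
    return ((s - 1.0) / (s + 1.0)) ** 2

def momentum_linear_attention(Q, K, V, beta=0.9, eps=1e-6):
    """Causal linear attention with momentum on the running key-value state.

    Q, K: (seq_len, d_k); V: (seq_len, d_v); returns (seq_len, d_v).
    Setting beta = 0 recovers a standard causal linear-attention recurrence.
    """
    Qf, Kf = elu_feature_map(Q), elu_feature_map(K)
    d_k, d_v = Qf.shape[1], V.shape[1]
    S = np.zeros((d_k, d_v))   # running sum of phi(k_i) v_i^T
    z = np.zeros(d_k)          # running sum of phi(k_i), used for normalization
    P = np.zeros((d_k, d_v))   # momentum buffer for the state increments
    out = []
    for i in range(Qf.shape[0]):
        increment = np.outer(Kf[i], V[i])   # the "gradient-descent" step
        P = beta * P + increment            # heavy-ball momentum on that step
        S = S + P
        z = z + Kf[i]
        out.append((S.T @ Qf[i]) / (z @ Qf[i] + eps))
    return np.stack(out)
```
Per-head memory stays O(d_k * d_v) regardless of sequence length, which is the linear-complexity property the abstract emphasizes.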
Related papers
- Parallelizing Linear Transformers with the Delta Rule over Sequence Length [49.88826673324244]
This work describes a hardware-efficient algorithm for training linear transformers with the delta rule; a minimal sequential sketch of the delta-rule update appears after this list.
We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines.
arXiv Detail & Related papers (2024-06-10T17:24:42Z)
- Linear Transformers are Versatile In-Context Learners [19.988368693379087]
We prove that each layer of a linear transformer maintains a weight vector for an implicit linear regression problem.
We also investigate the use of linear transformers in a challenging scenario where the training data is corrupted with different levels of noise.
Remarkably, we demonstrate that for this problem linear transformers discover an intricate and highly effective optimization algorithm.
arXiv Detail & Related papers (2024-02-21T23:45:57Z)
- Linear attention is (maybe) all you need (to understand transformer optimization) [55.81555204646486]
We make progress towards understanding the subtleties of training Transformers by studying a simple yet canonical linearized shallow Transformer model.
Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of Transformer training dynamics.
arXiv Detail & Related papers (2023-10-02T10:48:42Z)
- SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling [1.0128808054306186]
We propose a novel sparsification scheme for the Transformer that integrates convolution filters and the flood filling method.
Our sparsification approach reduces the computational complexity and memory footprint of the Transformer during training.
SPION achieves up to a 3.08X speedup over existing state-of-the-art sparse Transformer models.
arXiv Detail & Related papers (2023-09-22T02:14:46Z)
- FLatten Transformer: Vision Transformer using Focused Linear Attention [80.61335173752146]
Linear attention offers a much more efficient alternative with its linear complexity.
Current linear attention approaches either suffer from significant performance degradation or introduce additional computation overhead.
We propose a novel Focused Linear Attention module to achieve both high efficiency and expressiveness.
arXiv Detail & Related papers (2023-08-01T10:37:12Z)
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Transformers learn in-context by gradient descent [58.24152335931036]
Training Transformers on auto-regressive objectives is closely related to gradient-based meta-learning formulations.
We show how trained Transformers become mesa-optimizers, i.e., they learn models by gradient descent in their forward pass.
arXiv Detail & Related papers (2022-12-15T09:21:21Z)
- Finetuning Pretrained Transformers into RNNs [81.72974646901136]
Transformers have outperformed recurrent neural networks (RNNs) in natural language generation.
A linear-complexity recurrent variant has proven well suited for autoregressive generation.
This work aims to convert a pretrained transformer into its efficient recurrent counterpart.
arXiv Detail & Related papers (2021-03-24T10:50:43Z)
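For the first related entry above (training linear transformers with the delta rule), the following is a minimal sequential sketch of the per-token delta-rule update, included only to make that summary concrete. The referenced work's actual contribution is a hardware-efficient formulation parallelized over sequence length, which is not shown here, and the names and shapes below are illustrative assumptions.
```python
# Illustrative sketch only: a sequential delta-rule linear-attention recurrence.
import numpy as np

def delta_rule_linear_attention(Q, K, V, betas):
    """Delta-rule fast-weight update: the state is corrected toward v_t wherever
    its current prediction for key k_t is wrong (an error-correcting update).

    Q, K: (seq_len, d_k); V: (seq_len, d_v); betas: (seq_len,) step sizes.
    """
    d_k, d_v = Q.shape[1], V.shape[1]
    S = np.zeros((d_k, d_v))                       # fast-weight state
    out = []
    for t in range(Q.shape[0]):
        k, v, q = K[t], V[t], Q[t]
        pred = S.T @ k                             # current prediction for key k_t
        S = S + betas[t] * np.outer(k, v - pred)   # delta-rule correction
        out.append(S.T @ q)
    return np.stack(out)
```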
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.