Attention Is Not All You Need Anymore
- URL: http://arxiv.org/abs/2308.07661v2
- Date: Tue, 19 Sep 2023 13:32:07 GMT
- Title: Attention Is Not All You Need Anymore
- Authors: Zhe Chen
- Abstract summary: We propose a family of drop-in replacements for the self-attention mechanism in the Transformer.
Experimental results show that replacing the self-attention mechanism with the SHE noticeably improves the performance of the Transformer.
The proposed Extractors have the potential to run, or already run, faster than the self-attention mechanism.
- Score: 3.9693969407364427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the popular Transformer architecture has achieved great
success in many application areas, including natural language processing and
computer vision. Many existing works aim to reduce the computational and memory
complexity of the self-attention mechanism in the Transformer at the cost of
performance. However, performance remains key to the continuing success of the
Transformer. In this paper, a family of drop-in replacements for the
self-attention mechanism in the Transformer, called the Extractors, is
proposed. Four types of the Extractors, namely the super high-performance
Extractor (SHE), the higher-performance Extractor (HE), the worthwhile
Extractor (WE), and the minimalist Extractor (ME), are proposed as examples.
Experimental results show that replacing the self-attention mechanism with the
SHE noticeably improves the performance of the Transformer, whereas the
simplified versions of the SHE, i.e., the HE, the WE, and the ME, perform close
to or better than the self-attention mechanism with lower computational and
memory complexity. Furthermore, the proposed Extractors have the potential to
run, or already run, faster than the self-attention mechanism since their critical
paths of computation are much shorter. Additionally, the sequence prediction
problem in the context of text generation is formulated using variable-length
discrete-time Markov chains, and the Transformer is reviewed in light of this
formulation.
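For orientation, the block an Extractor is meant to drop in for is standard scaled dot-product self-attention. The NumPy sketch below is a generic single-head illustration of that baseline and of the sequence-in, sequence-out interface a replacement has to match; it is not code from the paper, and the shapes and weights are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (no masking, no output
    projection): the sequence-in, sequence-out interface an Extractor-style
    drop-in replacement would have to match."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) pairwise scores
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # (n, d_v) output sequence

# Toy usage with random weights.
rng = np.random.default_rng(0)
n, d_model, d_k = 4, 8, 8
X = rng.standard_normal((n, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```

The abstract's Markov-chain formulation of text generation can likewise be read (this is an illustrative paraphrase, not the paper's notation) as next-token prediction with a context whose effective length may vary with position:

```latex
% Illustrative reading: a variable-length discrete-time Markov chain,
% where the effective order l_t (bounded by L) may change with position t.
P\bigl(x_t \mid x_1, \ldots, x_{t-1}\bigr)
  = P\bigl(x_t \mid x_{t-l_t}, \ldots, x_{t-1}\bigr), \qquad 1 \le l_t \le L .
```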
Related papers
- Differential Transformer [99.5117269150629]
The Transformer tends to overallocate attention to irrelevant context.
We introduce Diff Transformer, which amplifies attention to relevant context while canceling noise.
It offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers (a differential-attention sketch appears after this list).
arXiv Detail & Related papers (2024-10-07T17:57:38Z)
- FAST: Factorizable Attention for Speeding up Transformers [1.3637227185793512]
We present a linearly scaled attention mechanism that maintains the full representation of the attention matrix without compromising on sparsification.
Results indicate that our attention mechanism has a robust performance and holds significant promise for diverse applications where self-attention is used.
arXiv Detail & Related papers (2024-02-12T18:59:39Z)
- Sparse Universal Transformer [64.78045820484299]
The Universal Transformer (UT) is a variant of the Transformer that shares parameters across its layers.
This paper proposes the Sparse Universal Transformer (SUT), which leverages Sparse Mixture of Experts (SMoE) and a new stick-breaking-based dynamic halting mechanism.
arXiv Detail & Related papers (2023-10-11T00:38:57Z)
- The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers [0.0]
We replace the dot-product and Softmax-based attention with an alternative mechanism involving addition and ReLU activation only.
This side-steps the expansion to double precision often required by matrix multiplication and avoids costly Softmax evaluations.
It can enable more efficient execution and support larger quantized Transformer models on resource-constrained hardware or alternative arithmetic systems like homomorphic encryption (an addition/ReLU attention sketch appears after this list).
arXiv Detail & Related papers (2023-10-03T13:34:21Z)
- Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model [10.473819332984005]
We propose a segmented recurrent transformer (SRformer) that combines segmented (local) attention with recurrent attention.
The proposed model achieves 6-22% higher ROUGE-1 scores than a segmented transformer and outperforms other recurrent transformer approaches.
arXiv Detail & Related papers (2023-05-24T03:47:22Z)
- Linearizing Transformer with Key-Value Memory Bank [54.83663647680612]
We propose MemSizer, an approach that projects the source sequence into a lower-dimensional representation.
MemSizer not only achieves the same linear time complexity but also enjoys efficient recurrent-style autoregressive generation.
We demonstrate that MemSizer provides an improved tradeoff between efficiency and accuracy over the vanilla transformer.
arXiv Detail & Related papers (2022-03-23T18:10:18Z)
- Transformer with a Mixture of Gaussian Keys [31.91701434633319]
Multi-head attention is a driving force behind state-of-the-art transformers.
Transformer-MGK replaces redundant heads in transformers with a mixture of keys at each head.
Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute.
arXiv Detail & Related papers (2021-10-16T23:43:24Z)
- Transformers with Competitive Ensembles of Independent Mechanisms [97.93090139318294]
We propose a new Transformer layer that divides the hidden representation and parameters into multiple mechanisms, which exchange information only through attention.
We study TIM on a large-scale BERT model, on the Image Transformer, and on speech enhancement and find evidence for semantically meaningful specialization as well as improved performance.
arXiv Detail & Related papers (2021-02-27T21:48:46Z)
- Attention that does not Explain Away [54.42960937271612]
Models based on the Transformer architecture have achieved better accuracy than the ones based on competing architectures for a large set of tasks.
A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances.
We propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the "explaining away" effect (a doubly-normalized attention sketch appears after this list).
arXiv Detail & Related papers (2020-09-29T21:05:39Z)
- Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing [112.2208052057002]
We propose Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one.
With comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks (a sequence-pooling sketch appears after this list).
arXiv Detail & Related papers (2020-06-05T05:16:23Z)
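Sketch for the Differential Transformer entry above. The core idea reported in that paper is to take the difference of two softmax attention maps, scaled by a weight lambda, so that noise common to both maps cancels. The NumPy sketch below is a minimal single-head rendering of that idea; the paper's learned parameterization of lambda and its per-head normalization are simplified away here.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(X, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Difference of two softmax attention maps, weighted by lam, so that
    attention noise common to both maps cancels. Simplified illustration."""
    d = Wk1.shape[-1]
    A1 = softmax((X @ Wq1) @ (X @ Wk1).T / np.sqrt(d))  # first attention map
    A2 = softmax((X @ Wq2) @ (X @ Wk2).T / np.sqrt(d))  # second attention map
    return (A1 - lam * A2) @ (X @ Wv)                   # differential readout
```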
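Sketch for The Inhibitor entry above. The entry states only that dot products and Softmax are replaced by addition and ReLU; the sketch below is a hypothetical illustration of such a scoring rule (Manhattan distances plus ReLU, normalized by a plain sum), not the paper's actual equations, and the final aggregation with the values still uses ordinary multiplication.

```python
import numpy as np

def additive_relu_attention(X, Wq, Wk, Wv):
    """Hypothetical scoring built from additions/subtractions and ReLU only:
    pairwise Manhattan distances between queries and keys, thresholded by
    their mean, clipped with ReLU, and normalized by a sum instead of Softmax."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    dist = np.abs(Q[:, None, :] - K[None, :, :]).sum(-1)   # (n, n) Manhattan distances
    scores = np.maximum(0.0, dist.mean() - dist)            # ReLU: closer pairs score higher
    weights = scores / (scores.sum(-1, keepdims=True) + 1e-9)
    return weights @ V
```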
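Sketch for the "Attention that does not Explain Away" entry above. One reading of a doubly-normalized scheme is to normalize the exponentiated scores over the query axis first, so every key must distribute its mass across queries, and then renormalize over the key axis per query; the paper's exact scheme and its guarantees may differ from this sketch.

```python
import numpy as np

def doubly_normalized_attention(scores, V, eps=1e-9):
    """Normalize exponentiated scores over queries first (no key is fully
    'explained away'), then renormalize over keys so each output row is a
    convex combination of the values. Illustrative reading only."""
    e = np.exp(scores - scores.max())                    # (n_queries, n_keys)
    col = e / (e.sum(axis=0, keepdims=True) + eps)       # normalize over the query axis
    row = col / (col.sum(axis=1, keepdims=True) + eps)   # renormalize over the key axis
    return row @ V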
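Sketch for the Funnel-Transformer entry above. The compression step the entry describes amounts to shortening the sequence of hidden states between blocks; the sketch below shows plain strided mean-pooling as one concrete instance, while the actual model also uses query-only pooling inside attention and a decoder that re-expands the sequence for token-level tasks.

```python
import numpy as np

def pool_hidden_states(H, stride=2):
    """Strided mean-pooling along the sequence axis: every `stride` adjacent
    hidden states are averaged, so the sequence passed to the next block is
    `stride` times shorter. A ragged tail, if any, is dropped here."""
    n, d = H.shape
    n_trim = (n // stride) * stride
    return H[:n_trim].reshape(-1, stride, d).mean(axis=1)

H = np.arange(12.0).reshape(6, 2)        # toy hidden states: seq_len=6, d=2
print(pool_hidden_states(H).shape)       # (3, 2): sequence halved
```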
This list is automatically generated from the titles and abstracts of the papers in this site.
The quality of this automatically generated list (including all information) is not guaranteed, and this site is not responsible for any consequences arising from its use.