Linear Log-Normal Attention with Unbiased Concentration
- URL: http://arxiv.org/abs/2311.13541v4
- Date: Mon, 26 Feb 2024 08:40:50 GMT
- Title: Linear Log-Normal Attention with Unbiased Concentration
- Authors: Yury Nahshan, Joseph Kampeas and Emir Haleva
- Abstract summary: We study the self-attention mechanism by analyzing the distribution of the attention matrix and its concentration ability.
We propose instruments to measure these quantities and introduce a novel self-attention mechanism, Linear Log-Normal Attention.
Our experimental results on popular natural language benchmarks reveal that our proposed Linear Log-Normal Attention outperforms other linearized attention alternatives.
- Score: 3.034257650900382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer models have achieved remarkable results in a wide range of
applications. However, their scalability is hampered by the quadratic time and
memory complexity of the self-attention mechanism concerning the sequence
length. This limitation poses a substantial obstacle when dealing with long
documents or high-resolution images. In this work, we study the self-attention
mechanism by analyzing the distribution of the attention matrix and its
concentration ability. Furthermore, we propose instruments to measure these
quantities and introduce a novel self-attention mechanism, Linear Log-Normal
Attention, designed to emulate the distribution and concentration behavior of
the original self-attention. Our experimental results on popular natural
language benchmarks reveal that our proposed Linear Log-Normal Attention
outperforms other linearized attention alternatives, offering a promising
avenue for enhancing the scalability of transformer models.
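As a rough illustration of the tradeoff the abstract describes, the sketch below contrasts standard softmax attention (quadratic in the sequence length) with a generic kernelized linear attention, and adds a simple row-entropy probe as one possible way to gauge how concentrated an attention matrix is. This is a minimal sketch under assumed choices; the ReLU-style feature map and the entropy metric are illustrative stand-ins, not the paper's Linear Log-Normal Attention.
```python
# Illustrative sketch only: a generic kernelized linear attention and an
# entropy-based concentration probe. NOT the paper's Linear Log-Normal Attention;
# the feature map and the entropy metric are assumptions chosen for clarity.
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: O(n^2) time and memory in the sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n)
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V, A

def linear_attention(Q, K, V, feature_map=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: phi(Q) (phi(K)^T V) costs O(n d^2) instead of O(n^2 d).
    phi_q, phi_k = feature_map(Q), feature_map(K)
    kv = phi_k.T @ V                                    # (d, d_v), independent of n^2
    z = phi_q @ phi_k.sum(axis=0)                       # (n,) per-row normalizer
    return (phi_q @ kv) / z[:, None]

def attention_entropy(A):
    # Mean row entropy as a crude concentration probe: lower = sharper attention.
    return -(A * np.log(A + 1e-12)).sum(axis=-1).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 128, 16
    Q, K, V = rng.normal(size=(3, n, d))
    out_soft, A = softmax_attention(Q, K, V)
    out_lin = linear_attention(Q, K, V)
    print(out_soft.shape, out_lin.shape)                # both (128, 16)
    print("mean row entropy:", attention_entropy(A))
```
The linear variant never materializes the n-by-n attention matrix: by associativity, phi(K)^T V is accumulated first, which is what removes the quadratic term the abstract refers to.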
Related papers
- Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences [60.489682735061415]
We propose CHELA, which replaces state space models with short-long convolutions and implements linear attention in a divide-and-conquer manner.
Our experiments on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-06-12T12:12:38Z)
- Latte: Latent Attention for Linear Time Transformers [11.524573224123905]
We propose a probabilistic framework for attention.
Our method can be seamlessly integrated as a drop-in replacement for the standard attention mechanism.
The resulting Latte Transformer achieves performance comparable to standard attention and other state-of-the-art models.
arXiv Detail & Related papers (2024-02-27T13:54:48Z)
- FAST: Factorizable Attention for Speeding up Transformers [1.3637227185793512]
We present a linearly scaled attention mechanism that maintains the full representation of the attention matrix without compromising on sparsification.
Results indicate that our attention mechanism has a robust performance and holds significant promise for diverse applications where self-attention is used.
arXiv Detail & Related papers (2024-02-12T18:59:39Z)
- Easy attention: A simple attention mechanism for temporal predictions with transformers [2.172584429650463]
We show that the keys, queries and softmax are not necessary for obtaining the attention score required to capture long-term dependencies in temporal sequences.
Our proposed easy-attention method directly treats the attention scores as learnable parameters.
This approach produces excellent results when reconstructing and predicting the temporal dynamics of chaotic systems.
arXiv Detail & Related papers (2023-08-24T15:54:32Z)
- Flowformer: Linearizing Transformers with Conservation Flows [77.25101425464773]
Based on flow network theory, we linearize Transformers free from specific inductive biases.
By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions.
arXiv Detail & Related papers (2022-02-13T08:44:10Z)
- Alignment Attention by Matching Key and Query Distributions [48.93793773929006]
This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head.
It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention.
On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
arXiv Detail & Related papers (2021-10-25T00:54:57Z)
- SparseBERT: Rethinking the Importance Analysis in Self-attention [107.68072039537311]
Transformer-based models are popular for natural language processing (NLP) tasks due to their powerful capacity.
Attention map visualization of a pre-trained model is one direct method for understanding the self-attention mechanism.
We propose a Differentiable Attention Mask (DAM) algorithm, which can also be applied to guide the design of SparseBERT.
arXiv Detail & Related papers (2021-02-25T14:13:44Z)
- Attention that does not Explain Away [54.42960937271612]
Models based on the Transformer architecture have achieved better accuracy than those based on competing architectures for a large set of tasks.
A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances.
We propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the "explaining away" effect.
arXiv Detail & Related papers (2020-09-29T21:05:39Z)
- Untangling tradeoffs between recurrence and self-attention in neural networks [81.30894993852813]
We present a formal analysis of how self-attention affects gradient propagation in recurrent networks.
We prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies.
We propose a relevancy screening mechanism that allows for a scalable use of sparse self-attention with recurrence.
arXiv Detail & Related papers (2020-06-16T19:24:25Z)
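To make the relevancy-screening idea in the last entry concrete, here is a hedged sketch of top-k screening before attention: only the k past states with the highest query-key scores are attended to. The function name and the plain top-k rule are assumptions made for illustration, not the exact mechanism from that paper.
```python
# Hedged illustration of "relevancy screening": keep only the k past states with
# the highest query-key scores, then attend over that sparse subset.
import numpy as np

def topk_sparse_attention(q, K_past, V_past, k=8):
    # q: (d,) current query; K_past, V_past: (t, d) memories of past steps.
    scores = K_past @ q / np.sqrt(q.shape[-1])        # (t,) relevance scores
    keep = np.argsort(scores)[-k:]                    # indices of the k most relevant states
    w = np.exp(scores[keep] - scores[keep].max())     # softmax over the kept subset only
    w /= w.sum()
    return w @ V_past[keep]                           # sparse attention readout, shape (d,)

rng = np.random.default_rng(1)
t, d = 64, 16
q = rng.normal(size=d)
K_past, V_past = rng.normal(size=(t, d)), rng.normal(size=(t, d))
print(topk_sparse_attention(q, K_past, V_past).shape)  # (16,)
```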
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.