Fastformer: Additive Attention Can Be All You Need
- URL: http://arxiv.org/abs/2108.09084v2
- Date: Mon, 23 Aug 2021 13:11:51 GMT
- Title: Fastformer: Additive Attention Can Be All You Need
- Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
- Abstract summary: We propose Fastformer, an efficient Transformer model based on additive attention.
In Fastformer, instead of modeling the pair-wise interactions between tokens, we first use an additive attention mechanism to model global contexts.
In this way, Fastformer achieves effective context modeling with linear complexity.
- Score: 51.79399904527525
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Transformer is a powerful model for text understanding. However, it is
inefficient due to its quadratic complexity with respect to the input sequence
length. Although many methods for Transformer acceleration exist, they are
still either inefficient on long sequences or not effective enough. In this
paper, we propose Fastformer, an efficient Transformer model based on additive
attention. In Fastformer, instead of modeling the pair-wise interactions
between tokens, we first use an additive attention mechanism to model global
contexts, and then further transform each token representation based on its
interaction with the global context representations. In this way, Fastformer
achieves effective context modeling with linear complexity. Extensive
experiments on five datasets show that Fastformer is much more efficient than
many existing Transformer models while achieving comparable or even better
long text modeling performance.
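
The abstract describes the mechanism only in words; the following is a minimal, single-head PyTorch sketch of how additive attention with global query and key vectors could be wired to keep cost linear in sequence length. The class name, layer names, and the final residual connection are illustrative assumptions for this sketch, not the authors' released implementation (which is multi-head and includes further details such as attention scaling and weight sharing).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttentionSketch(nn.Module):
    """Hypothetical single-head sketch of Fastformer-style additive attention."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.alpha = nn.Linear(dim, 1)  # additive-attention scores over queries
        self.beta = nn.Linear(dim, 1)   # additive-attention scores over keys
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # 1) Summarize all query vectors into one global query (additive attention).
        q_weights = F.softmax(self.alpha(q).squeeze(-1), dim=-1)   # (batch, seq_len)
        global_q = torch.einsum('bs,bsd->bd', q_weights, q)        # (batch, dim)

        # 2) Mix the global query into every key via element-wise product.
        p = k * global_q.unsqueeze(1)                               # (batch, seq_len, dim)

        # 3) Summarize the mixed keys into one global key.
        k_weights = F.softmax(self.beta(p).squeeze(-1), dim=-1)
        global_k = torch.einsum('bs,bsd->bd', k_weights, p)        # (batch, dim)

        # 4) Interact the global key with each value, then transform each token.
        u = v * global_k.unsqueeze(1)                               # (batch, seq_len, dim)

        # Residual connection on the query is an assumption of this sketch.
        return self.out(u) + q
```

Each step touches every token exactly once, so the overall cost is O(seq_len x dim) rather than the O(seq_len^2) of pair-wise self-attention, which is the linear-complexity property the abstract claims.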