Do Efficient Transformers Really Save Computation?
- URL: http://arxiv.org/abs/2402.13934v1
- Date: Wed, 21 Feb 2024 17:00:56 GMT
- Title: Do Efficient Transformers Really Save Computation?
- Authors: Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen
Feng, Qiwei Ye, Di He, Liwei Wang
- Abstract summary: We focus on the capabilities and limitations of efficient Transformers, specifically the Sparse Transformer and the Linear Transformer.
Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size.
We identify a class of DP problems for which these models can be more efficient than the standard Transformer.
- Score: 34.15764596496696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As transformer-based language models are trained on increasingly large
datasets and with vast numbers of parameters, finding more efficient
alternatives to the standard Transformer has become very valuable. While many
efficient Transformers and Transformer alternatives have been proposed, none
provide theoretical guarantees that they are a suitable replacement for the
standard Transformer. This makes it challenging to identify when to use a
specific model and what directions to prioritize for further investigation. In
this paper, we aim to understand the capabilities and limitations of efficient
Transformers, specifically the Sparse Transformer and the Linear Transformer.
We focus on their reasoning capability as exhibited by Chain-of-Thought (CoT)
prompts and follow previous works to model them as Dynamic Programming (DP)
problems. Our results show that while these models are expressive enough to
solve general DP tasks, contrary to expectations, they require a model size
that scales with the problem size. Nonetheless, we identify a class of DP
problems for which these models can be more efficient than the standard
Transformer. We confirm our theoretical results through experiments on
representative DP tasks, adding to the understanding of efficient Transformers'
practical strengths and weaknesses.
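To make the efficiency contrast concrete, here is a minimal numpy sketch of the two attention variants the paper studies: a Sparse-Transformer-style local attention pattern and a Linear-Transformer-style kernelized attention. It illustrates only the general mechanisms; the window size, feature map, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of standard, sparse, and linear attention (single head,
# unbatched). Window size and feature map below are illustrative choices.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Full attention: the n x n score matrix costs O(n^2) time and memory.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def sparse_attention(Q, K, V, window=4):
    # Sparse-Transformer-style local pattern: each query attends only to a
    # causal window of keys (real variants add strided/global connections).
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    mask = np.full((n, n), -np.inf)
    for i in range(n):
        mask[i, max(0, i - window + 1):i + 1] = 0.0
    return softmax(scores + mask) @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Linear-Transformer-style attention: replace softmax(Q K^T) by
    # phi(Q) phi(K)^T and reassociate, giving cost linear in n.
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                     # (d, d) summary of keys and values
    Z = Qf @ Kf.sum(axis=0)           # per-query normalizer, shape (n,)
    return (Qf @ KV) / Z[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 8, 16))
    for fn in (standard_attention, sparse_attention, linear_attention):
        print(fn.__name__, fn(Q, K, V).shape)   # each prints (8, 16)
```

The point of the contrast: standard attention materializes an n x n score matrix, the sparse variant restricts each query to a window of keys, and the linear variant reassociates phi(Q) (phi(K)^T V) so the cost grows linearly in the sequence length.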
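The Dynamic Programming framing can likewise be made concrete with a toy task. The sketch below solves longest increasing subsequence with the classic O(n^2) DP and records the per-cell updates, i.e. the intermediate states a Chain-of-Thought solution would write out step by step. LIS is only an illustrative choice of DP problem; the abstract above does not name the specific tasks used in the experiments.

```python
# Toy DP task: longest increasing subsequence (LIS), O(n^2) table filling.
# The recorded per-cell values play the role of the intermediate steps a
# Chain-of-Thought answer would spell out.
def lis_length(a):
    n = len(a)
    dp = [1] * n                      # dp[i] = LIS length ending at i
    steps = []
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
        steps.append(f"dp[{i}]={dp[i]}")
    return max(dp), steps

best, chain = lis_length([3, 1, 4, 1, 5, 9, 2, 6])
print(best)               # 4, e.g. the subsequence 3, 4, 5, 6
print(" ".join(chain))    # dp[0]=1 dp[1]=1 dp[2]=2 ... dp[7]=4
```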
Related papers
- Shrinking the Giant: Quasi-Weightless Transformers for Low Energy Inference [0.30104001512119216]
Building models with fast and energy-efficient inference is imperative to enable a variety of transformer-based applications.
We build on an approach for learning LUT networks directly via an Extended Finite Difference method.
This allows for a computational and energy-efficient inference solution for transformer-based models.
arXiv Detail & Related papers (2024-11-04T05:38:56Z)
- MoEUT: Mixture-of-Experts Universal Transformers [75.96744719516813]
Universal Transformers (UTs) have advantages over standard Transformers in learning compositional generalizations.
Layer sharing drastically reduces the parameter count compared to the non-shared model with the same dimensionality (a minimal layer-sharing sketch appears after this list).
No previous work has succeeded in proposing a shared-layer Transformer design that is competitive in parameter count-dominated tasks such as language modeling.
arXiv Detail & Related papers (2024-05-25T03:24:32Z)
- Repeat After Me: Transformers are Better than State Space Models at Copying [53.47717661441142]
We show that while generalized state space models are promising in terms of inference-time efficiency, they are limited compared to transformer models on tasks that require copying from the input context.
arXiv Detail & Related papers (2024-02-01T21:44:11Z)
- Your Transformer May Not be as Powerful as You Expect [88.11364619182773]
We mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence functions.
We present a negative result by showing there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is.
We develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions.
arXiv Detail & Related papers (2022-05-26T14:51:30Z)
- Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks [126.33843752332139]
We introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed LW-Transformer.
We apply LW-Transformer to a set of Transformer-based networks, and quantitatively measure them on three vision-and-language tasks and six benchmark datasets.
Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks.
arXiv Detail & Related papers (2022-04-16T11:30:26Z)
- Are Transformers More Robust? Towards Exact Robustness Verification for Transformers [3.2259574483835673]
We study the robustness of Transformers, a key characteristic, since low robustness may raise safety concerns.
Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem.
We then conduct experiments on a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of more conventional Multi-Layer Perceptron (MLP) networks.
arXiv Detail & Related papers (2022-02-08T15:27:33Z)
- Sparse is Enough in Scaling Transformers [12.561317511514469]
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study become out of reach.
We propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer.
arXiv Detail & Related papers (2021-11-24T19:53:46Z)
- Scalable Transformers for Neural Machine Translation [86.4530299266897]
The Transformer has been widely adopted in Neural Machine Translation (NMT) because of its large capacity and parallel training of sequence generation.
We propose novel Scalable Transformers, which naturally contain sub-Transformers of different scales and share parameters.
A three-stage training scheme is proposed to tackle the difficulty of training the Scalable Transformers.
arXiv Detail & Related papers (2021-06-04T04:04:10Z)
- AutoTrans: Automating Transformer Design via Reinforced Architecture Search [52.48985245743108]
This paper empirically explores how to set layer normalization, whether to scale, the number of layers, the number of heads, the activation function, etc., so that one can obtain a Transformer architecture that better suits the tasks at hand.
Experiments on CoNLL03, Multi-30k, IWSLT14, and WMT-14 show that the searched Transformer model can outperform the standard Transformer.
arXiv Detail & Related papers (2020-09-04T08:46:22Z)
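As the MoEUT entry above notes, layer sharing keeps the parameter count independent of depth. The numpy sketch below shows that idea in its simplest Universal-Transformer-style form, with a single residual MLP standing in for a full Transformer block; it is an illustration only, not the MoEUT architecture, which additionally routes tokens through mixture-of-experts layers.

```python
# Minimal layer-sharing sketch in the Universal Transformer spirit: one
# block's weights are reused at every depth step, so parameters do not grow
# with depth. A residual ReLU MLP stands in for a real Transformer block.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens, n_steps = 32, 10, 6

W1 = rng.normal(scale=0.1, size=(d_model, 4 * d_model))  # shared weights
W2 = rng.normal(scale=0.1, size=(4 * d_model, d_model))

def shared_block(x):
    return x + np.maximum(x @ W1, 0.0) @ W2   # residual MLP

x = rng.normal(size=(n_tokens, d_model))
for _ in range(n_steps):          # depth = n_steps, but only one set of weights
    x = shared_block(x)

shared = W1.size + W2.size
unshared = n_steps * shared       # what a non-shared stack of the same depth costs
print(shared, unshared)           # 8192 49152
```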
This list is automatically generated from the titles and abstracts of the papers on this site.