Simplicity Bias in Transformers and their Ability to Learn Sparse
Boolean Functions
- URL: http://arxiv.org/abs/2211.12316v2
- Date: Mon, 10 Jul 2023 17:15:23 GMT
- Title: Simplicity Bias in Transformers and their Ability to Learn Sparse
Boolean Functions
- Authors: Satwik Bhattamishra, Arkil Patel, Varun Kanade and Phil Blunsom
- Abstract summary: Recent works have found that Transformers struggle to model several formal languages when compared to recurrent models.
This raises the question of why Transformers perform well in practice and whether they have any properties that enable them to generalize better than recurrent models.
- Score: 29.461559919821802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the widespread success of Transformers on NLP tasks, recent works
have found that they struggle to model several formal languages when compared
to recurrent models. This raises the question of why Transformers perform well
in practice and whether they have any properties that enable them to generalize
better than recurrent models. In this work, we conduct an extensive empirical
study on Boolean functions to demonstrate the following: (i) Random
Transformers are relatively more biased towards functions of low sensitivity.
(ii) When trained on Boolean functions, both Transformers and LSTMs prioritize
learning functions of low sensitivity, with Transformers ultimately converging
to functions of lower sensitivity. (iii) On sparse Boolean functions which have
low sensitivity, we find that Transformers generalize near perfectly even in
the presence of noisy labels whereas LSTMs overfit and achieve poor
generalization accuracy. Overall, our results provide strong quantifiable
evidence that suggests differences in the inductive biases of Transformers and
recurrent models, which may help explain Transformers' effective generalization
performance despite relatively limited expressiveness.
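For concreteness, the sensitivity referred to above can be stated as follows: for a Boolean function f on n bits, the sensitivity of f at an input x is the number of coordinates whose flip changes f(x), and the average sensitivity is the mean of this quantity over all 2^n inputs. The following minimal sketch (illustrative only, not the authors' code) computes average sensitivity by brute force; the helper parity_of_first_k is a hypothetical stand-in for a sparse Boolean function that depends on only k of the n inputs.

    from itertools import product

    def average_sensitivity(f, n):
        # Brute-force average sensitivity of f over all 2^n Boolean inputs.
        total = 0
        for x in product([0, 1], repeat=n):
            for i in range(n):                       # flip each coordinate in turn
                y = list(x)
                y[i] ^= 1
                if f(tuple(y)) != f(x):              # count coordinates where f changes
                    total += 1
        return total / 2 ** n

    def parity_of_first_k(k):
        # A hypothetical sparse Boolean function: parity of the first k bits only.
        return lambda x: sum(x[:k]) % 2

    n = 8
    print(average_sensitivity(parity_of_first_k(3), n))   # 3.0 - depends on only 3 of 8 bits
    print(average_sensitivity(lambda x: sum(x) % 2, n))   # 8.0 - full parity is maximally sensitive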
Related papers
- In-Context Learning with Representations: Contextual Generalization of Trained Transformers [66.78052387054593]
In-context learning (ICL) refers to a capability of pretrained large language models, which can learn a new task given a few examples during inference.
This paper investigates the training dynamics of transformers by gradient descent through the lens of non-linear regression tasks.
arXiv Detail & Related papers (2024-08-19T16:47:46Z)
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms such as low-rank computation achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
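As a rough illustration of the magnitude-based pruning mentioned in this summary (a generic sketch of ours, not the paper's algorithm or results), such pruning keeps only the largest-magnitude entries of a weight matrix:

    import numpy as np

    def magnitude_prune(w, sparsity=0.5):
        # Zero out the smallest-magnitude entries of w, keeping roughly a (1 - sparsity) fraction.
        k = int(np.ceil(sparsity * w.size))                    # number of entries to drop
        if k == 0:
            return w.copy()
        threshold = np.sort(np.abs(w), axis=None)[k - 1]       # k-th smallest magnitude
        return np.where(np.abs(w) > threshold, w, 0.0)

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    print(magnitude_prune(w, sparsity=0.75))                   # about 4 of 16 entries survive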
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
- Simplicity Bias of Transformers to Learn Low Sensitivity Functions [19.898451497341714]
Transformers achieve state-of-the-art accuracy and robustness across many tasks.
An understanding of the inductive biases that they have and how those biases are different from other neural network architectures remains elusive.
arXiv Detail & Related papers (2024-03-11T17:12:09Z)
- Transformers are Expressive, But Are They Expressive Enough for Regression? [38.369337945109855]
We show that Transformers struggle to reliably approximate smooth functions, relying on piecewise constant approximations with sizable intervals.
By shedding light on these challenges, we advocate a refined understanding of Transformers' capabilities.
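To make the piecewise constant picture concrete (an illustrative sketch of ours, not the paper's construction), replacing a smooth function by a constant on each of a few wide intervals leaves a noticeable worst-case error that only shrinks as the intervals get narrow:

    import numpy as np

    def piecewise_constant_error(f, num_intervals, grid=10_000):
        # Max error on [0, 1] when f is replaced by its midpoint value on each interval.
        x = np.linspace(0.0, 1.0, grid)
        edges = np.linspace(0.0, 1.0, num_intervals + 1)
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, num_intervals - 1)
        midpoints = (edges[:-1] + edges[1:]) / 2
        approx = f(midpoints[idx])
        return np.max(np.abs(f(x) - approx))

    f = lambda x: np.sin(2 * np.pi * x)
    for m in (4, 16, 64):
        print(m, piecewise_constant_error(f, m))   # error drops only as intervals get narrower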
arXiv Detail & Related papers (2024-02-23T18:12:53Z)
- Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators? [37.820617032391404]
We show that a single layer of self-attention with low-rank weight matrices possesses the capability to perfectly capture the context of an entire input sequence.
One-layer and single-head Transformers have a memorization capacity for finite samples, and that Transformers consisting of one self-attention layer with two feed-forward neural networks are universal approximators for continuous permutation equivariant functions on a compact domain.
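A minimal sketch, under simplified assumptions of our own rather than the paper's construction, of a single self-attention head whose query and key projections are rank-1 weight matrices:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def low_rank_attention(x, u_q, v_q, u_k, v_k):
        # One self-attention head with rank-1 projections W_Q = u_q v_q^T and W_K = u_k v_k^T.
        q = x @ np.outer(u_q, v_q)                 # (seq, d) queries through a rank-1 projection
        k = x @ np.outer(u_k, v_k)                 # (seq, d) keys through a rank-1 projection
        scores = q @ k.T / np.sqrt(x.shape[-1])
        return softmax(scores) @ x                 # identity value projection, for simplicity

    d, seq = 8, 5
    rng = np.random.default_rng(1)
    x = rng.normal(size=(seq, d))
    u_q, v_q, u_k, v_k = (rng.normal(size=d) for _ in range(4))
    print(low_rank_attention(x, u_q, v_q, u_k, v_k).shape)     # (5, 8)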
arXiv Detail & Related papers (2023-07-26T08:07:37Z)
- Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input [50.83356836818667]
We study the approximation and estimation ability of Transformers as sequence-to-sequence functions with infinite dimensional inputs.
Our theoretical results support the practical success of Transformers for high dimensional data.
arXiv Detail & Related papers (2023-05-30T02:44:49Z)
- Your Transformer May Not be as Powerful as You Expect [88.11364619182773]
We mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence functions.
We present a negative result by showing there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is.
We develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions.
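For context, a generic relative-positional-encoding (RPE) attention layer, sketched below under simplified assumptions in the spirit of T5-style position biases rather than the URPE module proposed in the paper, scores token pairs using content plus a bias that depends only on the offset i - j:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def rpe_attention(x, w_q, w_k, w_v, rel_bias):
        # Attention whose logits depend on content plus a bias indexed by the offset i - j.
        seq, d = x.shape
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        offsets = np.arange(seq)[:, None] - np.arange(seq)[None, :]   # i - j in [-(seq-1), seq-1]
        scores = q @ k.T / np.sqrt(d) + rel_bias[offsets + seq - 1]   # add relative-position bias
        return softmax(scores) @ v

    seq, d = 6, 4
    rng = np.random.default_rng(2)
    x = rng.normal(size=(seq, d))
    w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
    rel_bias = rng.normal(size=2 * seq - 1)                            # one bias per possible offset
    print(rpe_attention(x, w_q, w_k, w_v, rel_bias).shape)             # (6, 4)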
arXiv Detail & Related papers (2022-05-26T14:51:30Z)
- Learning Bounded Context-Free-Grammar via LSTM and the Transformer: Difference and Explanations [51.77000472945441]
Long Short-Term Memory (LSTM) and Transformers are two popular neural architectures used for natural language processing tasks.
In practice, it is often observed that Transformer models have better representation power than LSTM.
We study such practical differences between LSTM and Transformer and propose an explanation based on their latent space decomposition patterns.
arXiv Detail & Related papers (2021-12-16T19:56:44Z)
- Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.