Understanding In-Context Learning in Transformers and LLMs by Learning
to Learn Discrete Functions
- URL: http://arxiv.org/abs/2310.03016v1
- Date: Wed, 4 Oct 2023 17:57:33 GMT
- Title: Understanding In-Context Learning in Transformers and LLMs by Learning
to Learn Discrete Functions
- Authors: Satwik Bhattamishra, Arkil Patel, Phil Blunsom, Varun Kanade
- Abstract summary: We show that Transformers can learn to implement two distinct algorithms to solve a single task.
We also show that extant Large Language Models (LLMs) can compete with nearest-neighbor baselines on prediction tasks.
- Score: 32.59746882017483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to understand the in-context learning phenomenon, recent works have
adopted a stylized experimental framework and demonstrated that Transformers
can learn gradient-based learning algorithms for various classes of real-valued
functions. However, the limitations of Transformers in implementing learning
algorithms, and their ability to learn other forms of algorithms are not well
understood. Additionally, the degree to which these capabilities are confined
to attention-based models is unclear. Furthermore, it remains to be seen
whether the insights derived from these stylized settings can be extrapolated
to pretrained Large Language Models (LLMs). In this work, we take a step
towards answering these questions by demonstrating the following: (a) On a
test-bed with a variety of Boolean function classes, we find that Transformers
can nearly match the optimal learning algorithm for 'simpler' tasks, while
their performance deteriorates on more 'complex' tasks. Additionally, we find
that certain attention-free models perform (almost) identically to Transformers
on a range of tasks. (b) When provided a teaching sequence, i.e. a set of
examples that uniquely identifies a function in a class, we show that
Transformers learn more sample-efficiently. Interestingly, our results show
that Transformers can learn to implement two distinct algorithms to solve a
single task, and can adaptively select the more sample-efficient algorithm
depending on the sequence of in-context examples. (c) Lastly, we show that
extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines
on prediction tasks that are guaranteed to not be in their training set.
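To make the stylized setting concrete, below is a minimal sketch of an in-context learning test-bed for one Boolean function class (conjunctions over a few input bits) together with the kind of nearest-neighbor baseline mentioned in part (c). The function class, prompt sizes, and Hamming-distance 1-NN rule are illustrative assumptions, not the paper's exact configuration, and the helper names (`sample_conjunction`, `make_prompt`, `nearest_neighbor_predict`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_conjunction(n_bits=8, k=3):
    """Sample a random conjunction over k of the n_bits inputs
    (one illustrative Boolean function class; the paper studies several)."""
    idx = rng.choice(n_bits, size=k, replace=False)
    return lambda x: int(all(x[i] == 1 for i in idx))

def make_prompt(f, n_examples=20, n_bits=8):
    """Build an in-context sequence of (x, f(x)) pairs plus a held-out query."""
    xs = rng.integers(0, 2, size=(n_examples + 1, n_bits))
    ys = np.array([f(x) for x in xs])
    return xs[:-1], ys[:-1], xs[-1], ys[-1]  # context, labels, query, target

def nearest_neighbor_predict(ctx_x, ctx_y, query):
    """1-nearest-neighbor baseline under Hamming distance."""
    dists = (ctx_x != query).sum(axis=1)
    return ctx_y[int(dists.argmin())]

# Estimate the baseline's accuracy over freshly sampled functions and prompts.
trials, hits = 2000, 0
for _ in range(trials):
    f = sample_conjunction()
    ctx_x, ctx_y, query, target = make_prompt(f)
    hits += int(nearest_neighbor_predict(ctx_x, ctx_y, query) == target)
print(f"1-NN baseline accuracy: {hits / trials:.3f}")
```

A Transformer (or an LLM prompted with the same (x, f(x)) pairs rendered as text) would be scored on the same query points, so the 1-NN number serves as the reference that part (c) compares against.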
Related papers
- Algorithmic Capabilities of Random Transformers [49.73113518329544]
We investigate what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized (see the sketch after this list).
We find that these random transformers can perform a wide range of meaningful algorithmic tasks.
Our results indicate that some algorithmic capabilities are present in transformers even before these models are trained.
arXiv Detail & Related papers (2024-10-06T06:04:23Z)
- In-Context Learning with Representations: Contextual Generalization of Trained Transformers [66.78052387054593]
In-context learning (ICL) refers to a capability of pretrained large language models, which can learn a new task given a few examples during inference.
This paper investigates the training dynamics of transformers by gradient descent through the lens of non-linear regression tasks.
arXiv Detail & Related papers (2024-08-19T16:47:46Z)
- Limits of Transformer Language Models on Learning to Compose Algorithms [77.2443883991608]
We evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks that require learning a composition of several discrete sub-tasks.
Our results indicate that compositional learning in state-of-the-art Transformer language models is highly sample inefficient.
arXiv Detail & Related papers (2024-02-08T16:23:29Z)
- Supervised Pretraining Can Learn In-Context Reinforcement Learning [96.62869749926415]
In this paper, we study the in-context learning capabilities of transformers in decision-making problems.
We introduce and study Decision-Pretrained Transformer (DPT), a supervised pretraining method where the transformer predicts an optimal action.
We find that the pretrained transformer can be used to solve a range of RL problems in-context, exhibiting both exploration online and conservatism offline.
arXiv Detail & Related papers (2023-06-26T17:58:50Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Transformers as Algorithms: Generalization and Implicit Model Selection in In-context Learning [23.677503557659705]
In-context learning (ICL) is a type of prompting where a transformer model operates on a sequence of examples and performs inference on-the-fly.
We treat the transformer model as a learning algorithm that can be specialized via training to implement, at inference time, another target algorithm.
We show that transformers can act as an adaptive learning algorithm and perform model selection across different hypothesis classes.
arXiv Detail & Related papers (2023-01-17T18:31:12Z)
- Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim at learning new tasks provided only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z)
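
For the "Algorithmic Capabilities of Random Transformers" entry above, the following is a minimal PyTorch-style sketch of what training only the embedding layers of an otherwise frozen, randomly initialized Transformer could look like. The architecture, dimensions, toy copy task, and treatment of the output head as an "embedding layer" are assumptions for illustration, not that paper's actual setup.

```python
import torch
import torch.nn as nn

# Assumed toy sizes; a randomly initialized Transformer body stays frozen
# while only the token embedding and output head receive gradient updates.
vocab, d_model, n_layers = 16, 64, 4
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=n_layers,
)
head = nn.Linear(d_model, vocab)

for p in encoder.parameters():  # freeze the random Transformer body
    p.requires_grad_(False)

optim = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-3)

def step(tokens, targets):
    """One training step; gradients flow through the frozen body, but only
    the embedding and head parameters are updated."""
    logits = head(encoder(embed(tokens)))
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Toy usage: learn to copy the input sequence (a stand-in algorithmic task).
tokens = torch.randint(0, vocab, (8, 12))
print(step(tokens, tokens))
```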