In-Context Learning Creates Task Vectors
- URL: http://arxiv.org/abs/2310.15916v1
- Date: Tue, 24 Oct 2023 15:17:14 GMT
- Title: In-Context Learning Creates Task Vectors
- Authors: Roee Hendel, Mor Geva, Amir Globerson
- Abstract summary: In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm.
Here we show that the functions learned by ICL often have a very simple structure.
We support the above claim via comprehensive experiments across a range of models and tasks.
- Score: 40.990432572831885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In-context learning (ICL) in Large Language Models (LLMs) has emerged as a
powerful new learning paradigm. However, its underlying mechanism is still not
well understood. In particular, it is challenging to map it to the "standard"
machine learning framework, where one uses a training set $S$ to find a
best-fitting function $f(x)$ in some hypothesis class. Here we make progress on
this problem by showing that the functions learned by ICL often have a very
simple structure: they correspond to the transformer LLM whose only inputs are
the query $x$ and a single "task vector" calculated from the training set.
Thus, ICL can be seen as compressing $S$ into a single task vector
$\boldsymbol{\theta}(S)$ and then using this task vector to modulate the
transformer to produce the output. We support the above claim via comprehensive
experiments across a range of models and tasks.
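The mechanism described in the abstract (compress the demonstrations $S$ into a task vector $\boldsymbol{\theta}(S)$, then use it to modulate a forward pass on the query alone) can be illustrated with simple activation patching. Below is a minimal sketch, not the paper's exact procedure: it assumes a GPT-2-style Hugging Face model, an arbitrarily chosen intermediate layer, and an illustrative "X -> Y" prompt format; the helper names are hypothetical.

```python
# Minimal sketch of the task-vector idea, assuming a GPT-2-style causal LM
# from Hugging Face `transformers`. Layer index and prompt format are
# illustrative choices, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any model exposing `model.transformer.h` blocks
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

LAYER = 6  # intermediate block whose last-token state serves as theta(S)

def task_vector(demonstrations: str, dummy_query: str) -> torch.Tensor:
    """Run the model on the demonstrations plus a dummy query and return the
    last-token hidden state after block LAYER as the task vector theta(S)."""
    ids = tok(demonstrations + dummy_query + " ->", return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER is index LAYER+1
    return out.hidden_states[LAYER + 1][0, -1]

def predict_with_theta(query: str, theta: torch.Tensor) -> str:
    """Run the model on the query alone, patching theta into the last-token
    state at block LAYER, and return the greedy next-token prediction."""
    ids = tok(query + " ->", return_tensors="pt")

    def patch(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[0, -1] = theta  # overwrite the last position with theta(S)
        return output

    handle = model.transformer.h[LAYER].register_forward_hook(patch)
    try:
        with torch.no_grad():
            logits = model(**ids).logits
    finally:
        handle.remove()
    return tok.decode([logits[0, -1].argmax().item()])

# Example: compress a country -> capital "training set" S into theta(S),
# then apply it to a new query without showing any demonstrations.
S = "France -> Paris\nJapan -> Tokyo\nItaly -> Rome\n"
theta = task_vector(S, "Spain")
print(predict_with_theta("Germany", theta))
```

In this framing, changing the demonstrations $S$ only changes $\boldsymbol{\theta}(S)$; the patched zero-shot pass on the query is otherwise identical, and the choice of layer is a hyperparameter.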
Related papers
- Pretrained transformer efficiently learns low-dimensional target functions in-context [40.77319247558742]
We show that a nonlinear transformer optimized by gradient descent learns $f_*$ in-context with a prompt length that depends only on the dimension $r$ of the target-function distribution.
Our result highlights the adaptivity of the pretrained transformer to low-dimensional structures of the function class, which enables sample-efficient ICL.
arXiv Detail & Related papers (2024-11-04T19:24:39Z) - IT$^3$: Idempotent Test-Time Training [95.78053599609044]
This paper introduces Idempotent Test-Time Training (IT$^3$), a novel approach to addressing the challenge of distribution shift.
IT$^3$ is based on the universal property of idempotence.
We demonstrate the versatility of our approach across various tasks, including corrupted image classification.
arXiv Detail & Related papers (2024-10-05T15:39:51Z) - Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z) - Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks [5.358878931933351]
We study the emergence of in-context learning and skill composition in a collection of modular arithmetic tasks.
Specifically, we consider a finite collection of linear modular functions $z = a \cdot x + b \cdot y \;\mathrm{mod}\; p$ labeled by the vector $(a, b) \in \mathbb{Z}_p^2$.
arXiv Detail & Related papers (2024-06-04T17:59:36Z) - Metalearning with Very Few Samples Per Task [19.78398372660794]
We consider a binary classification setting where tasks are related by a shared representation.
Here, the amount of data is measured in terms of the number of tasks $t$ that we need to see and the number of samples $n$ per task.
Our work also yields a characterization of distribution-free multitask learning and reductions between meta and multitask learning.
arXiv Detail & Related papers (2023-12-21T16:06:44Z) - Blessing of Class Diversity in Pre-training [54.335530406959435]
We prove that when the classes of the pre-training task are sufficiently diverse, pre-training can significantly improve the sample efficiency of downstream tasks.
Our proof relies on a vector-form Rademacher complexity chain rule for composite function classes and a modified self-concordance condition.
arXiv Detail & Related papers (2022-09-07T20:10:12Z) - On the Theory of Transfer Learning: The Importance of Task Diversity [114.656572506859]
We consider $t+1$ tasks parameterized by functions of the form $f_j \circ h$ in a general function class $\mathcal{F} \circ \mathcal{H}$.
We show that for diverse training tasks the sample complexity needed to learn the shared representation across the first $t$ training tasks scales as $C(\mathcal{H}) + t\, C(\mathcal{F})$.
arXiv Detail & Related papers (2020-06-20T20:33:59Z) - On the Modularity of Hypernetworks [103.1147622394852]
We show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of a standard neural network and an embedding method.
arXiv Detail & Related papers (2020-02-23T22:51:52Z)