The Evolution of Statistical Induction Heads: In-Context Learning Markov
Chains
- URL: http://arxiv.org/abs/2402.11004v1
- Date: Fri, 16 Feb 2024 18:28:36 GMT
- Title: The Evolution of Statistical Induction Heads: In-Context Learning Markov
Chains
- Authors: Benjamin L. Edelman, Ezra Edelman, Surbhi Goel, Eran Malach, Nikolaos
Tsilivis
- Abstract summary: We introduce a simple Markov Chain sequence modeling task to study how this in-context learning (ICL) capability emerges.
Transformers trained on this task form *statistical induction heads* which compute accurate next-token probabilities.
We show how successful learning results from the interaction between the transformer's layers, and uncover evidence that the presence of the simpler unigram solution may delay formation of the final bigram solution.
- Score: 28.41876902994335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have the ability to generate text that mimics patterns
in their inputs. We introduce a simple Markov Chain sequence modeling task in
order to study how this in-context learning (ICL) capability emerges. In our
setting, each example is sampled from a Markov chain drawn from a prior
distribution over Markov chains. Transformers trained on this task form
\emph{statistical induction heads} which compute accurate next-token
probabilities given the bigram statistics of the context. During the course of
training, models pass through multiple phases: after an initial stage in which
predictions are uniform, they learn to sub-optimally predict using in-context
single-token statistics (unigrams); then, there is a rapid phase transition to
the correct in-context bigram solution. We conduct an empirical and theoretical
investigation of this multi-phase process, showing how successful learning
results from the interaction between the transformer's layers, and uncovering
evidence that the presence of the simpler unigram solution may delay formation
of the final bigram solution. We examine how learning is affected by varying
the prior distribution over Markov chains, and consider the generalization of
our in-context learning of Markov chains (ICL-MC) task to $n$-grams for $n >
2$.
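To make the setup concrete, below is a minimal sketch (in Python/NumPy) of the ICL-MC data-generating process and of the two in-context estimators that the training phases correspond to: the suboptimal unigram solution and the correct bigram solution that the statistical induction head implements. The 3-state alphabet, Dirichlet(1) prior, and sequence lengths are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the ICL-MC task: sample a Markov chain from a prior over chains,
# generate a sequence from it, and compare in-context unigram vs. bigram
# next-token estimators. Hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_markov_chain(k=3, alpha=1.0):
    """Draw a k-state transition matrix with each row ~ Dirichlet(alpha)."""
    return rng.dirichlet(alpha * np.ones(k), size=k)

def sample_sequence(P, T=200):
    """Sample a length-T token sequence from transition matrix P."""
    k = P.shape[0]
    seq = [rng.integers(k)]
    for _ in range(T - 1):
        seq.append(rng.choice(k, p=P[seq[-1]]))
    return np.array(seq)

def unigram_predictor(context, k):
    """In-context unigram statistics: empirical token frequencies so far
    (the suboptimal intermediate solution)."""
    counts = np.bincount(context, minlength=k) + 1e-9
    return counts / counts.sum()

def bigram_predictor(context, k):
    """In-context bigram statistics: empirical next-token distribution
    conditioned on the current token (the statistical induction head's
    target)."""
    last = context[-1]
    counts = np.full(k, 1e-9)
    for prev, nxt in zip(context[:-1], context[1:]):
        if prev == last:
            counts[nxt] += 1
    return counts / counts.sum()

# With a long enough context, the bigram estimator tracks the true
# next-token distribution much more closely than the unigram one.
P = sample_markov_chain()
seq = sample_sequence(P)
context = seq[:-1]
true_next = P[context[-1]]
kl = lambda p, q: np.sum(p * np.log(p / q))
print("unigram KL:", kl(true_next, unigram_predictor(context, 3)))
print("bigram  KL:", kl(true_next, bigram_predictor(context, 3)))
```

Generalizing the bigram estimator to condition on the last $n-1$ tokens gives the analogous target for the $n$-gram version of the task mentioned in the abstract.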
Related papers
- Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles [23.134664392314264]
Tokenization is associated with many poorly understood shortcomings in language models (LMs).
This work studies how tokenization impacts model performance by analyzing and comparing models with their byte-level counterparts.
We develop a next-byte sampling algorithm that eliminates tokenization bias without requiring further training or optimization.
arXiv Detail & Related papers (2024-10-11T23:30:42Z) - Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z) - Autoregressive Speech Synthesis without Vector Quantization [135.4776759536272]
We present MELLE, a novel continuous-valued token-based language modeling approach for text-to-speech synthesis (TTS).
MELLE autoregressively generates continuous mel-spectrogram frames directly from the text condition.
arXiv Detail & Related papers (2024-07-11T14:36:53Z) - Ai-Sampler: Adversarial Learning of Markov kernels with involutive maps [28.229819253644862]
We propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing.
This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data.
arXiv Detail & Related papers (2024-06-04T17:00:14Z) - Toward a Theory of Tokenization in LLMs [26.516041872337887]
We study tokenization from a theoretical point of view by analyzing the behavior of transformers on simple data-generating processes.
We show that even the simplest unigram models (over tokens) learnt by transformers are able to model the probability of sequences drawn from $k^{\text{th}}$-order Markov sources near optimally.
arXiv Detail & Related papers (2024-04-12T09:01:14Z) - From Self-Attention to Markov Models: Unveiling the Dynamics of
Generative Transformers [41.82477691012942]
We study learning a 1-layer self-attention model from a set of prompts and associated output data.
We first establish a precise mapping between the self-attention mechanism and Markov models.
We characterize an intriguing winner-takes-all phenomenon where the generative process implemented by self-attention collapses into sampling a limited subset of tokens.
arXiv Detail & Related papers (2024-02-21T03:51:34Z) - Explaining Emergent In-Context Learning as Kernel Regression [61.57151500616111]
Large language models (LLMs) have initiated a paradigm shift in transfer learning.
In this paper, we investigate the reason why a transformer-based language model can accomplish in-context learning after pre-training.
We find that during ICL, the attention and hidden features in LLMs match the behavior of kernel regression.
arXiv Detail & Related papers (2023-05-22T06:45:02Z) - A Closer Look at Few-shot Classification Again [68.44963578735877]
Few-shot classification consists of a training phase and an adaptation phase.
We empirically prove that the training algorithm and the adaptation algorithm can be completely disentangled.
Our meta-analysis for each phase reveals several interesting insights that may help better understand key aspects of few-shot classification.
arXiv Detail & Related papers (2023-01-28T16:42:05Z) - An Explanation of In-context Learning as Implicit Bayesian Inference [117.19809377740188]
We study the role of the pretraining distribution on the emergence of in-context learning.
We prove that in-context learning occurs implicitly via Bayesian inference of the latent concept.
We empirically find that scaling model size improves in-context accuracy even when the pretraining loss is the same.
arXiv Detail & Related papers (2021-11-03T09:12:33Z) - Masked Language Modeling and the Distributional Hypothesis: Order Word
Matters Pre-training for Little [74.49773960145681]
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in NLP pipelines.
In this paper, we propose a different explanation: pre-trained models succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics.
Our results show that purely distributional information largely explains the success of pre-training, and underscore the importance of curating challenging evaluation datasets that require deeper linguistic knowledge.
arXiv Detail & Related papers (2021-04-14T06:30:36Z)