An Information-Theoretic Analysis of In-Context Learning
- URL: http://arxiv.org/abs/2401.15530v1
- Date: Sun, 28 Jan 2024 00:36:44 GMT
- Title: An Information-Theoretic Analysis of In-Context Learning
- Authors: Hong Jun Jeon, Jason D. Lee, Qi Lei, Benjamin Van Roy
- Abstract summary: We introduce new information-theoretic tools that lead to an elegant and very general decomposition of error into three components: irreducible error, meta-learning error, and intra-task error.
Our theoretical results characterize how error decays with both the number of training sequences and sequence length.
- Score: 67.62099509406173
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous theoretical results pertaining to meta-learning on sequences build
on contrived assumptions and are somewhat convoluted. We introduce new
information-theoretic tools that lead to an elegant and very general
decomposition of error into three components: irreducible error, meta-learning
error, and intra-task error. These tools unify analyses across many
meta-learning challenges. To illustrate, we apply them to establish new results
about in-context learning with transformers. Our theoretical results
characterize how error decays with both the number of training sequences and
sequence length. Our results are very general; for example, they avoid
contrived mixing time assumptions made by all prior results that establish
decay of error with sequence length.
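As a rough illustration (the notation here is assumed for exposition, not taken from the paper), the three-component decomposition described above can be sketched as an additive split of expected error over N training sequences of length T:

```latex
% Sketch only: \mathcal{L}(N, T) denotes expected error with N training
% sequences of length T (assumed notation, not the paper's).
\mathcal{L}(N, T)
  \;=\; \underbrace{\mathcal{L}^{*}}_{\text{irreducible error}}
  \;+\; \underbrace{\varepsilon_{\mathrm{meta}}(N)}_{\text{meta-learning error}}
  \;+\; \underbrace{\varepsilon_{\mathrm{intra}}(T)}_{\text{intra-task error}}
```

Under this reading, the meta-learning term decays as the number of training sequences N grows, while the intra-task term decays with sequence length T, matching the abstract's claim that error decays in both quantities.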
Related papers
- Implicit biases in multitask and continual learning from a backward error analysis perspective [5.710971447109951]
We compute implicit training biases in multitask and continual learning settings for neural networks trained with gradient descent.
We derive modified losses that are implicitly minimized during training.
arXiv Detail & Related papers (2023-11-01T02:37:32Z)
- Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning [70.52689048213398]
This paper studies the performance of overfitted meta-learning under a linear regression model with Gaussian features.
We find new and interesting properties that do not exist in single-task linear regression.
Our analysis suggests that benign overfitting is more significant and easier to observe when the noise and the diversity/fluctuation of the ground truth of each training task are large.
arXiv Detail & Related papers (2023-04-09T20:36:13Z)
- Generalization Analysis for Contrastive Representation Learning [80.89690821916653]
Existing generalization error bounds depend linearly on the number $k$ of negative examples.
We establish novel generalization bounds for contrastive learning which do not depend on $k$, up to logarithmic terms.
arXiv Detail & Related papers (2023-02-24T01:03:56Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- What causes the test error? Going beyond bias-variance via ANOVA [21.359033212191218]
Modern machine learning methods are often overparametrized, allowing adaptation to the data at a fine level.
Recent work aimed to understand in greater depth why overparametrization is helpful for generalization.
We propose using the analysis of variance (ANOVA) to decompose the variance in the test error in a symmetric way.
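A symmetric ANOVA decomposition of this kind can be sketched as follows; the two-factor form and the symbols below are illustrative assumptions, not the paper's exact formulation. Writing e for the test error, D for the training-data draw, and I for the initialization:

```latex
% Sketch: a two-factor symmetric ANOVA decomposition of test-error variance
% over training data D and initialization I (illustrative notation).
\mathrm{Var}(e)
  \;=\; \underbrace{\mathrm{Var}_{D}\!\big(\mathbb{E}_{I}[\,e \mid D\,]\big)}_{\text{main effect of data}}
  \;+\; \underbrace{\mathrm{Var}_{I}\!\big(\mathbb{E}_{D}[\,e \mid I\,]\big)}_{\text{main effect of init}}
  \;+\; \underbrace{V_{D,I}}_{\text{interaction term}}
```

Here V_{D,I} is the residual interaction component; the decomposition is symmetric in the sense that neither randomness source is conditioned on before the other.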
arXiv Detail & Related papers (2020-10-11T05:21:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.