General-Purpose In-Context Learning by Meta-Learning Transformers
- URL: http://arxiv.org/abs/2212.04458v2
- Date: Tue, 9 Jan 2024 13:38:35 GMT
- Title: General-Purpose In-Context Learning by Meta-Learning Transformers
- Authors: Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz
- Abstract summary: We show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners.
We characterize transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all.
We propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose in-context learning algorithms.
- Score: 45.63069059498147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning requires system designers to specify aspects of the
learning pipeline, such as losses, architectures, and optimizers.
Meta-learning, or learning-to-learn, instead aims to learn those aspects, and
promises to unlock greater capabilities with less manual effort. One
particularly ambitious goal of meta-learning is to train general-purpose
in-context learning algorithms from scratch, using only black-box models with
minimal inductive bias. Such a model takes in training data, and produces
test-set predictions across a wide range of problems, without any explicit
definition of an inference model, training loss, or optimization algorithm. In
this paper we show that Transformers and other black-box models can be
meta-trained to act as general-purpose in-context learners. We characterize
transitions between algorithms that generalize, algorithms that memorize, and
algorithms that fail to meta-train at all, induced by changes in model size,
number of tasks, and meta-optimization. We further show that the capabilities
of meta-trained algorithms are bottlenecked by the accessible state size
(memory) determining the next prediction, unlike standard models which are
thought to be bottlenecked by parameter count. Finally, we propose practical
interventions such as biasing the training distribution that improve the
meta-training and meta-generalization of general-purpose in-context learning
algorithms.
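  As a rough illustration of the black-box setup the abstract describes, the sketch below meta-trains a small Transformer on a synthetic task distribution: each training sequence contains in-context (input, label) pairs from a sampled task plus query inputs with blanked labels, and the model is trained to predict the query labels directly, with no explicit inner-loop loss or optimizer. The architecture, task distribution, and hyperparameters are illustrative assumptions, not the authors' exact configuration.
```python
# Minimal sketch of black-box in-context meta-training (assumed setup, not the
# authors' exact configuration): a Transformer reads (x, y) context pairs from a
# sampled task plus query inputs with zeroed labels, and learns to output the
# query labels directly.
import torch
import torch.nn as nn


class InContextTransformer(nn.Module):
    def __init__(self, x_dim, y_dim, d_model=64, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(x_dim + y_dim, d_model)  # token = concat(x, y)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.readout = nn.Linear(d_model, y_dim)

    def forward(self, xs, ys_masked):
        # xs: (batch, seq, x_dim); ys_masked: (batch, seq, y_dim) with query labels zeroed.
        h = self.encoder(self.embed(torch.cat([xs, ys_masked], dim=-1)))
        return self.readout(h)  # one prediction per sequence position


def sample_tasks(batch, n_points, x_dim, y_dim):
    # Toy task distribution (illustrative): random noisy linear-regression problems.
    w = torch.randn(batch, x_dim, y_dim)
    xs = torch.randn(batch, n_points, x_dim)
    ys = xs @ w + 0.1 * torch.randn(batch, n_points, y_dim)
    return xs, ys


x_dim, y_dim, n_context, n_query = 8, 1, 16, 4
model = InContextTransformer(x_dim, y_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    xs, ys = sample_tasks(32, n_context + n_query, x_dim, y_dim)
    ys_masked = ys.clone()
    ys_masked[:, n_context:] = 0.0          # hide the labels of the query points
    preds = model(xs, ys_masked)
    loss = ((preds[:, n_context:] - ys[:, n_context:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```
  In this form, how much task information the learner can exploit is limited by the activations it can carry across positions rather than by its parameter count, which is the state-size (memory) bottleneck the abstract refers to.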
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z) - Learning an Explicit Hyperparameter Prediction Function Conditioned on
Tasks [62.63852372239708]
Meta-learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks.
Such a setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
arXiv Detail & Related papers (2021-07-06T04:05:08Z) - A Brief Summary of Interactions Between Meta-Learning and
Self-Supervised Learning [0.0]
This paper briefly reviews the connections between meta-learning and self-supervised learning.
We show that integrating meta-learning with self-supervised learning models best improves model generalization capability.
arXiv Detail & Related papers (2021-03-01T08:31:28Z) - B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic
Meta-Learning [2.9189409618561966]
We propose a Bayesian neural network based MAML algorithm, which we refer to as the B-SMALL algorithm.
We demonstrate the performance of B-SMALL on classification and regression tasks, and highlight that training a sparsifying BNN using MAML indeed reduces the parameter footprint of the model.
arXiv Detail & Related papers (2021-01-01T09:19:48Z) - Modeling and Optimization Trade-off in Meta-learning [23.381986209234164]
We introduce and rigorously define the trade-off between accurate modeling and ease of optimization in meta-learning.
Taking MAML as a representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression.
We also empirically solve the trade-off for meta-reinforcement learning benchmarks.
arXiv Detail & Related papers (2020-10-24T15:32:08Z) - A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning and Bayesian meta-learning framework.
arXiv Detail & Related papers (2020-04-17T03:11:08Z) - Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm that directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
arXiv Detail & Related papers (2020-04-12T09:05:47Z) - Unraveling Meta-Learning: Understanding Feature Representations for
Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z) - Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)