Variable-Shot Adaptation for Online Meta-Learning
- URL: http://arxiv.org/abs/2012.07769v1
- Date: Mon, 14 Dec 2020 18:05:24 GMT
- Title: Variable-Shot Adaptation for Online Meta-Learning
- Authors: Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine
- Abstract summary: We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and achieves greater cumulative performance than standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
- Score: 123.47725004094472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot meta-learning methods consider the problem of learning new tasks
from a small, fixed number of examples, by meta-learning across static data
from a set of previous tasks. However, in many real world settings, it is more
natural to view the problem as one of minimizing the total amount of
supervision: both the number of examples needed to learn a new task and the
amount of data needed for meta-learning. Such a formulation can be studied in a
sequential learning setting, where tasks are presented in sequence. When
studying meta-learning in this online setting, a critical question arises: can
meta-learning improve over the sample complexity and regret of standard
empirical risk minimization methods, when considering both meta-training and
adaptation together? The answer is particularly non-obvious for meta-learning
algorithms with complex bi-level optimizations that may demand large amounts of
meta-training data. To answer this question, we extend previous meta-learning
algorithms to handle the variable-shot settings that naturally arise in
sequential learning: from many-shot learning at the start, to zero-shot
learning towards the end. On sequential learning problems, we find that
meta-learning solves the full task set with fewer overall labels and achieves
greater cumulative performance, compared to standard supervised methods. These
results suggest that meta-learning is an important ingredient for building
learning systems that continuously learn and improve over a sequence of
problems.
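As a concrete illustration of the variable-shot protocol described above, here is a minimal, self-contained Python toy. All names (MetaLearner, adapt, meta_update) are hypothetical and the model is a scalar mean estimator, not the paper's algorithm; the point is only the mechanics: each task queries labels until adaptation is good enough, the meta-learned prior improves across tasks, and later tasks therefore need fewer (eventually zero) labels. The stopping rule peeks at the true task parameter purely to keep the toy self-contained; a real system would check a held-out validation set instead.

```python
import random

class MetaLearner:
    """Toy stand-in: the 'meta-knowledge' is a running estimate of the task prior."""
    def __init__(self):
        self.prior = 0.0   # meta-learned initialization (a single scalar here)
        self.seen = 0

    def adapt(self, labels):
        # Shrinkage estimate: treat the prior as one pseudo-label.
        if not labels:
            return self.prior               # zero-shot: rely on the prior alone
        return (self.prior + sum(labels)) / (1 + len(labels))

    def meta_update(self, solution):
        # Outer loop: fold the solved task back into the prior (running mean).
        self.seen += 1
        self.prior += (solution - self.prior) / self.seen

rng = random.Random(0)
learner, total_labels = MetaLearner(), 0

for t in range(50):
    true_mean = rng.gauss(0.5, 0.1)         # a new task in the sequence
    labels = []
    # Variable-shot inner loop: query labels only until adaptation is good
    # enough. (Peeking at true_mean keeps the toy self-contained; a real
    # system would use a validation set here.)
    while abs(learner.adapt(labels) - true_mean) > 0.05 and len(labels) < 20:
        labels.append(true_mean + rng.gauss(0, 0.1))   # one more queried label
    total_labels += len(labels)
    learner.meta_update(learner.adapt(labels))

print("total labels used across 50 tasks:", total_labels)  # far below 50 * 20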
Related papers
- On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta-learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such a learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks.
This setting ensures that the meta-learned methodology can flexibly fit diverse query tasks; a toy sketch follows this entry.
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
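To make the idea of an explicit, shared hyperparameter prediction function concrete, here is a hedged Python sketch. Everything in it is an illustrative assumption rather than the paper's method: the "methodology" is reduced to predicting a single ridge penalty from cheap task statistics, and plain random search stands in for whatever meta-optimizer the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Each task is a small ridge-regression problem with its own noise level.
    w = rng.normal(size=3)
    noise = rng.uniform(0.05, 0.5)
    X = rng.normal(size=(30, 3))
    y = X @ w + noise * rng.normal(size=30)
    return X, y

def task_features(X, y):
    # Cheap task descriptor: a bias term plus residual variance of a plain fit.
    w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.array([1.0, np.var(y - X @ w_ls)])

def predict_penalty(theta, feats):
    # The shared, explicit hyperparameter prediction function: features -> penalty.
    return np.exp(theta @ feats)            # exp keeps the penalty positive

def task_loss(lmbda, X, y):
    # Fit ridge regression on half of the task, evaluate on the other half.
    Xs, ys, Xq, yq = X[:15], y[:15], X[15:], y[15:]
    w = np.linalg.solve(Xs.T @ Xs + lmbda * np.eye(3), Xs.T @ ys)
    return np.mean((Xq @ w - yq) ** 2)

# Meta-train theta by plain random search over a batch of training tasks.
tasks = [make_task() for _ in range(50)]
best_theta, best_loss = None, np.inf
for _ in range(200):
    theta = rng.normal(size=2)
    loss = np.mean([task_loss(predict_penalty(theta, task_features(X, y)), X, y)
                    for X, y in tasks])
    if loss < best_loss:
        best_theta, best_loss = theta, loss

# At meta-test time a brand-new task gets its hyperparameter zero-shot.
X, y = make_task()
print("predicted ridge penalty:", predict_penalty(best_theta, task_features(X, y)))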
- Meta-Meta Classification for One-Shot Learning [11.27833234287093]
We present a new approach, called meta-meta classification, to learning in small-data settings.
In this approach, one uses a large set of learning problems to design an ensemble of learners, where each learner has high bias and low variance.
We evaluate the approach on a one-shot, one-class-versus-all classification task and show that it outperforms both traditional meta-learning and ensembling approaches; a toy sketch of the ensemble-plus-router idea follows this entry.
arXiv Detail & Related papers (2020-04-17T07:05:03Z)
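The ensemble idea above lends itself to a small toy: many deliberately biased learners plus a router (the "meta-meta classifier") trained across problems to pick which learner to apply. The construction below is hypothetical and far simpler than the paper's (binary problems, two one-feature learners, a nearest-centroid router), but it shows the division of labor.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_problem():
    axis = int(rng.integers(2))       # hidden: which feature carries the label
    X = rng.normal(size=(20, 2))
    X[:, axis] *= 3.0                 # the informative feature has larger spread
    y = (X[:, axis] > 0).astype(int)
    return X, y

# The ensemble: learner k predicts from feature k alone (high bias, low variance).
learners = [lambda X, k=k: (X[:, k] > 0).astype(int) for k in range(2)]

def descriptor(X):
    return X.std(axis=0)              # cheap problem-level features

# "Meta-meta" phase: across many problems, record which learner wins, then
# fit a trivial router: nearest class centroid over problem descriptors.
records = []
for _ in range(200):
    X, y = make_problem()
    accs = [np.mean(f(X) == y) for f in learners]
    records.append((descriptor(X), int(np.argmax(accs))))

centroids = [np.mean([d for d, k in records if k == j], axis=0) for j in range(2)]

def route(X):
    d = descriptor(X)
    return int(np.argmin([np.linalg.norm(d - c) for c in centroids]))

# A new small-data problem: the router picks a learner, no per-problem fitting.
X, y = make_problem()
print("routed learner accuracy:", np.mean(learners[route(X)](X) == y))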
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and Bayesian meta-learning frameworks.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error [50.83356836818667]
We develop a novel meta-regularization objective using a cyclical annealing schedule and a maximum mean discrepancy (MMD) criterion; a sketch of both ingredients follows this entry.
The experimental results show that our approach substantially outperforms standard meta-learning algorithms.
arXiv Detail & Related papers (2020-03-04T04:43:16Z)
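The two ingredients named in this entry, a cyclical annealing schedule and an MMD criterion, are standard enough to sketch in isolation. The Python below shows a common shape for each; how the paper combines them into its meta-regularization objective is not reproduced here.

```python
import numpy as np

def cyclical_beta(step, cycle_len=1000, ramp_frac=0.5):
    """Regularization weight that ramps linearly from 0 to 1 over the first
    half of each cycle, then holds at 1 (a common cyclical annealing shape)."""
    pos = (step % cycle_len) / cycle_len
    return min(pos / ramp_frac, 1.0)

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y (RBF kernel)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(64, 4))
Y = rng.normal(0.5, 1.0, size=(64, 4))

# A regularized meta-objective would then look like: task_loss + beta * mmd.
for step in (0, 250, 500, 900):
    print(step, round(cyclical_beta(step), 2), round(mmd_rbf(X, Y), 4))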
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features; a toy sketch of the shared-subspace idea follows this entry.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
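The shared-linear-features setting is easy to simulate. In the toy below, every task's parameter lies in a common low-dimensional subspace; stacking crude per-task least-squares estimates and taking their top singular directions recovers that subspace, after which a new task needs only a handful of samples. This is a sketch in the spirit of the entry, not the paper's algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 20, 2
B_true, _ = np.linalg.qr(rng.normal(size=(d, r)))    # shared low-dim subspace

def make_task(n):
    theta = B_true @ rng.normal(size=r)              # task parameter lies in it
    X = rng.normal(size=(n, d))
    y = X @ theta + 0.1 * rng.normal(size=n)
    return X, y, theta

# Meta-training: crude per-task least-squares estimates reveal the subspace.
W_hat = []
for _ in range(100):
    X, y, _ = make_task(40)
    W_hat.append(np.linalg.lstsq(X, y, rcond=None)[0])
U, _, _ = np.linalg.svd(np.array(W_hat).T, full_matrices=False)
B_hat = U[:, :r]                                     # estimated representation

# A new task with only 6 samples: fit r=2 coefficients instead of d=20.
X, y, theta = make_task(6)
w_low, *_ = np.linalg.lstsq(X @ B_hat, y, rcond=None)
theta_meta = B_hat @ w_low
theta_plain, *_ = np.linalg.lstsq(X, y, rcond=None)  # min-norm baseline fit

print("error with learned features:", np.linalg.norm(theta_meta - theta))
print("error fitting all dims     :", np.linalg.norm(theta_plain - theta))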
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time compared to training a model on the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)