Meta-Learning with Neural Tangent Kernels
- URL: http://arxiv.org/abs/2102.03909v2
- Date: Tue, 9 Feb 2021 02:28:15 GMT
- Title: Meta-Learning with Neural Tangent Kernels
- Authors: Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
- Abstract summary: We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
- Score: 58.06951624702086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model Agnostic Meta-Learning (MAML) has emerged as a standard framework for
meta-learning, where a meta-model is learned with the ability of fast adapting
to new tasks. However, as a double-looped optimization problem, MAML needs to
differentiate through the whole inner-loop optimization path for every
outer-loop training step, which may lead to both computational inefficiency and
sub-optimal solutions. In this paper, we generalize MAML to allow meta-learning
to be defined in function spaces, and propose the first meta-learning paradigm
in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's
Neural Tangent Kernel (NTK). Within this paradigm, we introduce two
meta-learning algorithms in the RKHS, which no longer need a sub-optimal
iterative inner-loop adaptation as in the MAML framework. We achieve this goal
by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS;
and 2) solving the adaptation analytically based on the NTK theory. Extensive
experimental studies demonstrate advantages of our paradigm in both efficiency
and quality of solutions compared to related meta-learning algorithms. Another
interesting feature of our proposed methods is that they are more robust to
adversarial attacks and out-of-distribution adaptation than popular baselines,
as demonstrated in our experiments.
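As a rough illustration of point 2) above (solving the adaptation analytically via the NTK), the sketch below replaces MAML's iterative inner loop with a closed-form kernel-ridge solve in the RKHS induced by an empirical NTK. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names, the plain-NumPy setting, the ridge parameter lam, and the random Jacobians standing in for the meta-model's Jacobians are all illustrative.

```python
import numpy as np

def empirical_ntk(jac_a, jac_b):
    """Empirical NTK between two batches: K = J_a @ J_b^T, where each row of J
    is the Jacobian of the meta-model's (scalar) output w.r.t. its parameters."""
    return jac_a @ jac_b.T

def adapt_predict(jac_support, y_support, f_support, jac_query, f_query, lam=1e-3):
    """Closed-form task adaptation: kernel ridge regression in the NTK-induced
    RKHS on the support set, then prediction on the query set.
    f_support / f_query are the un-adapted meta-model outputs."""
    k_ss = empirical_ntk(jac_support, jac_support)     # support-support Gram matrix
    k_qs = empirical_ntk(jac_query, jac_support)       # query-support kernel
    residual = y_support - f_support                   # what adaptation must account for
    coef = np.linalg.solve(k_ss + lam * np.eye(k_ss.shape[0]), residual)
    return f_query + k_qs @ coef                       # adapted query predictions

# Toy usage with random Jacobians standing in for d f_theta(x) / d theta.
rng = np.random.default_rng(0)
n_params, n_support, n_query = 64, 10, 5
jac_s = rng.normal(size=(n_support, n_params))
jac_q = rng.normal(size=(n_query, n_params))
y_s = rng.normal(size=n_support)
f_s = np.zeros(n_support)   # pretend the un-adapted meta-model predicts 0 everywhere
f_q = np.zeros(n_query)
print(adapt_predict(jac_s, y_s, f_s, jac_q, f_q))      # 5 adapted query predictions
```

In an outer loop, a meta-objective on the adapted query predictions could then be backpropagated into the meta-model's parameters without unrolling an iterative inner-loop optimization path, which is the efficiency argument made in the abstract.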
Related papers
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior:
From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and an empirical case study showing under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z) - Bridging Multi-Task Learning and Meta-Learning: Towards Efficient
Training and Effective Adaptation [19.792537914018933]
Multi-task learning (MTL) aims to improve the generalization of several related tasks by learning them jointly.
Modern meta-learning allows unseen tasks with limited labels during the test phase, in the hope of fast adaptation over them.
We show that MTL shares the same optimization formulation with a class of gradient-based meta-learning (GBML) algorithms.
arXiv Detail & Related papers (2021-06-16T17:58:23Z) - Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate
in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order descent methods.
Our approach modifies the objective function by adding a regularization term, and casts the joint updating of the model parameters and the learning rate as a single optimization problem.
arXiv Detail & Related papers (2021-04-12T13:13:34Z) - Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z) - Modeling and Optimization Trade-off in Meta-learning [23.381986209234164]
We introduce and rigorously define the trade-off between accurate modeling and ease of optimization in meta-learning.
Taking MAML as a representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression.
We also empirically study this trade-off on meta-reinforcement learning benchmarks.
arXiv Detail & Related papers (2020-10-24T15:32:08Z) - On the Global Optimality of Model-Agnostic Meta-Learning [133.16370011229776]
Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior.
We characterize the optimality of the stationary points attained by MAML for both reinforcement learning and supervised learning, where the inner-level and outer-level problems are solved via first-order optimization methods.
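For reference, the bilevel objective this summary refers to is commonly written as below; this is the standard one-inner-step MAML formulation, not necessarily the cited paper's exact notation.

```latex
\min_{\theta} \; \sum_{i} \mathcal{L}^{\mathrm{test}}_{i}\!\bigl(\theta'_{i}\bigr)
\qquad \text{where} \qquad
\theta'_{i} \;=\; \theta \;-\; \alpha \, \nabla_{\theta}\, \mathcal{L}^{\mathrm{train}}_{i}(\theta).
```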
arXiv Detail & Related papers (2020-06-23T17:33:14Z) - Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning [63.64636047748605]
We develop a new theoretical framework that provides a convergence guarantee for the general multi-step MAML algorithm.
In particular, our results suggest that the inner-stage step size needs to be chosen inversely proportional to the number $N$ of inner-stage steps in order for $N$-step MAML to have guaranteed convergence.
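In symbols, the stated scaling (our paraphrase, with constants omitted) is:

```latex
\alpha_{\mathrm{inner}} \;=\; \Theta\!\left(1/N\right),
\qquad N \;=\; \text{number of inner-stage gradient steps}.
```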
arXiv Detail & Related papers (2020-02-18T19:17:54Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)