PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
- URL: http://arxiv.org/abs/2002.05551v5
- Date: Fri, 18 Jun 2021 07:08:24 GMT
- Title: PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
- Authors: Jonas Rothfuss and Vincent Fortuin and Martin Josifoski and Andreas Krause
- Abstract summary: We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
- Score: 77.67258935234403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning can successfully acquire useful inductive biases from data.
Yet, its generalization properties to unseen learning tasks are poorly
understood. Particularly if the number of meta-training tasks is small, this
raises concerns about overfitting. We provide a theoretical analysis using the
PAC-Bayesian framework and derive novel generalization bounds for
meta-learning. Using these bounds, we develop a class of PAC-optimal
meta-learning algorithms with performance guarantees and a principled
meta-level regularization. Unlike previous PAC-Bayesian meta-learners, our
method results in a standard stochastic optimization problem which can be
solved efficiently and scales well. When instantiating our PAC-optimal
hyper-posterior (PACOH) with Gaussian processes and Bayesian Neural Networks as
base learners, the resulting methods yield state-of-the-art performance, both
in terms of predictive accuracy and the quality of uncertainty estimates.
Thanks to their principled treatment of uncertainty, our meta-learners can also
be successfully employed for sequential decision problems.
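The abstract describes PACOH as turning PAC-Bayesian meta-learning into a standard stochastic optimization problem over a meta-learned prior. The snippet below is a minimal illustrative sketch of that general recipe, not the authors' implementation: it assumes Bayesian linear regression as the base learner (the paper instantiates GPs and BNNs), uses each task's log marginal likelihood as the data-fit term, and uses a simple squared-norm meta-regularizer as a stand-in for the principled meta-level regularization derived in the paper. All function and variable names are hypothetical.

```python
# Minimal sketch (not the authors' reference implementation) of PAC-Bayesian
# meta-learning as a standard stochastic optimization problem.
# Assumptions: Bayesian linear regression base learner, a Gaussian prior whose
# mean and scale are meta-learned, task log marginal likelihood as the data-fit
# term, and a squared-norm meta-regularizer standing in for the KL term.
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n_points=20, d=3):
    """Toy task distribution: related linear functions with small weight offsets."""
    w = np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n_points, d))
    y = X @ w + 0.1 * rng.normal(size=n_points)
    return X, y

def log_marginal_likelihood(X, y, prior_mean, log_prior_scale, noise_std=0.1):
    """log p(y | X, prior) for Bayesian linear regression with prior N(prior_mean, s^2 I)."""
    s2 = np.exp(2.0 * log_prior_scale)
    K = s2 * (X @ X.T) + noise_std**2 * np.eye(len(y))   # marginal covariance of y
    resid = y - X @ prior_mean                            # marginal mean of y is X @ prior_mean
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (resid @ np.linalg.solve(K, resid) + logdet + len(y) * np.log(2 * np.pi))

def meta_objective(phi, tasks, reg=1e-2):
    """Negative average task evidence plus a meta-level regularizer."""
    prior_mean, log_prior_scale = phi[:-1], phi[-1]
    fit = np.mean([log_marginal_likelihood(X, y, prior_mean, log_prior_scale) for X, y in tasks])
    return -fit + reg * np.sum(phi**2)

def numerical_grad(f, phi, eps=1e-5):
    """Central finite differences; keeps the sketch dependency-free."""
    g = np.zeros_like(phi)
    for i in range(len(phi)):
        e = np.zeros_like(phi); e[i] = eps
        g[i] = (f(phi + e) - f(phi - e)) / (2 * eps)
    return g

# Stochastic optimization over mini-batches of meta-training tasks.
phi = np.zeros(4)                       # [prior mean (3 dims), log prior scale]
meta_tasks = [sample_task() for _ in range(16)]
for step in range(200):
    batch = [meta_tasks[i] for i in rng.choice(len(meta_tasks), size=4, replace=False)]
    grad = numerical_grad(lambda p: meta_objective(p, batch), phi)
    phi -= 0.02 * grad                  # plain SGD on the meta-parameters
print("learned prior mean:", phi[:-1], "prior scale:", np.exp(phi[-1]))
```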
Related papers
- Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation [4.239829789304117]
We use the PAC-Bayesian theory for the setting of learning-to-optimize.
We present the first framework to learn optimization algorithms with provable generalization guarantees.
Our learned algorithms provably outperform related ones derived from a (deterministic) worst-case analysis.
arXiv Detail & Related papers (2024-04-04T08:24:57Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
- Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and an empirical case study of the conditions under which, and the extent to which, these meta-learning guarantees improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- PAC-Bayesian Learning of Optimization Algorithms [6.624726878647541]
We apply the PAC-Bayes theory to the setting of learning-to-optimize.
We learn optimization algorithms with provable generalization guarantees (PAC-bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed.
Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families.
arXiv Detail & Related papers (2022-10-20T09:16:36Z)
- Meta-Learning Reliable Priors in the Function Space [36.869587157481284]
We introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as processes and performs meta-level regularization directly in the function space.
This allows us to directly steer the predictions of the meta-learner towards high uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates.
arXiv Detail & Related papers (2021-06-06T18:07:49Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
arXiv Detail & Related papers (2021-02-07T20:53:23Z)
- PAC-Bayes Bounds for Meta-learning with Data-Dependent Prior [36.38937352131301]
We derive three novel generalisation error bounds for meta-learning based on the PAC-Bayes relative entropy bound.
Experiments illustrate that the proposed three PAC-Bayes bounds for meta-learning provide a competitive generalization performance guarantee.
arXiv Detail & Related papers (2021-02-07T09:03:43Z)
- On the Global Optimality of Model-Agnostic Meta-Learning [133.16370011229776]
Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior.
We characterize the optimality of the stationary points attained by MAML for both reinforcement learning and supervised learning, where the inner-level and outer-level problems are solved via first-order optimization methods (see the sketch below).
arXiv Detail & Related papers (2020-06-23T17:33:14Z)
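The last entry above describes MAML's bilevel structure: an inner level that adapts to each subtask from a shared initialization, and an outer level that optimizes that initialization. Below is a minimal, hedged sketch of that structure under toy assumptions (quadratic task losses, a single inner gradient step, and the common first-order approximation of the meta-gradient); it illustrates the bilevel formulation and is not the cited paper's method or analysis.

```python
# Minimal sketch of the bilevel structure behind MAML (illustration only).
# Assumptions: quadratic task losses L_i(theta) = 0.5 * ||theta - c_i||^2,
# one inner gradient step, first-order meta-gradient (second-order terms dropped).
import numpy as np

rng = np.random.default_rng(1)
task_centers = rng.normal(size=(8, 2))           # each task i is defined by a center c_i

def task_loss_grad(theta, c):
    """Gradient of L_i(theta) = 0.5 * ||theta - c||^2."""
    return theta - c

alpha, beta = 0.1, 0.05                          # inner and outer step sizes
theta = np.zeros(2)                              # shared meta-parameters (the initialization)
for step in range(500):
    meta_grad = np.zeros_like(theta)
    for c in task_centers:
        theta_i = theta - alpha * task_loss_grad(theta, c)   # inner level: adapt to task i
        meta_grad += task_loss_grad(theta_i, c)              # outer level (first-order approx.)
    theta -= beta * meta_grad / len(task_centers)
print("meta-learned initialization:", theta)     # ends up near the mean of the task centers
```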
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.