Meta-Learning Reliable Priors in the Function Space
- URL: http://arxiv.org/abs/2106.03195v1
- Date: Sun, 6 Jun 2021 18:07:49 GMT
- Title: Meta-Learning Reliable Priors in the Function Space
- Authors: Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause
- Abstract summary: We introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as processes and performs meta-level regularization directly in the function space.
This allows us to directly steer the predictions of the meta-learner towards high uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates.
- Score: 36.869587157481284
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Meta-Learning promises to enable more data-efficient inference by harnessing
previous experience from related learning tasks. While existing meta-learning
methods help us to improve the accuracy of our predictions in the face of data
scarcity, they fail to supply reliable uncertainty estimates, often being
grossly overconfident in their predictions. Addressing these shortcomings, we
introduce a novel meta-learning framework, called F-PACOH, that treats
meta-learned priors as stochastic processes and performs meta-level
regularization directly in the function space. This allows us to directly steer
the probabilistic predictions of the meta-learner towards high epistemic
uncertainty in regions of insufficient meta-training data and, thus, obtain
well-calibrated uncertainty estimates. Finally, we showcase how our approach
can be integrated with sequential decision making, where reliable uncertainty
quantification is imperative. In our benchmark study on meta-learning for
Bayesian Optimization (BO), F-PACOH significantly outperforms all other
meta-learners and standard baselines. Even in a challenging lifelong BO
setting, where optimization tasks arrive one at a time and the meta-learner
needs to build up informative prior knowledge incrementally, our proposed
method demonstrates strong positive transfer.
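The function-space regularization idea described above can be sketched concretely: at randomly sampled "measurement points", penalize the KL divergence between the meta-learner's predictive marginals and those of a vanilla GP reference prior. The sketch below is illustrative only — the function names (`functional_regularizer`, `predict_marginals`) and the Gaussian-marginal assumption are ours, not the paper's implementation.

```python
import numpy as np

def gaussian_kl(mu_q, cov_q, mu_p, cov_p):
    """KL( N(mu_q, cov_q) || N(mu_p, cov_p) ) between multivariate Gaussians."""
    k = len(mu_q)
    cov_p_inv = np.linalg.inv(cov_p)
    diff = mu_p - mu_q
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_p_inv @ cov_q) + diff @ cov_p_inv @ diff
                  - k + logdet_p - logdet_q)

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def functional_regularizer(predict_marginals, low, high, n_points=16, seed=0):
    """Penalize divergence from a zero-mean GP prior at random measurement points."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n_points, 1))   # random "measurement set"
    mu_q, cov_q = predict_marginals(X)               # meta-learner's Gaussian marginals
    mu_p = np.zeros(n_points)
    cov_p = rbf_kernel(X, X) + 1e-6 * np.eye(n_points)
    return gaussian_kl(mu_q, cov_q, mu_p, cov_p)

# An overconfident meta-learner (tiny predictive variance everywhere) is
# penalized far more than one whose marginals match the GP reference.
overconfident = lambda X: (np.zeros(len(X)), 1e-4 * np.eye(len(X)))
calibrated = lambda X: (np.zeros(len(X)), rbf_kernel(X, X) + 1e-6 * np.eye(len(X)))
print(functional_regularizer(overconfident, 0.0, 1.0))
print(functional_regularizer(calibrated, 0.0, 1.0))  # ~0: matches the reference prior
```

A penalty of this flavor, entering the meta-objective in place of a parameter-space KL term, is what pushes the predictions toward high epistemic uncertainty away from the meta-training data.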
Related papers
- Conformal Meta-learners for Predictive Inference of Individual Treatment
Effects [0.0]
We investigate the problem of machine learning-based (ML) predictive inference on individual treatment effects (ITEs).
We develop conformal meta-learners, a general framework for issuing predictive intervals for ITEs by applying the standard conformal prediction (CP) procedure on top of CATE meta-learners.
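Stripped of the treatment-effect machinery, the conformal step on top of a meta-learner is standard split conformal prediction: calibrate a quantile of nonconformity scores, then pad point predictions by it. The sketch below uses hypothetical names and plain absolute-residual scores on a regression calibration split, standing in for the paper's CATE pseudo-outcomes.

```python
import numpy as np

def conformal_quantile(scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))   # rank with the (n + 1) correction
    return np.sort(scores)[min(k, n) - 1]

rng = np.random.default_rng(0)
# hypothetical calibration split: point predictions tau_hat vs. realized targets
tau_hat = rng.normal(size=500)
tau_obs = tau_hat + rng.normal(scale=0.5, size=500)
q = conformal_quantile(np.abs(tau_obs - tau_hat), alpha=0.1)

# predictive interval for a new point prediction t: [t - q, t + q],
# covering the realized target with probability >= 0.9 under exchangeability
```

The coverage guarantee is distribution-free but hinges on exchangeability of calibration and test points, which is the delicate part when the targets are unobservable treatment effects.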
arXiv Detail & Related papers (2023-08-28T20:32:22Z)
- Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z)
- Uncertainty-based Meta-Reinforcement Learning for Robust Radar Tracking [3.012203489670942]
This paper proposes an uncertainty-based Meta-Reinforcement Learning (Meta-RL) approach with Out-of-Distribution (OOD) detection.
Using information about its complexity, the proposed algorithm is able to point out when tracking is reliable.
We show that our method outperforms related Meta-RL approaches on unseen tracking scenarios by 16% in peak performance and the baseline by 35%.
arXiv Detail & Related papers (2022-10-26T07:48:56Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As our core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning [100.14809391594109]
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning.
Despite the generalization power of the meta-model, it remains unclear how adversarial robustness can be maintained by MAML in few-shot learning.
We propose a general but easily-optimized robustness-regularized meta-learning framework, which allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally-light fine-tuning.
arXiv Detail & Related papers (2021-02-20T22:03:04Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
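The transductive refinement step described above can be sketched as follows, with a plain distance-softmax confidence standing in for the paper's meta-learned confidence function; names and the unit-weight treatment of each prototype are our simplifications.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(prototypes, queries, temperature=1.0):
    """Fold unlabeled queries into class prototypes, weighted by confidence.
    Each prototype is treated as a single (unit-weight) support observation."""
    sq_dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    conf = softmax(-sq_dists / temperature, axis=1)   # [n_query, n_class]
    weighted_sum = prototypes + conf.T @ queries
    total_weight = 1.0 + conf.sum(axis=0)[:, None]
    return weighted_sum / total_weight

protos = np.array([[0.0, 0.0], [10.0, 0.0]])
queries = np.array([[1.0, 0.0], [9.0, 0.0]])
print(refine_prototypes(protos, queries))  # each prototype shifts toward its nearby query
```

Meta-learning the confidence, rather than fixing it to a distance softmax as here, is what lets the method assign near-zero weight to ambiguous queries.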
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
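For orientation, the single-task PAC-Bayesian bound that such analyses build on (Maurer's form of the McAllester bound; the meta-level bounds lift this to a hyper-posterior over priors) states that, with probability at least 1 - δ over a sample of size n, for all posteriors Q simultaneously:

```latex
\mathrm{kl}\!\left(\hat{R}_n(Q) \,\middle\|\, R(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n}
```

where R(Q) is the expected risk, \hat{R}_n(Q) the empirical risk, P the prior, and kl the binary KL divergence. This is a schematic of the standard bound family, not the paper's exact meta-learning bound.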
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.