Uncertainty-Aware Meta-Learning for Multimodal Task Distributions
- URL: http://arxiv.org/abs/2210.01881v1
- Date: Tue, 4 Oct 2022 20:02:25 GMT
- Title: Uncertainty-Aware Meta-Learning for Multimodal Task Distributions
- Authors: Cesar Almecija, Apoorva Sharma and Navid Azizan
- Abstract summary: We present UnLiMiTD (uncertainty-aware meta-learning for multimodal task distributions).
We take a probabilistic perspective and train a parametric, tuneable distribution over tasks on the meta-dataset.
We demonstrate that UnLiMiTD's predictions compare favorably to, and outperform in most cases, the standard baselines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning or learning to learn is a popular approach for learning new
tasks with limited data (i.e., few-shot learning) by leveraging the
commonalities among different tasks. However, meta-learned models can perform
poorly when context data is limited, or when data is drawn from an
out-of-distribution (OoD) task. Especially in safety-critical settings, this
necessitates an uncertainty-aware approach to meta-learning. In addition, the
often multimodal nature of task distributions can pose unique challenges to
meta-learning methods. In this work, we present UnLiMiTD (uncertainty-aware
meta-learning for multimodal task distributions), a novel method for
meta-learning that (1) makes probabilistic predictions on in-distribution tasks
efficiently, (2) is capable of detecting OoD context data at test time, and (3)
performs on heterogeneous, multimodal task distributions. To achieve this goal,
we take a probabilistic perspective and train a parametric, tuneable
distribution over tasks on the meta-dataset. We construct this distribution by
performing Bayesian inference on a linearized neural network, leveraging
Gaussian process theory. We demonstrate that UnLiMiTD's predictions compare
favorably to, and outperform in most cases, the standard baselines, especially
in the low-data regime. Furthermore, we show that UnLiMiTD is effective in
detecting data from OoD tasks. Finally, we confirm that both of these findings
continue to hold in the multimodal task-distribution setting.
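The core mechanism described in the abstract, Bayesian inference on a linearized neural network via Gaussian process theory, can be illustrated with a small numpy sketch. This is not the authors' implementation; all function names, the finite-difference Jacobian, and the tiny tanh network are illustrative assumptions. Linearizing a network around its parameters turns a Gaussian prior over weight perturbations into a GP whose kernel is the Jacobian outer product, which yields both probabilistic predictions and a marginal-likelihood score usable for OoD detection:

```python
import numpy as np

def mlp(params, x):
    # Tiny illustrative network: 1 -> 16 -> 1 with tanh hidden layer.
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2  # shape (n, 1)

def flatten(params):
    return np.concatenate([p.ravel() for p in params])

def unflatten(vec, params):
    out, i = [], 0
    for p in params:
        out.append(vec[i:i + p.size].reshape(p.shape))
        i += p.size
    return out

def jacobian(params, x, eps=1e-5):
    # Central-difference Jacobian of network outputs w.r.t. all parameters.
    theta0 = flatten(params)
    J = np.zeros((x.shape[0], theta0.size))
    for k in range(theta0.size):
        tp, tm = theta0.copy(), theta0.copy()
        tp[k] += eps
        tm[k] -= eps
        J[:, k] = (mlp(unflatten(tp, params), x)
                   - mlp(unflatten(tm, params), x)).ravel() / (2 * eps)
    return J

def gp_posterior(params, x_ctx, y_ctx, x_query, noise=1e-3):
    # Linearized network + isotropic Gaussian prior on weight perturbations
    # => GP with kernel k(x, x') = J(x) J(x')^T around the current prediction.
    Jc, Jq = jacobian(params, x_ctx), jacobian(params, x_query)
    K = Jc @ Jc.T + noise * np.eye(len(x_ctx))
    Ks = Jq @ Jc.T
    prior_c = mlp(params, x_ctx).ravel()
    prior_q = mlp(params, x_query).ravel()
    alpha = np.linalg.solve(K, y_ctx - prior_c)
    mean = prior_q + Ks @ alpha
    cov = Jq @ Jq.T - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

def context_nll(params, x_ctx, y_ctx, noise=1e-3):
    # Negative log marginal likelihood of the context data under the GP.
    # High values flag context data from out-of-distribution tasks.
    Jc = jacobian(params, x_ctx)
    K = Jc @ Jc.T + noise * np.eye(len(x_ctx))
    r = y_ctx - mlp(params, x_ctx).ravel()
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (r @ np.linalg.solve(K, r) + logdet + len(r) * np.log(2 * np.pi))
```

In UnLiMiTD the prior over weights is itself parametric and tuned on the meta-dataset; the fixed isotropic prior here is a simplification to keep the sketch self-contained.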