Multidimensional Belief Quantification for Label-Efficient Meta-Learning
- URL: http://arxiv.org/abs/2203.12768v1
- Date: Wed, 23 Mar 2022 23:37:16 GMT
- Title: Multidimensional Belief Quantification for Label-Efficient Meta-Learning
- Authors: Deep Pandey, Qi Yu
- Abstract summary: We propose a novel uncertainty-aware task selection model for label-efficient meta-learning.
The proposed model formulates a multidimensional belief measure, which can quantify the known uncertainty and lower bound the unknown uncertainty of any given task.
Experiments conducted over multiple real-world few-shot image classification tasks demonstrate the effectiveness of the proposed model.
- Score: 7.257751371276488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimization-based meta-learning offers a promising direction for few-shot
learning that is essential for many real-world computer vision applications.
However, learning from few samples introduces uncertainty, and quantifying
model confidence for few-shot predictions is essential for many critical
domains. Furthermore, few-shot tasks used in meta training are usually sampled
randomly from a task distribution for an iterative model update, leading to
high labeling costs and computational overhead in meta-training. We propose a
novel uncertainty-aware task selection model for label-efficient meta-learning.
The proposed model formulates a multidimensional belief measure, which can
quantify the known uncertainty and lower bound the unknown uncertainty of any
given task. Our theoretical result establishes an important relationship
between the conflicting belief and the incorrect belief. This result
allows us to estimate the total uncertainty of a task, which provides a
principled criterion for task selection. A novel multi-query task formulation
is further developed to improve both the computational and labeling efficiency
of meta-learning. Experiments conducted over multiple real-world few-shot image
classification tasks demonstrate the effectiveness of the proposed model.
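The abstract gives no implementation detail, but the belief measure it describes belongs to the family of subjective-logic evidential formulations, in which per-class evidence parameterizes a Dirichlet distribution and uncertainty decomposes into vacuity (lack of evidence, the "unknown" part) and dissonance (conflicting evidence, the "known" part). The sketch below is a minimal illustration under that assumption; `belief_and_uncertainty` and `task_uncertainty` are hypothetical helper names, and the paper's actual multidimensional measure and its lower bound on unknown uncertainty may differ in detail.

```python
import numpy as np

def belief_and_uncertainty(evidence):
    """Subjective-logic quantities for one prediction over K classes.

    evidence: non-negative per-class evidence (e.g. from a ReLU/softplus head).
    Returns (belief masses, vacuity, dissonance).
    """
    e = np.asarray(evidence, dtype=float)
    K = e.size
    S = e.sum() + K          # Dirichlet strength (alpha = evidence + 1)
    b = e / S                # belief mass assigned to each class
    vacuity = K / S          # uncertainty from lack of evidence

    # Dissonance: uncertainty from conflicting, mutually balancing evidence.
    dissonance = 0.0
    for k in range(K):
        others = np.delete(b, k)
        if others.sum() > 0:
            balance = 1.0 - np.abs(others - b[k]) / (others + b[k] + 1e-12)
            dissonance += b[k] * (others * balance).sum() / others.sum()
    return b, vacuity, dissonance

def task_uncertainty(query_evidences):
    """Score a candidate task by its mean total uncertainty over query samples."""
    totals = [sum(belief_and_uncertainty(e)[1:]) for e in query_evidences]
    return float(np.mean(totals))
```

Under this reading, meta-training would score a pool of unlabeled candidate tasks with `task_uncertainty` and request labels only for the highest-scoring ones, which is the task-selection role the abstract assigns to the total uncertainty estimate.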
Related papers
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only a very small number of labeled samples can be collected, which can lead to poor fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
- Post-hoc Uncertainty Learning using a Dirichlet Meta-Model [28.522673618527417]
We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities.
Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties.
We demonstrate the flexibility and superior empirical performance of the proposed meta-model across a range of applications.
arXiv Detail & Related papers (2022-12-14T17:34:11Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP [39.457091182683406]
We aim to provide task distributions for meta-learning by considering self-supervised tasks automatically proposed from unlabeled text.
Our analysis shows that the design factors of these self-supervised tasks meaningfully alter the task distribution, some inducing significant improvements in the downstream few-shot accuracy of the meta-learned models.
arXiv Detail & Related papers (2021-11-02T01:50:09Z)
- Meta-learning with an Adaptive Task Scheduler [93.63502984214918]
Existing meta-learning algorithms randomly sample meta-training tasks with a uniform probability.
Given a limited number of meta-training tasks, some sampled tasks are likely to be detrimental due to noise or class imbalance.
We propose an adaptive task scheduler (ATS) for the meta-training process.
arXiv Detail & Related papers (2021-10-26T22:16:35Z)
- BAMLD: Bayesian Active Meta-Learning by Disagreement [39.59987601426039]
This paper introduces an information-theoretic active task selection mechanism to decrease the number of labeling requests for meta-training tasks.
We report empirical results that compare favourably against existing acquisition mechanisms.
arXiv Detail & Related papers (2021-10-19T13:06:51Z)
- Meta-learning Amidst Heterogeneity and Ambiguity [11.061517140668961]
We devise a novel meta-learning framework, called Meta-learning Amidst Heterogeneity and Ambiguity (MAHA).
Through extensive experiments on regression and classification, we demonstrate the validity of our model.
arXiv Detail & Related papers (2021-07-05T18:54:31Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries (a minimal sketch of such a confidence-weighted update appears after this list).
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
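Following up the forward reference in the last entry: below is a minimal sketch of a confidence-weighted transductive prototype update. The softmax-over-negative-distances confidence used here is a fixed stand-in for the confidence that the paper meta-learns, and `update_prototypes` and `temperature` are hypothetical names introduced for illustration.

```python
import numpy as np

def update_prototypes(prototypes, queries, temperature=1.0):
    """Refine class prototypes with confidence-weighted unlabeled queries.

    prototypes: (K, D) initial per-class prototypes from the support set.
    queries:    (M, D) embedded unlabeled query examples.
    A query's confidence for class k is a softmax over negative squared
    distances; the paper meta-learns this confidence instead.
    """
    # Squared Euclidean distance from every query to every prototype: (M, K)
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    conf = np.exp(logits - logits.max(axis=1, keepdims=True))
    conf /= conf.sum(axis=1, keepdims=True)   # (M, K) soft assignments

    # New prototype: confidence-weighted mean of queries, blended with the
    # old prototype (given unit weight) to keep the update stable.
    weighted = conf.T @ queries               # (K, D)
    weights = conf.sum(axis=0)[:, None]       # (K, 1)
    return (prototypes + weighted) / (1.0 + weights)
```

Blending the query-weighted mean with the support-set prototype keeps the update well-behaved when query confidences are diffuse; with sharply confident queries the update approaches the mean of the most confident ones, matching the transductive technique described above.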
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.