MetaKernel: Learning Variational Random Features with Limited Labels
- URL: http://arxiv.org/abs/2105.03781v1
- Date: Sat, 8 May 2021 21:24:09 GMT
- Title: MetaKernel: Learning Variational Random Features with Limited Labels
- Authors: Yingjun Du, Haoliang Sun, Xiantong Zhen, Jun Xu, Yilong Yin, Ling
Shao, Cees G. M. Snoek
- Abstract summary: Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel.
- Score: 120.90737681252594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning deals with the fundamental and challenging problem of
learning from a few annotated samples, while being able to generalize well on
new tasks. The crux of few-shot learning is to extract prior knowledge from
related tasks to enable fast adaptation to a new task with a limited amount of
data. In this paper, we propose meta-learning kernels with random Fourier
features for few-shot learning, which we call MetaKernel. Specifically, we propose
learning variational random features in a data-driven manner to obtain
task-specific kernels by leveraging the shared knowledge provided by related
tasks in a meta-learning setting. We treat the random feature basis as the
latent variable, which is estimated by variational inference. The shared
knowledge from related tasks is incorporated into a context inference of the
posterior, which we achieve via a long short-term memory module. To establish
more expressive kernels, we deploy conditional normalizing flows based on
coupling layers to achieve a richer posterior distribution over random Fourier
bases. The resultant kernels are more informative and discriminative, which
further improves few-shot learning. To evaluate our method, we conduct
extensive experiments on both few-shot image classification and regression
tasks. A thorough ablation study demonstrates the effectiveness of each
introduced component in our method. The benchmark results on fourteen datasets
demonstrate that MetaKernel consistently delivers performance at least
comparable to, and often better than, state-of-the-art alternatives.
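The kernel machinery the abstract builds on can be illustrated with standard random Fourier features (Rahimi and Recht): each basis ω is sampled from a fixed Gaussian, whereas MetaKernel instead infers a task-specific posterior over the bases. The sketch below is a minimal, assumed illustration of that baseline approximation, not the paper's variational method; all function and variable names are invented for this example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Exact Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def random_fourier_features(X, omega, b):
    # phi(x) = sqrt(2 / D) cos(omega x + b); in expectation phi(x)^T phi(y) = k(x, y).
    D = omega.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ omega.T + b)

rng = np.random.default_rng(0)
sigma, D = 1.0, 5000
X = rng.normal(size=(5, 3))

# Fixed random bases omega ~ N(0, I / sigma^2); MetaKernel would learn a
# data-driven posterior over these bases instead of sampling them blindly.
omega = rng.normal(scale=1.0 / sigma, size=(D, X.shape[1]))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

phi = random_fourier_features(X, omega, b)
K_approx = phi @ phi.T            # Monte Carlo estimate of the kernel matrix
K_exact = gaussian_kernel(X, X, sigma)
max_err = np.abs(K_approx - K_exact).max()  # shrinks as D grows
```

The approximation error decays roughly as 1/sqrt(D), which is why learning informative bases (rather than sampling many uninformative ones) matters in the low-data regime the paper targets.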
Related papers
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while still allowing new patterns to be learned.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z) - Generative Kernel Continual learning [117.79080100313722]
We introduce generative kernel continual learning, which exploits the synergies between generative models and kernels for continual learning.
The generative model is able to produce representative samples for kernel learning, which removes the dependence on memory in kernel continual learning.
We conduct extensive experiments on three widely-used continual learning benchmarks that demonstrate the abilities and benefits of our contributions.
arXiv Detail & Related papers (2021-12-26T16:02:10Z) - Squeezing Backbone Feature Distributions to the Max for Efficient
Few-Shot Learning [3.1153758106426603]
Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples.
We propose a novel transfer-based method which aims at processing the feature vectors so that they become closer to Gaussian-like distributions.
In the case of transductive few-shot learning, where unlabelled test samples are available during training, we also introduce an optimal-transport-inspired algorithm to further boost performance.
arXiv Detail & Related papers (2021-10-18T16:29:17Z) - Kernel Continual Learning [117.79080100313722]
Kernel continual learning is a simple but effective variant of continual learning to tackle catastrophic forgetting.
An episodic memory unit stores a subset of samples for each task to learn task-specific classifiers based on kernel ridge regression.
Variational random features are used to learn a data-driven kernel for each task.
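The kernel ridge regression step this entry mentions can be sketched as follows. This is a minimal, assumed illustration: the stored sample subset stands in for the episodic memory, the variational random features are replaced by a plain RBF kernel, and all names are invented for this sketch.

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    # Gaussian kernel between two sample sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(K, y, lam=1e-2):
    # Kernel ridge regression: alpha = (K + lam I)^{-1} y.
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

rng = np.random.default_rng(0)

# "Episodic memory": a stored subset of one task's samples acts as the kernel basis.
X_mem = rng.uniform(-3.0, 3.0, size=(40, 1))
y_mem = np.sin(X_mem[:, 0])  # toy regression target for this task

alpha = krr_fit(rbf(X_mem, X_mem), y_mem)

# At test time, predictions use only the memory subset, not the full task data.
X_new = np.array([[0.5], [1.0]])
y_pred = rbf(X_new, X_mem) @ alpha
```

Because the classifier depends only on the stored subset, the memory footprint per task stays fixed, which is the property the entry highlights.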
arXiv Detail & Related papers (2021-07-12T22:09:30Z) - Contrastive learning of strong-mixing continuous-time stochastic
processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z) - Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z) - Learning to Learn Kernels with Variational Random Features [118.09565227041844]
We introduce kernels with random Fourier features in the meta-learning framework to leverage their strong few-shot learning ability.
We formulate the optimization of MetaVRF as a variational inference problem.
We show that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives.
arXiv Detail & Related papers (2020-06-11T18:05:29Z) - XtarNet: Learning to Extract Task-Adaptive Representation for
Incremental Few-Shot Learning [24.144499302568565]
We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning.
The TAR contains effective information for classifying both novel and base categories.
XtarNet achieves state-of-the-art incremental few-shot learning performance.
arXiv Detail & Related papers (2020-03-19T04:02:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.