Laplacian Regularized Few-Shot Learning
- URL: http://arxiv.org/abs/2006.15486v3
- Date: Wed, 28 Apr 2021 15:17:38 GMT
- Title: Laplacian Regularized Few-Shot Learning
- Authors: Imtiaz Masud Ziko, Jose Dolz, Eric Granger and Ismail Ben Ayed
- Abstract summary: We propose a transductive Laplacian-regularized inference for few-shot tasks.
Our inference does not re-train the base model, and can be viewed as a graph clustering of the query set.
Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models.
- Score: 35.381119443377195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a transductive Laplacian-regularized inference for few-shot tasks.
Given any feature embedding learned from the base classes, we minimize a
quadratic binary-assignment function containing two terms: (1) a unary term
assigning query samples to the nearest class prototype, and (2) a pairwise
Laplacian term encouraging nearby query samples to have consistent label
assignments. Our transductive inference does not re-train the base model, and
can be viewed as a graph clustering of the query set, subject to supervision
constraints from the support set. We derive a computationally efficient bound
optimizer of a relaxation of our function, which computes independent
(parallel) updates for each query sample, while guaranteeing convergence.
Following a simple cross-entropy training on the base classes, and without
complex meta-learning strategies, we conducted comprehensive experiments over
five few-shot learning benchmarks. Our LaplacianShot consistently outperforms
state-of-the-art methods by significant margins across different models,
settings, and data sets. Furthermore, our transductive inference is very fast,
with computational times that are close to inductive inference, and can be used
for large-scale few-shot tasks.
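The update described in the abstract admits a compact implementation: relaxing the assignments and linearizing the Laplacian term at the current solution gives an independent softmax update per query that mixes prototype distances with neighbors' current labels. Below is a minimal NumPy sketch of that bound-optimizer iteration; function names, the kNN affinity construction, and hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def laplacianshot_inference(query, prototypes, knn=3, lam=1.0, n_iters=20):
    """Laplacian-regularized transductive inference (schematic sketch).

    query:      (Q, D) embedded query features
    prototypes: (C, D) class prototypes computed from the support set
    Returns hard labels for the query set.
    """
    # Unary term: squared distance from each query to each class prototype.
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (Q, C)

    # Pairwise term: a simple symmetric kNN affinity graph over the queries
    # (an illustrative construction; the paper builds affinities over the query set).
    q2 = ((query[:, None, :] - query[None, :, :]) ** 2).sum(-1)       # (Q, Q)
    w = np.zeros_like(q2)
    nn = np.argsort(q2, axis=1)[:, 1:knn + 1]                         # skip self
    w[np.repeat(np.arange(len(query)), knn), nn.ravel()] = 1.0
    w = (w + w.T) / 2

    # Bound optimization: linearizing the Laplacian term at the current
    # assignments yields an independent, parallel softmax update per query.
    y = softmax(-d)
    for _ in range(n_iters):
        y = softmax(-d + lam * (w @ y))
    return y.argmax(axis=1)
```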
Related papers
- Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Towards Practical Few-Shot Query Sets: Transductive Minimum Description Length Inference [0.0]
We introduce a PrimAl Dual Minimum Description LEngth (PADDLE) formulation, which balances data-fitting accuracy and model complexity for a given few-shot task.
Our constrained MDL-like objective promotes competition among a large set of possible classes, preserving only the effective classes that best fit the data of a few-shot task.
arXiv Detail & Related papers (2022-10-26T08:06:57Z)
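To make the fit-versus-complexity balance concrete, here is a toy objective in the same spirit: a prototype data-fitting cost plus a penalty counting the classes that receive non-negligible assignment mass. This only illustrates the MDL-style tradeoff the summary describes; it is not the authors' PADDLE formulation, and the names and the mass threshold are invented for the example.

```python
import numpy as np

def mdl_style_objective(query, prototypes, lam=1.0, mass_threshold=0.01):
    """Toy MDL-style criterion: data-fitting cost plus a complexity penalty
    counting the classes actually used. Illustrative only, not PADDLE."""
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (Q, C)
    z = -d - (-d).max(axis=1, keepdims=True)                          # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    fit = (p * d).sum()                               # how well classes fit the data
    proportions = p.sum(axis=0) / len(query)          # per-class assignment mass
    n_effective = int((proportions > mass_threshold).sum())
    return fit + lam * n_effective                    # lower is better
```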
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
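The constrained formulation above lends itself to a standard primal-dual scheme: descend on the Lagrangian in the model parameters and ascend in the multipliers, which grow on violated per-sample constraints. The sketch below is a generic saddle-point update under that formulation; the argument names, step sizes, and tolerance `eps` are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def primal_dual_step(theta, duals, grad_obj, losses, grad_losses, eps,
                     lr_theta=0.1, lr_dual=0.1):
    """One update for:  min_theta f(theta)  s.t.  loss_i(theta) <= eps,
    via L(theta, duals) = f(theta) + sum_i duals[i] * (loss_i(theta) - eps).
    Generic constrained-learning sketch, not the paper's exact method."""
    # Primal descent on the Lagrangian.
    grad = grad_obj(theta)
    for d_i, g_i in zip(duals, grad_losses):
        grad = grad + d_i * g_i(theta)
    theta = theta - lr_theta * grad
    # Dual ascent: multipliers increase on violated constraints, clipped at 0.
    violations = np.array([l(theta) - eps for l in losses])
    duals = np.maximum(0.0, duals + lr_dual * violations)
    return theta, duals
```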
- Mutual-Information Based Few-Shot Classification [34.95314059362982]
We introduce Transductive Information Maximization (TIM) for few-shot learning.
Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task.
We propose a new alternating-direction solver, which speeds up transductive inference over gradient-based optimization.
arXiv Detail & Related papers (2021-06-23T09:17:23Z)
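The mutual-information objective in this summary decomposes as I(X; Y) = H(Y) - H(Y|X): the entropy of the marginal label distribution over queries minus the mean entropy of per-query predictions. Below is a minimal sketch, including the supervised cross-entropy term on the support set that the method also uses to anchor predictions; the weight `alpha` and all names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tim_style_loss(support_logits, support_labels, query_logits, alpha=1.0):
    """TIM-style objective sketch: support cross-entropy minus the query
    mutual information I(X; Y) = H(Y) - H(Y|X). Weighting is illustrative."""
    eps = 1e-12
    ps = softmax(support_logits)
    ce = -np.log(ps[np.arange(len(support_labels)), support_labels] + eps).mean()

    pq = softmax(query_logits)                        # per-query predictions
    marginal = pq.mean(axis=0)                        # label marginal over queries
    h_y = -(marginal * np.log(marginal + eps)).sum()             # H(Y)
    h_y_given_x = -(pq * np.log(pq + eps)).sum(axis=1).mean()    # H(Y|X)
    return ce - alpha * (h_y - h_y_given_x)           # minimizing maximizes MI
```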
- Transductive Few-Shot Learning: Clustering is All You Need? [31.21306826132773]
We investigate a general formulation for transductive few-shot learning, which integrates prototype-based objectives.
We find that our method yields competitive performance, in terms of accuracy and optimization, while scaling up to large problems.
Surprisingly, we find that our general model already achieves competitive performance in comparison to state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T16:14:01Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work, we move beyond a single shared representation by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task-specific domains while staying close to each other.
This facilitates a cross-fertilization in which data collected across different domains help improve the learning performance on the other tasks.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
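The coupling described above can be written as each task minimizing its own loss plus a proximity penalty to the mean of all task parameters; for that objective, the penalty's gradient for task i is exactly lam * (w_i - w_bar). A minimal sketch of one coupled gradient step follows; the mean-coupling choice and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_learning_step(weights, grads, lam=0.1, lr=0.01):
    """One coupled multi-task update for
        min_{w_1..w_T} sum_i loss_i(w_i) + (lam / 2) * sum_i ||w_i - w_bar||^2.

    weights: (T, D) per-task parameters; grads: (T, D) per-task loss gradients.
    """
    w_bar = weights.mean(axis=0)               # consensus across tasks
    coupled = grads + lam * (weights - w_bar)  # task gradient + proximity pull
    return weights - lr * coupled
```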
- Transductive Information Maximization For Few-Shot Learning [41.461586994394565]
We introduce Transductive Information Maximization (TIM) for few-shot learning.
Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task.
We propose a new alternating-direction solver for our mutual-information loss.
arXiv Detail & Related papers (2020-08-25T22:38:41Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
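Schematically, once a per-query confidence score is available (the quantity this paper meta-learns), the prototype update is a confidence-weighted average of the current prototype and soft query assignments. The sketch below takes the confidence scores as given inputs; the weighting scheme and names are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

def refine_prototypes(prototypes, query, q_probs, confidence):
    """Confidence-weighted prototype refinement (schematic sketch).

    prototypes: (C, D) current class prototypes
    query:      (Q, D) query features
    q_probs:    (Q, C) soft class predictions for the queries
    confidence: (Q,)   per-query confidence scores (meta-learned in the paper)
    """
    n_classes = prototypes.shape[0]
    refined = np.empty_like(prototypes)
    for c in range(n_classes):
        w = confidence * q_probs[:, c]          # per-query weight for class c
        # Treat the current prototype as one unit-weight point and average it
        # with the confidence-weighted query features.
        refined[c] = (prototypes[c] + (w[:, None] * query).sum(0)) / (1.0 + w.sum())
    return refined
```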
This list is automatically generated from the titles and abstracts of the papers on this site.