Few-shot Classification via Adaptive Attention
- URL: http://arxiv.org/abs/2008.02465v2
- Date: Sat, 21 Nov 2020 15:15:20 GMT
- Title: Few-shot Classification via Adaptive Attention
- Authors: Zihang Jiang, Bingyi Kang, Kuangqi Zhou, Jiashi Feng
- Abstract summary: We propose a novel few-shot learning method that optimizes and rapidly adapts the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a neural network model that can quickly adapt to a new task is
highly desirable yet challenging for few-shot learning problems. Recent
few-shot learning methods mostly concentrate on developing various
meta-learning strategies from two aspects, namely optimizing an initial model
or learning a distance metric. In this work, we propose a novel few-shot
learning method that optimizes and rapidly adapts the query sample
representation based on very few reference samples. Specifically, we devise a
simple and efficient meta-reweighting strategy that adapts the sample
representations and generates soft attention to refine them, so that the
features relevant to both the query and support samples can be extracted for
better few-shot classification. Such an adaptive attention model can also, to
some extent, explain what evidence the classification model relies on. As
demonstrated experimentally, the proposed model achieves state-of-the-art
classification results on various benchmark few-shot classification and
fine-grained recognition datasets.
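The meta-reweighting idea can be illustrated with a toy sketch: compute class prototypes from the support set, derive a soft attention vector over feature channels from query-support agreement, and classify the reweighted query by cosine similarity to the reweighted prototypes. This is a simplified, hand-crafted stand-in for the paper's learned attention; all names and the reweighting rule are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_and_classify(query, support):
    """query: (d,) feature vector; support: (n_way, n_shot, d) features.
    Reweights feature channels with soft attention, then classifies the
    query by cosine similarity to the reweighted class prototypes."""
    prototypes = support.mean(axis=1)                      # (n_way, d) class centers
    # Soft attention over channels: emphasize dimensions where the query
    # and the support set are jointly active (a simplified stand-in for
    # the paper's learned meta-reweighting).
    channel_att = softmax(np.abs(query) * np.abs(prototypes).mean(axis=0))
    q = query * channel_att                                # refined query
    p = prototypes * channel_att                           # refined prototypes
    sims = (p @ q) / (np.linalg.norm(p, axis=1) * np.linalg.norm(q) + 1e-8)
    return int(np.argmax(sims))

# toy 5-way 5-shot episode with well-separated class means
n_way, n_shot = 5, 5
means = 3.0 * np.eye(n_way)
support = means[:, None, :] + 0.1 * rng.normal(size=(n_way, n_shot, n_way))
query = means[2] + 0.1 * rng.normal(size=n_way)
print(adapt_and_classify(query, support))                  # prints 2
```

Because the same attention vector rescales both the query and the prototypes, it acts as a per-episode diagonal metric, which is one simple way to realize "adapting the representation" per task.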
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD)
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z)
- Meta-tuning Loss Functions and Data Augmentation for Few-shot Object Detection
Few-shot object detection is an emerging topic in the area of few-shot learning and object detection.
We propose a training scheme that allows learning inductive biases that can boost few-shot detection.
The proposed approach yields interpretable loss functions, as opposed to highly parametric and complex few-shot meta-models.
arXiv Detail & Related papers (2023-04-24T15:14:16Z)
- Generalization Properties of Retrieval-based Models
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- A Study on Representation Transfer for Few-Shot Learning
Few-shot classification aims to learn to classify new object categories well using only a few labeled examples.
In this work we perform a systematic study of various feature representations for few-shot classification.
We find that learning from more complex tasks tends to give better representations for few-shot classification.
arXiv Detail & Related papers (2022-09-05T17:56:02Z)
- Partner-Assisted Learning for Few-Shot Image Classification
Few-shot Learning has been studied to mimic human visual capabilities and learn effective models without the need of exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
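The second training stage described above can be sketched as a combined objective: an alignment term pulling the main encoder's features toward the partner's soft-anchors, plus a standard classification loss. A minimal sketch, assuming a mean-squared alignment term and a hypothetical weighting `lam`; the paper's actual losses may differ.

```python
import numpy as np

def stage2_loss(main_feat, soft_anchor, logits, label, lam=1.0):
    """Sketch of a stage-2 objective: align the main encoder's feature
    with the frozen partner's soft-anchor while training the classifier."""
    align = np.mean((main_feat - soft_anchor) ** 2)        # alignment term
    p = np.exp(logits - logits.max())                      # stable softmax
    p /= p.sum()
    ce = -np.log(p[label] + 1e-12)                         # cross-entropy term
    return ce + lam * align
```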
arXiv Detail & Related papers (2021-09-15T22:46:19Z)
- Region Comparison Network for Interpretable Few-shot Image Classification
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Training few-shot classification via the perspective of minibatch and pretraining
Few-shot classification is a challenging task that aims to emulate the human ability to learn concepts from limited prior data.
Recent progress in few-shot classification has featured meta-learning.
We propose multi-episode and cross-way training techniques, which respectively correspond to the minibatch and pretraining in classification problems.
arXiv Detail & Related papers (2020-04-10T03:14:48Z)
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
arXiv Detail & Related papers (2020-03-20T15:44:17Z)
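The simple non-parametric classifier mentioned in the last entry can be as little as nearest centroid under cosine similarity on the selected features. A minimal sketch (the function name and toy data are hypothetical, not the paper's code):

```python
import numpy as np

def nearest_centroid_predict(train_x, train_y, test_x):
    """Assign each test point to the class whose L2-normalized feature
    centroid has the highest cosine similarity with it."""
    classes = np.unique(train_y)
    cents = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    cents /= np.linalg.norm(cents, axis=1, keepdims=True)
    tx = test_x / np.linalg.norm(test_x, axis=1, keepdims=True)
    return classes[np.argmax(tx @ cents.T, axis=1)]

# toy 2-class example in a 2-D feature space
train_x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
train_y = np.array([0, 0, 1, 1])
print(nearest_centroid_predict(train_x, train_y,
                               np.array([[1.0, 0.05], [0.05, 1.0]])))  # [0 1]
```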
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.