Asymmetric Distribution Measure for Few-shot Learning
- URL: http://arxiv.org/abs/2002.00153v1
- Date: Sat, 1 Feb 2020 06:41:52 GMT
- Title: Asymmetric Distribution Measure for Few-shot Learning
- Authors: Wenbin Li, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao, and Jiebo Luo
- Abstract summary: Metric-based few-shot image classification aims to measure the relations between query images and support classes.
We propose a novel Asymmetric Distribution Measure (ADM) network for few-shot learning.
We achieve $3.02\%$ and $1.56\%$ gains over the state-of-the-art method on the $5$-way $1$-shot task.
- Score: 82.91276814477126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The core idea of metric-based few-shot image classification is to directly
measure the relations between query images and support classes to learn
transferable feature embeddings. Previous work mainly focuses on image-level
feature representations, which actually cannot effectively estimate a class's
distribution due to the scarcity of samples. Some recent work shows that local
descriptor based representations can achieve richer representations than
image-level based representations. However, such works are still based on a
less effective instance-level metric, especially a symmetric metric, to measure
the relations between query images and support classes. Given the natural
asymmetric relation between a query image and a support class, we argue that an
asymmetric measure is more suitable for metric-based few-shot learning. To that
end, we propose a novel Asymmetric Distribution Measure (ADM) network for
few-shot learning by calculating a joint local and global asymmetric measure
between two multivariate local distributions of queries and classes. Moreover,
a task-aware Contrastive Measure Strategy (CMS) is proposed to further enhance
the measure function. On popular miniImageNet and tieredImageNet, we achieve
$3.02\%$ and $1.56\%$ gains over the state-of-the-art method on the $5$-way
$1$-shot task, respectively, validating our innovative design of asymmetric
distribution measures for few-shot learning.
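The abstract does not spell out ADM's exact formulation, but the core idea of an asymmetric measure between two multivariate distributions can be sketched with the Kullback-Leibler divergence, a standard asymmetric measure, applied to Gaussians fitted to local descriptors. The function and variable names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gaussian_kl(mu_q, cov_q, mu_s, cov_s):
    """Asymmetric KL divergence D(Q || S) between two multivariate Gaussians.

    KL is asymmetric, i.e. D(Q||S) != D(S||Q) in general, which mirrors the
    asymmetric query-to-class relation the ADM abstract appeals to.
    """
    d = mu_q.shape[0]
    cov_s_inv = np.linalg.inv(cov_s)
    diff = mu_s - mu_q
    term_trace = np.trace(cov_s_inv @ cov_q)
    term_quad = diff @ cov_s_inv @ diff
    term_logdet = np.log(np.linalg.det(cov_s) / np.linalg.det(cov_q))
    return 0.5 * (term_trace + term_quad - d + term_logdet)

def fit_gaussian(descriptors, eps=1e-3):
    """Fit a Gaussian to local descriptors (n_descriptors x dim).

    eps regularizes the covariance so it stays invertible even when the
    number of descriptors is small, as in the few-shot setting.
    """
    mu = descriptors.mean(axis=0)
    cov = np.cov(descriptors, rowvar=False) + eps * np.eye(descriptors.shape[1])
    return mu, cov

# Toy example: one query's local descriptors vs a support class's pooled
# descriptors (e.g. a 7x7 feature map of 8-dim local features per image).
rng = np.random.default_rng(0)
query_desc = rng.normal(0.0, 1.0, size=(49, 8))
class_desc = rng.normal(0.5, 1.2, size=(5 * 49, 8))  # 5 support shots pooled

mu_q, cov_q = fit_gaussian(query_desc)
mu_s, cov_s = fit_gaussian(class_desc)
print(gaussian_kl(mu_q, cov_q, mu_s, cov_s))
```

Swapping the arguments gives a different value, which is exactly the asymmetry that a symmetric instance-level metric (e.g. cosine or Euclidean distance) cannot express.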
Related papers
- Rethinking the Metric in Few-shot Learning: From an Adaptive
Multi-Distance Perspective [30.30691830639013]
We investigate the contributions of different distance metrics, and propose an adaptive fusion scheme, bringing significant improvements in few-shot classification.
Based on the Adaptive Metrics Module (AMM), we design a few-shot classification framework, AMTNet, comprising the AMM and a Global Adaptive Loss (GAL).
In the experiments, the proposed AMM achieves 2% higher performance than a naive metrics fusion module, and our AMTNet outperforms state-of-the-art methods on multiple benchmark datasets.
arXiv Detail & Related papers (2022-11-02T05:30:03Z) - Large-to-small Image Resolution Asymmetry in Deep Metric Learning [13.81293627340993]
We explore an asymmetric setup by light-weight processing of the query at a small image resolution to enable fast representation extraction.
The goal is to obtain a network for database examples that is trained to operate on large resolution images and benefits from fine-grained image details.
We conclude that resolution asymmetry is a better way to optimize the performance/efficiency trade-off than architecture asymmetry.
arXiv Detail & Related papers (2022-10-11T14:05:30Z) - CAD: Co-Adapting Discriminative Features for Improved Few-Shot
Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
These results underscore the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z) - Multi-level Metric Learning for Few-shot Image Recognition [5.861206243996454]
We argue that if query images can simultaneously be well classified via similarity metrics at three levels, the query images within a class can be more tightly distributed in a smaller feature space.
Motivated by this, we propose a novel Multi-level Metric Learning (MML) method for few-shot learning, which not only calculates the pixel-level similarity but also considers the similarity of part-level features and the similarity of distributions.
arXiv Detail & Related papers (2021-03-21T12:49:07Z) - BSNet: Bi-Similarity Network for Few-shot Fine-grained Image
Classification [35.50808687239441]
We propose the Bi-Similarity Network (BSNet).
The bi-similarity module learns feature maps according to two similarity measures of diverse characteristics.
In this way, the model is enabled to learn more discriminative and less similarity-biased features from few shots of fine-grained images.
arXiv Detail & Related papers (2020-11-29T08:38:17Z) - Memory-Augmented Relation Network for Few-Shot Learning [114.47866281436829]
In this work, we investigate a new metric-learning method, the Memory-Augmented Relation Network (MRN).
In MRN, we choose samples from the working context that are visually similar to the query, and perform weighted information propagation to attentively aggregate helpful information from the chosen samples and enhance the query's representation.
We empirically demonstrate that MRN yields significant improvement over its ancestor and achieves competitive or even better performance when compared with other few-shot learning approaches.
arXiv Detail & Related papers (2020-05-09T10:09:13Z) - Geometrically Mappable Image Features [85.81073893916414]
Vision-based localization of an agent in a map is an important problem in robotics and computer vision.
We propose a method that learns image features targeted for image-retrieval-based localization.
arXiv Detail & Related papers (2020-03-21T15:36:38Z) - Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.