Rethinking the Metric in Few-shot Learning: From an Adaptive
Multi-Distance Perspective
- URL: http://arxiv.org/abs/2211.00890v1
- Date: Wed, 2 Nov 2022 05:30:03 GMT
- Title: Rethinking the Metric in Few-shot Learning: From an Adaptive
Multi-Distance Perspective
- Authors: Jinxiang Lai, Siqian Yang, Guannan Jiang, Xi Wang, Yuxi Li, Zihui Jia,
Xiaochen Chen, Jun Liu, Bin-Bin Gao, Wei Zhang, Yuan Xie, Chengjie Wang
- Abstract summary: We investigate the contributions of different distance metrics, and propose an adaptive fusion scheme, bringing significant improvements in few-shot classification.
Based on the Adaptive Metrics Module (AMM), we design a few-shot classification framework, AMTNet, comprising the AMM and a Global Adaptive Loss (GAL).
In experiments, the proposed AMM achieves 2% higher performance than the naive metrics fusion module, and our AMTNet outperforms state-of-the-art methods on multiple benchmark datasets.
- Score: 30.30691830639013
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The few-shot learning problem focuses on recognizing unseen classes given a few
labeled images. Recent efforts pay more attention to fine-grained feature
embedding, ignoring the relationships among different distance metrics.
In this paper, for the first time, we investigate the contributions of
different distance metrics, and propose an adaptive fusion scheme, bringing
significant improvements in few-shot classification. We start from a naive
baseline of confidence summation and demonstrate the necessity of exploiting
the complementary property of different distance metrics. Having identified the
competition problem among them, we build upon the baseline and propose an
Adaptive Metrics Module (AMM) that decouples metrics fusion into
metric-prediction fusion and metric-loss fusion. The former encourages mutual
complementarity, while the latter alleviates metric competition via multi-task
collaborative learning.
Based on AMM, we design a few-shot classification framework AMTNet, including
the AMM and the Global Adaptive Loss (GAL), to jointly optimize the few-shot
task and auxiliary self-supervised task, making the embedding features more
robust. In experiments, the proposed AMM achieves 2% higher performance than
the naive metrics fusion module, and our AMTNet outperforms state-of-the-art
methods on multiple benchmark datasets.
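As a rough illustration of the metric-fusion idea in the abstract (not the authors' implementation — the weights, temperatures, and function names below are illustrative assumptions), fusing the per-metric softmax confidences of a cosine and a Euclidean metric might be sketched as:

```python
import numpy as np

def cosine_logits(queries, prototypes):
    """Cosine similarity between each query and each class prototype."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return q @ p.T  # shape (n_query, n_class)

def euclidean_logits(queries, prototypes):
    """Negative squared Euclidean distance as class logits."""
    diff = queries[:, None, :] - prototypes[None, :, :]
    return -np.sum(diff ** 2, axis=2)  # shape (n_query, n_class)

def fused_prediction(queries, prototypes, w=(0.5, 0.5), temp=(1.0, 0.05)):
    """Weighted fusion of per-metric confidences.
    w and temp are illustrative hyperparameters, not values from the paper;
    in AMTNet the fusion is learned adaptively."""
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    conf_cos = softmax(temp[0] * cosine_logits(queries, prototypes))
    conf_euc = softmax(temp[1] * euclidean_logits(queries, prototypes))
    return w[0] * conf_cos + w[1] * conf_euc
```

With fixed equal weights this reduces to the naive confidence-summation baseline the paper starts from; the AMM additionally learns the fusion adaptively and keeps a separate loss term per metric to ease the competition between them.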
Related papers
- DMM: Disparity-guided Multispectral Mamba for Oriented Object Detection in Remote Sensing [8.530409994516619]
Multispectral oriented object detection faces challenges due to both inter-modal and intra-modal discrepancies.
We propose Disparity-guided Multispectral Mamba (DMM), a framework comprised of a Disparity-guided Cross-modal Fusion Mamba (DCFM) module, a Multi-scale Target-aware Attention (MTA) module, and a Target-Prior Aware (TPA) auxiliary task.
arXiv Detail & Related papers (2024-07-11T02:09:59Z) - Beyond Sharing: Conflict-Aware Multivariate Time Series Anomaly
Detection [18.796225184893874]
We introduce CAD, a Conflict-aware Anomaly Detection algorithm.
We find that the poor performance of vanilla MMoE mainly stems from the input-output misalignment in the MTS formulation.
We show that CAD obtains an average F1-score of 0.943 across three public datasets, notably outperforming state-of-the-art methods.
arXiv Detail & Related papers (2023-08-17T11:00:01Z) - Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune a generalization of online mirror descent (OMD) with self-concordant barrier regularizers.
arXiv Detail & Related papers (2023-07-05T13:52:10Z) - Meta-Learning Adversarial Bandits [49.094361442409785]
We study online learning with bandit feedback across multiple tasks, with the goal of improving average performance across tasks if they are similar according to some natural task-similarity measure.
As the first to target the adversarial setting, we design a meta-algorithm with setting-specific guarantees for two important cases: multi-armed bandits (MAB) and bandit linear optimization (BLO).
Our guarantees rely on proving that unregularized follow-the-leader combined with multiplicative weights is enough to online learn a non-smooth and non-convex sequence of functions.
arXiv Detail & Related papers (2022-05-27T17:40:32Z) - Hybrid Relation Guided Set Matching for Few-shot Action Recognition [51.3308583226322]
We propose a novel Hybrid Relation guided Set Matching (HyRSM) approach that incorporates two key components.
The purpose of the hybrid relation module is to learn task-specific embeddings by fully exploiting associated relations within and across videos in an episode.
We evaluate HyRSM on six challenging benchmarks, and the experimental results show its superiority over the state-of-the-art methods by a convincing margin.
arXiv Detail & Related papers (2022-04-28T11:43:41Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - Multi-scale Adaptive Task Attention Network for Few-Shot Learning [5.861206243996454]
The goal of few-shot learning is to classify unseen categories with few labeled samples.
This paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning.
arXiv Detail & Related papers (2020-11-30T00:36:01Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
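The transductive prototype update summarized above can be sketched as follows. This is a hand-rolled confidence weighting standing in for the paper's meta-learned one, and the function name, temperature, and mixing coefficient are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(prototypes, queries, n_steps=3, temp=10.0):
    """Iteratively refine class prototypes with confidence-weighted
    unlabeled queries. temp controls how sharply confidence
    concentrates on the nearest class (illustrative value)."""
    for _ in range(n_steps):
        # Soft assignment of each query to each class by distance
        dists = np.linalg.norm(
            queries[:, None, :] - prototypes[None, :, :], axis=2)
        conf = softmax(-temp * dists, axis=1)  # (n_query, n_class)
        # Confidence-weighted mean of queries per class
        weighted = conf.T @ queries / (conf.sum(axis=0)[:, None] + 1e-8)
        # Mix the original prototype with the weighted query mean
        prototypes = 0.5 * prototypes + 0.5 * weighted
    return prototypes
```

The paper's contribution is to meta-learn the weight given to each query rather than derive it from a fixed distance-based softmax as done here.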
arXiv Detail & Related papers (2020-02-27T10:22:17Z) - Asymmetric Distribution Measure for Few-shot Learning [82.91276814477126]
Metric-based few-shot image classification aims to measure the relations between query images and support classes.
We propose a novel Asymmetric Distribution Measure (ADM) network for few-shot learning.
We achieve 3.02% and 1.56% gains over the state-of-the-art method on the 5-way 1-shot task.
arXiv Detail & Related papers (2020-02-01T06:41:52Z)