ReMP: Rectified Metric Propagation for Few-Shot Learning
- URL: http://arxiv.org/abs/2012.00904v1
- Date: Wed, 2 Dec 2020 00:07:53 GMT
- Title: ReMP: Rectified Metric Propagation for Few-Shot Learning
- Authors: Yang Zhao, Chunyuan Li, Ping Yu, Changyou Chen
- Abstract summary: A rectified metric space is learned to maintain the metric consistency from training to testing.
Numerous analyses indicate that a simple modification of the objective can yield substantial performance gains.
The proposed ReMP is effective and efficient, and outperforms the state of the art on various standard few-shot learning datasets.
- Score: 67.96021109377809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning features the capability of generalizing from a few
examples. In this paper, we first identify that a discriminative feature space,
namely a rectified metric space learned to maintain metric consistency from
training to testing, is essential to the success of metric-based few-shot
learning. Numerous analyses indicate that a simple modification of the
objective can yield substantial performance gains. The resulting approach,
called rectified metric propagation (ReMP), further optimizes an attentive
prototype propagation network and applies a repulsive force to encourage
confident predictions. Extensive experiments demonstrate that the proposed
ReMP is effective and efficient, and outperforms the state of the art on
various standard few-shot learning datasets.
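For readers unfamiliar with the metric-based setup the abstract assumes, the sketch below illustrates prototype-based episodic classification with a hinge-style repulsive term, in PyTorch. It is a minimal illustration, not the authors' ReMP code: the function names, the margin parameter, and the specific repulsion formulation are assumptions standing in for the attentive prototype propagation and repulsive force described above.

    # Minimal sketch of metric-based few-shot classification with class
    # prototypes (the family ReMP builds on). NOT the authors' implementation;
    # the hinge-style "repulsive" term below is a generic stand-in for the
    # repulsive force mentioned in the abstract.
    import torch
    import torch.nn.functional as F

    def class_prototypes(support_feats):
        # support_feats: (n_way, k_shot, dim) -> per-class mean embeddings
        return support_feats.mean(dim=1)

    def episode_loss(support_feats, query_feats, query_labels, margin=0.1):
        # support_feats: (n_way, k_shot, dim) embedded support examples
        # query_feats:   (n_query, dim)       embedded query examples
        # query_labels:  (n_query,)           int64 class ids in [0, n_way)
        protos = class_prototypes(support_feats)           # (n_way, dim)
        dists = torch.cdist(query_feats, protos) ** 2      # squared Euclidean
        ce = F.cross_entropy(-dists, query_labels)         # closer = higher logit

        # Illustrative repulsion (an assumption, not the paper's exact term):
        # every wrong-class prototype should be at least `margin` farther
        # from a query than its true-class prototype.
        target_d = dists.gather(1, query_labels[:, None])  # (n_query, 1)
        hinge = (target_d - dists + margin).clamp(min=0)
        hinge = hinge.masked_fill(
            F.one_hot(query_labels, dists.size(1)).bool(), 0.0)
        repulsion = hinge.sum() / (hinge > 0).sum().clamp(min=1)
        return ce + repulsion

In a 5-way 5-shot episode, support_feats would be the backbone embeddings of the 25 support images reshaped to (5, 5, dim), and the loss is averaged over the episode's query set.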
Related papers
- Long-Tailed Object Detection Pre-training: Dynamic Rebalancing Contrastive Learning with Dual Reconstruction [28.359463356384463]
We introduce a novel pre-training framework for object detection, called Dynamic Rebalancing Contrastive Learning with Dual Reconstruction (2DRCL).
Our method builds on a Holistic-Local Contrastive Learning mechanism, which aligns pre-training with object detection by capturing both global contextual semantics and detailed local patterns.
Experiments on COCO and LVIS v1.0 datasets demonstrate the effectiveness of our method, particularly in improving the mAP/AP scores for tail classes.
arXiv Detail & Related papers (2024-11-14T13:59:01Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Directly Attention Loss Adjusted Prioritized Experience Replay [0.07366405857677226]
Prioritized Experience Replay (PER) enables the model to learn more from relatively important samples by artificially changing their access frequencies.
DALAP is proposed, which can directly quantify the extent of the distribution shift through a Parallel Self-Attention network.
arXiv Detail & Related papers (2023-11-24T10:14:05Z) - Metric-oriented Speech Enhancement using Diffusion Probabilistic Model [23.84172431047342]
Deep neural network based speech enhancement techniques focus on learning a noisy-to-clean transformation supervised by paired training data.
Task-specific evaluation metrics (e.g., PESQ) are usually non-differentiable and cannot be directly incorporated into the training criterion.
We propose a metric-oriented speech enhancement method (MOSE) which integrates a metric-oriented training strategy into its reverse process.
arXiv Detail & Related papers (2023-02-23T13:12:35Z) - Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z) - Few-shot learning with improved local representations via bias rectify
module [13.230636224045137]
We propose a Deep Bias Rectify Network (DBRN) to fully exploit the spatial information that exists in the structure of the feature representations.
The bias rectify module is able to focus on the features that are more discriminative for classification by assigning them different weights.
To make full use of the training data, we design a prototype augmentation mechanism that makes the prototypes generated from the support set more representative.
arXiv Detail & Related papers (2021-11-01T08:08:00Z) - Rethinking Deep Contrastive Learning with Embedding Memory [58.66613563148031]
Pair-wise loss functions have been extensively studied and shown to continuously improve the performance of deep metric learning (DML).
We provide a new methodology for systematically studying weighting strategies of various pair-wise loss functions, and rethink pair weighting with an embedding memory.
arXiv Detail & Related papers (2021-03-25T17:39:34Z) - Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that the most transferable features have high uniformity in the embedding space.
We evaluate a uniformity regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for metric-based few-shot approaches is to update the prototype of each class with the mean of the most confident query examples (a minimal sketch of this idea follows this list).
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)