Few-shot Metric Learning: Online Adaptation of Embedding for Retrieval
- URL: http://arxiv.org/abs/2211.07116v1
- Date: Mon, 14 Nov 2022 05:10:17 GMT
- Title: Few-shot Metric Learning: Online Adaptation of Embedding for Retrieval
- Authors: Deunsol Jung, Dahyun Kang, Suha Kwak, and Minsu Cho
- Abstract summary: Metric learning aims to build a distance metric typically by learning an effective embedding function that maps similar objects into nearby points.
Despite recent advances in deep metric learning, it remains challenging for the learned metric to generalize to unseen classes with a substantial domain gap.
We propose a new problem of few-shot metric learning that aims to adapt the embedding function to the target domain with only a few annotated data.
- Score: 37.601607544184915
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Metric learning aims to build a distance metric typically by learning an
effective embedding function that maps similar objects into nearby points in
its embedding space. Despite recent advances in deep metric learning, it
remains challenging for the learned metric to generalize to unseen classes with
a substantial domain gap. To tackle the issue, we explore a new problem of
few-shot metric learning that aims to adapt the embedding function to the
target domain with only a few annotated data. We introduce three few-shot
metric learning baselines and propose the Channel-Rectifier Meta-Learning
(CRML), which effectively adapts the metric space online by adjusting channels
of intermediate layers. Experimental analyses on miniImageNet, CUB-200-2011,
MPII, as well as a new dataset, miniDeepFashion, demonstrate that our method
consistently improves the learned metric by adapting it to target classes and
achieves a greater gain in image retrieval when the domain gap from the source
classes is larger.
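The core idea in the abstract — adapting the metric space online by adjusting channels of intermediate layers — can be illustrated with a toy sketch. This is not the paper's actual CRML implementation: the function names, the numerical-gradient loop, and the simple squared-distance objective are all illustrative assumptions. The sketch freezes a backbone feature map and updates only per-channel scale/shift parameters on a pair of annotated samples.

```python
import numpy as np

def channel_rectify(feat, gamma, beta):
    """Channel-wise affine rectification of an intermediate feature map.

    feat:  (C, H, W) activation from a frozen backbone layer
    gamma: (C,) per-channel scale, adapted from the few annotated samples
    beta:  (C,) per-channel shift
    """
    return gamma[:, None, None] * feat + beta[:, None, None]

# Toy adaptation loop: nudge gamma so two same-class embeddings move closer.
rng = np.random.default_rng(0)
feat_a = rng.normal(size=(4, 2, 2))                # sample 1 of a class
feat_b = feat_a + 0.1 * rng.normal(size=(4, 2, 2))  # sample 2, same class
gamma, beta = np.ones(4), np.zeros(4)

def dist(g, b):
    ea = channel_rectify(feat_a, g, b).reshape(-1)
    eb = channel_rectify(feat_b, g, b).reshape(-1)
    return np.sum((ea - eb) ** 2)

lr, eps = 0.1, 1e-4
for _ in range(50):
    grad = np.zeros(4)
    for c in range(4):                 # numerical gradient w.r.t. gamma
        dg = np.zeros(4)
        dg[c] = eps
        grad[c] = (dist(gamma + dg, beta) - dist(gamma - dg, beta)) / (2 * eps)
    gamma -= lr * grad                 # only the rectifier is updated online

print(dist(gamma, beta) < dist(np.ones(4), np.zeros(4)))  # distance shrank
```

In practice a metric-learning loss with negative pairs would prevent the trivial collapse this toy objective allows; the point here is only that a handful of per-channel parameters, not the whole backbone, carry the online adaptation.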
Related papers
- SuSana Distancia is all you need: Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification [0.9236074230806579]
We propose two loss functions that weigh the importance of the embedding vectors by considering the intra-class and inter-class distances among the few available samples.
Our results show a significant improvement in accuracy on the miniImageNet benchmark compared to other metric-based few-shot learning methods, by a margin of 2%.
arXiv Detail & Related papers (2023-05-15T23:12:09Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Finding Significant Features for Few-Shot Learning using Dimensionality Reduction [0.0]
This module improves accuracy by allowing the similarity function provided by the metric learning method to use more discriminative features for classification.
Our method outperforms the metric learning baselines on the miniImageNet dataset by around 2% in accuracy.
arXiv Detail & Related papers (2021-07-06T16:36:57Z)
- Self-Supervised Metric Learning in Multi-View Data: A Downstream Task Perspective [2.01243755755303]
We study how self-supervised metric learning can benefit downstream tasks in the context of multi-view data.
We show that the target distance of metric learning satisfies several desired properties for the downstream tasks.
Our analysis characterizes the improvement by self-supervised metric learning on four commonly used downstream tasks.
arXiv Detail & Related papers (2021-06-14T02:34:33Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M$^3$L can effectively enhance the generalization ability of the model on unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
- Variational Metric Scaling for Metric-Based Meta-Learning [37.392840869320686]
We recast metric-based meta-learning from a prototypical perspective and develop a variational metric scaling framework.
Our method is end-to-end without any pre-training and can be used as a simple plug-and-play module for existing metric-based meta-algorithms.
arXiv Detail & Related papers (2019-12-26T09:00:36Z)
- A Multilayer Framework for Online Metric Learning [71.31889711244739]
This paper proposes a multilayer framework for online metric learning to capture the nonlinear similarities among instances.
A new Mahalanobis-based Online Metric Learning (MOML) algorithm is presented based on the passive-aggressive strategy and one-pass triplet construction strategy.
The proposed MLOML framework enjoys several desirable properties, learns a metric progressively, and performs better on the benchmark datasets.
arXiv Detail & Related papers (2018-05-15T01:10:18Z)
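The passive-aggressive triplet update mentioned in the MOML summary above can be sketched for a single Mahalanobis metric. This is an illustrative sketch, not the paper's algorithm: the full multilayer framework stacks such updates across layers, and a practical implementation would also project the metric back onto the positive semidefinite cone, which this toy version omits.

```python
import numpy as np

def pa_metric_update(M, x, xp, xn, margin=1.0):
    """One passive-aggressive update of a Mahalanobis metric M on a triplet.

    x, xp, xn: anchor, positive (same class), negative (different class).
    If the margin constraint is violated, M takes the smallest step
    (in Frobenius norm) that zeroes the hinge loss; otherwise M is unchanged.
    """
    dp, dn = x - xp, x - xn
    loss = margin + dp @ M @ dp - dn @ M @ dn
    if loss <= 0:
        return M                      # "passive": constraint already satisfied
    U = np.outer(dp, dp) - np.outer(dn, dn)
    tau = loss / np.sum(U * U)        # "aggressive": exact closed-form step
    return M - tau * U

rng = np.random.default_rng(1)
x, xp = rng.normal(size=3), rng.normal(size=3)
xn = x + 0.01 * rng.normal(size=3)    # negative starts closer than the positive
M = np.eye(3)
M = pa_metric_update(M, x, xp, xn)

dp, dn = x - xp, x - xn
print(1.0 + dp @ M @ dp - dn @ M @ dn)  # hinge term driven to ~0
```

Because the loss is linear in M, the step size tau makes the post-update hinge loss exactly zero for the current triplet, which is the defining property of passive-aggressive online learners.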
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.