Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning
- URL: http://arxiv.org/abs/2103.15454v1
- Date: Mon, 29 Mar 2021 09:39:07 GMT
- Title: Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning
- Authors: Geonmo Gu, Byungsoo Ko, Han-Gyu Kim
- Abstract summary: We propose a simple regularizer called Proxy Synthesis that exploits synthetic classes for stronger generalization in deep metric learning.
The proposed method generates synthetic embeddings and proxies that work as synthetic classes, and they mimic unseen classes when computing proxy-based losses.
Our method is applicable to any proxy-based loss, including softmax and its variants.
- Score: 13.252164137961332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main purposes of deep metric learning is to construct an embedding
space that has well-generalized embeddings on both seen (training) classes and
unseen (test) classes. Most existing works have tried to achieve this using
different types of metric objectives and hard sample mining strategies with
given training data. However, learning with only the training data can overfit
to the seen classes, leading to a lack of generalization capability on unseen
classes. To address this problem, we propose a simple
regularizer called Proxy Synthesis that exploits synthetic classes for stronger
generalization in deep metric learning. The proposed method generates synthetic
embeddings and proxies that work as synthetic classes, and they mimic unseen
classes when computing proxy-based losses. Proxy Synthesis derives an embedding
space that reflects class relations and has smooth decision boundaries, yielding
robustness on unseen classes. Our method is applicable to any proxy-based loss,
including softmax and its variants. Extensive experiments on four popular
benchmarks in image retrieval tasks demonstrate that Proxy Synthesis
significantly boosts the performance of proxy-based losses and achieves
state-of-the-art performance.
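Concretely, the synthetic classes can be realized by mixup-style interpolation between embeddings, and between proxies, of paired samples; each interpolated pair is then treated as a brand-new class in the proxy-based loss. Below is a minimal PyTorch sketch of this idea built on a Norm-SoftMax loss; the function name, the Beta-distributed coefficient, and the hyperparameter values are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def proxy_synthesis_loss(embeddings, labels, proxies, scale=23.0, beta=0.4):
    """Sketch of Proxy Synthesis on top of a Norm-SoftMax (proxy) loss.

    Synthetic embeddings and proxies are produced by interpolating pairs of
    real embeddings/proxies; each synthetic pair acts as a new class.
    `scale` and `beta` are illustrative hyperparameters.
    """
    # Mixup-style interpolation coefficient.
    lam = torch.distributions.Beta(beta, beta).sample().item()

    # Pair each sample with a shuffled counterpart (ideally from another
    # class; a masking step could enforce that).
    perm = torch.randperm(embeddings.size(0))
    syn_emb = lam * embeddings + (1.0 - lam) * embeddings[perm]
    syn_proxy = lam * proxies[labels] + (1.0 - lam) * proxies[labels[perm]]

    # Synthetic classes receive fresh labels beyond the real ones.
    num_classes = proxies.size(0)
    syn_labels = torch.arange(num_classes, num_classes + syn_emb.size(0))

    all_emb = F.normalize(torch.cat([embeddings, syn_emb]), dim=1)
    all_proxies = F.normalize(torch.cat([proxies, syn_proxy]), dim=1)
    all_labels = torch.cat([labels, syn_labels])

    logits = scale * all_emb @ all_proxies.t()  # cosine similarities
    return F.cross_entropy(logits, all_labels)
```

Because real and synthetic classes share a single softmax, the embedding space is shaped by relations to classes that were never in the training label set, which is the regularization effect the abstract describes.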
Related papers
- Robust Calibrate Proxy Loss for Deep Metric Learning [6.784952050036532]
We propose a Calibrate Proxy structure, which uses real sample information to improve the similarity calculation in proxy-based losses.
We show that our approach can effectively improve the performance of commonly used proxy-based losses on both regular and noisy datasets.
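The summary leaves the mechanism abstract; one plausible reading, sketched below as an assumption rather than the paper's actual structure, is to pull each proxy toward the batch mean of its real embeddings before similarities are computed (`beta` is a hypothetical blending weight):

```python
import torch
import torch.nn.functional as F

def calibrated_similarity(embeddings, labels, proxies, beta=0.5):
    """Hypothetical calibration: blend each proxy with the batch mean of
    its real embeddings before computing similarities. `beta` and the
    blending scheme are assumptions, not the paper's structure."""
    proxies = proxies.clone()
    for c in labels.unique():
        proxies[c] = beta * proxies[c] + (1 - beta) * embeddings[labels == c].mean(dim=0)
    return F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
```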
arXiv Detail & Related papers (2023-04-06T02:43:10Z)
- Prototype-Anchored Learning for Learning with Imperfect Annotations [83.7763875464011]
It is challenging to learn unbiased classification models from imperfectly annotated datasets.
We propose a prototype-anchored learning (PAL) method, which can be easily incorporated into various learning-based classification schemes.
We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-06-23T10:25:37Z)
- Non-isotropy Regularization for Proxy-based Deep Metric Learning [78.18860829585182]
We propose non-isotropy regularization ($\mathbb{NIR}$) for proxy-based Deep Metric Learning.
This allows us to explicitly induce a non-isotropic distribution of samples around a proxy to optimize for.
Experiments highlight consistent generalization benefits of $\mathbb{NIR}$ while achieving competitive and state-of-the-art performance.
arXiv Detail & Related papers (2022-03-16T11:13:20Z)
- Learning to Generate Novel Classes for Deep Metric Learning [24.048915378172012]
We introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors.
We implement this idea by learning and exploiting a conditional generative model which, given a class label and a noise vector, produces a random embedding vector of that class.
Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples.
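A minimal sketch of such a conditional generator, with made-up dimensions and layer choices (the paper's architecture and training objective are not reproduced here):

```python
import torch
import torch.nn as nn

class EmbeddingGenerator(nn.Module):
    """Hypothetical conditional generator mapping (class label, noise) to a
    synthetic embedding vector; layer sizes are made up for illustration."""
    def __init__(self, num_classes, noise_dim=32, embed_dim=128):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, labels, noise):
        return self.net(torch.cat([self.label_embed(labels), noise], dim=1))

# Usage: draw synthetic embeddings for sampled class labels.
gen = EmbeddingGenerator(num_classes=100)
fake = gen(torch.randint(0, 100, (16,)), torch.randn(16, 32))  # (16, 128)
```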
arXiv Detail & Related papers (2022-01-04T06:55:19Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure, and establish a separation from methods that keep the representation frozen.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Hierarchical Proxy-based Loss for Deep Metric Learning [32.10423536428467]
Proxy-based metric learning losses are superior to pair-based losses due to their fast convergence and low training complexity.
We present a framework that leverages the implicit hierarchy among classes by imposing a hierarchical structure on the proxies.
Results demonstrate that our hierarchical proxy-based loss framework improves the performance of existing proxy-based losses.
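One way to picture such a hierarchy, sketched below purely as an illustration (the paper's construction differs): fine-grained proxies are trained as usual, coarse proxies are derived from their children, and a proxy-softmax loss is applied at both levels.

```python
import torch
import torch.nn.functional as F

def hierarchical_proxy_loss(embeddings, fine_labels, fine_proxies,
                            fine_to_coarse, scale=16.0):
    """Two-level illustration: softmax proxy loss at the fine level plus a
    coarse-level loss whose proxies are the mean direction of their child
    (fine) proxies. Purely a sketch; the paper's construction differs."""
    emb = F.normalize(embeddings, dim=1)
    fine = F.normalize(fine_proxies, dim=1)
    loss = F.cross_entropy(scale * emb @ fine.t(), fine_labels)

    # Build coarse proxies by pooling child proxies per coarse class.
    num_coarse = int(fine_to_coarse.max()) + 1
    coarse = torch.zeros(num_coarse, fine.size(1)).index_add_(0, fine_to_coarse, fine)
    coarse = F.normalize(coarse, dim=1)
    coarse_labels = fine_to_coarse[fine_labels]
    return loss + F.cross_entropy(scale * emb @ coarse.t(), coarse_labels)
```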
arXiv Detail & Related papers (2021-03-25T00:38:33Z)
- Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies [65.92826041406802]
We propose a Proxy-based deep Graph Metric Learning approach from the perspective of graph classification.
Multiple global proxies are leveraged to collectively approximate the original data points for each class.
We design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels.
arXiv Detail & Related papers (2020-10-26T14:52:42Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
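The baseline the summary describes is straightforward to write down; the sketch below is a generic BiLSTM classifier with cross-entropy training, with illustrative dimensions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal BiLSTM text classifier trained with cross-entropy; the
    dimensions are illustrative, not the paper's configuration."""
    def __init__(self, vocab_size, num_classes, embed_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (B, T, 2 * hidden)
        return self.fc(h.max(dim=1).values)      # max-pool over time

model = BiLSTMClassifier(vocab_size=30000, num_classes=5)
logits = model(torch.randint(1, 30000, (8, 40)))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 5, (8,)))
```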
arXiv Detail & Related papers (2020-09-08T21:55:22Z)
- Adaptive additive classification-based loss for deep metric learning [0.0]
We propose an extension to the existing adaptive margin for classification-based deep metric learning.
Our results were achieved with faster convergence and lower code complexity than the prior state-of-the-art.
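The summary does not spell out the loss; the sketch below shows the underlying classification-based template with an additive margin on the target-class cosine, using a fixed margin where the paper's is adaptive:

```python
import torch
import torch.nn.functional as F

def additive_margin_loss(embeddings, labels, class_weights,
                         margin=0.35, scale=30.0):
    """Classification-based metric loss with an additive margin subtracted
    from the target-class cosine. The paper's margin is adaptive; a fixed
    value is used here to keep the sketch short."""
    cos = F.normalize(embeddings, dim=1) @ F.normalize(class_weights, dim=1).t()
    target = F.one_hot(labels, num_classes=class_weights.size(0)).to(cos.dtype)
    return F.cross_entropy(scale * (cos - margin * target), labels)
```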
arXiv Detail & Related papers (2020-06-25T20:45:22Z)
- Proxy Anchor Loss for Deep Metric Learning [47.832107446521626]
We present a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations.
Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers.
Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges fastest.
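A PyTorch sketch of the Proxy Anchor loss as described in the paper: each proxy acts as an anchor, positives are pulled through a softplus over the summed exponentials of their similarities, and negatives are pushed through the mirrored term. The margin δ = 0.1 and scale α = 32 below are commonly reported defaults, assumed here rather than taken from the summary.

```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, delta=0.1, alpha=32.0):
    """Proxy Anchor loss: proxies with positives in the batch pull their
    positives via log(1 + sum exp(-alpha * (s - delta))); every proxy
    pushes its negatives via log(1 + sum exp(alpha * (s + delta)))."""
    cos = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
    pos_mask = F.one_hot(labels, num_classes=proxies.size(0)).bool()  # (B, C)
    with_pos = pos_mask.any(dim=0)  # proxies that have a positive in batch

    pos_exp = torch.exp(-alpha * (cos - delta)).masked_fill(~pos_mask, 0.0)
    neg_exp = torch.exp(alpha * (cos + delta)).masked_fill(pos_mask, 0.0)

    pos_term = torch.log1p(pos_exp.sum(dim=0))[with_pos].sum() / with_pos.sum()
    neg_term = torch.log1p(neg_exp.sum(dim=0)).sum() / proxies.size(0)
    return pos_term + neg_term
```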
arXiv Detail & Related papers (2020-03-31T02:05:27Z)
- Symmetrical Synthesis for Deep Metric Learning [17.19890778916312]
We propose a novel method of synthetic hard sample generation called symmetrical synthesis.
Given two original feature points from the same class, the proposed method generates synthetic points with each other as an axis of symmetry.
It performs hard negative pair mining within the original and synthetic points to select a more informative negative pair for computing the metric learning loss.
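For L2-normalized embeddings, the generation step can be read as a reflection: each point is mirrored across its same-class partner, which serves as the axis of symmetry. The sketch below shows this reading as an assumption; the paper's exact construction and the subsequent hard negative mining are not reproduced.

```python
import torch
import torch.nn.functional as F

def symmetrical_synthesis(x1, x2):
    """Reflect each embedding across its same-class partner, treating the
    partner as the axis of symmetry (assumes L2-normalized embeddings)."""
    x1, x2 = F.normalize(x1, dim=1), F.normalize(x2, dim=1)
    dot = (x1 * x2).sum(dim=1, keepdim=True)
    x1_syn = 2.0 * dot * x2 - x1  # x1 mirrored about the x2 axis
    x2_syn = 2.0 * dot * x1 - x2  # x2 mirrored about the x1 axis
    return x1_syn, x2_syn
```

Hard negative pair mining would then run over the union of original and synthetic points, as the summary notes.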
arXiv Detail & Related papers (2020-01-31T04:56:47Z)