Embedding Expansion: Augmentation in Embedding Space for Deep Metric
Learning
- URL: http://arxiv.org/abs/2003.02546v3
- Date: Thu, 23 Apr 2020 06:13:11 GMT
- Title: Embedding Expansion: Augmentation in Embedding Space for Deep Metric
Learning
- Authors: Byungsoo Ko, Geonmo Gu
- Abstract summary: We propose an augmentation method in an embedding space for pair-based metric learning losses, called embedding expansion.
Because of its simplicity and flexibility, it can be used for existing metric learning losses without affecting model size, training speed, or optimization difficulty.
- Score: 17.19890778916312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the distance metric between pairs of samples has been studied for
image retrieval and clustering. With the remarkable success of pair-based
metric learning losses, recent works have proposed the use of generated
synthetic points on metric learning losses for augmentation and generalization.
However, these methods require additional generative networks along with the
main network, which can lead to a larger model size, slower training speed, and
harder optimization. Meanwhile, post-processing techniques, such as query
expansion and database augmentation, have proposed the combination of feature
points to obtain additional semantic information. In this paper, inspired by
query expansion and database augmentation, we propose an augmentation method in
an embedding space for pair-based metric learning losses, called embedding
expansion. The proposed method generates synthetic points containing augmented
information by a combination of feature points and performs hard negative pair
mining to learn with the most informative feature representations. Because of
its simplicity and flexibility, it can be used for existing metric learning
losses without affecting model size, training speed, or optimization
difficulty. Finally, the combination of embedding expansion and representative
metric learning losses outperforms the state-of-the-art losses and previous
sample generation methods in both image retrieval and clustering tasks. The
implementation is publicly available.
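As a rough illustration of the method described in the abstract, the generation and mining steps can be sketched as follows. The internal-division scheme and the `n_points` parameter are assumptions based on the abstract's wording ("a combination of feature points"), not the authors' exact implementation.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project an embedding onto the unit hypersphere."""
    return x / (np.linalg.norm(x) + eps)

def embedding_expansion(x1, x2, n_points=2):
    """Generate synthetic embeddings by internally dividing the segment
    between two same-class feature points, then re-normalizing."""
    synthetic = []
    for k in range(1, n_points + 1):
        lam = k / (n_points + 1)  # internal-division ratio
        synthetic.append(l2_normalize((1 - lam) * x1 + lam * x2))
    return np.stack(synthetic)

def hardest_negative(anchor, negatives):
    """Hard negative pair mining: pick the negative closest to the anchor,
    i.e. the most informative pair for the metric learning loss."""
    dists = np.linalg.norm(negatives - anchor, axis=1)
    return negatives[np.argmin(dists)]
```

Because the synthetic points are simple combinations of existing embeddings, no extra generative network is needed, which is the source of the "no effect on model size or training speed" claim.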
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Implicit Counterfactual Data Augmentation for Robust Learning [24.795542869249154]
This study proposes an Implicit Counterfactual Data Augmentation method to remove spurious correlations and make stable predictions.
Experiments have been conducted across various biased learning scenarios covering both image and text datasets.
arXiv Detail & Related papers (2023-04-26T10:36:40Z)
- Dataset Distillation via Factorization [58.8114016318593]
We introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing dataset distillation (DD) baseline.
HaBa explores decomposing a dataset into two components: data Hallucination networks and Bases.
Our method can yield significant improvement on downstream classification tasks compared with previous state of the arts, while reducing the total number of compressed parameters by up to 65%.
arXiv Detail & Related papers (2022-10-30T08:36:19Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Kronecker Decomposition for Knowledge Graph Embeddings [5.49810117202384]
We propose a technique based on Kronecker decomposition to reduce the number of parameters in a knowledge graph embedding model.
The decomposition ensures that elementwise interactions between three embedding vectors are extended with interactions within each embedding vector.
Our experiments suggest that applying Kronecker decomposition on embedding matrices leads to an improved parameter efficiency on all benchmark datasets.
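The parameter saving behind this idea can be shown with a minimal numpy sketch: an embedding matrix is stored as two small factors whose Kronecker product reconstructs the full matrix. The factor shapes here are illustrative, not taken from the paper.

```python
import numpy as np

# Two small stored factors (the only parameters kept in memory)
A = np.arange(32, dtype=float).reshape(4, 8)
B = np.arange(128, dtype=float).reshape(8, 16)

# Full embedding matrix reconstructed on the fly
E = np.kron(A, B)  # shape (4*8, 8*16) = (32, 128)

stored = A.size + B.size  # 160 parameters actually stored
dense = E.size            # 4096 parameters for the dense matrix
```

Each entry of `E` is a product of one entry of `A` and one of `B`, which is also where the extra "interactions within each embedding vector" come from.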
arXiv Detail & Related papers (2022-05-13T11:11:03Z)
- Hyperbolic Vision Transformers: Combining Improvements in Metric Learning [116.13290702262248]
We propose a new hyperbolic-based model for metric learning.
At the core of our method is a vision transformer with output embeddings mapped to hyperbolic space.
We evaluate the proposed model with six different formulations on four datasets.
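Mapping Euclidean output embeddings into hyperbolic space is typically done with the exponential map at the origin of the Poincare ball; a minimal sketch under that standard convention (the curvature parameter `c` and this particular map are assumptions, not details confirmed by the summary):

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-12):
    """Exponential map at the origin of the Poincare ball of curvature -c:
    carries a Euclidean tangent vector into the open ball of radius 1/sqrt(c)."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v) + eps
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
```

Distances between the mapped points are then measured with the hyperbolic (geodesic) distance rather than the Euclidean one.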
arXiv Detail & Related papers (2022-03-21T09:48:23Z)
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Adaptive additive classification-based loss for deep metric learning [0.0]
We propose an extension to the existing adaptive margin for classification-based deep metric learning.
Our results were achieved with faster convergence and lower code complexity than the prior state-of-the-art.
arXiv Detail & Related papers (2020-06-25T20:45:22Z)
- Symmetrical Synthesis for Deep Metric Learning [17.19890778916312]
We propose a novel method of synthetic hard sample generation called symmetrical synthesis.
Given two original feature points from the same class, the proposed method generates synthetic points with each other as an axis of symmetry.
It performs hard negative pair mining within the original and synthetic points to select a more informative negative pair for computing the metric learning loss.
arXiv Detail & Related papers (2020-01-31T04:56:47Z)
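One possible reading of this generation rule is reflection of each embedding across the axis spanned by the other; the specific formula below is an assumption for illustration, not verified against the paper's implementation.

```python
import numpy as np

def reflect_about(x, axis, eps=1e-12):
    """Reflect embedding x across the line through the origin spanned by `axis`.
    NOTE: assumed interpretation of 'axis of symmetry', not the paper's code."""
    u = axis / (np.linalg.norm(axis) + eps)
    return 2.0 * np.dot(x, u) * u - x

def symmetrical_synthesis(x1, x2):
    """Generate two synthetic points, each symmetric to one original
    with the other acting as the axis of symmetry."""
    return reflect_about(x1, x2), reflect_about(x2, x1)
```

Reflection preserves the norm of the original embedding and is its own inverse, so applying it twice recovers the original point.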
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.