Adaptive additive classification-based loss for deep metric learning
- URL: http://arxiv.org/abs/2006.14693v1
- Date: Thu, 25 Jun 2020 20:45:22 GMT
- Title: Adaptive additive classification-based loss for deep metric learning
- Authors: Istvan Fehervari and Ives Macedo
- Abstract summary: We propose an extension to the existing adaptive margin for classification-based deep metric learning.
Our results were achieved with faster convergence and lower code complexity than the prior state-of-the-art.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that deep metric learning algorithms can benefit from
weak supervision from another input modality. This additional modality can be
incorporated directly into the popular triplet-based loss function as
distances. Also recently, classification loss and proxy-based metric learning
have been observed to lead to faster convergence as well as better retrieval
results, all the while without requiring complex and costly sampling
strategies. In this paper we propose an extension to the existing adaptive
margin for classification-based deep metric learning. Our extension introduces
a separate margin for each negative proxy per sample. These margins are
computed during training from precomputed distances of the classes in the other
modality. Our results set a new state-of-the-art on both the Amazon fashion
retrieval dataset and the public DeepFashion dataset. This was
observed with both fastText- and BERT-based embeddings for the additional
textual modality. Our results were achieved with faster convergence and lower
code complexity than the prior state-of-the-art.
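A minimal sketch (not the authors' released code) of how a per-negative-proxy additive margin could plug into a normalized-softmax proxy loss, as the abstract describes; the class name, hyperparameters, and the linear mapping from text-modality class distances to margins are assumptions.
```python
import torch
import torch.nn.functional as F

class AdaptiveAdditiveProxyLoss(torch.nn.Module):
    def __init__(self, num_classes, embed_dim, class_dist, scale=16.0, margin_scale=0.5):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, embed_dim))
        # class_dist[y, c]: precomputed distance between classes y and c in the
        # text-embedding space (e.g. fastText or BERT class representations).
        self.register_buffer("class_dist", class_dist)
        self.scale = scale
        self.margin_scale = margin_scale

    def forward(self, embeddings, labels):
        # Cosine similarities between image embeddings and class proxies.
        cos = F.normalize(embeddings, dim=1) @ F.normalize(self.proxies, dim=1).t()
        # One additive margin per (sample, negative proxy), looked up from the
        # text-modality distances of the sample's ground-truth class (assumed mapping).
        margins = self.margin_scale * self.class_dist[labels]      # (B, C)
        margins = margins.scatter(1, labels.unsqueeze(1), 0.0)     # no margin on the positive proxy
        logits = self.scale * (cos + margins)
        return F.cross_entropy(logits, labels)
```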
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
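As an illustration of the coding-rate idea behind an anti-collapse term (in the spirit of MCR^2-style objectives), a minimal sketch; it is not this paper's exact loss, and the epsilon and weighting are assumptions.
```python
import torch
import torch.nn.functional as F

def coding_rate(z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z) for row-wise embeddings z of shape (n, d)."""
    n, d = z.shape
    cov = z.t() @ z
    ident = torch.eye(d, device=z.device, dtype=z.dtype)
    return 0.5 * torch.logdet(ident + (d / (n * eps ** 2)) * cov)

def anti_collapse_term(embeddings):
    # Maximizing the coding rate spreads embeddings over the unit sphere,
    # so the negated rate can be added to a base metric-learning loss, e.g.
    # total_loss = base_dml_loss + lam * anti_collapse_term(emb)  (lam is a tunable weight).
    z = F.normalize(embeddings, dim=1)
    return -coding_rate(z)
```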
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previous tasks when learning new tasks.
However, storing such data is often impractical because of memory constraints or data privacy issues.
As a replacement, data-free replay methods have been proposed that invert samples from the classification model.
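A heavily simplified sketch of what "inverting samples from the classification model" can look like (DeepInversion-style input optimization); the optimizer settings and the total-variation prior are assumptions, not this paper's procedure.
```python
import torch
import torch.nn.functional as F

def invert_samples(model, target_labels, image_shape=(3, 224, 224), steps=200, lr=0.1, tv_weight=1e-4):
    """Synthesize replay images for the given class labels from a frozen classifier."""
    model.eval()
    x = torch.randn(len(target_labels), *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Push the synthesized inputs toward the old classes we want to replay.
        loss = F.cross_entropy(model(x), target_labels)
        # Small total-variation prior keeps the synthesized images smooth.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()
        opt.step()
    return x.detach()
```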
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly improves retrieval performance, outperforming state-of-the-art methods by 3%-6%.
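A rough sketch of the underlying idea: estimate each class's spread in embedding space and synthesize extra embeddings with noise scaled to that spread. The neighbor-correction step of the actual IAA method is omitted, and the per-sample count is an assumption.
```python
import torch

def estimate_intra_class_std(embeddings, labels, num_classes):
    # Per-class, per-dimension standard deviation of the embeddings.
    stds = torch.zeros(num_classes, embeddings.size(1), device=embeddings.device)
    for c in range(num_classes):
        cls = embeddings[labels == c]
        if cls.size(0) > 1:
            stds[c] = cls.std(dim=0)
    return stds

def synthesize(embeddings, labels, stds, per_sample=2):
    # Noise for each sample is scaled by its class's estimated variation.
    noise_scale = stds[labels].repeat_interleave(per_sample, dim=0)
    base = embeddings.repeat_interleave(per_sample, dim=0)
    synthetic = base + torch.randn_like(base) * noise_scale
    return synthetic, labels.repeat_interleave(per_sample)
```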
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Supervised Metric Learning to Rank for Retrieval via Contextual
Similarity Optimization [16.14184145802016]
Many metric learning loss functions focus on learning a correct ranking of training samples, but strongly overfit semantically inconsistent labels.
We propose a new metric learning method, called contextual loss, which optimizes contextual similarity in addition to cosine similarity.
We empirically show that the proposed loss is more robust to label noise, and is less prone to overfitting even when a large portion of the training data is withheld.
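A non-differentiable illustration of one common definition of contextual similarity (Jaccard overlap of k-nearest-neighbor sets within a batch); the paper above optimizes a differentiable variant alongside cosine similarity, so treat this only as a reference point.
```python
import torch
import torch.nn.functional as F

def contextual_similarity(embeddings, k=4):
    """Return an (n, n) matrix of Jaccard overlaps between k-NN sets."""
    z = F.normalize(embeddings, dim=1)
    cos = z @ z.t()                                    # pairwise cosine similarities
    knn = cos.topk(k + 1, dim=1).indices               # each sample's neighbors (incl. itself)
    n = z.size(0)
    member = torch.zeros(n, n, dtype=torch.bool, device=z.device)
    member.scatter_(1, knn, True)                      # neighbor-set membership matrix
    inter = (member.unsqueeze(1) & member.unsqueeze(0)).sum(-1).float()
    union = (member.unsqueeze(1) | member.unsqueeze(0)).sum(-1).float()
    return inter / union
```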
arXiv Detail & Related papers (2022-10-04T21:08:27Z) - On the Exploration of Incremental Learning for Fine-grained Image
Retrieval [45.48333682748607]
We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates the catastrophic forgetting on the original classes while achieving high performance on the new classes.
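As one standard way to counter forgetting in embedding models (named plainly as embedding distillation, not necessarily this paper's method), a short sketch: keep the new model's embeddings aligned with the frozen old model on old-task data.
```python
import torch
import torch.nn.functional as F

def retrieval_distillation_loss(new_model, old_model, images):
    # Embeddings from the frozen model trained on the original classes.
    with torch.no_grad():
        old_emb = F.normalize(old_model(images), dim=1)
    new_emb = F.normalize(new_model(images), dim=1)
    # Penalize drift of the new embedding space on old-task data.
    return (1.0 - (new_emb * old_emb).sum(dim=1)).mean()
```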
arXiv Detail & Related papers (2020-10-15T21:07:44Z) - Revisiting LSTM Networks for Semi-Supervised Text Classification via
Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
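A minimal sketch of the kind of simple BiLSTM classifier referred to above, trained with plain cross-entropy; the hyperparameters are placeholders and the semi-supervised mixed objective itself is not shown.
```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes, embed_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))   # (B, T, 2*hidden)
        return self.fc(h.mean(dim=1))             # mean-pool over time, then classify

# Training uses nn.CrossEntropyLoss() on the returned logits.
```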
arXiv Detail & Related papers (2020-09-08T21:55:22Z) - Sharing Matters for Generalization in Deep Metric Learning [22.243744691711452]
This work investigates how to learn characteristics that separate classes without the need for annotations or training data.
By formulating our approach as a novel triplet sampling strategy, it can be easily applied on top of recent ranking loss frameworks.
arXiv Detail & Related papers (2020-04-12T10:21:15Z) - Proxy Anchor Loss for Deep Metric Learning [47.832107446521626]
We present a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations.
Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers.
Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges most quickly.
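A sketch of the published Proxy-Anchor loss; alpha and delta follow commonly used defaults, but consult the paper or the official code before relying on the details.
```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, alpha=32.0, delta=0.1):
    cos = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()   # (B, C)
    pos_mask = F.one_hot(labels, num_classes=proxies.size(0)).bool()
    neg_mask = ~pos_mask

    # Positive term pulls samples toward their own proxy; negative term pushes
    # all other samples away from each proxy.
    pos_exp = torch.where(pos_mask, torch.exp(-alpha * (cos - delta)), torch.zeros_like(cos))
    neg_exp = torch.where(neg_mask, torch.exp(alpha * (cos + delta)), torch.zeros_like(cos))

    with_pos = pos_mask.any(dim=0)                     # proxies that have positives in the batch
    pos_term = torch.log1p(pos_exp.sum(dim=0))[with_pos].mean()
    neg_term = torch.log1p(neg_exp.sum(dim=0)).mean()
    return pos_term + neg_term
```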
arXiv Detail & Related papers (2020-03-31T02:05:27Z) - Embedding Expansion: Augmentation in Embedding Space for Deep Metric
Learning [17.19890778916312]
We propose an augmentation method in an embedding space for pair-based metric learning losses, called embedding expansion.
Because of its simplicity and flexibility, it can be used for existing metric learning losses without affecting model size, training speed, or optimization difficulty.
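A sketch of the embedding-expansion idea: interpolate between two same-class embeddings on the unit hypersphere and feed the synthetic points to any pair-based loss; the number of interpolation points is an assumption.
```python
import torch
import torch.nn.functional as F

def embedding_expansion(emb_a, emb_b, n_points=2):
    """emb_a, emb_b: (B, D) embeddings of same-class pairs; returns (B, n_points, D) synthetics."""
    ratios = torch.linspace(0, 1, n_points + 2, device=emb_a.device)[1:-1]           # interior points only
    synthetic = torch.stack([(1 - r) * emb_a + r * emb_b for r in ratios], dim=1)
    # Re-project onto the unit hypersphere, as pair-based losses usually expect.
    return F.normalize(synthetic, dim=-1)
```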
arXiv Detail & Related papers (2020-03-05T11:43:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.