Revisiting Training Strategies and Generalization Performance in Deep
Metric Learning
- URL: http://arxiv.org/abs/2002.08473v9
- Date: Sat, 1 Aug 2020 16:14:33 GMT
- Title: Revisiting Training Strategies and Generalization Performance in Deep
Metric Learning
- Authors: Karsten Roth, Timo Milbich, Samarth Sinha, Prateek Gupta, Björn
Ommer, Joseph Paul Cohen
- Abstract summary: We revisit the most widely used DML objective functions and conduct a study of the crucial parameter choices.
Under consistent comparison, DML objectives show much higher saturation than indicated by the literature.
Exploiting these insights, we propose a simple, yet effective, training regularization to reliably boost the performance of ranking-based DML models.
- Score: 28.54755295856929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Metric Learning (DML) is arguably one of the most influential lines of
research for learning visual similarities with many proposed approaches every
year. Although the field benefits from the rapid progress, the divergence in
training protocols, architectures, and parameter choices makes an unbiased
comparison difficult. To provide a consistent reference point, we revisit the
most widely used DML objective functions and conduct a study of the crucial
parameter choices as well as the commonly neglected mini-batch sampling
process. Under consistent comparison, DML objectives show much higher
saturation than indicated by the literature. Further, based on our analysis, we
uncover a correlation of embedding space density and compression with the
generalization performance of DML models. Exploiting these insights, we
propose a simple, yet effective, training regularization to reliably boost the
performance of ranking-based DML models on various standard benchmark datasets.
Code and a publicly accessible WandB-repo are available at
https://github.com/Confusezius/Revisiting_Deep_Metric_Learning_PyTorch.
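
As a concrete point of reference for the kind of training signal being compared, the following is a minimal, illustrative PyTorch sketch of a ranking-based DML objective with class-balanced mini-batch sampling. It is a sketch under stated assumptions, not the paper's exact protocol or its proposed regularization; the function names, batch composition (4 samples per class), and margin value are hypothetical defaults.

```python
# Illustrative sketch only: a triplet-margin ranking objective over
# class-balanced mini-batches, the kind of ranking-based DML setup the
# study compares. Names and hyperparameters are hypothetical defaults.
import torch
import torch.nn.functional as F

def class_balanced_batch(labels, samples_per_class=4, batch_size=32):
    """Pick a mini-batch containing a fixed number of samples per class."""
    classes = torch.unique(labels)
    n_classes = batch_size // samples_per_class
    picked = []
    for c in classes[torch.randperm(len(classes))][:n_classes]:
        idx = torch.where(labels == c)[0]
        picked.append(idx[torch.randperm(len(idx))][:samples_per_class])
    return torch.cat(picked)  # indices into the dataset

def triplet_margin_loss(emb, labels, margin=0.2):
    """Average hinge loss over all valid (anchor, positive, negative) triplets."""
    d = torch.cdist(emb, emb)                      # pairwise L2 distances
    same = labels[:, None] == labels[None, :]      # label-equality mask
    losses = []
    for a in range(len(emb)):
        not_self = torch.arange(len(emb), device=emb.device) != a
        pos = torch.where(same[a] & not_self)[0]
        neg = torch.where(~same[a])[0]
        if len(pos) == 0 or len(neg) == 0:
            continue
        # penalize whenever an anchor-negative distance does not exceed an
        # anchor-positive distance by at least the margin
        gap = d[a, pos][:, None] - d[a, neg][None, :] + margin
        losses.append(F.relu(gap).mean())
    return torch.stack(losses).mean()
```

In practice, `emb` would be the L2-normalized output of an ImageNet-pretrained backbone with a small embedding head, and triplets would usually be mined (e.g. distance-weighted sampling) rather than enumerated exhaustively; batch construction and such tuple-sampling choices are among the factors the study examines under a fixed protocol.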
Related papers
- Improved Diversity-Promoting Collaborative Metric Learning for Recommendation [127.08043409083687]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2024-09-02T07:44:48Z) - Take the Bull by the Horns: Hard Sample-Reweighted Continual Training
Improves LLM Generalization [165.98557106089777]
A key challenge is to enhance the capabilities of large language models (LLMs) amid a looming shortage of high-quality training data.
Our study starts from an empirical strategy for the light continual training of LLMs using their original pre-training data sets.
We then formalize this strategy into a principled framework of Instance-Reweighted Distributionally Robust Optimization.
arXiv Detail & Related papers (2024-02-22T04:10:57Z) - Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning [13.964106147449051]
Existing solutions concentrate on fine-tuning the pre-trained models on conventional image datasets.
We propose a novel and effective framework based on learning Visual Prompts (VPT) in pre-trained Vision Transformers (ViT).
We demonstrate that our new approximations, which incorporate semantic information, offer superior representational capability.
arXiv Detail & Related papers (2024-02-04T04:42:05Z) - Mean-AP Guided Reinforced Active Learning for Object Detection [31.304039641225504]
This paper introduces Mean-AP Guided Reinforced Active Learning for Object Detection (MGRAL)
MGRAL is a novel approach that leverages the concept of expected model output changes as informativeness for deep detection networks.
Our approach demonstrates strong performance, establishing a new paradigm in reinforcement learning-based active learning for object detection.
arXiv Detail & Related papers (2023-10-12T14:59:22Z) - Guided Deep Metric Learning [0.9786690381850356]
We propose a novel approach to DML that we call Guided Deep Metric Learning.
The proposed method is capable of better manifold generalization and representation, yielding up to a 40% improvement.
arXiv Detail & Related papers (2022-06-04T17:34:11Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Characterizing Generalization under Out-Of-Distribution Shifts in Deep
Metric Learning [32.51394862932118]
We present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML.
ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts.
We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases.
arXiv Detail & Related papers (2021-07-20T15:26:09Z) - Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and
Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z) - DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual Similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance (a rough illustrative sketch of such aggregation follows this list).
arXiv Detail & Related papers (2020-04-28T12:26:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.