Multi-level Distance Regularization for Deep Metric Learning
- URL: http://arxiv.org/abs/2102.04223v1
- Date: Mon, 8 Feb 2021 14:16:07 GMT
- Title: Multi-level Distance Regularization for Deep Metric Learning
- Authors: Yonghyun Kim and Wonpyo Park
- Abstract summary: We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR).
MDR explicitly disturbs the learning procedure by regularizing pairwise distances between embedding vectors into multiple levels.
Previous approaches can be improved in performance and generalization ability simply by adopting MDR.
- Score: 20.178765779788492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel distance-based regularization method for deep metric
learning called Multi-level Distance Regularization (MDR). MDR explicitly
disturbs the learning procedure by regularizing pairwise distances between
embedding vectors into multiple levels, each representing a degree of
similarity between a pair. In the training stage, the model is trained with
both MDR and an existing deep metric learning loss simultaneously; the two
losses interfere with each other's objectives, which makes the learning
process more difficult. Moreover, MDR prevents some examples from being
ignored or overly influential in the learning process. Together, these effects
allow the parameters of the embedding network to settle on a local optimum
with better generalization. Without bells and whistles, MDR with a simple
Triplet loss achieves state-of-the-art performance on various benchmark
datasets: CUB-200-2011, Cars-196, Stanford Online Products, and In-Shop
Clothes Retrieval. We perform extensive ablation studies on its behavior to
show the effectiveness of MDR. Previous approaches can be improved in
performance and generalization ability simply by adopting MDR.
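The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the idea: normalize the batch's pairwise distances, pull each one toward its nearest level, and add that penalty to an ordinary Triplet loss. The fixed level values, the batch normalization of distances, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mdr_regularizer(embeddings, levels=(-1.0, 0.0, 1.0)):
    """Pull normalized pairwise distances toward their nearest level.

    The three fixed levels here are an assumption; they stand in for the
    paper's notion of multiple degrees of similarity.
    """
    n = embeddings.size(0)
    dist = torch.cdist(embeddings, embeddings, p=2)
    i, j = torch.triu_indices(n, n, offset=1, device=embeddings.device)
    d = dist[i, j]                               # unique pairwise distances
    d = (d - d.mean()) / (d.std() + 1e-8)        # normalize within the batch
    lv = d.new_tensor(levels)
    nearest = lv[(d.unsqueeze(1) - lv).abs().argmin(dim=1)]
    return (d - nearest).abs().mean()            # deviation from nearest level

def training_loss(anchor, positive, negative, lam=0.1):
    """Joint objective: the regularizer deliberately interferes with the
    metric-learning loss, as the abstract describes."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=0.2)
    batch = torch.cat([anchor, positive, negative], dim=0)
    return triplet + lam * mdr_regularizer(batch)
```

In practice the regularizer would be applied to every training batch alongside whichever metric loss the base method already uses.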
Related papers
- DAAL: Density-Aware Adaptive Line Margin Loss for Multi-Modal Deep Metric Learning [1.9472493183927981]
We propose a novel loss function called Density-Aware Adaptive Line Margin Loss (DAAL).
DAAL preserves the density distribution of embeddings while encouraging the formation of adaptive sub-clusters within each class.
Experiments on benchmark fine-grained datasets demonstrate the superior performance of DAAL.
arXiv Detail & Related papers (2024-10-07T19:04:24Z)
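Read loosely, an adaptive margin driven by local embedding density might be sketched as below; the density proxy and the scale `alpha` are assumptions, not DAAL's published formulation.

```python
import torch
import torch.nn.functional as F

def density_adaptive_triplet(anchor, positive, negative,
                             alpha=0.5, base_margin=0.2):
    """Triplet loss whose margin grows with local sparsity.

    Local density is estimated here (purely for illustration) as the mean
    anchor-positive distance in the batch: sparser same-class neighborhoods
    receive a larger margin, denser ones a smaller one.
    """
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    sparsity = d_ap.detach().mean()              # crude density proxy
    margin = base_margin + alpha * sparsity
    return F.relu(d_ap - d_an + margin).mean()
```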
- FULLER: Unified Multi-modality Multi-task 3D Perception via Multi-level Gradient Calibration [89.4165092674947]
Multi-modality fusion and multi-task learning are becoming trendy in the 3D autonomous driving scenario.
Previous works manually coordinate the learning framework with empirical knowledge, which may lead to sub-optimal solutions.
We propose a novel yet simple multi-level gradient calibration learning framework across tasks and modalities during optimization.
arXiv Detail & Related papers (2023-07-31T12:50:15Z)
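One generic form of gradient calibration is to rescale each task's gradient on the shared parameters to a common norm before combining them. The sketch below shows only that generic idea, under the assumption of a single shared backbone; it is not FULLER's exact scheme.

```python
import torch

def calibrated_shared_gradient(task_losses, shared_params):
    """Combine per-task gradients after rescaling them to a common norm.

    Calibrating to the smallest per-task norm (an arbitrary choice here)
    keeps any one task from dominating the shared parameters.
    """
    flat_grads = []
    for loss in task_losses:
        grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
        flat_grads.append(torch.cat([g.reshape(-1) for g in grads]))
    norms = torch.stack([g.norm() for g in flat_grads])
    target = norms.min()                              # common target norm
    return sum(g * (target / (n + 1e-12))
               for g, n in zip(flat_grads, norms))    # calibrated gradient
```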
- Meta-Learning Adversarial Bandit Algorithms [55.72892209124227]
We study online meta-learning with bandit feedback.
We learn to tune online mirror descent (OMD) with self-concordant barrier regularizers.
arXiv Detail & Related papers (2023-07-05T13:52:10Z)
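For context, the standard OMD update that such a meta-learner tunes (the step size eta and the regularizer R, here a self-concordant barrier, are the tuned quantities) is:

```latex
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{K}}
  \; \eta \, \langle g_t, x \rangle + B_R(x, x_t),
\qquad
B_R(x, y) = R(x) - R(y) - \langle \nabla R(y),\, x - y \rangle .
```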
- SuSana Distancia is all you need: Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification [0.9236074230806579]
We propose two loss functions which consider the importance of the embedding vectors by looking at the intra-class and inter-class distances among the few available samples.
Our results show a significant improvement in accuracy on the miniImageNet benchmark compared to other metric-based few-shot learning methods, by a margin of 2%.
arXiv Detail & Related papers (2023-05-15T23:12:09Z)
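The two loss functions are not spelled out in the summary; a generic intra-class/inter-class objective over class prototypes, in the spirit described, could look like this sketch (the prototype construction is an assumption):

```python
import torch

def intra_inter_loss(embeddings, labels):
    """Shrink intra-class spread and grow inter-class prototype distances.

    A stand-in for the paper's two distance-based losses, not their exact
    definitions.
    """
    classes = labels.unique()
    protos = torch.stack([embeddings[labels == c].mean(dim=0)
                          for c in classes])
    intra = torch.stack([
        (embeddings[labels == c] - protos[k]).norm(dim=1).mean()
        for k, c in enumerate(classes)
    ]).mean()
    inter = torch.pdist(protos).mean()           # mean prototype separation
    return intra - inter
```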
- Learning Empirical Bregman Divergence for Uncertain Distance Representation [3.9142982525021512]
We introduce a novel method for learning an empirical Bregman divergence directly from data by parameterizing the convex function underlying the divergence with a deep neural network.
Our approach performs effectively on five popular public datasets compared to other SOTA deep metric learning methods, particularly for pattern recognition problems.
arXiv Detail & Related papers (2023-04-16T04:16:28Z)
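Once the convex function phi is a network, the divergence follows from phi and its gradient via autograd. A minimal sketch follows; convexity of `phi` (e.g., via an input-convex architecture) is assumed but not enforced here.

```python
import torch

def bregman_divergence(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.

    `phi` maps a batch of vectors to one scalar per row and must be convex
    for this to be a valid Bregman divergence.
    """
    y = y.detach().requires_grad_(True)
    (grad_y,) = torch.autograd.grad(phi(y).sum(), y, create_graph=True)
    return phi(x) - phi(y) - ((x - y) * grad_y).sum(dim=-1)
```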
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective for reviewing deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
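The summary highlights ANML's log-exp mean; one common form of this aggregator interpolates between the mean and the max of its inputs. Whether this matches ANML's exact definition is an assumption.

```python
import math
import torch

def log_exp_mean(values, beta=1.0, dim=-1):
    """(1/beta) * log(mean(exp(beta * values))).

    beta -> 0 recovers the arithmetic mean; beta -> +inf approaches the
    max, so the same formula can softly emphasize hard examples.
    """
    n = values.size(dim)
    return (torch.logsumexp(beta * values, dim=dim) - math.log(n)) / beta
```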
- Rethinking Deep Contrastive Learning with Embedding Memory [58.66613563148031]
Pair-wise loss functions have been extensively studied and shown to continuously improve the performance of deep metric learning (DML).
We provide a new methodology for systematically studying weighting strategies of various pair-wise loss functions, and rethink pair weighting with an embedding memory.
arXiv Detail & Related papers (2021-03-25T17:39:34Z)
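A minimal version of an embedding memory is a FIFO bank of past embeddings that pair-wise losses can mine and weight against; the capacity and interface below are illustrative, not the paper's implementation.

```python
import torch

class EmbeddingMemory:
    """FIFO memory of past embeddings/labels for cross-batch pair mining."""

    def __init__(self, capacity=4096, dim=128):
        self.feats = torch.zeros(capacity, dim)
        self.labels = torch.full((capacity,), -1, dtype=torch.long)
        self.ptr, self.capacity = 0, capacity

    @torch.no_grad()
    def enqueue(self, feats, labels):
        n = feats.size(0)
        idx = (self.ptr + torch.arange(n)) % self.capacity  # wrap around
        self.feats[idx] = feats.detach().cpu()
        self.labels[idx] = labels.cpu()
        self.ptr = (self.ptr + n) % self.capacity

    def distances_to(self, feats):
        """Distances from the current batch to every stored embedding."""
        valid = self.labels >= 0
        return torch.cdist(feats.cpu(), self.feats[valid]), self.labels[valid]
```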
- Decoupled and Memory-Reinforced Networks: Towards Effective Feature Learning for One-Step Person Search [65.51181219410763]
One-step methods have been developed to handle pedestrian detection and identification sub-tasks using a single network.
There are two major challenges in the current one-step approaches.
We propose a decoupled and memory-reinforced network (DMRNet) to overcome these problems.
arXiv Detail & Related papers (2021-02-22T06:19:45Z)
- Multimodal-Aware Weakly Supervised Metric Learning with Self-weighting Triplet Loss [2.010312620798609]
We propose a novel weakly supervised metric learning algorithm, named MultimoDal Aware weakly supervised Metric Learning (MDaML).
MDaML partitions the data space into several clusters and allocates a local cluster center and a weight for each sample.
Experiments conducted on 13 datasets validate the superiority of the proposed MDaML.
arXiv Detail & Related papers (2021-02-03T07:27:05Z)
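A literal reading of "partitions the data space into clusters and allocates a local center and a weight for each sample" might be sketched as follows; the exponential weighting is a guess, and MDaML's actual self-weighting scheme is more involved.

```python
import torch

def local_cluster_weights(X, centers):
    """Assign each sample to its nearest local center and weight it by
    proximity (closer samples get larger, normalized weights)."""
    d = torch.cdist(X, centers)                        # (N, K) distances
    assign = d.argmin(dim=1)                           # nearest local cluster
    d_near = d.gather(1, assign.unsqueeze(1)).squeeze(1)
    w = torch.exp(-d_near)
    return assign, w / w.sum()
```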
- Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
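One way to write an input-space adversarial margin of the kind described, requiring the learned metric d_M to keep a negative x^- farther than a positive x^+ under any bounded input perturbation (the margin gamma and the norm ball are assumptions), is:

```latex
d_M(x+\delta,\, x^-) - d_M(x+\delta,\, x^+) \;\ge\; \gamma
\qquad \text{for all } \|\delta\| \le \epsilon .
```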
This list is automatically generated from the titles and abstracts of the papers on this site.