Robust Calibrate Proxy Loss for Deep Metric Learning
- URL: http://arxiv.org/abs/2304.09162v1
- Date: Thu, 6 Apr 2023 02:43:10 GMT
- Title: Robust Calibrate Proxy Loss for Deep Metric Learning
- Authors: Xinyue Li, Jian Wang, Wei Song, Yanling Du, Zhixiang Liu
- Abstract summary: We propose a Calibrate Proxy structure, which uses real sample information to improve the similarity calculation in proxy-based losses.
We show that our approach can effectively improve the performance of commonly used proxy-based losses on both regular and noisy datasets.
- Score: 6.784952050036532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mainstream research in deep metric learning can be divided into two
genres: proxy-based and pair-based methods. Proxy-based methods have attracted
extensive attention due to their lower training complexity and fast network
convergence. However, these methods have a limitation: proxy optimization is
driven entirely by the network, which makes it challenging for a proxy to
accurately represent the feature distribution of its real class. In this paper, we
propose a Calibrate Proxy (CP) structure, which uses real sample information to
improve the similarity calculation in proxy-based losses and introduces a
calibration loss that constrains proxy optimization towards the center of the
class features. At the same time, we assign a small number of proxies to each
class to alleviate the impact of intra-class variation on retrieval performance.
The effectiveness of our method is evaluated by extensive experiments on three
public datasets and multiple synthetic label-noise datasets. The results show
that our approach can effectively improve the performance of commonly used
proxy-based losses on both regular and noisy datasets.
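To make the idea concrete, here is a minimal sketch of a proxy-based softmax loss augmented with a calibration term that pulls each class proxy toward the batch mean of that class's real features. It assumes a PyTorch setup; the class name `CalibratedProxyLoss`, the single-proxy-per-class simplification, and the loss weighting are illustrative assumptions rather than the authors' exact formulation.

```python
# Illustrative sketch only: a proxy-based (normalized-softmax-style) loss with
# an extra calibration term that pulls each class proxy toward the mean of the
# real features of that class seen in the batch. Hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CalibratedProxyLoss(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int,
                 scale: float = 32.0, calib_weight: float = 0.1):
        super().__init__()
        # One learnable proxy per class (the paper uses a small number per class).
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale
        self.calib_weight = calib_weight

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(embeddings, dim=1)      # (B, D) unit-norm sample features
        p = F.normalize(self.proxies, dim=1)    # (C, D) unit-norm proxies

        # Standard proxy-based term: scaled cosine-similarity logits + softmax CE.
        logits = self.scale * z @ p.t()         # (B, C)
        proxy_loss = F.cross_entropy(logits, labels)

        # Calibration term: push each proxy toward the (detached) mean feature
        # of its class within the batch, i.e. toward the real class center.
        calib_loss = z.new_zeros(())
        classes = labels.unique()
        for c in classes:
            center = F.normalize(z[labels == c].mean(dim=0).detach(), dim=0)
            calib_loss = calib_loss + (1.0 - p[c] @ center)
        calib_loss = calib_loss / classes.numel()

        return proxy_loss + self.calib_weight * calib_loss
```

In the paper's full setting, a small set of proxies per class would replace the single proxy above, and the class centers could be accumulated across batches rather than computed per batch.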
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
DML aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Deep Metric Learning with Soft Orthogonal Proxies [1.823505080809275]
We propose a novel approach that introduces Soft Orthogonality (SO) constraint on proxies.
Our approach leverages Data-Efficient Image Transformer (DeiT) as an encoder to extract contextual features from images along with a DML objective.
Our evaluations demonstrate the superiority of our proposed approach over state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-06-22T17:22:15Z)
- Non-isotropy Regularization for Proxy-based Deep Metric Learning [78.18860829585182]
We propose non-isotropy regularization ($\mathbb{NIR}$) for proxy-based Deep Metric Learning.
This allows us to explicitly induce a non-isotropic distribution of samples around a proxy to optimize for.
Experiments highlight consistent generalization benefits of $\mathbb{NIR}$ while achieving competitive and state-of-the-art performance.
arXiv Detail & Related papers (2022-03-16T11:13:20Z)
- Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning [13.252164137961332]
We propose a simple regularizer called Proxy Synthesis that exploits synthetic classes for stronger generalization in deep metric learning.
The proposed method generates synthetic embeddings and proxies that work as synthetic classes, and they mimic unseen classes when computing proxy-based losses.
Our method is applicable to any proxy-based losses, including softmax and its variants.
arXiv Detail & Related papers (2021-03-29T09:39:07Z)
- Hierarchical Proxy-based Loss for Deep Metric Learning [32.10423536428467]
Proxy-based metric learning losses are superior to pair-based losses due to their fast convergence and low training complexity.
We present a framework that leverages the implicit hierarchy among classes by imposing a hierarchical structure on the proxies.
Results demonstrate that our hierarchical proxy-based loss framework improves the performance of existing proxy-based losses.
arXiv Detail & Related papers (2021-03-25T00:38:33Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies [65.92826041406802]
We propose a Proxy-based deep Graph Metric Learning approach from the perspective of graph classification.
Multiple global proxies are leveraged to collectively approximate the original data points for each class.
We design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels.
arXiv Detail & Related papers (2020-10-26T14:52:42Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Proxy Anchor Loss for Deep Metric Learning [47.832107446521626]
We present a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations.
Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers.
Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges fastest.
arXiv Detail & Related papers (2020-03-31T02:05:27Z)
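For reference, the Proxy Anchor loss summarized in the entry above is commonly written as follows, where $s(x,p)$ is cosine similarity, $\alpha$ a scaling factor, $\delta$ a margin, $P$ the set of all proxies, $P^{+}$ the proxies with at least one positive sample in the batch, and $X_p^{+}$, $X_p^{-}$ the positive and negative samples for proxy $p$. This is a reference sketch of that paper's loss, not part of the Calibrate Proxy abstract.

```latex
\ell(X) = \frac{1}{|P^{+}|}\sum_{p \in P^{+}}
          \log\Big(1 + \sum_{x \in X_p^{+}} e^{-\alpha\,(s(x,p)-\delta)}\Big)
        + \frac{1}{|P|}\sum_{p \in P}
          \log\Big(1 + \sum_{x \in X_p^{-}} e^{\alpha\,(s(x,p)+\delta)}\Big)
```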