Hierarchical Proxy-based Loss for Deep Metric Learning
- URL: http://arxiv.org/abs/2103.13538v1
- Date: Thu, 25 Mar 2021 00:38:33 GMT
- Title: Hierarchical Proxy-based Loss for Deep Metric Learning
- Authors: Zhibo Yang, Muhammet Bastan, Xinliang Zhu, Doug Gray, Dimitris Samaras
- Abstract summary: Proxy-based metric learning losses are superior to pair-based losses due to their fast convergence and low training complexity.
We present a framework that leverages this implicit hierarchy by imposing a hierarchical structure on the proxies.
Results demonstrate that our hierarchical proxy-based loss framework improves the performance of existing proxy-based losses.
- Score: 32.10423536428467
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Proxy-based metric learning losses are superior to pair-based losses due to
their fast convergence and low training complexity. However, existing
proxy-based losses focus on learning class-discriminative features while
overlooking the commonalities shared across classes which are potentially
useful in describing and matching samples. Moreover, they ignore the implicit
hierarchy of categories in real-world datasets, where similar subordinate
classes can be grouped together. In this paper, we present a framework that
leverages this implicit hierarchy by imposing a hierarchical structure on the
proxies and can be used with any existing proxy-based loss. This allows our
model to capture both class-discriminative features and class-shared
characteristics without breaking the implicit data hierarchy. We evaluate our
method on five established image retrieval datasets, including In-Shop and SOP.
Results demonstrate that our hierarchical proxy-based loss framework improves
the performance of existing proxy-based losses, especially on large datasets
which exhibit strong hierarchical structure.
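To make the mechanism concrete, here is a minimal PyTorch sketch of attaching proxies at two levels of a class hierarchy on top of a Proxy-NCA-style softmax. It is an illustration under stated assumptions: the two-level hierarchy, the fixed temperature, and the additive weight `alpha` are choices made for the sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalProxyLoss(nn.Module):
    """Two-level hierarchical proxy loss (illustrative, not the paper's exact form).

    Fine-class proxies attract same-class embeddings (class-discriminative),
    while coarse-class proxies attract embeddings of all subordinate classes
    (class-shared), each under a Proxy-NCA-style softmax over proxies.
    """

    def __init__(self, n_fine, n_coarse, fine_to_coarse, dim, alpha=0.5):
        super().__init__()
        self.fine_proxies = nn.Parameter(torch.randn(n_fine, dim))
        self.coarse_proxies = nn.Parameter(torch.randn(n_coarse, dim))
        # fine_to_coarse[c] = index of the super-class containing fine class c
        self.register_buffer("fine_to_coarse", torch.as_tensor(fine_to_coarse))
        self.alpha = alpha  # weight of the coarse-level term

    def _proxy_nca(self, emb, proxies, labels):
        sim = F.normalize(emb, dim=1) @ F.normalize(proxies, dim=1).T
        return F.cross_entropy(sim / 0.1, labels)  # 0.1 = temperature

    def forward(self, emb, fine_labels):
        loss_fine = self._proxy_nca(emb, self.fine_proxies, fine_labels)
        coarse_labels = self.fine_to_coarse[fine_labels]
        loss_coarse = self._proxy_nca(emb, self.coarse_proxies, coarse_labels)
        return loss_fine + self.alpha * loss_coarse
```

In this sketch the fine-to-coarse grouping is supplied explicitly; the paper targets the implicit hierarchy of real-world datasets, so `fine_to_coarse` stands in for whatever grouping the framework imposes on the proxies.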
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
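The loss is built on a coding-rate metric. Below is a hedged sketch of how such an anti-collapse regularizer is typically assembled from the rate-distortion coding rate used in the MCR^2 line of work; the paper's exact loss may weight or combine this differently.

```python
import torch
import torch.nn.functional as F

def coding_rate(z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z) for a batch Z of shape (n, d).

    Larger values mean the embeddings span more directions of the space,
    so maximizing the rate discourages feature collapse.
    """
    n, d = z.shape
    gram = z.T @ z  # (d, d) second-moment matrix of the batch
    ident = torch.eye(d, device=z.device, dtype=z.dtype)
    return 0.5 * torch.logdet(ident + (d / (n * eps ** 2)) * gram)

def anti_collapse_loss(z: torch.Tensor) -> torch.Tensor:
    # Negated so that minimizing the loss maximizes the coding rate.
    return -coding_rate(F.normalize(z, dim=1))
```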
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Robust Calibrate Proxy Loss for Deep Metric Learning [6.784952050036532]
We propose a Calibrate Proxy structure, which uses real sample information to improve the similarity calculation in proxy-based losses.
We show that our approach can effectively improve the performance of commonly used proxy-based losses on both regular and noisy datasets.
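The summary does not spell out the calibration, so the following is a hypothetical sketch of one way real sample statistics could calibrate proxies: mix each learnable proxy with a running centroid of its class before computing similarities. The mixing weight `beta` and the `centroids` argument are assumptions for illustration, not the paper's construction.

```python
import torch
import torch.nn.functional as F

def calibrated_proxy_similarity(emb: torch.Tensor,
                                proxies: torch.Tensor,
                                centroids: torch.Tensor,
                                beta: float = 0.5) -> torch.Tensor:
    """Cosine similarity against proxies pulled toward real class centroids.

    centroids can be maintained as an exponential moving average of the
    per-class mean embedding, keeping proxies anchored to real samples.
    """
    calibrated = F.normalize(beta * proxies + (1.0 - beta) * centroids, dim=1)
    return F.normalize(emb, dim=1) @ calibrated.T
```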
arXiv Detail & Related papers (2023-04-06T02:43:10Z)
- HIER: Metric Learning Beyond Class Labels via Hierarchical Regularization [17.575016642108253]
We propose a new regularization method, dubbed HIER, to discover the latent semantic hierarchy of training data.
HIER achieves this goal with no annotation for the semantic hierarchy but by learning hierarchical proxies in hyperbolic spaces.
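Hyperbolic space is used here because it embeds tree-like hierarchies with low distortion: ancestors can sit near the origin and descendants near the boundary. As a self-contained reference, here is the Poincare-ball distance such methods build on; this shows only the metric, not HIER's full objective.

```python
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Geodesic distance on the Poincare ball of curvature -1.

    x, y: points with norm < 1 in the last dimension; broadcasts over batches.
    """
    sq = torch.sum((x - y) ** 2, dim=-1)
    nx = torch.clamp(1.0 - torch.sum(x ** 2, dim=-1), min=eps)
    ny = torch.clamp(1.0 - torch.sum(y ** 2, dim=-1), min=eps)
    return torch.acosh(1.0 + 2.0 * sq / (nx * ny))
```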
arXiv Detail & Related papers (2022-12-29T11:05:47Z)
- Weakly-supervised Action Localization via Hierarchical Mining [76.00021423700497]
Weakly-supervised action localization aims to temporally localize and classify action instances in videos using only video-level categorical labels.
We propose a hierarchical mining strategy at the video and snippet levels, i.e., hierarchical supervision and hierarchical consistency mining.
We show that HiM-Net outperforms existing methods on the THUMOS14 and ActivityNet1.3 datasets by large margins by hierarchically mining the supervision and consistency.
arXiv Detail & Related papers (2022-06-22T12:19:09Z)
- Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy preserving losses, which jointly apply a hierarchical penalty to the contrastive loss, and enforce the hierarchy constraint.
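One common way to realize such hierarchy-preserving losses is to apply a supervised contrastive loss at every level of the label hierarchy with level-dependent penalties. The sketch below takes that reading; the weights `lambdas` and the simple weighted sum are assumptions, not necessarily the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def supcon_level(emb, labels, temp=0.1):
    """Supervised contrastive loss at a single level of the hierarchy."""
    emb = F.normalize(emb, dim=1)
    n = emb.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=emb.device)
    logits = (emb @ emb.T / temp).masked_fill(eye, float("-inf"))
    logp = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~eye
    per_anchor = -logp.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()  # skip anchors with no positive

def hierarchical_supcon(emb, level_labels, lambdas):
    """Weighted sum of per-level losses; each level gets its own penalty."""
    return sum(w * supcon_level(emb, y) for w, y in zip(lambdas, level_labels))
```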
arXiv Detail & Related papers (2022-04-27T21:41:44Z)
- The Group Loss++: A deeper look into group loss for deep metric learning [65.19665861268574]
Group Loss is a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group.
We show state-of-the-art results on clustering and image retrieval on four datasets, and present competitive results on two person re-identification datasets.
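The label-propagation step behind the Group Loss can be written as a few iterations of replicator dynamics over batch similarities. A minimal sketch, assuming `prob` comes from a classifier head's softmax and `sim` is a non-negative pairwise similarity matrix; the full method also anchors some samples with one-hot labels.

```python
import torch

def label_propagation(prob: torch.Tensor, sim: torch.Tensor, n_iters: int = 3) -> torch.Tensor:
    """Differentiable replicator-dynamics refinement of class probabilities.

    prob: (n, c) initial class probabilities for the batch
    sim:  (n, n) non-negative pairwise similarities
    Each step reinforces labels consistent with similar samples; the refined
    probabilities are then fed to a standard cross-entropy loss.
    """
    for _ in range(n_iters):
        support = sim @ prob  # evidence gathered from similar neighbors
        prob = prob * support
        prob = prob / prob.sum(dim=1, keepdim=True).clamp(min=1e-12)
    return prob
```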
arXiv Detail & Related papers (2022-04-04T14:09:58Z)
- Provable Hierarchy-Based Meta-Reinforcement Learning [50.17896588738377]
We analyze HRL in the meta-RL setting, where the learner learns latent hierarchical structure during meta-training for use in a downstream task.
We provide "diversity conditions" which, together with a tractable optimism-based algorithm, guarantee sample-efficient recovery of this natural hierarchy.
Our bounds incorporate common notions in HRL literature such as temporal and state/action abstractions, suggesting that our setting and analysis capture important features of HRL in practice.
arXiv Detail & Related papers (2021-10-18T17:56:02Z)
- Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning [13.252164137961332]
We propose a simple regularizer called Proxy Synthesis that exploits synthetic classes for stronger generalization in deep metric learning.
The proposed method generates synthetic embeddings and proxies that work as synthetic classes, and they mimic unseen classes when computing proxy-based losses.
Our method is applicable to any proxy-based losses, including softmax and its variants.
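Since the method generates synthetic embeddings and proxies that act as synthetic classes, a natural reading is mixup-style interpolation within a batch. A hedged sketch under that assumption; the Beta-distributed mixing and the shuffle are illustrative choices.

```python
import torch

def proxy_synthesis(emb: torch.Tensor, proxies: torch.Tensor,
                    labels: torch.Tensor, alpha: float = 0.4):
    """Create synthetic (embedding, proxy) pairs by cross-class interpolation.

    The synthetic pairs mimic unseen classes and can be appended to the real
    ones before computing any softmax/proxy-based loss.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(emb.size(0), device=emb.device)
    syn_emb = lam * emb + (1.0 - lam) * emb[perm]
    syn_proxy = lam * proxies[labels] + (1.0 - lam) * proxies[labels[perm]]
    return syn_emb, syn_proxy
```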
arXiv Detail & Related papers (2021-03-29T09:39:07Z)
- Pitfalls of Assessing Extracted Hierarchies for Multi-Class Classification [4.89253144446913]
We identify some common pitfalls that may lead practitioners to make misleading conclusions about their methods.
We show how the hierarchy's quality can become irrelevant depending on the experimental setup.
Our results confirm that datasets with a high number of classes generally present complex structures in how these classes relate to each other.
arXiv Detail & Related papers (2021-01-26T21:50:57Z)
- Hierarchical Class-Based Curriculum Loss [18.941207332233805]
Most real world data have dependencies between labels, which can be captured by using a hierarchy.
We propose a loss function, hierarchical curriculum loss, with two properties: (i) satisfy hierarchical constraints present in the label space, and (ii) provide non-uniform weights to labels based on their levels in the hierarchy.
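A minimal sketch of the second property, assuming per-level classifier logits and a geometric weight `gamma ** depth` that favors coarser levels; the hierarchical constraints from the first property are not modeled here.

```python
import torch
import torch.nn.functional as F

def hierarchical_curriculum_loss(logits_per_level, labels_per_level, gamma=0.5):
    """Level-weighted classification loss (illustrative weighting only).

    Coarser levels (smaller depth) receive larger weight, guiding training
    from easy, coarse labels toward fine-grained ones.
    """
    loss = 0.0
    for depth, (logits, labels) in enumerate(zip(logits_per_level, labels_per_level)):
        loss = loss + (gamma ** depth) * F.cross_entropy(logits, labels)
    return loss
```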
arXiv Detail & Related papers (2020-06-05T18:48:57Z)
- Proxy Anchor Loss for Deep Metric Learning [47.832107446521626]
We present a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations.
Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers.
Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges fastest.
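Proxy Anchor treats each proxy as an anchor and weights samples through the log-sum-exp, which is what gives the fast, robust convergence described above. Below is a PyTorch rendering of the published formula; defaults alpha=32 and margin delta=0.1 follow the paper, while the proxy initialization here is simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyAnchorLoss(nn.Module):
    """Proxy Anchor loss (Kim et al., CVPR 2020)."""

    def __init__(self, n_classes: int, dim: int, alpha: float = 32.0, delta: float = 0.1):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(n_classes, dim) * 0.01)
        self.alpha, self.delta = alpha, delta

    def forward(self, emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between batch embeddings and all class proxies.
        sim = F.normalize(emb, dim=1) @ F.normalize(self.proxies, dim=1).T  # (n, c)
        one_hot = F.one_hot(labels, self.proxies.size(0)).bool()            # (n, c)
        with_pos = one_hot.any(dim=0)  # proxies that have positives in this batch

        pos_exp = torch.exp(-self.alpha * (sim - self.delta))
        neg_exp = torch.exp(self.alpha * (sim + self.delta))
        # Pull each proxy's positives closer, push every other sample away.
        pos_term = torch.log1p((pos_exp * one_hot).sum(dim=0))[with_pos].sum()
        neg_term = torch.log1p((neg_exp * (~one_hot)).sum(dim=0)).sum()
        return pos_term / with_pos.sum() + neg_term / self.proxies.size(0)
```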
arXiv Detail & Related papers (2020-03-31T02:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.