Learning Deep Optimal Embeddings with Sinkhorn Divergences
- URL: http://arxiv.org/abs/2209.06469v1
- Date: Wed, 14 Sep 2022 07:54:16 GMT
- Title: Learning Deep Optimal Embeddings with Sinkhorn Divergences
- Authors: Soumava Kumar Roy, Yan Han, Mehrtash Harandi, Lars Petersson
- Abstract summary: Deep Metric Learning algorithms aim to learn an efficient embedding space to preserve the similarity relationships among the input data.
These algorithms have achieved significant performance gains across a wide range of tasks, but fail to incorporate comprehensive similarity constraints.
Here, we address the concern of learning a discriminative deep embedding space by designing a novel, yet effective Deep Class-wise Discrepancy Loss function.
- Score: 33.496926214655666
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep Metric Learning algorithms aim to learn an efficient embedding space to
preserve the similarity relationships among the input data. Whilst these
algorithms have achieved significant performance gains across a wide range of
tasks, they fail to incorporate comprehensive similarity constraints, thus
learning a sub-optimal metric in the embedding space. Moreover, until now,
there have been few studies of
their performance in the presence of noisy labels. Here, we address the concern
of learning a discriminative deep embedding space by designing a novel, yet
effective Deep Class-wise Discrepancy Loss (DCDL) function that segregates the
underlying similarity distributions (thus introducing class-wise discrepancy)
of the embedding points between every pair of classes. Our empirical results
across three standard image classification datasets and two fine-grained image
recognition datasets in the presence and absence of noise clearly demonstrate
the need for incorporating such class-wise similarity relationships along with
traditional algorithms while learning a discriminative embedding space.
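To make the core ingredient concrete, below is a minimal PyTorch sketch of a debiased Sinkhorn divergence between two sets of embeddings, together with one plausible reading of a class-wise discrepancy term built on it. The function names, uniform sample weights, squared-Euclidean cost, and pairwise class loop are illustrative assumptions rather than the paper's exact DCDL formulation, and a log-domain solver would be preferred for numerical stability in practice.

```python
import torch

def sinkhorn_divergence(x, y, eps=0.1, n_iters=100):
    """Debiased entropic OT: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2."""
    def ot_eps(a, b):
        C = torch.cdist(a, b) ** 2                  # squared-Euclidean cost matrix
        K = torch.exp(-C / eps)                     # Gibbs kernel (log-domain is safer in practice)
        mu = torch.full((a.size(0),), 1.0 / a.size(0), device=a.device)
        nu = torch.full((b.size(0),), 1.0 / b.size(0), device=b.device)
        v = torch.ones_like(nu)
        for _ in range(n_iters):                    # Sinkhorn fixed-point iterations
            u = mu / (K @ v)
            v = nu / (K.t() @ u)
        P = u.unsqueeze(1) * K * v.unsqueeze(0)     # optimal transport plan
        return (P * C).sum()
    return ot_eps(x, y) - 0.5 * (ot_eps(x, x) + ot_eps(y, y))

def class_wise_discrepancy_loss(emb, labels):
    # One plausible reading of DCDL: maximize the Sinkhorn divergence between
    # the embedding distributions of every pair of classes in the batch,
    # i.e. minimize its negative (names and pairing scheme are illustrative).
    classes = labels.unique()
    total, pairs = emb.new_zeros(()), 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            xi = emb[labels == classes[i]]
            xj = emb[labels == classes[j]]
            total = total - sinkhorn_divergence(xi, xj)
            pairs += 1
    return total / max(pairs, 1)
```

Minimizing the negative divergence pushes the per-class embedding distributions apart, which is the class-wise discrepancy the abstract describes; in training, such a term would presumably be added to a conventional metric-learning loss.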
Related papers
- SimO Loss: Anchor-Free Contrastive Loss for Fine-Grained Supervised Contrastive Learning [0.0]
We introduce a novel anchor-free contrastive learning method leveraging our proposed Similarity-Orthogonality (SimO) loss.
Our approach minimizes a semi-metric discriminative loss function that simultaneously optimizes two key objectives.
We provide visualizations that demonstrate the impact of SimO loss on the embedding space.
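As a rough, hypothetical illustration of those two objectives, the toy loss below pulls same-class embeddings toward cosine similarity one while pushing different-class embeddings toward orthogonality (zero dot product); this is my reading of the name, not the paper's actual SimO formulation.

```python
import torch
import torch.nn.functional as F

def simo_style_loss(emb, labels):
    # Illustrative only: a similarity term for same-class pairs plus an
    # orthogonality term for different-class pairs. Assumes every class
    # appears at least twice in the batch.
    z = F.normalize(emb, dim=1)
    sim = z @ z.t()                                    # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos = sim[same & ~eye]                             # same class, excluding self-pairs
    neg = sim[~same]                                   # different classes
    return (1 - pos).mean() + neg.pow(2).mean()
```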
arXiv Detail & Related papers (2024-10-07T17:41:10Z) - Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
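A hedged sketch of how such anchor-informed adaptive margins might be wired up follows; the two-branch score tuples and the margin rule are my assumptions, not the published DBL procedure.

```python
import torch.nn.functional as F

def boosted_margin_loss(anchor_scores, target_scores, base_margin=0.2):
    # Hypothetical sketch: the anchor branch's matched/unmatched score gap
    # sets a harder, per-sample margin for the target branch.
    pos_a, neg_a = anchor_scores      # similarity scores from the trained anchor branch
    pos_t, neg_t = target_scores      # similarity scores from the target branch
    # widen the margin wherever the anchor branch already separates the pair
    adaptive = base_margin + (pos_a - neg_a).detach().clamp(min=0)
    return F.relu(neg_t - pos_t + adaptive).mean()
```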
arXiv Detail & Related papers (2024-04-28T08:44:28Z) - DNA: Denoised Neighborhood Aggregation for Fine-grained Category Discovery [25.836440772705505]
We propose a self-supervised framework that encodes semantic structures of data into the embedding space.
We retrieve k-nearest neighbors of a query as its positive keys to capture semantic similarities between data and then aggregate information from the neighbors to learn compact cluster representations.
Our method can retrieve more accurate neighbors (21.31% accuracy improvement) and outperform state-of-the-art models by a large margin.
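The retrieve-and-aggregate step could look roughly like the sketch below; the memory-bank naming and the simple mean aggregation are assumptions on my part, and the paper's denoising of unreliable neighbors is not shown.

```python
import torch.nn.functional as F

def knn_aggregate(queries, bank, k=5):
    # Retrieve each query's k nearest keys from a memory bank and average
    # them into a compact cluster-level representation (assumes len(bank) >= k).
    q = F.normalize(queries, dim=1)
    b = F.normalize(bank, dim=1)
    sim = q @ b.t()                           # cosine similarity to every bank entry
    idx = sim.topk(k, dim=1).indices          # k-nearest neighbors as positive keys
    neighbors = bank[idx]                     # shape: (n_queries, k, dim)
    return F.normalize(neighbors.mean(dim=1), dim=1)
```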
arXiv Detail & Related papers (2023-10-16T07:43:30Z) - Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding scheme as an auxiliary regularization for deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
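One plausible way to combine the two terms named above is sketched in PyTorch below; the sample-wise correlation-consistency formulation is my assumption, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def target_coding_loss(anchor, positive, negative, features, target_codes, margin=0.2):
    # Margin-based triplet loss on the embeddings, plus a term that aligns
    # the sample-by-sample correlation structure of the learned features
    # with that of the assigned target codes.
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    corr_f = torch.corrcoef(features)         # (batch, batch) feature correlations
    corr_t = torch.corrcoef(target_codes)     # (batch, batch) target-code correlations
    consistency = (corr_f - corr_t).pow(2).mean()
    return triplet + consistency
```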
arXiv Detail & Related papers (2023-05-30T01:38:54Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Pseudo-supervised Deep Subspace Clustering [27.139553299302754]
Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance.
However, the self-reconstruction loss of an AE ignores rich and useful relational information.
It is also challenging to learn high-level similarity without feeding semantic labels.
arXiv Detail & Related papers (2021-04-08T06:25:47Z) - Hyperspherical embedding for novel class classification [1.5952956981784217]
We present a constraint-based approach applied to representations in the latent space under the normalized softmax loss.
We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss.
Our results show that our proposed strategy not only can be efficiently trained on a larger set of classes, since it does not require pairwise learning, but also presents better classification results than the metric learning strategies.
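For reference, the normalized softmax loss used here is the standard cosine-softmax construction: embeddings and per-class weight vectors are L2-normalized so the logits are cosine similarities scaled by a temperature. A minimal sketch (the temperature value is illustrative):

```python
import torch.nn.functional as F

def normalized_softmax_loss(emb, class_weights, labels, temperature=0.05):
    # Both embeddings and class weights live on the unit hypersphere, so
    # no pairwise mining is needed: each sample is classified against all
    # class prototypes at once.
    z = F.normalize(emb, dim=1)
    w = F.normalize(class_weights, dim=1)     # one weight vector per class
    logits = z @ w.t() / temperature
    return F.cross_entropy(logits, labels)
```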
arXiv Detail & Related papers (2021-02-05T15:42:13Z) - Beyond the Deep Metric Learning: Enhance the Cross-Modal Matching with Adversarial Discriminative Domain Regularization [21.904563910555368]
We propose a novel learning framework to construct a set of discriminative data domains within each image-text pair.
Our approach can generally improve the learning efficiency and the performance of existing metric learning frameworks.
arXiv Detail & Related papers (2020-10-23T01:48:37Z) - Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.