Learning Empirical Bregman Divergence for Uncertain Distance Representation
- URL: http://arxiv.org/abs/2304.07689v3
- Date: Mon, 15 May 2023 16:38:23 GMT
- Title: Learning Empirical Bregman Divergence for Uncertain Distance Representation
- Authors: Zhiyuan Li, Ziru Liu, Anna Zou, Anca L. Ralescu
- Abstract summary: We introduce a novel method for learning empirical Bregman divergence directly from data by parameterizing the convex function underlying the Bregman divergence in a deep learning setting.
Our approach performs effectively on five popular public datasets compared to other SOTA deep metric learning methods, particularly for pattern recognition problems.
- Score: 3.9142982525021512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep metric learning techniques have been used for visual representation in
various supervised and unsupervised learning tasks through learning embeddings
of samples with deep networks. However, classic approaches, which employ a
fixed distance metric as a similarity function between two embeddings, may lead
to suboptimal performance in capturing complex data distributions. The
Bregman divergence generalizes a variety of distance measures and arises
throughout many fields of deep metric learning. In this paper, we first show
how deep metric learning loss can arise from the Bregman divergence. We then
introduce a novel method for learning empirical Bregman divergence directly
from data based on parameterizing the convex function underlying the Bregman
divergence in a deep learning setting. We further experimentally show that
our approach performs effectively on five popular public datasets compared to
other SOTA deep metric learning methods, particularly for pattern recognition
problems.
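The construction at the heart of the paper is the Bregman divergence generated by a convex potential phi: D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. The paper's exact architecture is not reproduced here; the following is a minimal PyTorch sketch, assuming a small potential network kept convex via a softplus activation and non-negative output weights (class and function names are illustrative).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPotential(nn.Module):
    """A small scalar potential phi(x) that is convex in x by construction:
    softplus(affine) terms combined with non-negative output weights."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.inner = nn.Linear(dim, hidden)
        self.outer = nn.Linear(hidden, 1, bias=False)  # weights made non-negative below

    def forward(self, x):
        h = F.softplus(self.inner(x))                      # convex, non-decreasing activation
        return F.linear(h, F.softplus(self.outer.weight))  # non-negative combination stays convex

def bregman_divergence(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, via autograd."""
    # y is detached here for simplicity; a training implementation would keep the graph.
    y = y.detach().requires_grad_(True)                # anchor point of the tangent plane
    grad_y = torch.autograd.grad(phi(y).sum(), y, create_graph=True)[0]
    d = phi(x) - phi(y) - ((x - y) * grad_y).sum(-1, keepdim=True)
    return d.squeeze(-1)

phi = ConvexPotential(dim=16)
x, y = torch.randn(8, 16), torch.randn(8, 16)
print(bregman_divergence(phi, x, y))                   # shape [8]; non-negative since phi is convex
```

Because phi is convex, the divergence is non-negative and zero when x equals y, but it need not be symmetric, which is exactly the flexibility the paper exploits over a fixed metric.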
Related papers
- Piecewise-Linear Manifolds for Deep Metric Learning [8.670873561640903]
Unsupervised deep metric learning focuses on learning a semantic representation space using only unlabeled data.
We propose to model the high-dimensional data manifold using a piecewise-linear approximation, with each low-dimensional linear piece approximating the data manifold in a small neighborhood of a point.
We empirically show that this similarity estimate correlates better with the ground truth than the similarity estimates of current state-of-the-art techniques.
arXiv Detail & Related papers (2024-03-22T06:22:20Z)
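The local-linear similarity above can be made concrete: fit a low-rank linear approximation to a point's neighborhood and score a query by how small its residual to that local plane is. This is a hypothetical sketch of the piecewise-linear idea, not the paper's estimator; the rank and the SVD-based fit are assumptions.

```python
import torch

def local_linear_similarity(anchor, neighbors, query, rank=2):
    """Score `query` against a rank-`rank` linear approximation of the data
    manifold around `anchor`, fitted to `neighbors` ([k, dim]) by SVD/PCA."""
    centered = neighbors - anchor                # neighborhood in local coordinates
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    basis = Vh[:rank]                            # top principal directions: the linear piece
    r = query - anchor
    proj = (r @ basis.T) @ basis                 # component of r lying on the local piece
    return -torch.linalg.norm(r - proj)          # small residual => high similarity
```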
- Learning Generalized Hybrid Proximity Representation for Image Recognition [8.750658662419328]
We propose a novel supervised metric learning method that can learn distance metrics in both geometric and probabilistic spaces for image recognition.
In contrast to previous metric learning methods, which usually focus on learning distance metrics in Euclidean space, our proposed method learns a better distance representation through a hybrid approach.
arXiv Detail & Related papers (2023-01-31T07:49:25Z)
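As a hedged illustration of the geometric-plus-probabilistic idea, a hybrid distance can blend a Euclidean term with a distributional term over softmax-normalized embeddings. The specific blend below is an assumption for illustration only, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_proximity(a, b, alpha=0.5, tau=1.0):
    """Blend a geometric (Euclidean) distance with a probabilistic
    (symmetric KL) divergence between softmax-normalized embeddings."""
    geometric = torch.linalg.norm(a - b, dim=-1)
    p, q = F.softmax(a / tau, dim=-1), F.softmax(b / tau, dim=-1)
    sym_kl = 0.5 * ((p * (p / q).log()).sum(-1) + (q * (q / p).log()).sum(-1))
    return alpha * geometric + (1.0 - alpha) * sym_kl
```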
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
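Input convex neural networks (ICNNs, in the sense of Amos et al., 2017) make the learned function provably convex in its input by keeping the hidden-to-hidden weights non-negative and using convex, non-decreasing activations. A minimal sketch; the layer sizes and the softplus reparameterization are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """f(x) convex in x: skip connections from x are unconstrained, while the
    z-path weights are forced non-negative via softplus reparameterization."""
    def __init__(self, dim, hidden=64, depth=3):
        super().__init__()
        self.x_layers = nn.ModuleList(nn.Linear(dim, hidden) for _ in range(depth))
        self.z_layers = nn.ModuleList(nn.Linear(hidden, hidden, bias=False)
                                      for _ in range(depth - 1))
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.x_layers[0](x))
        for x_lin, z_lin in zip(self.x_layers[1:], self.z_layers):
            z = F.softplus(x_lin(x) + F.linear(z, F.softplus(z_lin.weight)))
        return F.linear(z, F.softplus(self.out.weight))   # [batch, 1], convex in x
```

Plugged in as the potential phi of the Bregman construction sketched earlier, such a network yields a divergence that is learnable end to end.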
- BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning [93.38239238988719]
We propose to equip deep neural networks with the ability to learn sample relationships from each mini-batch.
BatchFormer is applied along the batch dimension of each mini-batch to implicitly explore sample relationships during training.
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications.
arXiv Detail & Related papers (2022-03-03T05:31:33Z)
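The batch-dimension idea can be sketched by letting the batch axis play the role of a sequence axis for a standard transformer encoder layer. The hyperparameters are placeholders, and the paper's shared-classifier strategy (which lets the block be dropped at test time) is omitted.

```python
import torch
import torch.nn as nn

class BatchFormerBlock(nn.Module):
    """Self-attention across the *batch* axis: each sample's feature is refined
    by its relationships to the other samples in the same mini-batch."""
    def __init__(self, dim, nhead=4):                 # dim must be divisible by nhead
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead)

    def forward(self, feats):                         # feats: [B, D]
        seq = feats.unsqueeze(1)                      # [B, 1, D]: batch axis -> sequence axis
        return self.encoder(seq).squeeze(1)           # [B, D]
```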
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- Deep Bregman Divergence for Contrastive Learning of Visual Representations [4.994260049719745]
Deep Bregman divergence measures the divergence between data points using neural networks, going beyond Euclidean distance.
We aim to enhance contrastive loss used in self-supervised learning by training additional networks based on functional Bregman divergence.
arXiv Detail & Related papers (2021-09-15T17:44:40Z)
- Towards Interpretable Deep Metric Learning with Structural Matching [86.16700459215383]
We present a deep interpretable metric learning (DIML) method for more transparent embedding learning.
Our method is model-agnostic, which can be applied to off-the-shelf backbone networks and metric learning methods.
We evaluate our method on three major benchmarks of deep metric learning including CUB200-2011, Cars196, and Stanford Online Products.
arXiv Detail & Related papers (2021-08-12T17:59:09Z)
- Multi-level Distance Regularization for Deep Metric Learning [20.178765779788492]
We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR).
MDR explicitly disturbs the learning procedure by regularizing pairwise distances between embedding vectors into multiple levels.
By simply adopting our MDR, previous approaches can be improved in performance and generalization ability.
arXiv Detail & Related papers (2021-02-08T14:16:07Z)
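A simplified sketch of the multi-level idea: pull every pairwise embedding distance toward the nearest of a small set of regularization levels. The paper's distance normalization and momentum-updated level statistics are omitted; `levels` is assumed to be a small learnable tensor.

```python
import torch

def mdr_loss(embeddings, levels):
    """Regularize off-diagonal pairwise distances toward their nearest level.
    In practice, levels could be nn.Parameter(torch.tensor([0.5, 1.0, 1.5]))."""
    d = torch.cdist(embeddings, embeddings)            # [B, B] pairwise distances
    mask = ~torch.eye(d.size(0), dtype=torch.bool, device=d.device)
    d = d[mask]                                        # drop self-distances
    nearest = levels[(d.unsqueeze(1) - levels).abs().argmin(dim=1)]
    return (d - nearest).abs().mean()
```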
- Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
arXiv Detail & Related papers (2020-06-12T09:17:08Z)
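The Mahalanobis family mentioned here is commonly parameterized as M = L^T L so that positive semi-definiteness holds by construction. The sketch below shows only this standard parameterization, not the paper's robust training objective.

```python
import torch
import torch.nn as nn

class MahalanobisDistance(nn.Module):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L, PSD by construction."""
    def __init__(self, dim):
        super().__init__()
        self.L = nn.Parameter(torch.eye(dim))   # learnable factor; init = Euclidean

    def forward(self, x, y):
        diff = (x - y) @ self.L.T               # rows are L(x - y)
        return diff.norm(dim=-1)                # = sqrt((x-y)^T L^T L (x-y))
```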
- Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
- Deep Divergence Learning [11.88774207521156]
We introduce deep Bregman divergences, which are based on learning and parameterizing functional Bregman divergences using neural networks.
We show in particular how deep metric learning formulations, kernel metric learning, Mahalanobis metric learning, and moment-matching functions for comparing distributions arise as special cases of these divergences.
We then describe a deep learning framework for learning general functional Bregman divergences, and show in experiments that this method yields superior performance on benchmark datasets.
arXiv Detail & Related papers (2020-05-06T06:43:25Z)
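A sketch of the "general functional" direction: parameterize the potential and its gradient with separate networks on top of a shared embedding. Unlike the input-convex route sketched above, nothing here enforces convexity or non-negativity; all network shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DeepDivergence(nn.Module):
    """D(x, y) = f(e_x) - f(e_y) - g(e_y) . (e_x - e_y), with an embedding net,
    a scalar potential net f, and a gradient-surrogate net g."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                   nn.Linear(emb_dim, emb_dim))
        self.f = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                               nn.Linear(emb_dim, 1))
        self.g = nn.Linear(emb_dim, emb_dim)    # stands in for grad f

    def forward(self, x, y):
        ex, ey = self.embed(x), self.embed(y)
        lin = (self.g(ey) * (ex - ey)).sum(-1, keepdim=True)
        return (self.f(ex) - self.f(ey) - lin).squeeze(-1)
```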
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.