Towards Certified Robustness of Distance Metric Learning
- URL: http://arxiv.org/abs/2006.05945v2
- Date: Tue, 16 Aug 2022 15:16:12 GMT
- Title: Towards Certified Robustness of Distance Metric Learning
- Authors: Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue
- Abstract summary: We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show, via the theoretical technique of algorithmic robustness, that the enlarged margin is beneficial to generalization.
- Score: 53.96113074344632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metric learning aims to learn a distance metric such that semantically
similar instances are pulled together while dissimilar instances are pushed
away. Many existing methods consider maximizing or at least constraining a
distance margin in the feature space that separates similar and dissimilar
pairs of instances to guarantee their generalization ability. In this paper, we
advocate imposing an adversarial margin in the input space so as to improve the
generalization and robustness of metric learning algorithms. We first show
that the adversarial margin, defined as the distance between training
instances and their closest adversarial examples in the input space, takes
into account both the distance margin in the feature space and the correlation
between the metric and the triplet constraints. Next, to enhance robustness to
instance perturbation, we propose to enlarge the adversarial margin by
minimizing a novel derived loss function, termed the perturbation loss. The
proposed loss can be viewed as a data-dependent regularizer and easily plugged
into any existing metric learning method. Finally, we show that the enlarged
margin is beneficial to generalization via the theoretical technique of
algorithmic robustness. Experimental results on 16 datasets demonstrate the
superiority of the proposed method over existing state-of-the-art methods in
both discrimination accuracy and robustness against possible noise.
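For intuition, here is a minimal PyTorch sketch of the general idea: for a Mahalanobis-style metric trained on triplets, a triplet's input-space adversarial margin can be approximated to first order by the ratio of its feature-space margin to the input-gradient norm, and small values hinge-penalized. The function names, the first-order proxy, and the weighting below are illustrative assumptions, not the authors' exact derivation.

```python
import torch

torch.manual_seed(0)

d_in, d_out = 10, 5
L = torch.randn(d_out, d_in, requires_grad=True)  # linear metric: d_M(a, b) = ||L(a - b)||

def sq_dist(a, b):
    return ((a - b) @ L.t()).pow(2).sum(-1)

def triplet_margin(x, pos, neg):
    # Feature-space margin of the triplet constraint d^2(x, neg) - d^2(x, pos).
    return sq_dist(x, neg) - sq_dist(x, pos)

def perturbation_loss(x, pos, neg, gamma=1.0):
    # First-order proxy for the input-space adversarial margin:
    # margin / ||grad_x margin||; hinge-penalize triplets whose margin
    # falls below gamma. (Illustrative, not the paper's closed form.)
    x = x.detach().requires_grad_(True)
    m = triplet_margin(x, pos, neg).sum()
    (g,) = torch.autograd.grad(m, x, create_graph=True)
    adv_margin = triplet_margin(x, pos, neg) / (g.norm(dim=-1) + 1e-12)
    return torch.relu(gamma - adv_margin).mean()

x, pos, neg = torch.randn(3, 32, d_in).unbind(0)
base = torch.relu(1.0 + sq_dist(x, pos) - sq_dist(x, neg)).mean()  # standard triplet loss
loss = base + 0.1 * perturbation_loss(x, pos, neg)  # perturbation loss as a regularizer
loss.backward()
```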
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
DML aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
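The Anti-Collapse Loss itself is not spelled out in this summary; as a hedged illustration, a coding-rate measure of the kind this line of work builds on (from the MCR^2 literature) can be computed as below, where penalizing its negative discourages collapsed embeddings.

```python
import torch

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z); larger values mean the
    # embeddings span more directions, i.e. less feature collapse.
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps**2)) * Z.t() @ Z)

Z = torch.nn.functional.normalize(torch.randn(128, 64), dim=1)
anti_collapse_penalty = -coding_rate(Z)  # add to a DML objective as a regularizer
```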
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Hyp-UML: Hyperbolic Image Retrieval with Uncertainty-aware Metric Learning [8.012146883983227]
Metric learning plays a critical role in training models for image retrieval and classification.
Hyperbolic embeddings can be more effective in representing hierarchical data structures.
We propose two types of uncertainty-aware metric learning, for popular contrastive learning and for conventional margin-based metric learning.
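The uncertainty-aware losses are not detailed in this summary; as a sketch of the hyperbolic ingredient only, the geodesic distance on the Poincaré ball, the standard hyperbolic embedding space, is:

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    # d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    # for points u, v strictly inside the unit ball.
    sq = (u - v).pow(2).sum(-1)
    den = (1 - u.pow(2).sum(-1)).clamp_min(eps) * (1 - v.pow(2).sum(-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / den)

u = 0.5 * torch.nn.functional.normalize(torch.randn(4, 16), dim=-1)  # inside the ball
v = 0.3 * torch.nn.functional.normalize(torch.randn(4, 16), dim=-1)
print(poincare_distance(u, v))
```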
arXiv Detail & Related papers (2023-10-12T15:00:06Z)
- Deep Metric Learning with Soft Orthogonal Proxies [1.823505080809275]
We propose a novel approach that introduces Soft Orthogonality (SO) constraint on proxies.
Our approach leverages Data-Efficient Image Transformer (DeiT) as an encoder to extract contextual features from images along with a DML objective.
Our evaluations demonstrate the superiority of our proposed approach over state-of-the-art methods by a significant margin.
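The paper's exact SO constraint is not given in this summary; a common soft-orthogonality regularizer that matches the description, pushing normalized proxies toward orthonormality, looks like this (the sizes are hypothetical):

```python
import torch

def soft_orthogonality_penalty(proxies):
    # ||P P^T - I||_F^2 over L2-normalized class proxies is zero exactly
    # when the proxies are orthonormal, so minimizing it spreads them apart.
    P = torch.nn.functional.normalize(proxies, dim=1)
    G = P @ P.t()
    I = torch.eye(P.shape[0], device=P.device)
    return (G - I).pow(2).sum()

proxies = torch.nn.Parameter(torch.randn(100, 128))  # one proxy per class
reg = soft_orthogonality_penalty(proxies)  # add to the DML objective
```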
arXiv Detail & Related papers (2023-06-22T17:22:15Z)
- Learning Empirical Bregman Divergence for Uncertain Distance Representation [3.9142982525021512]
We introduce a novel method for learning an empirical Bregman divergence directly from data, by parameterizing the convex function underlying the divergence with a deep neural network.
Our approach performs effectively on five popular public datasets compared with other state-of-the-art deep metric learning methods, particularly for pattern recognition problems.
arXiv Detail & Related papers (2023-04-16T04:16:28Z)
- Learning Generalized Hybrid Proximity Representation for Image Recognition [8.750658662419328]
We propose a novel supervised metric learning method that can learn distance metrics in both geometric and probabilistic spaces for image recognition.
In contrast to previous metric learning methods, which usually focus on learning distance metrics in Euclidean space, our proposed method learns better distance representations in a hybrid manner.
arXiv Detail & Related papers (2023-01-31T07:49:25Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
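A minimal sketch of this construction: a small input-convex network (non-negative hidden-to-hidden weights with convex, non-decreasing activations) parameterizes a convex function f, and the Bregman divergence follows from its definition via autograd. The architecture details here are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

class ICNN(torch.nn.Module):
    # Minimal input-convex network: softplus is convex and non-decreasing,
    # and clamping hidden-to-hidden weights to be non-negative keeps the
    # whole map convex in its input.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.W0 = torch.nn.Linear(dim, hidden)
        self.U1 = torch.nn.Parameter(torch.rand(hidden, hidden) * 0.1)
        self.W1 = torch.nn.Linear(dim, hidden)
        self.out = torch.nn.Parameter(torch.rand(hidden) * 0.1)

    def forward(self, y):
        z = F.softplus(self.W0(y))
        z = F.softplus(z @ self.U1.clamp(min=0).t() + self.W1(y))
        return z @ self.out.clamp(min=0)

def bregman_divergence(f, x, y):
    # D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>, nonnegative for convex f.
    y = y.detach().requires_grad_(True)
    fy = f(y)
    (gy,) = torch.autograd.grad(fy.sum(), y, create_graph=True)
    return f(x) - fy - ((x - y) * gy).sum(-1)

f = ICNN(dim=8)
x, y = torch.randn(32, 8), torch.randn(32, 8)
div = bregman_divergence(f, x, y)  # shape (32,), >= 0 up to numerical error
```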
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
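The specific algorithms are not described in this summary; as a generic illustration of the min-max training pattern they instantiate, a single-step gradient-based inner maximization (FGSM-style, an assumption rather than the paper's method) looks like:

```python
import torch

def adversarial_step(model, loss_fn, x, y, eps=0.1):
    # Inner maximization: perturb x within an eps-ball to increase the loss,
    # then return the loss on the perturbed input for the outer minimization.
    x_adv = x.detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    (g,) = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * g.sign()).detach()
    return loss_fn(model(x_adv), y)

model = torch.nn.Linear(20, 2)
x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
robust_loss = adversarial_step(model, torch.nn.functional.cross_entropy, x, y)
robust_loss.backward()
```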
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves that minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only confirm the theoretical results but also demonstrate that our approach can substantially outperform comparable state-of-the-art methods.
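For reference, the quantity under discussion can be estimated as below: a (biased) squared-MMD estimator with a Gaussian kernel; the kernel choice and bandwidth are illustrative.

```python
import torch

def mmd2(X, Y, sigma=1.0):
    # Biased estimator of squared MMD with a Gaussian kernel:
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    def k(A, B):
        return torch.exp(-torch.cdist(A, B).pow(2) / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

source, target = torch.randn(100, 32), torch.randn(100, 32) + 0.5
print(mmd2(source, target))
```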
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
- Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
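The certification procedure itself is not reproduced here; as a hedged sketch of why robustness couples to the learned metric, for d_M(a, b) = ||L(a - b)|| the spectral norm of L bounds the worst-case distance shift under a bounded input perturbation:

```python
import torch

# |d_M(a + delta, b) - d_M(a, b)| <= ||L delta|| <= sigma_max(L) * ||delta||
# by the triangle inequality, so the top singular value of L certifies how
# far an eps-perturbation can move the learned distance.
L = torch.randn(16, 16)
eps = 0.1
sigma_max = torch.linalg.matrix_norm(L, ord=2)
certified_change = sigma_max * eps  # worst-case shift for ||delta|| <= eps

a, b = torch.randn(16), torch.randn(16)
delta = eps * torch.nn.functional.normalize(torch.randn(16), dim=0)
shift = ((L @ (a + delta - b)).norm() - (L @ (a - b)).norm()).abs()
assert shift <= certified_change + 1e-5
```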
arXiv Detail & Related papers (2020-06-12T09:17:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.