Provably Robust Metric Learning
- URL: http://arxiv.org/abs/2006.07024v2
- Date: Sat, 19 Dec 2020 07:23:05 GMT
- Title: Provably Robust Metric Learning
- Authors: Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh
- Abstract summary: We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
- Score: 98.50580215125142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metric learning is an important family of algorithms for classification and
similarity search, but the robustness of learned metrics against small
adversarial perturbations is less studied. In this paper, we show that existing
metric learning algorithms, which focus on boosting the clean accuracy, can
result in metrics that are less robust than the Euclidean distance. To overcome
this problem, we propose a novel metric learning algorithm to find a
Mahalanobis distance that is robust against adversarial perturbations, and the
robustness of the resulting model is certifiable. Experimental results show
that the proposed metric learning algorithm improves both certified robust
errors and empirical robust errors (errors under adversarial attacks).
Furthermore, unlike neural network defenses which usually encounter a trade-off
between clean and robust errors, our method does not sacrifice clean errors
compared with previous metric learning methods. Our code is available at
https://github.com/wangwllu/provably_robust_metric_learning.
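As context for the abstract, the Mahalanobis distance it refers to is typically written d_M(x, y) = sqrt((x - y)^T M (x - y)) with M positive semidefinite. A minimal illustrative sketch (not the paper's robust training algorithm) parameterizes M = L^T L, which guarantees positive semidefiniteness; the function name and the choice of L here are hypothetical:

```python
import numpy as np

def mahalanobis_distance(x, y, L):
    """Mahalanobis distance d(x, y) = ||L (x - y)||_2.

    Writing the metric matrix as M = L^T L keeps it positive
    semidefinite; L is the linear transform a metric learning
    algorithm would fit to data.
    """
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

# With L = identity, the Mahalanobis distance reduces to the
# ordinary Euclidean distance.
x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
print(mahalanobis_distance(x, y, np.eye(2)))  # 5.0
```

The paper's contribution is choosing L so that nearest-neighbor decisions under this distance remain certifiably stable under small input perturbations; the sketch above only shows the distance itself.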
Related papers
- Hyp-UML: Hyperbolic Image Retrieval with Uncertainty-aware Metric Learning [8.012146883983227]
Metric learning plays a critical role in training image retrieval and classification.
Hyperbolic embedding can be more effective in representing the hierarchical data structure.
We propose two types of uncertainty-aware metric learning, for the popular Contrastive learning and conventional margin-based metric learning.
arXiv Detail & Related papers (2023-10-12T15:00:06Z)
- Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right? [135.71855998537347]
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy.
We show that this metric is unreliable, as even vacuous blind classifiers can achieve unrealistically high online accuracy.
Existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information.
arXiv Detail & Related papers (2023-05-16T08:29:33Z)
- Algorithms that Approximate Data Removal: New Results and Limitations [2.6905021039717987]
We study the problem of deleting user data from machine learning models trained using empirical risk minimization.
We develop an online unlearning algorithm that is both computationally and memory efficient.
arXiv Detail & Related papers (2022-09-25T17:20:33Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
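For reference, a Bregman divergence for a strictly convex, differentiable generator f is D_f(x, y) = f(x) - f(y) - ⟨∇f(y), x - y⟩. The sketch below is a generic illustration of that definition, not the input-convex neural network approach of the paper; the function names are hypothetical:

```python
import numpy as np

def bregman_divergence(f, grad_f, x, y):
    """D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>.

    Requires f strictly convex and differentiable; the result is
    nonnegative and generally asymmetric in x and y.
    """
    return f(x) - f(y) - grad_f(y) @ (x - y)

# For f(x) = ||x||^2, the Bregman divergence recovers the
# squared Euclidean distance ||x - y||^2.
f = lambda v: float(v @ v)
grad_f = lambda v: 2.0 * v
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(bregman_divergence(f, grad_f, x, y))  # 2.0
```

Other choices of f yield other familiar divergences (e.g. negative entropy gives the KL divergence), which is why learning f directly can represent a broad family of distance-like functions.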
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z)
- Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model [20.948038514886377]
This paper presents a new framework that produces a quantitative metric for detecting misclassification errors.
The framework, RED, builds an error detector on top of the base classifier and estimates uncertainty of the detection scores using Gaussian Processes.
arXiv Detail & Related papers (2020-10-05T15:01:30Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
- Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
- Calibrated neighborhood aware confidence measure for deep metric learning [0.0]
Deep metric learning has been successfully applied to problems in few-shot learning, image retrieval, and open-set classifications.
Measuring the confidence of a deep metric learning model and identifying unreliable predictions is still an open challenge.
This paper focuses on defining a calibrated and interpretable confidence metric that closely reflects the model's classification accuracy.
arXiv Detail & Related papers (2020-06-08T21:05:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.