RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval
- URL: http://arxiv.org/abs/2206.11225v1
- Date: Fri, 17 Jun 2022 16:50:50 GMT
- Title: RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval
- Authors: Yihan Wu, Hongyang Zhang, Heng Huang
- Abstract summary: We propose the first 1-nearest neighbor (NN) image retrieval algorithm, RetrievalGuard, which is provably robust against adversarial perturbations.
We show that the smoothed retrieval model has bounded Lipschitz constant and thus the retrieval score is invariant to $\ell_2$ adversarial perturbations.
- Score: 84.33752026418045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research works have shown that image retrieval models are vulnerable
to adversarial attacks, where slightly modified test inputs could lead to
problematic retrieval results. In this paper, we aim to design a provably
robust image retrieval model which keeps the most important evaluation metric
Recall@1 invariant to adversarial perturbation. We propose the first 1-nearest
neighbor (NN) image retrieval algorithm, RetrievalGuard, which is provably
robust against adversarial perturbations within an $\ell_2$ ball of calculable
radius. The challenge is to design a provably robust algorithm that takes into
consideration the 1-NN search and the high-dimensional nature of the embedding
space. Algorithmically, given a base retrieval model and a query sample, we
build a smoothed retrieval model by carefully analyzing the 1-NN search
procedure in the high-dimensional embedding space. We show that the smoothed
retrieval model has bounded Lipschitz constant and thus the retrieval score is
invariant to $\ell_2$ adversarial perturbations. Experiments on image retrieval
tasks validate the robustness of our RetrievalGuard method.
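The abstract describes the construction only at a high level. As a concrete illustration, the sketch below applies the generic randomized-smoothing recipe to 1-NN retrieval: sample Gaussian noise around the query, take a majority vote over the 1-NN results in the embedding space, and report a Cohen-style certified radius $R = \sigma\,\Phi^{-1}(\underline{p_A})$, where $\underline{p_A}$ is a lower confidence bound on the top vote probability. The names (`base_encoder`, `gallery_embeddings`, `sigma`, `n_samples`, `alpha`) and the normal-approximation confidence bound are assumptions for illustration only; this is not the authors' RetrievalGuard implementation, whose certificate is derived from a careful analysis of the 1-NN search in the high-dimensional embedding space.

```python
# Minimal, hypothetical sketch of a smoothed 1-NN retrieval step in the spirit of
# randomized smoothing. All names (base_encoder, gallery_embeddings, sigma,
# n_samples, alpha) are assumptions for illustration, not the paper's code.
import numpy as np
from scipy.stats import norm


def smoothed_1nn_retrieval(base_encoder, gallery_embeddings, query,
                           sigma=0.25, n_samples=1000, alpha=0.001, rng=None):
    """Majority-vote 1-NN gallery index under Gaussian query noise, plus a
    Cohen-style certified l2 radius; returns (-1, 0.0) when abstaining."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.zeros(len(gallery_embeddings), dtype=int)
    for _ in range(n_samples):
        noisy_query = query + sigma * rng.standard_normal(query.shape)
        emb = base_encoder(noisy_query)                       # embed the noisy query
        dists = np.linalg.norm(gallery_embeddings - emb, axis=1)
        votes[np.argmin(dists)] += 1                          # 1-NN vote in embedding space
    top = int(np.argmax(votes))
    p_hat = votes[top] / n_samples
    # Crude lower confidence bound on the top vote probability; a Clopper-Pearson
    # bound would be the standard (tighter, exact) choice.
    p_lower = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n_samples)
    p_lower = min(p_lower, 1.0 - 1e-9)                        # keep Phi^{-1} finite
    if p_lower <= 0.5:
        return -1, 0.0                                        # abstain: no certificate
    radius = sigma * norm.ppf(p_lower)                        # certified l2 radius
    return top, radius
```

Under this smoothing scheme (and the usual randomized-smoothing assumptions), any perturbation of the query with $\ell_2$ norm below the returned radius cannot change the majority-vote retrieval result, which is the kind of Recall@1 invariance the paper targets.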
Related papers
- Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method [115.29382166356478]
We introduce the adversarial retrieval attack (AREA) task.
It is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model.
We find that the promising results previously reported for attacking NRMs do not generalize to DR models.
We propose to formalize attacks on DR models as a contrastive learning problem in a multi-view representation space.
arXiv Detail & Related papers (2023-08-19T00:24:59Z)
- Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization [73.04187954213471]
We introduce a unified learning approach to simultaneously modeling the coarse- and fine-grained retrieval.
The proposed method has achieved +4.03%, +3.38%, and +2.40% Recall@50 accuracy over a strong baseline.
arXiv Detail & Related papers (2022-11-14T14:25:40Z)
- Geometrically Adaptive Dictionary Attack on Face Recognition [23.712389625037442]
We propose a strategy for query-efficient black-box attacks on face recognition.
Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image.
We show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.
arXiv Detail & Related papers (2021-11-08T10:26:28Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks (a minimal illustrative sketch of median smoothing appears after this list).
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- GeoDA: a geometric framework for black-box adversarial attacks [79.52980486689287]
We propose a framework to generate adversarial examples in one of the most challenging black-box settings.
Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples.
arXiv Detail & Related papers (2020-03-13T20:03:01Z)
- Contextual Search in the Presence of Adversarial Corruptions [33.28268414842846]
We study contextual search, a generalization of binary search in higher dimensions.
We show that these algorithms attain near-optimal regret in the absence of adversarial corruptions.
Our techniques draw inspiration from learning theory, game theory, high-dimensional geometry, and convex analysis.
arXiv Detail & Related papers (2020-02-26T17:25:53Z)
- Progressive Local Filter Pruning for Image Retrieval Acceleration [43.97722250091591]
We propose a new Progressive Local Filter Pruning (PLFP) method for image retrieval acceleration.
Specifically, layer by layer, we analyze the local geometric properties of each filter and select the one that can be replaced by the neighbors.
In this way, the representation ability of the model is preserved.
arXiv Detail & Related papers (2020-01-24T04:28:44Z)
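For the median smoothing entry above, here is a minimal, hypothetical sketch of the core idea for a scalar regressor: replace the regressor's output with the median of its outputs under Gaussian input noise, which is the quantity that percentile/median smoothing certifies. The names `base_regressor`, `sigma`, and `n_samples` are assumptions; the certified upper and lower bounds (obtained from order statistics of the noisy outputs) are omitted to keep the sketch short.

```python
# Hypothetical sketch of median smoothing for a scalar regressor (see the
# "Detection as Regression" entry above); names are assumptions, not that paper's code.
import numpy as np


def median_smoothed_prediction(base_regressor, x, sigma=0.25, n_samples=500, rng=None):
    """Median of the base regressor's outputs under Gaussian input noise.

    The median is far less sensitive than the mean to a few wildly wrong
    outputs, which is what makes percentile/median smoothing certifiable.
    """
    rng = np.random.default_rng() if rng is None else rng
    outputs = np.array([
        float(base_regressor(x + sigma * rng.standard_normal(x.shape)))
        for _ in range(n_samples)
    ])
    return float(np.median(outputs))
```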
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.