Adversarial Attack on Deep Product Quantization Network for Image
Retrieval
- URL: http://arxiv.org/abs/2002.11374v1
- Date: Wed, 26 Feb 2020 09:25:58 GMT
- Title: Adversarial Attack on Deep Product Quantization Network for Image
Retrieval
- Authors: Yan Feng, Bin Chen, Tao Dai, Shutao Xia
- Abstract summary: Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks.
Recent studies show that deep neural networks (DNNs) are vulnerable to inputs with small, maliciously designed perturbations.
We propose product quantization adversarial generation (PQ-AG) to generate adversarial examples for product quantization based retrieval systems.
- Score: 74.85736968193879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep product quantization network (DPQN) has recently received much attention
in fast image retrieval tasks due to its efficiency in encoding
high-dimensional visual features, especially when dealing with large-scale
datasets. Recent studies show that deep neural networks (DNNs) are vulnerable
to inputs with small, maliciously designed perturbations (a.k.a. adversarial
examples). This phenomenon raises security concerns for DPQN in the
testing/deployment stage as well. However, little effort has been devoted to
investigating how adversarial examples affect DPQN. To this end, we propose
product quantization adversarial generation (PQ-AG), a simple yet effective
method to generate adversarial examples for product quantization based
retrieval systems. PQ-AG aims to generate imperceptible adversarial
perturbations for query images to form adversarial queries, whose nearest
neighbors from a targeted product quantization model are not semantically
related to those from the original queries. Extensive experiments show that our
PQ-AG successfully creates adversarial examples that mislead targeted product
quantization retrieval models. Moreover, PQ-AG significantly
degrades retrieval performance in both white-box and black-box settings.
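The abstract describes PQ-AG only at a high level. As a rough illustration of the general recipe (perturb a query under an imperceptibility budget so that its product-quantized code diverges from the clean query's), here is a minimal PyTorch sketch; the `model` and `codebooks` interfaces, the PGD loop, and the cross-entropy surrogate loss are all assumptions for illustration, not the paper's actual PQ-AG objective.

```python
import torch
import torch.nn.functional as F

def pq_adversarial_query(model, codebooks, x, eps=8/255, alpha=1/255, steps=10):
    # model:     frozen feature extractor, images -> (B, M*D) features (assumed)
    # codebooks: list of M tensors, each (K, D): one codebook per subspace (assumed)
    # x:         clean query batch (B, C, H, W); eps: L-inf perturbation budget
    M = len(codebooks)
    with torch.no_grad():
        sub_clean = model(x).chunk(M, dim=1)            # M x (B, D) sub-vectors
        # hard codeword assignment of the clean query in each subspace
        clean_codes = [torch.cdist(s, C).argmin(dim=1)  # (B,)
                       for s, C in zip(sub_clean, codebooks)]

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        sub_adv = model(x + delta).chunk(M, dim=1)
        # treat negative distances to codewords as logits and *maximize* the
        # loss w.r.t. the clean assignments, pushing the perturbed query
        # toward different codewords than the clean one
        loss = sum(F.cross_entropy(-torch.cdist(s, C), t)
                   for s, C, t in zip(sub_adv, codebooks, clean_codes))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # PGD ascent step
            delta.clamp_(-eps, eps)             # keep perturbation imperceptible
        delta.grad.zero_()
    return (x + delta).detach()
```

In the white-box setting the gradients come from the targeted quantization model itself; the abstract's black-box results suggest that such perturbations can also transfer to models whose gradients the attacker cannot access.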
Related papers
- Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment [2.290956583394892]
We propose to build a trustworthy IQA model via Causal Perception inspired Representation Learning (CPRL).
The CPRL representation acts as the cause of the subjective quality label and is invariant to imperceptible adversarial perturbations.
Experiments on four benchmark databases show that the proposed CPRL method outperforms many state-of-the-art adversarial defense methods.
arXiv Detail & Related papers (2024-04-30T13:55:30Z)
- STBA: Towards Evaluating the Robustness of DNNs for Query-Limited Black-box Scenario [50.37501379058119]
We propose the Spatial Transform Black-box Attack (STBA) to craft formidable adversarial examples in the query-limited scenario (see the sketch after this list).
We show that STBA can effectively improve the imperceptibility of adversarial examples and remarkably boost the attack success rate under query-limited settings.
arXiv Detail & Related papers (2024-03-30T13:28:53Z)
- Concurrent Density Estimation with Wasserstein Autoencoders: Some Statistical Insights [20.894503281724052]
Wasserstein Autoencoders (WAEs) have been a pioneering force in the realm of deep generative models.
Our work is an attempt to offer a theoretical understanding of the machinery behind WAEs.
arXiv Detail & Related papers (2023-12-11T18:27:25Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
- Towards Robust Classification with Image Quality Assessment [0.9213700601337386]
Deep convolutional neural networks (DCNNs) are vulnerable to adversarial examples and sensitive to both the perceptual quality and the acquisition conditions of images.
In this paper, we investigate the connection between adversarial manipulation and image quality, then propose a protective mechanism.
Our method combines image quality assessment with knowledge distillation to detect input images that would trigger a DCNN to produce egregiously wrong results.
arXiv Detail & Related papers (2020-04-14T03:27:35Z)
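As referenced in the STBA entry above, the spatial-transform idea can be made concrete with a small sketch: instead of adding pixel noise, it warps the image with a learnable flow field, which tends to be less perceptible. Note that STBA itself is a black-box, query-limited attack; this white-box, gradient-based version and all names in it are illustrative assumptions only, not the authors' method.

```python
import torch
import torch.nn.functional as F

def spatial_transform_attack(model, x, y, steps=20, lr=0.01, flow_scale=0.05):
    # model: classifier under attack; x: (B, C, H, W) images; y: (B,) true labels
    B = x.size(0)
    # identity sampling grid in normalized [-1, 1] coordinates
    theta = torch.eye(2, 3, device=x.device).repeat(B, 1, 1)
    base_grid = F.affine_grid(theta, list(x.shape), align_corners=False)

    flow = torch.zeros_like(base_grid, requires_grad=True)  # learnable displacement
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        # tanh keeps the per-pixel displacement within +/- flow_scale
        x_adv = F.grid_sample(x, base_grid + flow_scale * torch.tanh(flow),
                              align_corners=False)
        loss = -F.cross_entropy(model(x_adv), y)  # ascend on the true-class loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.grid_sample(x, base_grid + flow_scale * torch.tanh(flow),
                             align_corners=False)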