Prototype-supervised Adversarial Network for Targeted Attack of Deep
Hashing
- URL: http://arxiv.org/abs/2105.07553v1
- Date: Mon, 17 May 2021 00:31:37 GMT
- Title: Prototype-supervised Adversarial Network for Targeted Attack of Deep
Hashing
- Authors: Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu
- Abstract summary: Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN).
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
- Score: 65.32148145602865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to its powerful capability of representation learning and high-efficiency
computation, deep hashing has made significant progress in large-scale image
retrieval. However, deep hashing networks are vulnerable to adversarial
examples, which poses a practical security problem that is seldom studied in
the hashing-based retrieval field. In this paper, we propose a novel
prototype-supervised adversarial network (ProS-GAN), which formulates a
flexible generative architecture for efficient and effective targeted hashing
attack. To the best of our knowledge, this is the first generation-based method
to attack deep hashing networks. Generally, our proposed framework consists of
three parts, i.e., a PrototypeNet, a generator, and a discriminator.
Specifically, the designed PrototypeNet embeds the target label into the
semantic representation and learns the prototype code as the category-level
representative of the target label. Moreover, the semantic representation and
the original image are jointly fed into the generator for a flexible targeted
attack. Particularly, the prototype code is adopted to supervise the generator
to construct the targeted adversarial example by minimizing the Hamming
distance between the hash code of the adversarial example and the prototype
code. Furthermore, the generator is trained against the discriminator to
simultaneously encourage the adversarial examples to be visually realistic and
the semantic representation to be informative. Extensive experiments verify that the proposed
framework can efficiently produce adversarial examples with better targeted
attack performance and transferability over state-of-the-art targeted attack
methods of deep hashing. The related code is available at
https://github.com/xunguangwang/ProS-GAN .
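The core objective described above, minimizing the Hamming distance between the hash code of the adversarial example and the prototype code, can be sketched as follows. This is a minimal illustration under the common {-1, +1}^K code convention, not the authors' implementation; the function names and the relaxed (real-valued) codes are assumptions.

```python
import numpy as np

def hamming_distance(b1, b2):
    """Hamming distance between codes in {-1, +1}^K.

    Uses the identity d_H(b1, b2) = (K - <b1, b2>) / 2,
    which also extends to relaxed real-valued codes.
    """
    K = b1.shape[-1]
    return 0.5 * (K - np.sum(b1 * b2, axis=-1))

def targeted_hashing_loss(adv_codes, prototype_code):
    # Hypothetical objective: pull the adversarial hash codes toward the
    # category-level prototype code of the target label.
    return float(np.mean(hamming_distance(adv_codes, prototype_code)))
```

In a full pipeline, `adv_codes` would be the (tanh-relaxed) outputs of the attacked hashing network on the generator's outputs, so the loss stays differentiable for training.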
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval [26.17466361744519]
Adversarial examples pose a security threat to deep hashing models.
Adversarial examples are fabricated by maximizing the Hamming distance between the hash codes of adversarial samples and mainstay features.
For the first time, we formulate adversarial training of deep hashing as a unified minimax structure.
arXiv Detail & Related papers (2023-10-23T07:21:40Z) - Reliable and Efficient Evaluation of Adversarial Robustness for Deep
Hashing-Based Retrieval [20.3473596316839]
We propose a novel Pharos-guided Attack, dubbed PgA, to evaluate the adversarial robustness of deep hashing networks reliably and efficiently.
PgA can directly conduct a reliable and efficient attack on deep hashing-based retrieval by maximizing the similarity between the hash code of the adversarial example and the pharos code.
arXiv Detail & Related papers (2023-03-22T15:36:19Z) - BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean
Label [20.236328601459203]
We propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing.
We show that BadHash can generate imperceptible poisoned samples with strong attack ability and transferability over state-of-the-art deep hashing schemes.
arXiv Detail & Related papers (2022-07-01T09:10:25Z) - CgAT: Center-Guided Adversarial Training for Deep Hashing-Based
Retrieval [12.421908811085627]
We present a min-max based Center-guided Adversarial Training, namely CgAT, to improve the robustness of deep hashing networks.
CgAT learns to mitigate the effects of adversarial samples by minimizing the Hamming distance to the center codes.
Compared with the current state-of-the-art defense method, we significantly improve the defense performance by an average of 18.61%.
arXiv Detail & Related papers (2022-04-18T04:51:08Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance the performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
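DHTA's point-to-set formulation reduces to minimizing the Hamming distance to a single "anchor code" aggregated from the hash codes of target-label objects. A minimal sketch, assuming codes in {-1, +1}^K, a per-bit majority vote for the anchor, and a tie-breaking rule toward +1 (the tie-breaking choice is an assumption for illustration):

```python
import numpy as np

def anchor_code(target_codes):
    """Per-bit majority vote over the hash codes of target-label objects.

    target_codes: array of shape (n, K) with entries in {-1, +1}.
    Ties are broken toward +1 (an assumption for illustration).
    """
    votes = np.sum(target_codes, axis=0)
    return np.where(votes >= 0, 1, -1)

def linf_clip(x_adv, x, eps):
    # Keep the adversarial input within an l_inf ball of radius eps
    # around the original input, matching the perturbation restriction.
    return np.clip(x_adv, x - eps, x + eps)
```

Minimizing the distance to this single anchor code is equivalent to minimizing the average distance to the whole target set, which is what makes the point-to-set objective tractable.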
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.