Enhancing the Transferability via Feature-Momentum Adversarial Attack
- URL: http://arxiv.org/abs/2204.10606v1
- Date: Fri, 22 Apr 2022 09:52:49 GMT
- Title: Enhancing the Transferability via Feature-Momentum Adversarial Attack
- Authors: Xianglong and Yuezun Li and Haipeng Qu and Junyu Dong
- Abstract summary: We describe a new method called Feature-Momentum Adversarial Attack (FMAA) to further improve transferability.
Our method outperforms other state-of-the-art methods by a large margin on different target models.
- Score: 36.449154438599884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferable adversarial attacks have drawn increasing attention due to their practical threat to real-world applications. In particular, feature-level adversarial attacks are a recent branch that enhances transferability by disturbing intermediate features. Existing methods usually create a guidance map for the features, where each value indicates the importance of the corresponding feature element, and then employ an iterative algorithm to disrupt the features accordingly. However, the guidance map is fixed in existing methods, so it cannot consistently reflect the behavior of the network as the image changes across iterations. In this paper, we describe a new method called Feature-Momentum Adversarial Attack (FMAA) to further improve transferability. The key idea of our method is to estimate the guidance map dynamically at each iteration, using momentum, to effectively disturb the category-relevant features. Extensive experiments demonstrate that our method outperforms other state-of-the-art methods by a large margin on different target models.
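Since the abstract describes the algorithm only at a high level, the following is a minimal PyTorch sketch of the general feature-momentum idea: the guidance map is re-estimated from the network's feature gradients at every iteration and accumulated with momentum, rather than fixed once at the start. The layer hook, normalization, loss, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def feature_momentum_attack(model, feature_layer, x, y,
                            eps=16/255, alpha=2/255, steps=10, mu=1.0):
    """Sketch of a feature-momentum attack: disrupt category-relevant
    intermediate features using a momentum-accumulated guidance map.
    All defaults are illustrative, not the paper's settings."""
    feats = {}
    # Hook to capture the intermediate feature map on each forward pass.
    handle = feature_layer.register_forward_hook(
        lambda mod, inp, out: feats.__setitem__("z", out))
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    guidance = None  # momentum-accumulated guidance map

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Guidance map: gradient of the classification loss w.r.t. the
        # intermediate features, re-estimated at this iteration.
        g = torch.autograd.grad(loss_fn(logits, y), feats["z"],
                                retain_graph=True)[0]
        g = g / (g.abs().mean() + 1e-12)  # normalize before accumulation
        guidance = g if guidance is None else mu * guidance + g
        # Step the input so as to suppress features aligned with the map.
        attack_loss = (guidance.detach() * feats["z"]).sum()
        grad_x = torch.autograd.grad(attack_loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad_x.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()

    handle.remove()
    return x_adv
```

The momentum term `mu * guidance + g` is the point of contrast with fixed-guidance methods: the map tracks how the network's feature importance shifts as the image changes across iterations.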
Related papers
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns.
We propose a similar-targeted attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z) - Improving Adversarial Transferability via Intermediate-level Perturbation Decay [79.07074710460012]
We develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization.
Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models.
arXiv Detail & Related papers (2023-04-26T09:49:55Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Diverse Generative Adversarial Perturbations on Attention Space for Transferable Adversarial Attacks [29.034390810078172]
Adversarial attacks with improved transferability have recently received much attention due to their practicality.
Existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface.
We propose Attentive-Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability.
arXiv Detail & Related papers (2022-08-11T06:00:40Z) - Transferable Physical Attack against Object Detection with Separable Attention [14.805375472459728]
Transferable adversarial attacks have long been in the spotlight, since deep learning models have been demonstrated to be vulnerable to adversarial samples.
In this paper, we put forward a novel method of generating physically realizable adversarial camouflage to achieve transferable attack against detection models.
arXiv Detail & Related papers (2022-05-19T14:34:55Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for generating adversarial examples.
Instead of taking only the sign of the gradient, we use the exact gradient direction with a scaling factor to generate adversarial perturbations (a minimal sketch contrasting the two updates follows this list).
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z) - Query-Free Adversarial Transfer via Undertrained Surrogates [14.112444998191698]
We introduce a new method for improving the efficacy of adversarial attacks in a black-box setting by undertraining the surrogate model on which the attacks are generated.
We show that this method transfers well across architectures and outperforms state-of-the-art methods by a wide margin.
arXiv Detail & Related papers (2020-07-01T23:12:22Z)
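As a companion to the Adaptive Perturbation entry above, here is a hedged sketch contrasting the usual sign-based update with an update that follows the exact gradient direction under a scaling factor. The max-based scaling used here is an assumption for illustration, not necessarily the paper's exact choice.

```python
import torch
import torch.nn as nn

def exact_direction_step(model, x_adv, y, alpha=2/255):
    """One attack step that follows the exact gradient direction
    instead of its sign (illustrative, not the paper's code)."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Sign-based update (e.g., I-FGSM) discards per-pixel magnitudes:
    #     x_adv + alpha * grad.sign()
    # The exact-direction update keeps relative magnitudes; this scaling
    # factor (an assumption) rescales the largest component to alpha.
    scale = alpha / (grad.abs().max() + 1e-12)
    return (x_adv + scale * grad).detach()
```

The intuition is that the sign operation throws away how strongly each pixel influences the loss, whereas a scaled exact-gradient step preserves those relative magnitudes while still respecting the per-step budget.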
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.