Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization
- URL: http://arxiv.org/abs/2211.11236v3
- Date: Tue, 16 Jul 2024 08:28:05 GMT
- Title: Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization
- Authors: Jiafeng Wang, Zhaoyu Chen, Kaixun Jiang, Dingkang Yang, Lingyi Hong, Pinxue Guo, Haijing Guo, Wenqiang Zhang
- Abstract summary: Adversarial examples are crafted by adding human-imperceptible perturbations to benign inputs.
They exhibit transferability across models, enabling practical black-box attacks.
We introduce Global Momentum Initialization (GI), providing global momentum knowledge to mitigate gradient elimination.
GI seamlessly integrates with existing transfer methods, significantly improving the success rate of transfer attacks by an average of 6.4%.
- Score: 23.13302900115702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to the benign inputs. Simultaneously, adversarial examples exhibit transferability across models, enabling practical black-box attacks. However, existing methods are still incapable of achieving the desired transfer attack performance. In this work, focusing on gradient optimization and consistency, we analyse the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these challenges, we introduce Global Momentum Initialization (GI), providing global momentum knowledge to mitigate gradient elimination. Specifically, we perform gradient pre-convergence before the attack and a global search during this stage. GI seamlessly integrates with existing transfer methods, significantly improving the success rate of transfer attacks by an average of 6.4% under various advanced defense mechanisms compared to the state-of-the-art method. Ultimately, GI demonstrates strong transferability in both image and video attack domains. Particularly, when attacking advanced defense methods in the image domain, it achieves an average attack success rate of 95.4%. The code is available at https://github.com/Omenzychen/Global-Momentum-Initialization.
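The abstract describes GI as a two-stage procedure: a pre-convergence stage with an enlarged "global" search step that accumulates momentum, followed by the actual attack initialized with that momentum. The PyTorch sketch below illustrates this idea on top of MI-FGSM; the function name gi_mi_fgsm and the hyperparameters pre_steps and search_factor are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

```python
import torch

def gi_mi_fgsm(model, loss_fn, x, y, eps=16/255, steps=10,
               pre_steps=5, search_factor=10.0, mu=1.0):
    """Minimal sketch of MI-FGSM with Global Momentum Initialization.

    Assumes x is a batch of images in NCHW format with values in [0, 1].
    Hyperparameter values are illustrative, not from the paper.
    """
    alpha = eps / steps
    g = torch.zeros_like(x)

    def run_stage(g, step_size, n_iters):
        x_adv = x.clone().detach()
        for _ in range(n_iters):
            x_adv.requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            # Standard MI-FGSM momentum update with an L1-normalised gradient.
            g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
            x_adv = x_adv.detach() + step_size * g.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        return x_adv, g

    # Stage 1: gradient pre-convergence with an enlarged ("global") search
    # step. Only the accumulated momentum is kept; the perturbation is not.
    _, g = run_stage(g, search_factor * alpha, pre_steps)

    # Stage 2: the actual attack, restarted from the clean input but with
    # the globally pre-converged momentum as initialization.
    x_adv, _ = run_stage(g, alpha, steps)
    return x_adv.detach()
```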
Related papers
- AIM: Additional Image Guided Generation of Transferable Adversarial Attacks [72.24101555828256]
Transferable adversarial examples highlight the vulnerability of deep neural networks (DNNs) to imperceptible perturbations across various real-world applications.
In this work, we focus on generative approaches for targeted transferable attacks.
We introduce a novel plug-and-play module into the general generator architecture to enhance adversarial transferability.
arXiv Detail & Related papers (2025-01-02T07:06:49Z) - Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability [20.46894437876869]
We propose an everywhere scheme to boost targeted transferability.
We aim to optimize 'an army of targets' in every local image region.
Our approach is method-agnostic, which means it can be easily combined with existing transferable attacks.
arXiv Detail & Related papers (2025-01-01T03:06:03Z) - Improving Adversarial Transferability with Neighbourhood Gradient Information [20.55829486744819]
Deep neural networks (DNNs) are susceptible to adversarial examples, leading to significant performance degradation.
This work focuses on enhancing the transferability of adversarial examples to narrow this performance gap.
We propose the NGI-Attack, which incorporates Example Backtracking and Multiplex Mask strategies.
arXiv Detail & Related papers (2024-08-11T10:46:49Z) - Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks have generated significant interest for black-box applications.
Existing works essentially optimize a single-level objective w.r.t. the surrogate model directly.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z) - Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
We propose a new attack method based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing perturbations across different images, we optimize across different regions of a single image to achieve self-universality, which removes the need for extra data.
With the feature similarity loss, our method makes the features of adversarial perturbations more dominant than those of benign images.
arXiv Detail & Related papers (2022-09-08T11:21:26Z) - Improving Adversarial Transferability with Spatial Momentum [10.460296317901662]
Deep Neural Networks (DNNs) are vulnerable to adversarial examples.
The momentum-based attack MI-FGSM is one effective method for improving transferability.
We propose a novel method named Spatial Momentum Iterative FGSM Attack.
arXiv Detail & Related papers (2022-03-25T07:03:17Z) - Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z) - Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks [56.96241557830253]
Transfer-based adversarial attacks can effectively evaluate model robustness in the black-box setting.
We propose a conditional generative attacking model, which can generate adversarial examples targeted at different classes.
Our method improves the success rates of targeted black-box attacks by a significant margin over the existing methods.
arXiv Detail & Related papers (2021-07-05T06:17:47Z) - Enhancing the Transferability of Adversarial Attacks through Variance Tuning [6.5328074334512]
We propose a new method called variance tuning to enhance the class of iterative gradient-based attack methods; a sketch of this update appears after this list.
Empirical results on the standard ImageNet dataset demonstrate that our method significantly improves the transferability of gradient-based adversarial attacks.
arXiv Detail & Related papers (2021-03-29T12:41:55Z) - Boosting Adversarial Transferability through Enhanced Momentum [50.248076722464184]
Deep learning models are vulnerable to adversarial examples crafted by adding human-imperceptible perturbations on benign images.
Various momentum-based iterative gradient methods have been shown to be effective at improving adversarial transferability.
We propose an enhanced momentum iterative gradient-based method to further enhance the adversarial transferability.
arXiv Detail & Related papers (2021-03-19T03:10:32Z) - Adversarial example generation with AdaBelief Optimizer and Crop Invariance [8.404340557720436]
Adversarial attacks can be an important method to evaluate and select robust models in safety-critical applications.
We propose AdaBelief Iterative Fast Gradient Method (ABI-FGM) and Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples.
Our method has higher success rates than state-of-the-art gradient-based attack methods.
arXiv Detail & Related papers (2021-02-07T06:00:36Z)
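As a companion to the variance-tuning entry above, here is a minimal PyTorch sketch of a VMI-FGSM-style update: each momentum step uses the current gradient corrected by a variance term estimated from gradients at uniformly sampled neighbours during the previous iteration. The function name vmi_fgsm and the settings n_samples and beta are illustrative assumptions, not taken from the paper's code.

```python
import torch

def vmi_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0,
             n_samples=20, beta=1.5):
    """Illustrative sketch of variance tuning on top of MI-FGSM.

    Assumes x is a batch of images in NCHW format with values in [0, 1].
    """
    alpha = eps / steps

    def grad_of(inp):
        inp = inp.clone().detach().requires_grad_(True)
        loss = loss_fn(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]

    g = torch.zeros_like(x)   # momentum accumulator
    v = torch.zeros_like(x)   # variance (gradient-tuning) term
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad = grad_of(x_adv)
        # Tune the current gradient with the variance term from the
        # previous iteration, then apply the usual momentum update.
        tuned = grad + v
        g = mu * g + tuned / tuned.abs().sum(dim=(1, 2, 3), keepdim=True)
        # Estimate the variance term for the next iteration from
        # gradients at uniformly sampled neighbours of x_adv.
        neigh_grad = torch.zeros_like(x)
        for _ in range(n_samples):
            r = torch.empty_like(x).uniform_(-beta * eps, beta * eps)
            neigh_grad = neigh_grad + grad_of(x_adv + r)
        v = neigh_grad / n_samples - grad
        # Take the signed step and project back into the eps-ball.
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()
```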