Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks
- URL: http://arxiv.org/abs/2307.02828v1
- Date: Thu, 6 Jul 2023 07:52:42 GMT
- Title: Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks
- Authors: Xu Han, Anmin Liu, Chenxuan Yao, Yanbo Fan, Kun He
- Abstract summary: We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to substitute the sign function without extra computational cost.
Our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines.
- Score: 18.05924632169541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are known to be vulnerable to adversarial examples
crafted by adding human-imperceptible perturbations to benign inputs. With attack
success rates approaching 100% in the white-box setting, attention has shifted to
black-box attacks, where the transferability of adversarial examples has gained
significant attention. In either case, common gradient-based methods generally apply
the sign function to the gradient when generating perturbations; this offers a roughly
correct direction and has achieved great success, yet little work has examined its
possible limitations. In this work, we observe that the deviation between the original
gradient and the generated noise may lead to inaccurate gradient update estimation and
suboptimal solutions for adversarial transferability. To this end, we propose a
Sampling-based Fast Gradient Rescaling Method (S-FGRM). Specifically, we use data
rescaling to substitute the sign function without extra computational cost. We further
propose a Depth First Sampling method to eliminate the fluctuation introduced by
rescaling and stabilize the gradient update. Our method can be used in any
gradient-based attack and can be integrated with various input transformation or
ensemble methods to further improve adversarial transferability. Extensive experiments
on the standard ImageNet dataset show that our method significantly boosts the
transferability of gradient-based attacks and outperforms state-of-the-art baselines.
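To make the idea concrete, the minimal NumPy sketch below runs an iterative L_inf-bounded attack on a toy quadratic loss and contrasts the usual sign(gradient) step with a rescaled-gradient step averaged over a few sampled neighbors. The specific rescaling rule (normalizing by the mean absolute component) and the uniform neighborhood sampling are illustrative assumptions, not the exact S-FGRM or Depth First Sampling formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "loss": L(x) = 0.5 * ||A x - b||^2 with an analytic gradient,
# standing in for a classifier's loss on an input x.
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)

def loss_and_grad(x):
    r = A @ x - b
    return 0.5 * r @ r, A.T @ r

def rescale(g):
    # Illustrative substitute for sign(g): keep relative component magnitudes but
    # normalize the scale by the mean absolute value, clipped to [-1, 1].
    return np.clip(g / (np.abs(g).mean() + 1e-12), -1.0, 1.0)

def attack(x0, eps=0.5, alpha=0.05, steps=10, n_samples=4, use_sign=False):
    x = x0.copy()
    for _ in range(steps):
        if use_sign:
            step = np.sign(loss_and_grad(x)[1])
        else:
            # Average rescaled gradients at a few points sampled around x to damp
            # the fluctuation that rescaling introduces (a stand-in for the paper's
            # Depth First Sampling; the sampling scheme here is an assumption).
            grads = [rescale(loss_and_grad(x + rng.uniform(-eps, eps, x.shape))[1])
                     for _ in range(n_samples)]
            step = np.mean(grads, axis=0)
        # Ascend the loss and project back into the L_inf ball around x0.
        x = np.clip(x + alpha * step, x0 - eps, x0 + eps)
    return x

x0 = rng.normal(size=5)
for use_sign in (True, False):
    x_adv = attack(x0, use_sign=use_sign)
    print("sign" if use_sign else "rescaled", "loss:", loss_and_grad(x_adv)[0])
```

The contrast the abstract draws is visible in the update rule: sign(g) discards all magnitude information, whereas the rescaled gradient preserves the relative size of each component while keeping the step bounded.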
Related papers
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such a sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks [19.917677500613788]
Gradient-based approaches generally use the $sign$ function to generate perturbations at the end of the process.
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the transferability of crafted adversarial examples.
arXiv Detail & Related papers (2022-04-06T15:12:20Z)
- Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based
Prior [50.393092185611536]
We consider the black-box adversarial setting, where the adversary needs to craft adversarial examples without access to the gradients of a target model.
Previous methods attempted to approximate the true gradient either by using the transfer gradient of a surrogate white-box model or based on the feedback of model queries.
We propose two prior-guided random gradient-free (PRGF) algorithms based on biased sampling and gradient averaging.
arXiv Detail & Related papers (2022-03-13T04:06:27Z)
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
- Staircase Sign Method for Boosting Adversarial Attacks [123.19227129979943]
Crafting adversarial examples for transfer-based attacks is challenging and remains a research hotspot.
We propose a novel Staircase Sign Method (S$^2$M) to alleviate this issue, thus boosting transfer-based attacks.
Our method can be generally integrated into any transfer-based attack, and the computational overhead is negligible.
arXiv Detail & Related papers (2021-04-20T02:31:55Z)
- Enhancing the Transferability of Adversarial Attacks through Variance
Tuning [6.5328074334512]
We propose a new method called variance tuning to enhance the class of iterative gradient-based attack methods.
Empirical results on the standard ImageNet dataset demonstrate that our method could significantly improve the transferability of gradient-based adversarial attacks.
arXiv Detail & Related papers (2021-03-29T12:41:55Z)
- Boosting Adversarial Transferability through Enhanced Momentum [50.248076722464184]
Deep learning models are vulnerable to adversarial examples crafted by adding human-imperceptible perturbations on benign images.
Various momentum iterative gradient-based methods have been shown to be effective in improving adversarial transferability.
We propose an enhanced momentum iterative gradient-based method to further enhance adversarial transferability (see the sketch after this list).
arXiv Detail & Related papers (2021-03-19T03:10:32Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
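The variance-tuning and enhanced-momentum entries above build on the momentum-iterative family of gradient-based attacks. Below is a minimal sketch of that accumulation rule in the style of MI-FGSM, again on a toy quadratic loss; the decay factor and step size are assumed hyperparameters, not values from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy loss L(x) = 0.5 * ||A x - b||^2 standing in for a classifier's loss.
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)

def grad(x):
    return A.T @ (A @ x - b)

def momentum_attack(x0, eps=0.5, alpha=0.05, steps=10, mu=1.0):
    # MI-FGSM-style update: accumulate an L1-normalized gradient with decay mu,
    # then step along its sign and project back into the L_inf ball around x0.
    x, g = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        gt = grad(x)
        g = mu * g + gt / (np.abs(gt).sum() + 1e-12)  # momentum accumulation
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x

x0 = rng.normal(size=5)
x_adv = momentum_attack(x0)
print("loss before:", 0.5 * np.sum((A @ x0 - b) ** 2))
print("loss after: ", 0.5 * np.sum((A @ x_adv - b) ** 2))
```

Variance tuning and enhanced momentum both modify how the accumulated gradient g is formed (for example, by incorporating gradient statistics from neighboring points); those modifications are not shown here.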
This list is automatically generated from the titles and abstracts of the papers on this site.