Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks
- URL: http://arxiv.org/abs/2204.02887v1
- Date: Wed, 6 Apr 2022 15:12:20 GMT
- Title: Sampling-based Fast Gradient Rescaling Method for Highly Transferable
Adversarial Attacks
- Authors: Xu Han, Anmin Liu, Yifeng Xiong, Yanbo Fan, Kun He
- Abstract summary: Gradient-based approaches generally use the $sign$ function to generate perturbations at the end of the process.
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the transferability of crafted adversarial examples.
- Score: 19.917677500613788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been shown to be highly vulnerable to
adversarial examples crafted by adding human-imperceptible perturbations to
benign inputs. After impressive attack success rates were achieved in the
white-box setting, attention has shifted to black-box attacks. In either case,
common gradient-based approaches generally use the $sign$ function to generate
perturbations at the end of the process. However, only a few works have paid
attention to the limitations of the $sign$ function. The deviation between the
original gradient and the generated noise may lead to inaccurate gradient
updates and suboptimal solutions for adversarial transferability, which is
crucial for black-box attacks. To address this issue, we propose a
Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the
transferability of crafted adversarial examples. Specifically, we use data
rescaling to substitute for the inefficient $sign$ function in gradient-based
attacks, without extra computational cost. We also propose a Depth First
Sampling method to eliminate the fluctuation introduced by rescaling and to
stabilize the gradient update. Our method can be used in any gradient-based
attack and can be integrated with various input-transformation or ensemble
methods to further improve adversarial transferability. Extensive experiments
on the standard ImageNet dataset show that S-FGRM significantly boosts the
transferability of gradient-based attacks and outperforms state-of-the-art
baselines.
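To make the sign-free update concrete, below is a minimal PyTorch sketch in the spirit of S-FGRM: the $sign$ quantization is replaced by a gradient rescaling, and gradients are averaged over sampled neighbours to damp the fluctuation that rescaling introduces. The `rescale` function, the plain Gaussian sampling (a stand-in for the paper's Depth First Sampling), and the hyperparameters `num_samples` and `sigma` are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def rescale(grad: torch.Tensor) -> torch.Tensor:
    # Illustrative rescaling (assumption): give the raw gradient the same
    # average per-element magnitude as sign(grad), i.e. mean |g_i| == 1,
    # while preserving its direction.
    return grad / (grad.abs().mean() + 1e-12)

def s_fgrm(model, loss_fn, x, y, eps=16/255, steps=10, num_samples=4, sigma=0.01):
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad_acc = torch.zeros_like(x_adv)
        # Average gradients over sampled neighbours to stabilize the update
        # (a plain Gaussian stand-in for the paper's Depth First Sampling).
        for _ in range(num_samples):
            x_s = (x_adv + sigma * torch.randn_like(x_adv)).requires_grad_(True)
            grad_acc += torch.autograd.grad(loss_fn(model(x_s), y), x_s)[0]
        g = rescale(grad_acc / num_samples)        # replaces torch.sign(...)
        x_adv = (x_adv + alpha * g).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the L_inf ball
        x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv
```

Replacing `rescale` with `torch.sign` and setting `num_samples=1, sigma=0` recovers the standard iterative sign-based (I-FGSM) update that the abstract contrasts against.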
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by
gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such a sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
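The contrast RGD draws can be shown in two lines; the sketch below is hedged, since the paper's actual algorithm and step-size analysis may differ, and the normalization used here is an illustrative choice.

```python
import torch

def pgd_step(x_adv: torch.Tensor, grad: torch.Tensor, alpha: float) -> torch.Tensor:
    # standard sign-based step: every coordinate is quantized to +/-1
    return x_adv + alpha * grad.sign()

def rgd_step(x_adv: torch.Tensor, grad: torch.Tensor, alpha: float) -> torch.Tensor:
    # sign-free step in the spirit of RGD: keep the raw gradient direction,
    # normalized (an assumption) so the step magnitude stays comparable
    return x_adv + alpha * grad / (grad.abs().mean() + 1e-12)
```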
- Boosting Decision-Based Black-Box Adversarial Attack with Gradient Priors [37.987522238329554]
We propose a novel Decision-based Black-box Attack framework with Gradient Priors (DBA-GP).
DBA-GP seamlessly integrates the data-dependent gradient prior and time-dependent prior into the gradient estimation procedure.
Extensive experiments have demonstrated that the proposed method outperforms other strong baselines significantly.
arXiv Detail & Related papers (2023-10-29T15:05:39Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks [18.05924632169541]
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to substitute the sign function without extra computational cost.
Our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines.
arXiv Detail & Related papers (2023-07-06T07:52:42Z)
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
- Staircase Sign Method for Boosting Adversarial Attacks [123.19227129979943]
Crafting adversarial examples for transfer-based attacks is challenging and remains a research hotspot.
We propose a novel Staircase Sign Method (S$^2$M) to alleviate this issue, thus boosting transfer-based attacks.
Our method can be generally integrated into any transfer-based attack, and the computational overhead is negligible.
arXiv Detail & Related papers (2021-04-20T02:31:55Z)
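The staircase idea admits a short sketch: instead of collapsing every gradient coordinate to +/-1, coordinates are weighted by the percentile bucket of their magnitude. The bucket count `k` and the weight schedule below are illustrative assumptions, not the exact S$^2$M construction.

```python
import torch

def staircase_sign(grad: torch.Tensor, k: int = 4) -> torch.Tensor:
    mag = grad.abs().flatten()
    # k-quantile thresholds over the magnitudes (interior boundaries only)
    bounds = torch.quantile(mag, torch.linspace(0.0, 1.0, k + 1, device=grad.device))[1:-1]
    # bucket index 0..k-1 per coordinate -> staircase weight in (0, 2]
    idx = torch.bucketize(mag, bounds).reshape(grad.shape).float()
    return grad.sign() * (2.0 * (idx + 1.0) / k)
```

The weights average to roughly 1 across coordinates, so the expected step size stays close to that of the plain sign update while large-magnitude coordinates move further.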
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning [6.5328074334512]
We propose a new method, called variance tuning, to enhance the class of iterative gradient-based attack methods.
Empirical results on the standard ImageNet dataset demonstrate that our method could significantly improve the transferability of gradient-based adversarial attacks.
arXiv Detail & Related papers (2021-03-29T12:41:55Z)
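Below is a hedged PyTorch sketch of the variance-tuning idea as commonly described (VMI-FGSM style): the current gradient is corrected by a variance term gathered from the neighbourhood of the previous iterate before the momentum update. The neighbourhood radius `beta`, sample count `n`, and momentum `mu` are illustrative defaults, not the paper's exact settings.

```python
import torch

def vmi_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0, n=20, beta=1.5):
    alpha = eps / steps
    x_adv = x.clone().detach()
    g_mom = torch.zeros_like(x)   # momentum accumulator
    v = torch.zeros_like(x)       # variance-tuning term
    for _ in range(steps):
        x_req = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_req), y), x_req)[0]
        g_hat = grad + v                                         # tuned gradient
        g_mom = mu * g_mom + g_hat / (g_hat.abs().mean() + 1e-12)
        # variance term for the next step, estimated from sampled neighbours
        nbr = torch.zeros_like(x)
        for _ in range(n):
            x_s = (x_adv + (2 * torch.rand_like(x) - 1) * beta * eps).detach().requires_grad_(True)
            nbr += torch.autograd.grad(loss_fn(model(x_s), y), x_s)[0]
        v = nbr / n - grad
        x_adv = (x_adv + alpha * g_mom.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # L_inf projection
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```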
- Boosting Adversarial Transferability through Enhanced Momentum [50.248076722464184]
Deep learning models are vulnerable to adversarial examples crafted by adding human-imperceptible perturbations to benign images.
Various momentum iterative gradient-based methods have been shown to be effective at improving adversarial transferability.
We propose an enhanced momentum iterative gradient-based method to further enhance adversarial transferability.
arXiv Detail & Related papers (2021-03-19T03:10:32Z)
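A hedged sketch of the enhanced-momentum idea follows: before the usual momentum accumulation, the gradient is averaged over points sampled along the previous averaged-gradient direction, rather than taken at the current iterate alone. The sample count `n` and the sampling `radius` are illustrative assumptions.

```python
import torch

def emi_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0, n=11, radius=7.0):
    alpha = eps / steps
    x_adv = x.clone().detach()
    g_mom = torch.zeros_like(x)   # momentum accumulator
    g_bar = torch.zeros_like(x)   # previous averaged gradient
    for _ in range(steps):
        avg = torch.zeros_like(x)
        # look-ahead samples along the previous averaged-gradient direction
        for c in torch.linspace(-radius, radius, n):
            x_s = (x_adv + c * alpha * g_bar).detach().requires_grad_(True)
            avg += torch.autograd.grad(loss_fn(model(x_s), y), x_s)[0]
        g_bar = avg / n
        g_mom = mu * g_mom + g_bar / (g_bar.abs().mean() + 1e-12)
        x_adv = (x_adv + alpha * g_mom.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # L_inf projection
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

With `g_bar` fixed at zero the sampling loop collapses to a single gradient at the current iterate, recovering the plain momentum (MI-FGSM-style) update this entry builds on.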
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)