Improving Adversarial Transferability with Spatial Momentum
- URL: http://arxiv.org/abs/2203.13479v1
- Date: Fri, 25 Mar 2022 07:03:17 GMT
- Title: Improving Adversarial Transferability with Spatial Momentum
- Authors: Guoqiu Wang, Xingxing Wei, Huanqian Yan
- Abstract summary: Deep Neural Networks (DNN) are vulnerable to adversarial examples.
Momentum-based attack (MI-FGSM) is one effective method to improve transferability.
We propose a novel method named Spatial Momentum Iterative FGSM Attack.
- Score: 10.460296317901662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial examples. Although
many adversarial attack methods achieve satisfactory attack success rates under
the white-box setting, they usually show poor transferability when attacking
other DNN models. Momentum-based attack (MI-FGSM) is one effective method to
improve transferability. It integrates the momentum term into the iterative
process, which can stabilize the update directions by adding the gradients'
temporal correlation for each pixel. We argue that this temporal momentum alone
is not enough: gradients from the spatial domain within an image, i.e.,
gradients from the context pixels centered on the target pixel, are also
important for stabilization. To that end, in this paper we propose a novel
method named Spatial Momentum Iterative FGSM Attack (SMI-FGSM), which
introduces the mechanism of momentum accumulation from temporal domain to
spatial domain by considering the context gradient information from different
regions within the image. SMI-FGSM is then integrated with MI-FGSM to
simultaneously stabilize the gradients' update direction from both the temporal
and spatial domains. The final method is called SM$^2$I-FGSM. Extensive
experiments on the ImageNet dataset show that SM$^2$I-FGSM indeed further
enhances transferability: it achieves the best transfer success rates against
multiple mainstream undefended and defended models, outperforming
state-of-the-art methods by a large margin.
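To make the two momentum mechanisms concrete, here is a minimal PyTorch-style sketch of one SM$^2$I-FGSM-like iteration. It is reconstructed from the abstract alone rather than from the authors' released code: the use of random shifts to sample context regions, the region count `n_regions`, and all function and variable names are illustrative assumptions.

```python
import torch

def sm2i_fgsm_step(model, loss_fn, x_adv, y, g, mu=1.0, alpha=2 / 255,
                   n_regions=4, max_shift=3):
    # Spatial momentum: average gradients over randomly shifted copies of
    # the input, so each pixel's update also reflects its context pixels.
    spatial_grad = torch.zeros_like(x_adv)
    for _ in range(n_regions):
        dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
        x_shift = torch.roll(x_adv, shifts=(dx, dy), dims=(-2, -1)).detach()
        x_shift.requires_grad_(True)
        loss = loss_fn(model(x_shift), y)
        spatial_grad += torch.autograd.grad(loss, x_shift)[0]
    spatial_grad /= n_regions

    # Temporal momentum (as in MI-FGSM): accumulate the L1-normalized
    # spatial gradient into the running momentum term g.
    g = mu * g + spatial_grad / spatial_grad.abs().sum()
    x_adv = (x_adv + alpha * g.sign()).detach()
    return x_adv, g
```

A full attack loop would also project x_adv back into the $\epsilon$-ball around the original image and clip it to the valid pixel range; those steps are omitted here for brevity.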
Related papers
- SVasP: Self-Versatility Adversarial Style Perturbation for Cross-Domain Few-Shot Learning [21.588320570295835]
Cross-Domain Few-Shot Learning aims to transfer knowledge from seen source domains to unseen target domains.
Recent studies focus on utilizing visual styles to bridge the domain gap between different domains.
This paper proposes a novel crop-global style perturbation method, called Self-Versatility Adversarial Style Perturbation (SVasP).
arXiv Detail & Related papers (2024-12-12T08:58:42Z)
- Improving Adversarial Transferability with Neighbourhood Gradient Information [20.55829486744819]
Deep neural networks (DNNs) are susceptible to adversarial examples, leading to significant performance degradation.
This work focuses on enhancing the transferability of adversarial examples to narrow this performance gap.
We propose the NGI-Attack, which incorporates Example Backtracking and Multiplex Mask strategies.
arXiv Detail & Related papers (2024-08-11T10:46:49Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks [18.05924632169541]
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to substitute the sign function without extra computational cost.
Our method could significantly boost the transferability of gradient-based attacks and outperform the state-of-the-art baselines.
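As a rough illustration of the rescaling idea summarized above, the sketch below swaps the usual sign step for a rescaled raw gradient. The specific normalization is an assumption for illustration; the summary does not give the paper's exact formulation.

```python
import torch

def rescaled_step(x_adv, grad, alpha=2 / 255):
    # Instead of alpha * grad.sign() (the usual FGSM-style update),
    # rescale the raw gradient into [-1, 1] so relative per-pixel
    # magnitudes are preserved. This particular rescaling is a guess.
    rescaled = grad / (grad.abs().max() + 1e-12)
    return x_adv + alpha * rescaled
```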
arXiv Detail & Related papers (2023-07-06T07:52:42Z)
- Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization [23.13302900115702]
Adversarial examples are crafted by adding human-imperceptible perturbations to benign inputs.
Such adversarial examples exhibit transferability across models, enabling practical black-box attacks.
We introduce Global Momentum Initialization (GI), providing global momentum knowledge to mitigate gradient elimination.
GI seamlessly integrates with existing transfer methods, significantly improving the success rate of transfer attacks by an average of 6.4%.
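The phrase "global momentum knowledge" suggests a warm-up phase that accumulates momentum before the attack iterations proper; the sketch below encodes one plausible reading of that idea and is an assumption, not the paper's published algorithm.

```python
import torch

def global_momentum_init(model, loss_fn, x, y, warmup_iters=5,
                         mu=1.0, alpha=2 / 255):
    # Hypothetical warm-up: run a few MI-FGSM-style iterations, keep the
    # accumulated momentum g, but discard the perturbed image, so g can
    # serve as the initial momentum of the real transfer attack.
    g = torch.zeros_like(x)
    x_w = x.clone()
    for _ in range(warmup_iters):
        x_in = x_w.detach().requires_grad_(True)
        loss = loss_fn(model(x_in), y)
        grad = torch.autograd.grad(loss, x_in)[0]
        g = mu * g + grad / grad.abs().sum()
        x_w = x_w + alpha * g.sign()
    return g
```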
arXiv Detail & Related papers (2022-11-21T07:59:22Z)
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is proposed based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation on different images, optimizing it on different regions of a single image achieves self-universality without requiring extra data.
With the feature similarity loss, our method makes the features of adversarial perturbations more dominant than those of benign images.
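To illustrate the region-based self-universality idea, the sketch below computes a cosine feature-similarity loss between a full adversarial image and a randomly cropped, resized region of it. The crop size, the feature_extractor interface, and the loss form are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def self_universality_loss(feature_extractor, x_adv, crop=112):
    # Features of the full adversarial image.
    f_global = feature_extractor(x_adv).flatten(1)
    # Features of a random local region, resized back to the input size;
    # maximizing their similarity encourages the perturbation to work
    # everywhere in the image ("self-universal").
    h, w = x_adv.shape[-2:]
    top = torch.randint(0, h - crop + 1, (1,)).item()
    left = torch.randint(0, w - crop + 1, (1,)).item()
    patch = x_adv[..., top:top + crop, left:left + crop]
    patch = F.interpolate(patch, size=(h, w), mode="bilinear",
                          align_corners=False)
    f_local = feature_extractor(patch).flatten(1)
    # Negative cosine similarity: minimizing this maximizes similarity.
    return -F.cosine_similarity(f_global, f_local, dim=1).mean()
```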
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
- Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion [51.11295961195151]
We exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels.
Based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
arXiv Detail & Related papers (2022-06-10T05:16:50Z)
- Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks [19.917677500613788]
Gradient-based approaches generally use the $sign$ function to generate perturbations at the end of the process.
We propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM) to improve the transferability of crafted adversarial examples.
arXiv Detail & Related papers (2022-04-06T15:12:20Z)
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
- Boosting Adversarial Transferability through Enhanced Momentum [50.248076722464184]
Deep learning models are vulnerable to adversarial examples crafted by adding human-imperceptible perturbations on benign images.
Various momentum iterative gradient-based methods are shown to be effective to improve the adversarial transferability.
We propose an enhanced momentum iterative gradient-based method to further enhance the adversarial transferability.
arXiv Detail & Related papers (2021-03-19T03:10:32Z)