Exploring Transferable and Robust Adversarial Perturbation Generation
from the Perspective of Network Hierarchy
- URL: http://arxiv.org/abs/2108.07033v1
- Date: Mon, 16 Aug 2021 11:52:41 GMT
- Title: Exploring Transferable and Robust Adversarial Perturbation Generation
from the Perspective of Network Hierarchy
- Authors: Ruikui Wang, Yuanfang Guo, Ruijie Yang and Yunhong Wang
- Abstract summary: The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.
We propose a transferable and robust adversarial perturbation generation (TRAP) method.
Our TRAP achieves impressive transferability and high robustness against certain interferences.
- Score: 52.153866313879924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The transferability and robustness of adversarial examples are two practical
yet important properties for black-box adversarial attacks. In this paper, we
explore effective mechanisms to boost both of them from the perspective of
network hierarchy, where a typical network can be hierarchically divided into
output stage, intermediate stage and input stage. Because of the
over-specialization of the source model, we can hardly improve the
transferability and robustness of the adversarial perturbations in the output
stage. Therefore, we focus on the intermediate and input stages in this paper
and propose a transferable and robust adversarial perturbation generation
(TRAP) method. Specifically, we propose a dynamically guided mechanism that
continuously calculates accurate directional guidance for perturbation
generation in the intermediate stage. In
the input stage, instead of the single-form transformation augmentations
adopted in the existing methods, we leverage multiform affine transformation
augmentations to further enrich the input diversity and boost the robustness
and transferability of the adversarial perturbations. Extensive experiments
demonstrate that our TRAP achieves impressive transferability and high
robustness against certain interferences.
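To make the input-stage idea concrete, here is a minimal sketch of averaging the input gradient over multiform affine augmentations (rotation, translation, scaling, shearing). It assumes PyTorch/torchvision, a single image tensor of shape (C, H, W) with a length-1 label tensor, and the parameter ranges and function names (`random_affine`, `augmented_gradient`) are illustrative guesses, not the paper's implementation.

```python
import random

import torch
import torchvision.transforms.functional as TF

def random_affine(x):
    """One randomly parameterized affine transform of an image tensor
    x with shape (C, H, W). The ranges below are illustrative guesses,
    not the paper's settings."""
    return TF.affine(
        x,
        angle=random.uniform(-15, 15),        # rotation (degrees)
        translate=[random.randint(-8, 8),     # horizontal shift (pixels)
                   random.randint(-8, 8)],    # vertical shift (pixels)
        scale=random.uniform(0.9, 1.1),       # isotropic scaling factor
        shear=random.uniform(-10, 10),        # shear (degrees)
    )

def augmented_gradient(model, loss_fn, x_adv, y, n_copies=5):
    """Average the loss gradient over several affine-transformed copies,
    the standard way input-diversity attacks stabilize the update
    direction so the perturbation does not overfit one fixed view."""
    x = x_adv.clone().detach().requires_grad_(True)
    loss = sum(loss_fn(model(random_affine(x).unsqueeze(0)), y)
               for _ in range(n_copies)) / n_copies
    return torch.autograd.grad(loss, x)[0]
```

In a full attack loop, this averaged gradient would simply replace the plain input gradient in an iterative FGSM-style update.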
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs)
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
arXiv Detail & Related papers (2024-11-04T23:07:51Z) - Towards Transferable Adversarial Attacks with Centralized Perturbation [4.689122927344728]
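One plausible reading of the evolution-triangle sampling above is drawing convex combinations of the triangle's three vertices; here is a minimal PyTorch sketch under that assumption (the Dirichlet weighting and the function name are illustrative, not taken from the paper):

```python
import torch

def sample_from_triangle(x_clean, x_hist, x_adv, concentration=1.0):
    """Draw a random point inside the 'evolution triangle' spanned by the
    clean image, a historical adversarial example, and the current one.
    Barycentric weights come from a symmetric Dirichlet distribution, so
    they are nonnegative and sum to one."""
    w = torch.distributions.Dirichlet(
        torch.full((3,), concentration)).sample()
    return w[0] * x_clean + w[1] * x_hist + w[2] * x_adv
```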
- Towards Transferable Adversarial Attacks with Centralized Perturbation [4.689122927344728]
Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs)
Current transferable attacks create adversarial perturbation over the entire image, resulting in excessive noise that overfits the source model.
We propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation.
arXiv Detail & Related papers (2023-12-11T08:25:50Z) - Improving Adversarial Transferability by Stable Diffusion [36.97548018603747]
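As a rough illustration of concentrating perturbation energy in dominant frequency bands, the sketch below applies a fixed FFT low-pass mask to a perturbation. The paper optimizes fine-grained frequency coefficients rather than applying a fixed mask, so treat this purely as an assumption-laden approximation.

```python
import torch

def centralize_perturbation(delta, keep_ratio=0.25):
    """Keep only the low-frequency portion of a perturbation delta with
    shape (C, H, W) via an FFT low-pass mask - one simple way to
    concentrate perturbation energy. Illustrative only."""
    C, H, W = delta.shape
    f = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    mask = torch.zeros(H, W)
    h, w = int(H * keep_ratio), int(W * keep_ratio)
    mask[H//2 - h//2 : H//2 + h//2, W//2 - w//2 : W//2 + w//2] = 1.0
    f = f * mask                                  # zero high frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real
```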
- Improving Adversarial Transferability by Stable Diffusion [36.97548018603747]
Deep neural networks (DNNs) are susceptible to adversarial examples, which introduce imperceptible perturbations to benign samples, deceiving predictions.
We introduce a novel attack method called Stable Diffusion Attack Method (SDAM), which incorporates samples generated by Stable Diffusion to augment input images.
arXiv Detail & Related papers (2023-11-18T09:10:07Z) - Why Does Little Robustness Help? Understanding and Improving Adversarial
Transferability from Surrogate Training [24.376314203167016]
Adversarial examples (AEs) for DNNs have been shown to be transferable.
In this paper, we take a further step towards understanding adversarial transferability.
arXiv Detail & Related papers (2023-07-15T19:20:49Z) - Cross-modal Orthogonal High-rank Augmentation for RGB-Event
Transformer-trackers [58.802352477207094]
We explore the great potential of a pre-trained vision Transformer (ViT) to bridge the vast distribution gap between two modalities.
We propose a mask modeling strategy that randomly masks a specific modality of some tokens to enforce proactive interaction between tokens from different modalities.
Experiments demonstrate that our plug-and-play training augmentation techniques can significantly boost state-of-the-art one-stream and two-stream trackers in terms of both tracking precision and success rate.
arXiv Detail & Related papers (2023-07-09T08:58:47Z) - Improving Adversarial Transferability via Intermediate-level
Perturbation Decay [79.07074710460012]
We develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization.
Experimental results show that it outperforms the state of the art by large margins in attacking various victim models.
arXiv Detail & Related papers (2023-04-26T09:49:55Z) - Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text
Generation via Concentrating Attention [85.5379146125199]
Powerful Transformer architectures have proven superior in generating high-quality sentences.
In this work, we find that sparser attention values in Transformers could improve diversity.
We introduce a novel attention regularization loss to control the sharpness of the attention distribution.
arXiv Detail & Related papers (2022-11-14T07:53:16Z) - XAI for Transformers: Better Explanations through Conservative
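A common way to implement such a regularizer is to penalize the entropy of each attention row so the distribution concentrates; a minimal sketch follows, where the loss form is an assumption rather than the paper's exact objective:

```python
import torch

def attention_sharpness_loss(attn, eps=1e-9):
    """Mean negative entropy of the attention rows.
    attn: (batch, heads, queries, keys), rows already softmax-normalized.
    Adding lambda * this term to the training loss sharpens attention;
    subtracting it flattens attention. Illustrative, not the paper's code."""
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # per-row entropy
    return -entropy.mean()
```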
- XAI for Transformers: Better Explanations through Conservative Propagation [60.67748036747221]
We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction.
Our proposal can be seen as a proper extension of the well-established LRP method to Transformers.
arXiv Detail & Related papers (2022-02-15T10:47:11Z) - Can we have it all? On the Trade-off between Spatial and Adversarial
Robustness of Neural Networks [21.664470275289403]
We prove a quantitative trade-off between spatial and adversarial robustness in a simple statistical setting.
We propose a method based on curriculum learning that trains gradually on more difficult perturbations (both spatial and adversarial) to improve spatial and adversarial robustness simultaneously.
arXiv Detail & Related papers (2020-02-26T06:25:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.