Structure Invariant Transformation for better Adversarial Transferability
- URL: http://arxiv.org/abs/2309.14700v1
- Date: Tue, 26 Sep 2023 06:31:32 GMT
- Title: Structure Invariant Transformation for better Adversarial Transferability
- Authors: Xiaosen Wang, Zeliang Zhang, Jianping Zhang
- Abstract summary: We propose a novel input transformation based attack, called Structure Invariant Attack (SIA).
SIA applies a random image transformation onto each image block to craft a set of diverse images for gradient calculation.
Experiments on the standard ImageNet dataset demonstrate that SIA exhibits much better transferability than the existing SOTA input transformation based attacks.
- Score: 9.272426833639615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the severe vulnerability of Deep Neural Networks (DNNs) to
adversarial examples, there is an urgent need for an effective adversarial
attack to identify the deficiencies of DNNs in security-sensitive applications.
As one of the prevalent classes of black-box adversarial attacks, existing
transfer-based attacks still cannot match the performance of white-box
attacks. Among them, input transformation based attacks have shown
remarkable effectiveness in boosting transferability. In this work, we find
that the existing input transformation based attacks transform the input image
globally, resulting in limited diversity of the transformed images. We
postulate that more diverse transformed images result in better
transferability. Thus, we investigate how to apply various transformations
locally to the input image to improve such diversity while preserving the
structure of the image. To this end, we propose a novel input transformation
based attack, called Structure Invariant Attack (SIA), which applies a random
image transformation onto each image block to craft a set of diverse images for
gradient calculation. Extensive experiments on the standard ImageNet dataset
demonstrate that SIA exhibits much better transferability than the existing
SOTA input transformation based attacks on CNN-based and transformer-based
models, showing its generality and superiority in boosting transferability.
Code is available at https://github.com/xiaosen-wang/SIT.
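Below is a minimal PyTorch sketch of this idea: split the image into an s x s grid of blocks, apply an independently sampled transformation to each block while keeping the block layout (and hence the image structure) fixed, and average gradients over several transformed copies. The transform pool, grid size, and copy count are illustrative assumptions rather than the paper's exact settings; see the repository above for the official implementation.

```python
import random

import torch
import torch.nn.functional as F

# Illustrative pool of shape-preserving block transformations (an assumption;
# the paper defines its own set of image transformations).
TRANSFORMS = [
    lambda b: torch.flip(b, dims=[-1]),                     # horizontal flip
    lambda b: torch.flip(b, dims=[-2]),                     # vertical flip
    lambda b: torch.flip(b, dims=[-2, -1]),                 # 180-degree rotation
    lambda b: b * (0.5 + torch.rand((), device=b.device)),  # random rescaling
    lambda b: b + 0.1 * torch.randn_like(b),                # additive noise
    lambda b: b * (torch.rand_like(b) > 0.1).float(),       # random dropout
]

def structure_invariant_transform(x: torch.Tensor, s: int = 2) -> torch.Tensor:
    """Transform each block of an s x s grid independently; the global block
    layout is unchanged, preserving the image structure. Assumes H and W of
    x (B, C, H, W) are divisible by s."""
    b_h, b_w = x.shape[-2] // s, x.shape[-1] // s
    rows = []
    for i in range(s):
        row = [random.choice(TRANSFORMS)(
                   x[..., i * b_h:(i + 1) * b_h, j * b_w:(j + 1) * b_w])
               for j in range(s)]
        rows.append(torch.cat(row, dim=-1))
    return torch.cat(rows, dim=-2)

def sia_gradient(model, x: torch.Tensor, y: torch.Tensor, n_copies: int = 8):
    """Average the loss gradient over several independently transformed copies."""
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(structure_invariant_transform(x)), y)
               for _ in range(n_copies)) / n_copies
    loss.backward()
    return x.grad
```

In an iterative transfer attack (e.g. an MI-FGSM-style loop), `sia_gradient` would stand in for the plain gradient at each step.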
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
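As a rough illustration of the sampling step, one simple way to draw from the region spanned by the clean, historical, and current adversarial examples is a random convex combination of the three; the Dirichlet weighting below is an assumption for illustration, not necessarily the paper's exact procedure.

```python
import torch

def sample_evolution_triangle(x_clean: torch.Tensor, x_hist: torch.Tensor,
                              x_curr: torch.Tensor) -> torch.Tensor:
    # Random convex weights over the triangle's three vertices.
    w = torch.distributions.Dirichlet(torch.ones(3)).sample()
    return w[0] * x_clean + w[1] * x_hist + w[2] * x_curr
```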
arXiv Detail & Related papers (2024-11-04T23:07:51Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed attack achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer [32.644062141738246]
A style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans.
We propose a novel attack method named Style Transfer Method (STM) that utilizes an arbitrary style transfer network to transform images into different domains.
Our proposed method significantly improves adversarial transferability on both normally trained and adversarially trained models.
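A rough sketch of the transformation step: run the input through an arbitrary style transfer network and mix the stylized output back with the original image so the semantic content survives. Here `style_net` is a placeholder for any pretrained arbitrary style transfer model, and the mixing weight is an illustrative assumption.

```python
import torch

def style_transform(x: torch.Tensor, style_net, mix: float = 0.5) -> torch.Tensor:
    """Shift low-level feature statistics via style transfer, keep semantics."""
    styled = style_net(x)  # hypothetical pretrained arbitrary style transfer net
    return mix * x + (1.0 - mix) * styled
```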
arXiv Detail & Related papers (2023-08-21T09:58:13Z)
- Boosting Adversarial Transferability by Block Shuffle and Rotation [25.603307815394764]
We propose a novel input transformation based attack called block shuffle and rotation (BSR).
BSR splits the input image into several blocks, then randomly shuffles and rotates these blocks to construct a set of new images for gradient calculation.
Empirical evaluations on the ImageNet dataset demonstrate that BSR achieves significantly better transferability than existing input transformation based methods.
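A minimal sketch of this transformation, with the grid size and rotation range as illustrative assumptions:

```python
import random

import torch
import torchvision.transforms.functional as TF

def block_shuffle_rotate(x: torch.Tensor, s: int = 2,
                         max_angle: float = 24.0) -> torch.Tensor:
    """Split x (B, C, H, W) into an s x s grid, shuffle the block positions,
    and rotate each block by a small random angle."""
    b_h, b_w = x.shape[-2] // s, x.shape[-1] // s
    blocks = [x[..., i * b_h:(i + 1) * b_h, j * b_w:(j + 1) * b_w]
              for i in range(s) for j in range(s)]
    random.shuffle(blocks)  # randomize where each block ends up
    blocks = [TF.rotate(b, random.uniform(-max_angle, max_angle)) for b in blocks]
    rows = [torch.cat(blocks[i * s:(i + 1) * s], dim=-1) for i in range(s)]
    return torch.cat(rows, dim=-2)
```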
arXiv Detail & Related papers (2023-08-20T15:38:40Z)
- Diversifying the High-level Features for better Adversarial Transferability [21.545976132427747]
We propose diversifying the high-level features (DHF) for more transferable adversarial examples.
DHF perturbs the high-level features by randomly transforming them and mixing them with the features of benign samples.
Empirical evaluations on the ImageNet dataset show that DHF effectively improves the transferability of existing momentum-based attacks.
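At a single intermediate layer, the feature perturbation might look like the sketch below (in practice applied at several layers, e.g. via forward hooks); the scaling range and mixing ratio are illustrative assumptions.

```python
import torch

def diversify_features(f_adv: torch.Tensor, f_benign: torch.Tensor,
                       mix: float = 0.1) -> torch.Tensor:
    """Randomly rescale the adversarial feature map and blend in the feature
    of a benign sample at the same layer."""
    scale = torch.empty_like(f_adv).uniform_(0.9, 1.1)  # random elementwise scaling
    return (1.0 - mix) * (scale * f_adv) + mix * f_benign
```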
arXiv Detail & Related papers (2023-04-20T07:44:59Z)
- Adaptive Image Transformations for Transfer-based Adversarial Attack [73.74904401540743]
We propose a novel architecture, called Adaptive Image Transformation Learner (AITL).
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
arXiv Detail & Related papers (2021-11-27T08:15:44Z)
- Towards Transferable Adversarial Attacks on Vision Transformers [110.55845478440807]
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples.
We introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs.
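As a rough illustration of the PatchOut side of the framework, each attack iteration could keep the perturbation only on a random subset of ViT patches; the patch size and the number of kept patches below are illustrative assumptions (the PNA attack, which skips gradients flowing through attention maps, is not shown).

```python
import torch
import torch.nn.functional as F

def patchout_mask(h: int = 224, w: int = 224,
                  patch: int = 16, keep: int = 100) -> torch.Tensor:
    """Pixel mask that keeps `keep` random ViT patches and zeroes the rest."""
    n_h, n_w = h // patch, w // patch
    flat = torch.zeros(n_h * n_w)
    flat[torch.randperm(n_h * n_w)[:keep]] = 1.0  # patches that keep perturbation
    grid = flat.reshape(1, 1, n_h, n_w)
    return F.interpolate(grid, scale_factor=patch)  # nearest-neighbor upsampling

# Usage in an attack step: compute the gradient w.r.t. (x + patchout_mask() * delta).
```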
arXiv Detail & Related papers (2021-09-09T11:28:25Z)
- Admix: Enhancing the Transferability of Adversarial Attacks [46.69028919537312]
We propose a new input transformation based attack called Admix Attack Method (AAM).
AAM considers both the original image and an image randomly picked from other categories.
Our method further improves transferability, outperforming the state-of-the-art combination of input transformations by a clear margin of 3.4%.
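The core mixing step can be sketched as follows; the mixing strength and the number of sampled images are illustrative assumptions (in practice the admixed inputs are typically combined with scaled copies, as in scale-invariant attacks).

```python
import torch

def admix(x: torch.Tensor, x_other: torch.Tensor, eta: float = 0.2) -> torch.Tensor:
    """Mix a small portion of an image from another category into the input.
    The label of x is kept; x_other only diversifies the gradient inputs."""
    return x + eta * x_other
```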
arXiv Detail & Related papers (2021-01-31T11:40:50Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
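A rough sketch of perturbing feature statistics rather than pixels: re-normalize a feature map per channel, then apply bounded scale and shift deltas that would be chosen adversarially (e.g. by gradient ascent on the training loss). The budget and parameterization are illustrative assumptions, not AdvBN's exact formulation.

```python
import torch

def perturb_feature_stats(f: torch.Tensor, d_scale: torch.Tensor,
                          d_shift: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """f: feature map (B, C, H, W); d_scale, d_shift: adversarial deltas (C, 1, 1)."""
    mu = f.mean(dim=(-2, -1), keepdim=True)            # per-channel mean
    sigma = f.std(dim=(-2, -1), keepdim=True) + 1e-6   # per-channel std
    f_norm = (f - mu) / sigma                          # normalized features
    d_scale = d_scale.clamp(-eps, eps)                 # keep deltas within budget
    d_shift = d_shift.clamp(-eps, eps)
    # Identity when both deltas are zero; a worst-case statistics shift otherwise.
    return sigma * (1 + d_scale) * f_norm + mu * (1 + d_shift)
```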
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.