Adaptive Image Transformations for Transfer-based Adversarial Attack
- URL: http://arxiv.org/abs/2111.13844v1
- Date: Sat, 27 Nov 2021 08:15:44 GMT
- Title: Adaptive Image Transformations for Transfer-based Adversarial Attack
- Authors: Zheng Yuan, Jie Zhang, Shiguang Shan
- Abstract summary: We propose a novel architecture, called Adaptive Image Transformation Learner (AITL)
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
- Score: 73.74904401540743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks provide a good way to study the robustness of deep
learning models. One category of methods in transfer-based black-box attacks
utilizes several image transformation operations to improve the transferability
of adversarial examples, which is effective, but fails to take the specific
characteristics of the input image into consideration. In this work, we propose
a novel architecture, called Adaptive Image Transformation Learner (AITL),
which incorporates different image transformation operations into a unified
framework to further improve the transferability of adversarial examples.
Unlike the fixed combinational transformations used in existing works, our
elaborately designed transformation learner adaptively selects the most
effective combination of image transformations specific to the input image.
Extensive experiments on ImageNet demonstrate that our method significantly
improves the attack success rates on both normally trained models and defense
models under various settings.
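To make the transformation-based attack setting concrete, below is a minimal, illustrative sketch, not the authors' AITL implementation: a momentum-iterative (MI-FGSM-style) attack in which each gradient step is computed on a transformed copy of the current image. The `choose_transforms` selector, the transformation pool, and all hyperparameters are assumptions standing in for AITL's learned, per-image transformation policy.

```python
# Minimal sketch (assumed names/parameters): transfer attack with per-image transformations.
import torch
import torchvision.transforms.functional as TF

def choose_transforms(image):
    # Hypothetical stand-in for AITL's learned selector: the paper predicts the
    # combination adaptively per image; here it is simply a fixed example list.
    return [
        lambda x: TF.resize(x, [int(x.shape[-2] * 0.9), int(x.shape[-1] * 0.9)]),
        lambda x: TF.adjust_brightness(x, 1.1),
        lambda x: TF.hflip(x),
    ]

def transform_attack(model, image, label, eps=16 / 255, steps=10, mu=1.0):
    """image: [B, C, H, W] tensor in [0, 1]; returns an adversarial copy."""
    alpha = eps / steps                             # per-step size
    x_adv = image.clone().detach()
    g = torch.zeros_like(image)                     # accumulated momentum
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        x_t = x_adv
        for t in choose_transforms(x_adv):          # apply the selected combination
            x_t = t(x_t)
        loss = loss_fn(model(x_t), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()   # FGSM-style step
        x_adv = image + (x_adv - image).clamp(-eps, eps)  # project back to the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()          # keep a valid image
    return x_adv
```

In the actual method, the selector is trained so that the chosen combination maximizes transferability for the specific input image, rather than being fixed as in this sketch.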
Related papers
- Learning to Transform Dynamically for Better Adversarial Transferability [32.267484632957576]
Adversarial examples, crafted by adding perturbations imperceptible to humans, can deceive neural networks.
We introduce a novel approach named Learning to Transform (L2T)
L2T increases the diversity of transformed images by selecting the optimal combination of operations from a pool of candidates.
arXiv Detail & Related papers (2024-05-23T00:46:53Z)
- OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization [65.57380193070574]
Vision-language pre-training models are vulnerable to multi-modal adversarial examples.
Recent works have indicated that leveraging data augmentation and image-text modal interactions can enhance the transferability of adversarial examples.
We propose an Optimal Transport-based Adversarial Attack, dubbed OT-Attack.
arXiv Detail & Related papers (2023-12-07T16:16:50Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Structure Invariant Transformation for better Adversarial Transferability [9.272426833639615]
We propose a novel input transformation-based attack, called Structure Invariant Attack (SIA).
SIA applies a random image transformation to each image block to craft a set of diverse images for gradient calculation; a minimal block-wise sketch of this idea appears after this list.
Experiments on the standard ImageNet dataset demonstrate that SIA exhibits much better transferability than the existing SOTA input transformation-based attacks.
arXiv Detail & Related papers (2023-09-26T06:31:32Z)
- Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer [32.644062141738246]
A style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans.
We propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains.
Our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models.
arXiv Detail & Related papers (2023-08-21T09:58:13Z)
- Improving Diffusion-based Image Translation using Asymmetric Gradient Guidance [51.188396199083336]
We present an approach that guides the reverse process of diffusion sampling by applying asymmetric gradient guidance.
Our model's adaptability allows it to be implemented with both image- and latent-diffusion models.
Experiments show that our method outperforms various state-of-the-art models in image translation tasks.
arXiv Detail & Related papers (2023-06-07T12:56:56Z)
- Towards Understanding and Harnessing the Effect of Image Transformation in Adversarial Detection [8.436194871428805]
Deep neural networks (DNNs) are under threat from adversarial examples.
Image transformation is one of the most effective approaches to detect adversarial examples.
We propose an improved approach that combines multiple image transformations; a minimal detection-by-consistency sketch appears after this list.
arXiv Detail & Related papers (2022-01-04T10:58:59Z)
- Random Transformation of Image Brightness for Adversarial Attack [5.405413975396116]
Deep neural networks are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images.
We propose an adversarial example generation method based on random image brightness transformation, which can be integrated with the Fast Gradient Sign Method.
Our method has a higher success rate for black-box attacks than other attack methods based on data augmentation.
arXiv Detail & Related papers (2021-01-12T07:00:04Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
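As referenced in the Structure Invariant Attack (SIA) entry above, the following is a minimal, illustrative sketch of the block-wise idea (not the official SIA code): the image is split into a grid and each block receives an independently sampled transformation, producing a locally diversified copy that could be used for gradient calculation. The grid size, the operation pool, and the `blockwise_transform` helper are assumptions.

```python
# Minimal block-wise transformation sketch (assumed names/parameters), not official SIA code.
import random
import torch
import torchvision.transforms.functional as TF

def blockwise_transform(image, grid=3):
    """image: [C, H, W] tensor in [0, 1]; returns a copy with a random op per block."""
    c, h, w = image.shape
    out = image.clone()
    ops = [
        lambda b: TF.adjust_brightness(b, random.uniform(0.7, 1.3)),
        lambda b: TF.adjust_contrast(b, random.uniform(0.7, 1.3)),
        lambda b: TF.hflip(b),
        lambda b: TF.vflip(b),
        lambda b: b + 0.03 * torch.randn_like(b),   # mild additive noise
    ]
    hs, ws = h // grid, w // grid                    # block size (leftover border kept as-is)
    for i in range(grid):
        for j in range(grid):
            y0, x0 = i * hs, j * ws
            block = out[:, y0:y0 + hs, x0:x0 + ws]
            out[:, y0:y0 + hs, x0:x0 + ws] = random.choice(ops)(block)
    return out.clamp(0, 1)
```

In an attack loop, several such copies would typically be generated and their gradients averaged before taking an FGSM-style step, as in the transfer-attack sketch shown after the abstract above.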
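As referenced in the adversarial-detection entry above, the sketch below illustrates the general idea behind transformation-based detection: an input is flagged as adversarial when the model's prediction is unstable across several simple transformations. The specific transformations, the `is_adversarial` helper, and the disagreement threshold are assumptions, not the paper's method.

```python
# Minimal detection-by-consistency sketch (assumed names/parameters).
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def is_adversarial(model, image, disagree_threshold=0.5):
    """image: [1, C, H, W]; returns True if transformed predictions disagree too often."""
    base_pred = model(image).argmax(dim=1)
    transforms = [
        lambda x: TF.gaussian_blur(x, kernel_size=3),
        lambda x: TF.adjust_brightness(x, 0.8),
        lambda x: TF.rotate(x, 5.0),
        lambda x: TF.resize(TF.resize(x, [112, 112]), [x.shape[-2], x.shape[-1]]),
    ]
    # Count how many transformations change the predicted label.
    flips = sum(int(model(t(image)).argmax(dim=1) != base_pred) for t in transforms)
    return flips / len(transforms) >= disagree_threshold
```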
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.