Rethinking Mixup for Improving the Adversarial Transferability
- URL: http://arxiv.org/abs/2311.17087v1
- Date: Tue, 28 Nov 2023 03:10:44 GMT
- Title: Rethinking Mixup for Improving the Adversarial Transferability
- Authors: Xiaosen Wang, Zeyuan Yin
- Abstract summary: We propose a new input transformation-based attack called Mixing the Image but Separating the gradienT (MIST).
MIST randomly mixes the input image with a randomly shifted image and separates the gradient of each loss item for each mixed image.
Experiments on the ImageNet dataset demonstrate that MIST outperforms existing SOTA input transformation-based attacks.
- Score: 6.2867306093287905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixup augmentation has been widely adopted to generate adversarial
examples with superior adversarial transferability when transferring from a
surrogate model to other models. However, the underlying mechanism behind
mixup's effect on transferability remains unexplored. In this work, we
posit that the adversarial examples located at the convergence of decision
boundaries across various categories exhibit better transferability and
identify that Admix tends to steer the adversarial examples towards such
regions. However, we find the constraint on the added image in Admix decays its
capability, resulting in limited transferability. To address such an issue, we
propose a new input transformation-based attack called Mixing the Image but
Separating the gradienT (MIST). Specifically, MIST randomly mixes the input
image with a randomly shifted image and separates the gradient of each loss
item for each mixed image. To counteract the resulting imprecise gradients, MIST
calculates the gradient on several mixed images for each input sample.
Extensive experimental results on the ImageNet dataset demonstrate that MIST
outperforms existing SOTA input transformation-based attacks with a clear
margin on both Convolutional Neural Networks (CNNs) and Vision Transformers
(ViTs), with and without defense mechanisms, supporting MIST's high effectiveness and
generality.
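The procedure described in the abstract (mix the input with randomly shifted copies of itself, compute the gradient on each mixed image, and combine them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `grad_fn` interface, and the hyper-parameters `num_mix` and `eta` are assumptions.

```python
import numpy as np

def mist_gradient(x, grad_fn, num_mix=5, eta=0.5, rng=None):
    """Sketch of MIST-style input preparation and gradient averaging.

    x:        input image as an (H, W, ...) array
    grad_fn:  callable returning the loss gradient for a given image
              (a stand-in for backpropagation through a surrogate model)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = x.shape[:2]
    grads = []
    for _ in range(num_mix):
        # Randomly shift the image along both spatial axes (a circular
        # shift stands in for the paper's random shift).
        dy, dx = int(rng.integers(h)), int(rng.integers(w))
        shifted = np.roll(x, (dy, dx), axis=(0, 1))
        # Mix the input with its shifted copy.
        mixed = (1 - eta) * x + eta * shifted
        # The gradient is taken on each mixed image separately.
        grads.append(grad_fn(mixed))
    # Average over the mixed copies to reduce gradient noise.
    return np.mean(grads, axis=0)
```

In an actual attack loop, the averaged gradient would feed a sign-based update as in iterative FGSM variants.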
Related papers
- Boosting the Transferability of Adversarial Examples via Local Mixup and
Adaptive Step Size [5.04766995613269]
Adversarial examples are one critical security threat to various visual applications, where injected human-imperceptible perturbations can cause models to produce incorrect outputs.
Existing input-diversity-based methods adopt different image transformations, but may be inefficient due to insufficient input diversity and a fixed perturbation step size.
This paper proposes a black-box adversarial generative framework by jointly designing enhanced input diversity and adaptive step sizes.
arXiv Detail & Related papers (2024-01-24T03:26:34Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Boost Adversarial Transferability by Uniform Scale and Mix Mask Method [10.604083938124463]
Adversarial examples generated from surrogate models can often deceive other black-box models.
We propose a framework called Uniform Scale and Mix Mask Method (US-MM) for adversarial example generation.
US-MM achieves an average of 7% better transfer attack success rate compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-11-18T10:17:06Z)
- Structure Invariant Transformation for better Adversarial Transferability [9.272426833639615]
We propose a novel input transformation-based attack called Structure Invariant Attack (SIA).
SIA applies a random image transformation onto each image block to craft a set of diverse images for gradient calculation.
Experiments on the standard ImageNet dataset demonstrate that SIA exhibits much better transferability than the existing SOTA input transformation based attacks.
arXiv Detail & Related papers (2023-09-26T06:31:32Z)
- Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer [32.644062141738246]
A style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans.
We propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains.
Our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models.
arXiv Detail & Related papers (2023-08-21T09:58:13Z)
- SMMix: Self-Motivated Image Mixing for Vision Transformers [65.809376136455]
CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs).
Existing CutMix variants tackle this problem by generating more consistent mixed images or more precise mixed labels.
We propose an efficient and effective Self-Motivated image Mixing method (SMMix) which motivates both image and label enhancement by the model under training itself.
arXiv Detail & Related papers (2022-12-26T00:19:39Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [59.02821429555375]
We present a robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion.
To better fuse the registered infrared images and visible images, we present a feature Interaction Fusion Module (IFM).
arXiv Detail & Related papers (2022-05-24T07:51:57Z)
- Admix: Enhancing the Transferability of Adversarial Attacks [46.69028919537312]
We propose a new input transformation-based attack called Admix Attack Method (AAM).
AAM considers both the original image and an image randomly picked from other categories.
Our method could further improve the transferability and outperform the state-of-the-art combination of input transformations by a clear margin of 3.4%.
arXiv Detail & Related papers (2021-01-31T11:40:50Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
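The Admix entry above, which the main abstract builds on, mixes the original image with an image from another category and evaluates gradients on several copies. A minimal sketch of that input preparation, with illustrative parameter names (`eta`, `num_scales`) and the common `1/2**i` scale schedule assumed rather than taken from the paper:

```python
import numpy as np

def admix_inputs(x, others, num_scales=5, eta=0.2):
    """Admix-style input set: add a small fraction of a secondary image
    from another category, then take several down-scaled copies of the
    mixture for gradient averaging."""
    admixed = []
    for x_other in others:
        mixed = x + eta * x_other             # admix a secondary image
        for i in range(num_scales):
            admixed.append(mixed / (2 ** i))  # scaled copies of the mixture
    return admixed
```

Gradients computed on each returned image would then be averaged before the perturbation update, analogous to the averaging step sketched for MIST above.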
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.