Boost Adversarial Transferability by Uniform Scale and Mix Mask Method
- URL: http://arxiv.org/abs/2311.12051v1
- Date: Sat, 18 Nov 2023 10:17:06 GMT
- Title: Boost Adversarial Transferability by Uniform Scale and Mix Mask Method
- Authors: Tao Wang, Zijian Ying, Qianmu Li, Zhichao Lian
- Abstract summary: Adversarial examples generated from surrogate models often possess the ability to deceive other black-box models.
We propose a framework called Uniform Scale and Mix Mask Method (US-MM) for adversarial example generation.
US-MM achieves a transfer attack success rate that is, on average, 7% higher than that of state-of-the-art methods.
- Score: 10.604083938124463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples generated from surrogate models often possess the
ability to deceive other black-box models, a property known as transferability.
Recent research has focused on enhancing adversarial transferability, with
input transformation being one of the most effective approaches. However,
existing input transformation methods suffer from two issues. First, certain
methods, such as the Scale-Invariant Method, employ exponentially decreasing
scale factors, which limits their adaptability in generating effective
adversarial examples across multiple scales. Second, most mixup methods only
linearly combine candidate images with the source image, which reduces the
effectiveness of feature blending. To address these challenges, we
propose a framework called Uniform Scale and Mix Mask Method (US-MM) for
adversarial example generation. The Uniform Scale approach explores the upper
and lower boundaries of perturbation with a linear factor, minimizing the
negative impact of scale copies. The Mix Mask method introduces masks into the
mixing process in a nonlinear manner, significantly improving the effectiveness
of mixing strategies. Ablation experiments are conducted to validate the
effectiveness of each component in US-MM and explore the effect of
hyper-parameters. Empirical evaluations on standard ImageNet datasets
demonstrate that US-MM achieves a transfer attack success rate that is, on
average, 7% higher than that of state-of-the-art methods.
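
To make the two components concrete, here is a minimal PyTorch sketch of the transformation described in the abstract. The function name, the mask-sampling scheme, and all hyper-parameter values are illustrative assumptions, not the authors' reference implementation:

```python
import torch

def us_mm_transforms(x, x_mix, m=5, s_low=0.1, s_high=1.0, mask_ratio=0.15):
    """Uniform Scale + Mix Mask input transformation (illustrative sketch).

    x:          source image batch, shape (B, C, H, W), values in [0, 1]
    x_mix:      candidate image batch to mix in, same shape as x
    m:          number of scale copies
    s_low, s_high: linear lower/upper scale bounds (assumed hyper-parameters)
    mask_ratio: fraction of pixels taken from the candidate image (assumed)
    """
    copies = []
    # Uniform Scale: linearly spaced factors instead of SIM's 1/2^i decay.
    for s in torch.linspace(s_low, s_high, m):
        x_s = x * s
        # Mix Mask: a random binary mask splices candidate-image regions in,
        # a nonlinear alternative to globally linear mixup.
        mask = (torch.rand_like(x_s[:, :1]) < mask_ratio).float()  # (B,1,H,W)
        copies.append(x_s * (1.0 - mask) + x_mix * mask)
    return copies  # the attack averages loss gradients over these copies
```

In a SIM- or MI-FGSM-style attack loop, the loss gradients of all returned copies would be averaged before the sign-and-step update.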
Related papers
- Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum [13.305800254250789]
We propose a novel framework, Stable Diffusion-based Momentum Integrated Adversarial Examples (SD-MIAE).
It generates adversarial examples that can effectively mislead neural network classifiers while maintaining visual imperceptibility and preserving the semantic similarity to the original class label.
Experimental results demonstrate that SD-MIAE achieves a high misclassification rate of 79%, improving by 35% over the state-of-the-art method.
arXiv Detail & Related papers (2024-10-17T01:22:11Z)
- Improving Transferable Targeted Adversarial Attack via Normalized Logit Calibration and Truncated Feature Mixing [26.159434438078968]
We propose two techniques for improving the targeted transferability from the loss and feature aspects.
In previous approaches, logit calibration focuses primarily on the logit margin between the targeted class and the untargeted classes for each sample.
We introduce a new normalized logit calibration method that jointly considers the logit margin and the standard deviation of logits.
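
A minimal sketch of how such a calibration might look, based only on the summary above; the per-sample standard-deviation normalization and the exact margin form are assumptions:

```python
import torch

def calibrated_margin_loss(logits, target):
    """Targeted margin loss on std-normalized logits (illustrative sketch).

    logits: (B, num_classes) raw model outputs; target: (B,) target labels.
    Dividing by the per-sample logit standard deviation makes samples with
    very different logit scales contribute comparable margins.
    """
    z = logits / (logits.std(dim=1, keepdim=True) + 1e-8)   # scale-normalize
    tgt = z.gather(1, target.unsqueeze(1)).squeeze(1)       # target logit
    # Largest non-target logit per sample.
    other = z.scatter(1, target.unsqueeze(1), float('-inf')).amax(dim=1)
    return -(tgt - other).mean()  # minimize to enlarge the target margin
```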
arXiv Detail & Related papers (2024-05-10T09:13:57Z)
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Rethinking Mixup for Improving the Adversarial Transferability [6.2867306093287905]
We propose a new input transformation-based attack called Mixing the Image but Separating the gradienT (MIST).
MIST randomly mixes the input image with a randomly shifted image and separates the gradient of each loss item for each mixed image.
Experiments on the ImageNet dataset demonstrate that MIST outperforms existing SOTA input transformation-based attacks.
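
Based only on this summary, a hedged sketch of the mix-and-separate-gradient step might look as follows; the shift range, mixing weight, and number of copies are assumptions:

```python
import torch
import torch.nn.functional as F

def mist_gradient(model, x, y, n_copies=4, max_shift=10, lam=0.6):
    """Mix input with a randomly shifted copy; back-propagate each mixed
    image's loss separately, then aggregate gradients (illustrative sketch)."""
    grad = torch.zeros_like(x)
    for _ in range(n_copies):
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        x_shift = torch.roll(x, shifts=(dx, dy), dims=(2, 3))  # random shift
        x_in = (lam * x + (1.0 - lam) * x_shift).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)   # one loss item per copy
        loss.backward()                          # separate backward pass
        grad = grad + x_in.grad
    return grad / n_copies
```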
arXiv Detail & Related papers (2023-11-28T03:10:44Z)
- AMPLIFY: Attention-based Mixup for Performance Improvement and Label Smoothing in Transformer [2.3072402651280517]
AMPLIFY uses the Transformer's own attention mechanism to reduce the influence of noise and aberrant values in the original samples on the prediction results.
The experimental results show that, at a lower computational cost, AMPLIFY outperforms other Mixup methods in text classification tasks.
arXiv Detail & Related papers (2023-09-22T08:02:45Z)
- Boosting Adversarial Transferability with Learnable Patch-wise Masks [16.46210182214551]
Adversarial examples have attracted widespread attention in security-critical applications because of their transferability across different models.
In this paper, we argue that model-specific discriminative regions are a key factor in overfitting to the source model, which in turn reduces transferability to the target model.
To accurately localize these regions, we present a learnable approach to automatically optimize the mask.
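
A minimal sketch of the patch-wise masking operation this suggests; the Bernoulli sampling and patch size are assumptions, and the learning of `mask_probs` is omitted:

```python
import torch

def apply_patch_mask(x, mask_probs, patch=16):
    """Zero out image patches according to per-patch keep probabilities
    (illustrative sketch; how mask_probs is learned is omitted).

    x:          (B, C, H, W) images, H and W divisible by `patch`
    mask_probs: (B, H // patch, W // patch) keep probabilities in [0, 1]
    """
    keep = torch.bernoulli(mask_probs)           # sample binary patch mask
    keep = keep.repeat_interleave(patch, dim=1)  # expand rows to pixels
    keep = keep.repeat_interleave(patch, dim=2)  # expand columns to pixels
    return x * keep.unsqueeze(1)                 # broadcast over channels
```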
arXiv Detail & Related papers (2023-06-28T05:32:22Z)
- Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441]
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
In this paper, we propose an efficient mixup objective function with a decoupled regularizer, named Decoupled Mixup (DM).
DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
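
For reference, a sketch of vanilla mixup, the base operation DM builds on; the decoupled regularizer itself is not reproduced here:

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=1.0):
    """Vanilla mixup: convexly combine a batch with a shuffled copy of
    itself and keep both label sets with the mixing weight lam."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    idx = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[idx], y, y[idx], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Coupled mixup cross-entropy; DM additionally decouples this objective
    # with a regularizer on the mixed prediction (not reproduced here).
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```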
arXiv Detail & Related papers (2022-03-21T07:12:18Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks [56.96241557830253]
Transfer-based adversarial attacks can effectively evaluate model robustness in the black-box setting.
We propose a conditional generative attacking model, which can generate the adversarial examples targeted at different classes.
Our method improves the success rates of targeted black-box attacks by a significant margin over the existing methods.
arXiv Detail & Related papers (2021-07-05T06:17:47Z)
- Staircase Sign Method for Boosting Adversarial Attacks [123.19227129979943]
Crafting adversarial examples for transfer-based attacks is challenging and remains a research hotspot.
We propose a novel Staircase Sign Method (S$^2$M) to alleviate this issue, thereby boosting transfer-based attacks.
Our method can be generally integrated into any transfer-based attacks, and the computational overhead is negligible.
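
One plausible reading of the staircase idea, sketched below: replace the uniform ±1 of sign(g) with magnitude-dependent staircase weights whose average matches the plain sign step. The percentile-banding scheme here is an assumption:

```python
import torch

def staircase_sign(g, k=64):
    """Replace the uniform +-1 of sign(g) with percentile-band staircase
    weights that keep the sign and average to 1 (illustrative sketch)."""
    mag = g.abs().flatten()
    # Rank every coordinate by magnitude, then bin the ranks into k bands.
    ranks = mag.argsort().argsort().float()
    band = (ranks / mag.numel() * k).long().clamp(max=k - 1).view_as(g)
    weights = (2.0 * band.float() + 1.0) / k  # weights in (0, 2), mean 1
    return weights * g.sign()
```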
arXiv Detail & Related papers (2021-04-20T02:31:55Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
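
The "accumulated momentum" mechanism can be sketched as a momentum-driven iterative attack (essentially MI-FGSM-style accumulation); the full HMC accept/reject sampling is not reproduced here, and all hyper-parameters are assumptions:

```python
import torch
import torch.nn.functional as F

def momentum_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, mu=1.0):
    """Momentum-accumulated iterative attack (illustrative sketch)."""
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        g, = torch.autograd.grad(loss, x_adv)
        # Accumulate L1-normalized gradients across iterations.
        momentum = mu * momentum + g / (g.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```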
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.