Going Far Boosts Attack Transferability, but Do Not Do It
- URL: http://arxiv.org/abs/2102.10343v1
- Date: Sat, 20 Feb 2021 13:19:31 GMT
- Title: Going Far Boosts Attack Transferability, but Do Not Do It
- Authors: Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
- Abstract summary: We investigate the impact of optimization on attack transferability through comprehensive experiments covering 7 optimization algorithms, 4 surrogates, and 9 black-box models.
We surprisingly find that the varied transferability of AEs produced by different optimization algorithms is strongly related to the Root Mean Square Error (RMSE) between the AEs and their original samples.
Although LARA significantly improves transferability by 20%, it is insufficient to exploit the vulnerability of DNNs.
- Score: 16.901240544106948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) can easily be fooled by Adversarial Examples
(AEs) whose difference from the original samples is imperceptible to human eyes.
Moreover, AEs crafted against one surrogate DNN tend to fool other black-box DNNs
as well, a property known as attack transferability. Existing works reveal that
adopting certain optimization algorithms in an attack improves transferability,
but the underlying reasons have not been thoroughly studied. In this paper, we
investigate the impact of optimization on attack transferability through
comprehensive experiments covering 7 optimization algorithms, 4 surrogates,
and 9 black-box models. Through a thorough empirical analysis from three
perspectives, we surprisingly find that the varied transferability of AEs
produced by different optimization algorithms is strongly related to the Root
Mean Square Error (RMSE) between the AEs and their original samples. On this
basis, one could simply approach high transferability by attacking until the
RMSE becomes large, which motivates us to propose the LArge RMSE Attack (LARA).
Although LARA significantly improves transferability by 20%, it remains
insufficient to exploit the vulnerability of DNNs. This leads to a natural urge
that the strength of all attacks should be measured by both the widely used
$\ell_\infty$ bound and the RMSE addressed in this paper, so that tricky
enhancement of transferability would be avoided.
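Both perturbation measures named in the abstract are simple to compute. The sketch below (Python with NumPy; the helper names and toy tensors are illustrative, not from the paper) shows why reporting only the $\ell_\infty$ bound can hide how far an AE has actually moved from its original sample:

```python
import numpy as np

def rmse(x: np.ndarray, x_adv: np.ndarray) -> float:
    """Root Mean Square Error between an original sample and its AE."""
    return float(np.sqrt(np.mean((x_adv - x) ** 2)))

def linf(x: np.ndarray, x_adv: np.ndarray) -> float:
    """The widely used l_inf bound: the largest per-pixel change."""
    return float(np.max(np.abs(x_adv - x)))

# Two AEs can satisfy the same l_inf budget yet differ widely in RMSE.
rng = np.random.default_rng(0)
x = rng.random((3, 32, 32)).astype(np.float32)   # a toy image in [0, 1]
eps = 8 / 255
sparse = x.copy()
sparse[0, 0, 0] += eps                           # one pixel moved by the full budget
dense = np.clip(x + eps * np.sign(rng.standard_normal(x.shape)), 0.0, 1.0)
print(f"sparse: linf={linf(x, sparse):.4f}  rmse={rmse(x, sparse):.6f}")
print(f"dense:  linf={linf(x, dense):.4f}  rmse={rmse(x, dense):.6f}")
```

A one-pixel perturbation and a dense sign perturbation share the same $\ell_\infty$ budget while differing in RMSE by more than an order of magnitude, which is exactly the gap the paper argues should be reported.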
Related papers
- Enhancing Adversarial Transferability with Adversarial Weight Tuning [36.09966860069978]
Adversarial examples (AEs) mislead the model while appearing benign to human observers.
AWT is a data-free tuning method that combines gradient-based and model-based attack methods to enhance the transferability of AEs.
arXiv Detail & Related papers (2024-08-18T13:31:26Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks have attracted significant interest in black-box applications.
Existing works essentially optimize a single-level objective directly w.r.t. the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
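A minimal sketch of that nested structure, assuming a PGD-style lower level and an upper level that merely keeps the iterate scoring best on a held-out pseudo-victim (a simplification; the paper's method also derives the initialization and truncates the sequence dynamically):

```python
import torch
import torch.nn.functional as F

def bilevel_transfer_attack(surrogate, pseudo_victim, x, y, eps=8 / 255,
                            alpha=2 / 255, inner_steps=10, outer_steps=5):
    """Lower level: PGD on the surrogate. Upper level: keep the iterate
    that the pseudo-victim finds hardest (a simplified stand-in for the
    paper's bilevel formulation)."""
    delta = torch.zeros_like(x)
    best_delta, best_ul_loss = delta.clone(), -float("inf")
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            delta.requires_grad_(True)
            ll_loss = F.cross_entropy(surrogate(x + delta), y)
            grad, = torch.autograd.grad(ll_loss, delta)
            with torch.no_grad():
                delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        with torch.no_grad():                       # upper-level scoring
            ul_loss = F.cross_entropy(pseudo_victim(x + delta), y).item()
        if ul_loss > best_ul_loss:
            best_ul_loss, best_delta = ul_loss, delta.clone()
    return (x + best_delta).clamp(0.0, 1.0)
```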
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate [8.248964912483912]
The transferability of adversarial examples is of central importance to transfer-based black-box adversarial attacks.
We propose Lipschitz Regularized Surrogate (LRS) for transfer-based black-box attacks.
We evaluate our proposed LRS approach by attacking state-of-the-art standard deep neural networks and defense models.
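The abstract does not spell out the regularizer, but a common way to encourage a small local Lipschitz constant is to penalize the input-gradient norm while fine-tuning the surrogate; a hedged sketch of such a loss (not necessarily the exact LRS objective):

```python
import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy plus an input-gradient-norm penalty, a standard
    proxy for a local Lipschitz constraint (a sketch, not necessarily
    the exact LRS loss)."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(ce, x, create_graph=True)  # differentiable
    penalty = grad.flatten(1).norm(dim=1).mean()            # mean ||dL/dx||_2
    return ce + lam * penalty
```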
arXiv Detail & Related papers (2023-12-20T15:37:50Z)
- Reliable Evaluation of Adversarial Transferability [17.112253436250946]
Adversarial examples (AEs) with small adversarial perturbations can mislead deep neural networks (DNNs) into wrong predictions.
We re-evaluate 12 representative transferability-enhancing attack methods, testing them on 18 popular models from 4 types of neural networks.
Our re-evaluation reveals that adversarial transferability is often overestimated and that no single AE transfers to all popular models.
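The protocol behind such a re-evaluation reduces to two measurements, sketched below (PyTorch, with hypothetical model handles): the per-victim transfer success rate, and the stricter fraction of AEs that fool every victim at once, which is the quantity the last finding is about:

```python
import torch

@torch.no_grad()
def transfer_success_rate(victim, x_adv, y):
    """Fraction of AEs (crafted on some surrogate) misclassified by one victim."""
    return (victim(x_adv).argmax(dim=1) != y).float().mean().item()

@torch.no_grad()
def fooled_by_all(victims, x_adv, y):
    """Fraction of AEs that fool EVERY victim; per-victim averages can be
    high even when this number is zero."""
    ok = torch.ones_like(y, dtype=torch.bool)
    for v in victims:
        ok &= v(x_adv).argmax(dim=1) != y
    return ok.float().mean().item()
```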
arXiv Detail & Related papers (2023-06-14T15:17:51Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose the quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
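As a rough illustration of a multiple-bitwidth objective, the sketch below sums task losses under uniform fake quantization at several bitwidths with a straight-through estimator. Note the loud assumption: for brevity it quantizes the input rather than the substitute's weights or activations, so it is a stand-in for the idea, not QAA itself:

```python
import torch
import torch.nn.functional as F

def fake_quantize(t, bits):
    """Uniform fake quantization with a straight-through estimator."""
    scale = (t.detach().abs().max() + 1e-12) / (2 ** (bits - 1) - 1)
    q = torch.round(t / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
    return t + (q - t).detach()   # forward: quantized; backward: identity

def multi_bitwidth_loss(model, x, y, bitwidths=(2, 4, 8)):
    # Assumption: QAA quantizes the substitute itself; quantizing the
    # input here only illustrates the "sum over bitwidths" objective.
    return sum(F.cross_entropy(model(fake_quantize(x, b)), y) for b in bitwidths)
```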
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration [85.71545080119026]
The Cross-Entropy (CE) loss function is insufficient for learning transferable targeted adversarial examples.
We propose two simple and effective logit calibration methods, which are achieved by downscaling the logits with a temperature factor and an adaptive margin.
Experiments conducted on the ImageNet dataset validate the effectiveness of the proposed methods.
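Both calibrations act on the logits before the loss. A sketch of the temperature variant, plus one plausible reading of the adaptive-margin variant (dividing by the gap between the top-2 logits; the paper's exact form may differ):

```python
import torch
import torch.nn.functional as F

def temperature_calibrated_loss(logits, target, T=5.0):
    """Targeted CE on temperature-downscaled logits (T is illustrative)."""
    return F.cross_entropy(logits / T, target)

def margin_calibrated_loss(logits, target):
    """Downscale by the top-2 logit gap: one reading of an adaptive margin."""
    top2 = logits.topk(2, dim=1).values
    margin = (top2[:, 0] - top2[:, 1]).detach().unsqueeze(1) + 1e-12
    return F.cross_entropy(logits / margin, target)
```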
arXiv Detail & Related papers (2023-03-07T06:42:52Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate to attack a Bayesian model for achieving desirable transferability.
Our method outperforms recent state-of-the-art methods by large margins.
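One crude way to "attack a Bayesian model" is to average input gradients over substitute weights sampled around the trained parameters; the sketch below uses isotropic Gaussian weight noise as a stand-in for the paper's posterior:

```python
import copy
import torch
import torch.nn.functional as F

def posterior_averaged_grad(model, x, y, n_samples=5, weight_std=0.01):
    """Average the input gradient over weight samples drawn by adding
    Gaussian noise to the substitute's parameters -- a crude stand-in
    for attacking a Bayesian posterior over models."""
    x = x.clone().requires_grad_(True)
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        m = copy.deepcopy(model)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(weight_std * torch.randn_like(p))
        loss = F.cross_entropy(m(x), y)
        total += torch.autograd.grad(loss, x)[0]
    return total / n_samples   # feed this into any sign-based attack step
```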
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering both effectiveness and stealthiness goals, we provide a general formulation for the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
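At the lowest level, a bit-flip weight attack toggles individual bits of deployed (typically quantized) parameters. A self-contained illustration of that primitive (the attack's bit-selection objective is the hard part and is omitted here):

```python
import numpy as np

def flip_bit(weights_int8: np.ndarray, index: int, bit: int) -> None:
    """Flip one bit of one int8 weight in place."""
    flat = weights_int8.reshape(-1)               # view: edits hit the original
    flat[index] = np.bitwise_xor(flat[index], np.int8(1 << bit))

w = np.array([3, -7, 100], dtype=np.int8)
flip_bit(w, index=2, bit=6)    # flipping a high-order bit turns 100 into 36
print(w)                       # [  3  -7  36]
```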
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings [64.37621685052571]
We conduct the first systematic empirical study of transfer attacks against major cloud-based ML platforms.
The study leads to a number of interesting findings that are inconsistent with existing ones.
We believe this work sheds light on the vulnerabilities of popular ML platforms and points to a few promising research directions.
arXiv Detail & Related papers (2022-04-07T12:16:24Z)
- QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities [16.901240544106948]
Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks.
Black-box attacks require extensive queries on the victim to achieve high success rates.
For query efficiency, surrogate models of the victim are adopted as transferable attackers.
We develop QueryNet, an efficient attack network that can significantly reduce queries.
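The generic query-saving idea, stripped of QueryNet's specifics, is to spend victim queries only on verifying candidates that surrogates produce for free. A sketch under that assumption (craft_ae is a user-supplied white-box attack; batch size 1; this is not QueryNet's actual algorithm):

```python
import torch

def surrogate_first_attack(victim, surrogates, craft_ae, x, y):
    """Try AEs crafted white-box on each surrogate before resorting to a
    query-based attack; each candidate costs a single victim query."""
    queries = 0
    for s in surrogates:
        x_adv = craft_ae(s, x, y)        # free: no victim queries involved
        queries += 1                      # one query to verify on the victim
        with torch.no_grad():
            fooled = victim(x_adv).argmax(dim=1).item() != y.item()
        if fooled:
            return x_adv, queries
    return None, queries                  # caller falls back to querying
```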
arXiv Detail & Related papers (2021-05-31T14:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.