Transferable Adversarial Examples with Bayes Approach
- URL: http://arxiv.org/abs/2208.06538v2
- Date: Tue, 07 Jan 2025 08:52:30 GMT
- Title: Transferable Adversarial Examples with Bayes Approach
- Authors: Mingyuan Fan, Cen Chen, Wenmeng Zhou, Yinggui Wang
- Abstract summary: Black-box adversarial attacks are one of the most heated topics in trustworthy AI. In this paper, we explore the transferability of adversarial examples through the lens of the Bayesian approach. Experiments illustrate the significant effectiveness of BayAtk in crafting more transferable adversarial examples.
- Score: 15.35252941167733
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The vulnerability of deep neural networks (DNNs) to black-box adversarial attacks is one of the most heated topics in trustworthy AI. In such attacks, the attackers operate without any insider knowledge of the model, making the cross-model transferability of adversarial examples critical. Despite the potential for adversarial examples to be effective across various models, it has been observed that adversarial examples crafted for a specific model often exhibit poor transferability. In this paper, we explore the transferability of adversarial examples through the lens of the Bayesian approach. Specifically, we leverage the Bayesian approach to probe transferability and then study what constitutes a transferability-promoting prior. Following this, we design two concrete transferability-promoting priors, along with an adaptive dynamic weighting strategy for instances sampled from these priors. Employing these techniques, we present BayAtk. Extensive experiments illustrate the significant effectiveness of BayAtk in crafting more transferable adversarial examples against both undefended and defended black-box models, compared to existing state-of-the-art attacks.
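Concretely, the recipe in the abstract amounts to attacking an expectation over models drawn from a prior, rather than a single fixed surrogate. The code below is a minimal sketch of that idea, not the authors' BayAtk implementation: the isotropic Gaussian prior over the surrogate's weights, the softmax loss-based weighting of sampled instances, and all hyperparameter values are assumptions made for illustration.

```python
import copy
import torch
import torch.nn.functional as F

def bayes_style_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
                       n_samples=5, sigma=0.01):
    """Craft adversarial examples against an expectation over models.

    Each step samples surrogate instances from a (hypothetical) Gaussian
    prior over the weights and fuses their gradients with adaptive,
    loss-based weights -- a stand-in for BayAtk's transferability-promoting
    priors and dynamic weighting, whose exact forms the abstract does not
    specify.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grads, losses = [], []
        for _ in range(n_samples):
            # Draw one model instance from the prior: theta + N(0, sigma^2).
            m = copy.deepcopy(model)
            with torch.no_grad():
                for p in m.parameters():
                    p.add_(sigma * torch.randn_like(p))
            loss = F.cross_entropy(m(x_adv), y)
            grads.append(torch.autograd.grad(loss, x_adv)[0])
            losses.append(loss.detach())
        # Adaptive dynamic weighting (assumed form): instances on which the
        # current example is weakest receive proportionally more weight.
        w = torch.softmax(torch.stack(losses), dim=0)
        fused = sum(wi * gi for wi, gi in zip(w, grads))
        x_adv = x_adv.detach() + alpha * fused.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0, 1)
    return x_adv
```

In practice one would perturb weights in place or share buffers rather than deep-copying the surrogate for every draw; the deepcopy simply keeps the sketch short and side-effect free.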
Related papers
- Towards Model Resistant to Transferable Adversarial Examples via Trigger Activation [95.3977252782181]
Adversarial examples, characterized by imperceptible perturbations, pose significant threats to deep neural networks by misleading their predictions.
We introduce a novel training paradigm aimed at enhancing robustness against transferable adversarial examples (TAEs) in a more efficient and effective way.
arXiv Detail & Related papers (2025-04-20T09:07:10Z) - Boosting the Targeted Transferability of Adversarial Examples via Salient Region & Weighted Feature Drop [2.176586063731861]
A prevalent approach for adversarial attacks relies on the transferability of adversarial examples.
We propose a novel framework based on Salient region & Weighted Feature Drop (SWFD), designed to enhance the targeted transferability of adversarial examples.
arXiv Detail & Related papers (2024-11-11T08:23:37Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - Enhancing Adversarial Transferability with Adversarial Weight Tuning [36.09966860069978]
Adversarial examples (AEs) mislead the model while appearing benign to human observers.
Adversarial Weight Tuning (AWT) is a data-free tuning method that combines gradient-based and model-based attack methods to enhance the transferability of AEs.
arXiv Detail & Related papers (2024-08-18T13:31:26Z) - Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structure.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z) - Enhancing targeted transferability via feature space fine-tuning [21.131915084053894]
Adversarial examples (AEs) have been extensively studied due to their potential for privacy protection and inspiring robust neural networks.
We propose fine-tuning an AE crafted by existing simple iterative attacks to make it transferable across unknown models.
arXiv Detail & Related papers (2024-01-05T09:46:42Z) - SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z) - DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space [6.238161846680642]
This work investigates efficient score-based black-box adversarial attacks with a high Attack Success Rate (ASR) and good generalizability.
We design a novel attack method based on a Disentangled Feature space, called DifAttack, which differs significantly from the existing ones operating over the entire feature space.
arXiv Detail & Related papers (2023-09-26T00:15:13Z) - GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty [14.82389560064876]
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models.
We propose a novel approach to enhance AE transferability using a Gradient Norm Penalty (GNP); see the illustrative sketch after this list.
By attacking 11 state-of-the-art deep learning models and 6 advanced defense methods, we empirically show that GNP is very effective in generating AE with high transferability.
arXiv Detail & Related papers (2023-07-09T05:21:31Z) - Generating Adversarial Examples with Better Transferability via Masking Unimportant Parameters of Surrogate Model [6.737574282249396]
We propose to improve the transferability of adversarial examples in transfer-based attacks by masking unimportant parameters (MUP) of the surrogate model.
The key idea in MUP is to refine the pretrained surrogate models to boost the transfer-based attack.
arXiv Detail & Related papers (2023-04-14T03:06:43Z) - Rethinking Model Ensemble in Transfer-based Adversarial Attacks [46.82830479910875]
An effective strategy to improve the transferability is attacking an ensemble of models.
Previous works simply average the outputs of different models; this baseline is sketched after the list.
We propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples.
arXiv Detail & Related papers (2023-03-16T06:37:16Z) - Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art attacks by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z) - On the Transferability of Adversarial Examples between Encrypted Models [20.03508926499504]
We investigate, for the first time, the transferability of adversarial examples between models encrypted for adversarially robust defense.
In an image-classification experiment, encrypted models are confirmed not only to be robust against AEs but also to reduce the influence of AEs.
arXiv Detail & Related papers (2022-09-07T08:50:26Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a transfer-attack success rate 12.85% higher than that of state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Training Meta-Surrogate Model for Transferable Adversarial Attack [98.13178217557193]
We consider adversarial attacks to a black-box model when no queries are allowed.
In this setting, many methods directly attack surrogate models and transfer the obtained adversarial examples to fool the target model.
We show that we can obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models.
arXiv Detail & Related papers (2021-09-05T03:27:46Z) - Direction-Aggregated Attack for Transferable Adversarial Examples [10.208465711975242]
A deep neural network is vulnerable to adversarial examples crafted by imposing imperceptible changes to the inputs.
Adversarial examples are most successful in white-box settings, where the model and its parameters are available.
We propose Direction-Aggregated adversarial attacks that deliver transferable adversarial examples.
arXiv Detail & Related papers (2021-04-19T09:54:56Z) - Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve the query efficiency during black-box perturbing with a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z) - Decision-based Universal Adversarial Attack [55.76371274622313]
In the black-box setting, current universal adversarial attack methods utilize substitute models to generate the perturbation.
We propose an efficient Decision-based Universal Attack (DUAttack).
The effectiveness of DUAttack is validated through comparisons with other state-of-the-art attacks.
arXiv Detail & Related papers (2020-09-15T12:49:03Z) - Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable are the adversarial examples produced by a source model to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z) - Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
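The GNP entry above is concrete enough to sketch. The following is a hypothetical reading of the idea, not the paper's exact algorithm: a PGD-style loop whose objective ascends the classification loss while penalizing the input-gradient norm, so the attack settles in flat regions of the loss surface. The penalty weight `beta` and the step sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def gnp_style_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, beta=0.8):
    """PGD with a gradient norm penalty (illustrative sketch of the GNP idea).

    The assumed objective is loss(x_adv) - beta * ||grad_x loss(x_adv)||,
    maximizing the loss while penalizing its input-gradient norm to steer
    the adversarial example toward flat maxima that tend to transfer better.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # Keep the first-order gradient in the graph so we can
        # differentiate through its norm (double backprop).
        g = torch.autograd.grad(loss, x_adv, create_graph=True)[0]
        objective = loss - beta * g.flatten(1).norm(dim=1).sum()
        grad = torch.autograd.grad(objective, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0, 1)
    return x_adv
```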
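The CWA entry contrasts itself with the plain output-averaging ensemble baseline. For concreteness, here is a minimal sketch of that baseline; the logit-level fusion and the PGD-style loop are conventional choices assumed here, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def ensemble_average_attack(models, x, y, eps=8/255, alpha=2/255, steps=10):
    """Transfer attack against an ensemble by averaging surrogate outputs.

    Each step fuses the surrogates by averaging their logits and takes a
    signed gradient step on the fused loss -- the simple baseline that
    ensemble refinements such as CWA aim to improve upon.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Fuse the ensemble by averaging logits across surrogates.
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0, 1)
    return x_adv
```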