Introducing Foundation Models as Surrogate Models: Advancing Towards
More Practical Adversarial Attacks
- URL: http://arxiv.org/abs/2307.06608v1
- Date: Thu, 13 Jul 2023 08:10:48 GMT
- Title: Introducing Foundation Models as Surrogate Models: Advancing Towards
More Practical Adversarial Attacks
- Authors: Jiaming Zhang, Jitao Sang, Qi Yi
- Abstract summary: No-box adversarial attacks are becoming more practical and challenging for AI systems.
This paper recasts the adversarial attack as a downstream task and introduces foundational models as surrogate models.
- Score: 15.882687207499373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the no-box adversarial attack, in which the attacker lacks access
to the model's architecture, weights, and training data, has become the most
practical and challenging attack setup. However, the potential and flexibility
inherent in the surrogate model selection process in the no-box setting remain
largely overlooked. Inspired by the burgeoning interest in utilizing foundational
models to address downstream tasks, this paper adopts an innovative idea:
1) recasting the adversarial attack, specifically image noise generation, as a
downstream task in line with this emerging trend, and 2) introducing foundational models as
surrogate models. Harnessing the concept of non-robust features, we elaborate
on two guiding principles for surrogate model selection to explain why the
foundational model is an optimal choice for this role. However, paradoxically,
we observe that these foundational models underperform. Analyzing this
unexpected behavior within the feature space, we attribute the lackluster
performance of foundational models (e.g., CLIP) to their significant
representational capacity and, conversely, their lack of discriminative
prowess. To mitigate this issue, we propose the use of a margin-based loss
strategy for the fine-tuning of foundational models on target images. The
experimental results verify that our approach, which employs the basic Fast
Gradient Sign Method (FGSM) attack algorithm, outstrips the performance of
other, more convoluted algorithms. We conclude by advocating for the research
community to consider surrogate models as crucial determinants in the
effectiveness of adversarial attacks in no-box settings. The implications of
our work bear relevance for improving the efficacy of such adversarial attacks
and the overall robustness of AI systems.
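For readers who want a concrete picture of the pipeline the abstract describes, the sketch below shows the two ingredients it names: a margin-based loss used to fine-tune a surrogate image encoder on target images, followed by a single FGSM step crafted on that surrogate. This is a minimal PyTorch illustration and not the authors' released code; the stand-in encoder, the margin value, the triplet-style formulation of the margin loss, and the feature-divergence FGSM objective are all assumptions made for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SurrogateEncoder(nn.Module):
        """Stand-in for a foundation-model image encoder (e.g., CLIP's image tower)."""

        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(16, dim),
            )

        def forward(self, x):
            # Unit-normalized embeddings so dot products are cosine similarities.
            return F.normalize(self.net(x), dim=-1)


    def margin_finetune_step(encoder, anchor, positive, negative, optimizer, margin=0.2):
        """One margin-based fine-tuning step (triplet-style hinge): positives are
        pulled toward the anchor, negatives pushed at least `margin` further away
        in cosine similarity."""
        za, zp, zn = encoder(anchor), encoder(positive), encoder(negative)
        pos_sim = (za * zp).sum(dim=-1)
        neg_sim = (za * zn).sum(dim=-1)
        loss = F.relu(neg_sim - pos_sim + margin).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()


    def fgsm_attack(encoder, images, epsilon=8 / 255):
        """Basic FGSM crafted on the surrogate: one signed-gradient step that pushes
        the perturbed image's embedding away from the clean image's embedding."""
        images = images.clone().detach().requires_grad_(True)
        with torch.no_grad():
            clean_feat = encoder(images.detach())
        divergence = -F.cosine_similarity(encoder(images), clean_feat).mean()
        divergence.backward()
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()


    if __name__ == "__main__":
        encoder = SurrogateEncoder()
        optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
        x = torch.rand(4, 3, 32, 32)  # toy stand-in for the target images
        margin_finetune_step(encoder, x, x.flip(-1), torch.rand_like(x), optimizer)
        x_adv = fgsm_attack(encoder, x)
        print((x_adv - x).abs().max())  # perturbation is bounded by epsilon

Against a real foundation model, the SurrogateEncoder stand-in would be replaced by an actual pretrained image encoder, the fine-tuning step would run over the target images, and only then would the FGSM step be applied.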
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Dynamic Label Adversarial Training for Deep Learning Robustness Against Adversarial Attacks [11.389689242531327]
Adversarial training is one of the most effective methods for enhancing model robustness.
Previous approaches primarily use static ground truth for adversarial training, but this often causes robust overfitting.
We propose a dynamic label adversarial training (DYNAT) algorithm that enables the target model to gain robustness from the guide model's decisions.
arXiv Detail & Related papers (2024-08-23T14:25:12Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks; a toy sketch of this min-max objective appears after this list.
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable
Evasion Attacks [17.584752814352502]
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data.
We introduce a self-supervised, computationally economical method for generating adversarial examples.
Our experiments consistently demonstrate the method is effective across various models, unseen data categories, and even defended models.
arXiv Detail & Related papers (2023-10-05T17:34:47Z)
- Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, which pose a threat to the models' applications and raise security concerns.
We propose a similar-target attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z)
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose substitute training from a novel perspective that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules can further boost the consistency of the substitute model and target model, which greatly improves the effectiveness of adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Thief, Beware of What Get You There: Towards Understanding Model
Extraction Attack [13.28881502612207]
In some scenarios, AI models are trained proprietarily, where neither pre-trained models nor sufficient in-distribution data is publicly available.
We find that the effectiveness of existing techniques is significantly affected by the absence of pre-trained models.
We formulate model extraction attacks into an adaptive framework that captures these factors with deep reinforcement learning.
arXiv Detail & Related papers (2021-04-13T03:46:59Z)
- On the model-based stochastic value gradient for continuous
reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- Query-Free Adversarial Transfer via Undertrained Surrogates [14.112444998191698]
We introduce a new method for improving the efficacy of adversarial attacks in a black-box setting by undertraining the surrogate model which the attacks are generated on.
We show that this method transfers well across architectures and outperforms state-of-the-art methods by a wide margin.
arXiv Detail & Related papers (2020-07-01T23:12:22Z)
- Boosting Black-Box Attack with Partially Transferred Conditional
Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
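As flagged in the "Defense Against Model Extraction Attacks on Recommender Systems" entry above, the snippet below is a toy formulation of the min-max idea stated there: the defender keeps the protected model's own loss low while driving up the loss an attacker's surrogate incurs when imitating the exposed outputs. This is only a schematic of the one-sentence objective quoted in the summary, not the paper's GRO algorithm; the cross-entropy and KL terms, the classification-style framing, and the weighting factor lam are assumptions made for illustration.

    import torch
    import torch.nn.functional as F


    def defender_objective(protected_logits, surrogate_logits, labels, lam=0.5):
        """Toy min-max defense objective: minimize the protected model's loss while
        maximizing the loss of a surrogate that imitates its exposed outputs."""
        # Utility term: keep the protected model accurate on its own task.
        utility_loss = F.cross_entropy(protected_logits, labels)
        # Imitation term: how closely the attacker's surrogate matches the exposed outputs.
        imitation_loss = F.kl_div(
            F.log_softmax(surrogate_logits, dim=-1),
            F.softmax(protected_logits, dim=-1),
            reduction="batchmean",
        )
        # The defender minimizes its own loss and maximizes the surrogate's
        # (hence the minus sign on the imitation term).
        return utility_loss - lam * imitation_loss

In the actual bilevel setting the surrogate is itself trained on the protected model's exposed rankings, which is what GRO's gradient-based ranking optimization addresses; the function above only mirrors the stated high-level objective.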