Delving into Data: Effectively Substitute Training for Black-box Attack
- URL: http://arxiv.org/abs/2104.12378v1
- Date: Mon, 26 Apr 2021 07:26:29 GMT
- Title: Delving into Data: Effectively Substitute Training for Black-box Attack
- Authors: Wenxuan Wang and Bangjie Yin and Taiping Yao and Li Zhang and Yanwei
Fu and Shouhong Ding and Jilin Li and Feiyue Huang and Xiangyang Xue
- Abstract summary: We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules further boosts the consistency between the substitute and target models, which greatly improves the effectiveness of the adversarial attack.
- Score: 84.85798059317963
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep models have shown their vulnerability when processing adversarial
samples. In the black-box setting, without access to the architecture and
weights of the attacked model, training a substitute model for adversarial
attacks has attracted wide attention. Previous substitute training approaches
focus on stealing the knowledge of the target model based on real training data
or synthetic data, without exploring what kind of data can further improve the
transferability between the substitute and target models. In this paper, we
propose a novel perspective on substitute training that focuses on designing
the distribution of data used in the knowledge-stealing process. More
specifically, a diverse data generation module is proposed to synthesize
large-scale data with a wide distribution, and an adversarial substitute
training strategy is introduced to focus on the data distributed near the
decision boundary. The combination of these two modules further boosts the
consistency between the substitute and target models, which greatly improves
the effectiveness of the adversarial attack. Extensive experiments demonstrate
the efficacy of our method against state-of-the-art competitors under
non-targeted and targeted attack settings. Detailed visualization and analysis
are also provided to help understand the advantage of our method.
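As a concrete illustration of the setting the abstract describes, below is a minimal sketch of one data-free substitute-training step: a generator synthesizes query inputs, the black-box target labels them, and the substitute model is fit to those soft labels. The `Generator` architecture and the `target_query` callable are illustrative assumptions, not the authors' released code; the paper's contribution lies in shaping the distribution of the synthesized data (diversity plus decision-boundary focus), which this sketch only notes in comments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Noise-to-image generator standing in for the diverse data
    generation module; the architecture here is an assumption."""
    def __init__(self, z_dim=100, img_shape=(3, 32, 32)):
        super().__init__()
        self.img_shape = img_shape
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, img_shape[0] * img_shape[1] * img_shape[2]), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.img_shape)

def substitute_step(generator, substitute, target_query, optimizer,
                    z_dim=100, batch_size=64):
    """One knowledge-stealing step: synthesize inputs, query the
    black-box target for soft labels, fit the substitute to them."""
    z = torch.randn(batch_size, z_dim)
    x = generator(z)
    with torch.no_grad():
        target_probs = target_query(x)   # black-box: output probabilities only
    log_probs = F.log_softmax(substitute(x), dim=1)
    # Minimizing the KL divergence pulls the substitute's predictions toward
    # the target's; this is the substitute/target "consistency" the abstract
    # refers to. The paper additionally steers generated samples toward the
    # decision boundary, which this plain loop does not do.
    loss = F.kl_div(log_probs, target_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```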
Related papers
- OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable
Evasion Attacks [17.584752814352502]
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data.
We introduce a self-supervised, computationally economical method for generating adversarial examples.
Our experiments consistently demonstrate the method is effective across various models, unseen data categories, and even defended models.
arXiv Detail & Related papers (2023-10-05T17:34:47Z)
- Boosting Model Inversion Attacks with Adversarial Examples [26.904051413441316]
We propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
First, we regularize the training process of the attack model with an added semantic loss function.
Second, we inject adversarial examples into the training data to increase the diversity of the class-related parts.
arXiv Detail & Related papers (2023-06-24T13:40:58Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven, graph-based structural information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies those perturbations to the victim model; a minimal sketch of this craft-on-surrogate, transfer-to-victim pattern appears after this list.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable the adversarial examples produced by a source model are to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacks against a real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
- Data-Free Adversarial Perturbations for Practical Black-Box Attack [25.44755251319056]
We present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge about the training data distribution.
Our method empirically shows that current deep learning models are still at risk even when the attackers do not have access to training data.
arXiv Detail & Related papers (2020-03-03T02:22:12Z)
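Several entries above, including the main paper, share one mechanism: craft perturbations on a white-box surrogate or substitute and rely on transferability to fool the unseen target. A minimal FGSM-based sketch of that pattern, assuming a trained `substitute` classifier and a black-box `target_query` callable (both hypothetical names, not any paper's released code):

```python
import torch
import torch.nn.functional as F

def fgsm_transfer(substitute, target_query, x, y, eps=8 / 255):
    """Craft FGSM perturbations on the white-box substitute and report
    how often they are misclassified by the black-box target."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(substitute(x), y)
    loss.backward()
    # One signed-gradient step on the substitute; success on the target
    # then depends entirely on how consistent the two models are.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    target_preds = target_query(x_adv).argmax(dim=1)
    transfer_rate = (target_preds != y).float().mean().item()
    return x_adv, transfer_rate
```

The better the substitute matches the target near the decision boundary, the higher this transfer rate, which is precisely the consistency the main paper optimizes for.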
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.