Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models
with Transferred Evolutionary Strategies
- URL: http://arxiv.org/abs/2208.13182v1
- Date: Sun, 28 Aug 2022 09:23:56 GMT
- Title: Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models
with Transferred Evolutionary Strategies
- Authors: Yinghua Zhang, Yangqiu Song, Kun Bai, Qiang Yang
- Abstract summary: Fine-tuning can be vulnerable to adversarial attacks.
We propose two novel BAFT settings, cross-domain and cross-domain cross-architecture BAFT.
We show that the proposed attack method can effectively and efficiently attack the fine-tuned models.
- Score: 41.49982803774183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning can be vulnerable to adversarial attacks. Existing work on
black-box attacks against fine-tuned models (BAFT) is limited by strong
assumptions. To fill the gap, we propose two novel BAFT settings, cross-domain
and cross-domain cross-architecture BAFT, which assume only that (1) the target
model under attack is a fine-tuned model, and (2) the source-domain data is
known and accessible. To successfully attack fine-tuned models under both
settings, we propose to first train an adversarial generator against the source
model, which adopts an encoder-decoder architecture and maps a clean input to
an adversarial example. Then we search in the low-dimensional latent space
produced by the encoder of the adversarial generator. The search is conducted
under the guidance of the surrogate gradient obtained from the source model.
Experimental results on different domains and different network architectures
demonstrate that the proposed attack method can effectively and efficiently
attack the fine-tuned models.
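The attack pipeline described above lends itself to a compact sketch. The following PyTorch fragment is a minimal, hedged illustration, not the authors' code: it assumes an `encoder`/`decoder` pair trained as the adversarial generator, a white-box `source_model`, score-based access to the black-box `target_model`, and a `[1, d]` latent code, and it omits the perturbation-budget projection for brevity.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def target_fitness(target_model, x_adv, y):
    # Black-box fitness: the target's cross-entropy on the true label;
    # higher means the candidate is closer to fooling the target.
    return F.cross_entropy(target_model(x_adv), y.expand(len(x_adv)),
                           reduction="none")

def es_latent_attack(x, y, encoder, decoder, source_model, target_model,
                     steps=100, pop=20, sigma=0.1, lr=0.05):
    z = encoder(x).detach()                      # [1, d] latent code
    for _ in range(steps):
        # Surrogate gradient from the white-box source model.
        z_g = z.clone().requires_grad_(True)
        loss = F.cross_entropy(source_model(decoder(z_g)), y)
        (g_surr,) = torch.autograd.grad(loss, z_g)
        # Evolutionary step: sample a population around the
        # gradient-shifted mean, score candidates with black-box
        # queries, and keep the fittest one.
        cands = z + lr * g_surr + sigma * torch.randn(
            pop, z.shape[1], device=z.device)
        fitness = target_fitness(target_model, decoder(cands), y)
        z = cands[fitness.argmax()].unsqueeze(0)
    return decoder(z).detach()
```

Each iteration spends one white-box gradient on the source model and `pop` black-box queries on the target, which is where searching a low-dimensional latent space pays off in query efficiency.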
Related papers
- Transferable Attack for Semantic Segmentation [59.17710830038692]
We analyze semantic segmentation models under adversarial attacks, and observe that the adversarial examples generated from a source model fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
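A rough sketch of the ensemble idea, assuming a list of white-box source segmentation models and standard I-FGSM-style updates (interfaces and hyper-parameters below are illustrative, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def ensemble_seg_attack(x, y, source_models, steps=10,
                        eps=8 / 255, alpha=2 / 255):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Mean per-pixel cross-entropy over the source ensemble.
        loss = torch.stack([F.cross_entropy(m(x_adv), y)
                            for m in source_models]).mean()
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # eps-ball projection
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```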
arXiv Detail & Related papers (2023-07-31T11:05:55Z)
- Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence [34.35162562625252]
Black-box adversarial attacks have demonstrated strong potential to compromise machine learning models.
We study a new paradigm of black-box attacks with provable guarantees.
This new black-box attack unveils significant vulnerabilities of machine learning models.
arXiv Detail & Related papers (2023-04-10T01:12:09Z)
- Art-Attack: Black-Box Adversarial Attack via Evolutionary Art [5.760976250387322]
Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerabilities to attacks generated by adversarial examples.
This paper proposes a gradient-free attack by using a concept of evolutionary art to generate adversarial examples.
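In the spirit of such gradient-free attacks, a (1+1) evolution strategy can be written in a few lines. The paper itself evolves overlapping geometric shapes ("evolutionary art"); the pixel-level mutation below is a simplified stand-in:

```python
import torch

@torch.no_grad()
def one_plus_one_es(x, y, model, steps=1000, eps=8 / 255, sigma=0.01):
    delta = torch.zeros_like(x)
    best = model(x + delta).softmax(1)[0, y]   # confidence in the true class
    for _ in range(steps):
        cand = (delta + sigma * torch.randn_like(x)).clamp(-eps, eps)
        conf = model((x + cand).clamp(0, 1)).softmax(1)[0, y]
        if conf < best:                        # keep the mutation only if it
            delta, best = cand, conf           # lowers true-class confidence
    return (x + delta).clamp(0, 1)
```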
arXiv Detail & Related papers (2022-03-07T12:54:09Z)
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability [20.255708227671573]
Black-box adversarial attacks can be transferred from one model to another.
In this work, we propose a novel ensemble attack method called the variance reduced ensemble attack.
Empirical results on the standard ImageNet demonstrate that the proposed method could boost the adversarial transferability and outperforms existing ensemble attacks significantly.
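The variance-reduction idea can be sketched in an SVRG-like form: anchor on the full-ensemble gradient, then correct per-model stochastic gradients against it. The fragment below is an illustrative reading of that recipe, not the paper's reference implementation:

```python
import random

import torch
import torch.nn.functional as F

def grad_of(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    (g,) = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return g

def svr_ensemble_attack(x, y, models, outer=10, inner=4,
                        alpha=2 / 255, eps=8 / 255):
    x_adv = x.clone().detach()
    for _ in range(outer):
        # Anchor: the full-ensemble gradient at the current iterate.
        g_full = torch.stack([grad_of(m, x_adv, y) for m in models]).mean(0)
        x_in = x_adv.clone()
        for _ in range(inner):
            m = random.choice(models)
            # Variance-reduced stochastic gradient (SVRG-style correction).
            g = grad_of(m, x_in, y) - grad_of(m, x_adv, y) + g_full
            x_in = x_in + alpha * g.sign()
            x_in = (x + torch.clamp(x_in - x, -eps, eps)).clamp(0, 1)
        x_adv = x_in.detach()
    return x_adv
```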
arXiv Detail & Related papers (2021-11-21T06:33:27Z)
- Training Meta-Surrogate Model for Transferable Adversarial Attack [98.13178217557193]
We consider adversarial attacks to a black-box model when no queries are allowed.
In this setting, many methods directly attack surrogate models and transfer the obtained adversarial examples to fool the target model.
We show that we can obtain a Meta-Surrogate Model (MSM) such that attacks on this model transfer more easily to other models.
arXiv Detail & Related papers (2021-09-05T03:27:46Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
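One way to picture multi-layer feature perturbation: drive the adversarial example's intermediate activations toward those of a target-class image at several depths. The sketch below uses forward hooks and assumes a chosen list of `layers`; it is illustrative rather than the paper's exact attack:

```python
import torch

def capture(model, layers, x):
    feats, hooks = [], []
    for layer in layers:
        hooks.append(layer.register_forward_hook(
            lambda mod, inp, out: feats.append(out)))
    model(x)                       # hooks record the chosen activations
    for h in hooks:
        h.remove()
    return feats

def feature_attack(x, x_target, model, layers, steps=100,
                   eps=16 / 255, alpha=1 / 255):
    with torch.no_grad():
        target_feats = capture(model, layers, x_target)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        adv_feats = capture(model, layers, x_adv)
        # Sum of feature-space distances across the chosen depths.
        loss = sum((a - t).pow(2).mean()
                   for a, t in zip(adv_feats, target_feats))
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()  # pull features together
            x_adv = (x + torch.clamp(x_adv - x, -eps, eps)).clamp(0, 1).detach()
    return x_adv
```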
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the "luring" effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)
- DaST: Data-free Substitute Training for Adversarial Attacks [55.76371274622313]
We propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks.
To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models.
Experiments demonstrate the substitute models can achieve competitive performance compared with the baseline models.
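A hedged sketch of one data-free substitute-training step in the DaST spirit: a generator synthesizes inputs, the label-only black-box target annotates them, the substitute imitates those labels, and the generator is pushed toward inputs where the two still disagree (interfaces and losses below are assumptions):

```python
import torch
import torch.nn.functional as F

def dast_step(generator, substitute, target_model, g_opt, s_opt,
              z_dim=100, batch=64):
    z = torch.randn(batch, z_dim)
    x_syn = generator(z)
    with torch.no_grad():                 # label-only black-box oracle
        y_t = target_model(x_syn).argmax(1)
    # Substitute imitates the target's labels on synthetic data.
    s_loss = F.cross_entropy(substitute(x_syn.detach()), y_t)
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    # Generator seeks inputs where the substitute still disagrees with
    # the oracle (maximize the substitute's loss on the oracle labels).
    g_loss = -F.cross_entropy(substitute(generator(z)), y_t)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return s_loss.item(), g_loss.item()
```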
arXiv Detail & Related papers (2020-03-28T04:28:13Z)