QueryNet: An Efficient Attack Framework with Surrogates Carrying
Multiple Identities
- URL: http://arxiv.org/abs/2105.15010v1
- Date: Mon, 31 May 2021 14:45:10 GMT
- Title: QueryNet: An Efficient Attack Framework with Surrogates Carrying
Multiple Identities
- Authors: Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang
- Abstract summary: Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks.
Black-box attacks require extensive queries on the victim to achieve high success rates.
For query-efficiency, surrogate models of the victim are adopted as transferable attackers.
We develop QueryNet, an efficient attack network that can significantly reduce queries.
- Score: 16.901240544106948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial
attacks, while the existing black-box attacks require extensive queries on the
victim DNN to achieve high success rates. For query-efficiency, surrogate
models of the victim are adopted as transferable attackers in consideration of
their Gradient Similarity (GS), i.e., surrogates' attack gradients are similar
to the victim's ones to some extent. However, it is generally neglected to
exploit their similarity on outputs, namely the Prediction Similarity (PS), to
filter out inefficient queries. To jointly utilize and also optimize
surrogates' GS and PS, we develop QueryNet, an efficient attack network that
can significantly reduce queries. QueryNet crafts several transferable
Adversarial Examples (AEs) by surrogates, and then decides also by surrogates
on the most promising AE, which is then sent to query the victim. That is to
say, in QueryNet, surrogates are not only exploited as transferable attackers,
but also as transferability evaluators for AEs. The AEs are generated using
surrogates' GS and evaluated based on their PS, so the query results can be
back-propagated to optimize the surrogates' parameters and architectures,
enhancing both the GS and the PS. QueryNet is significantly query-efficient,
i.e., it reduces queries by about an order of magnitude on average compared
to recent SOTA methods according to our comprehensive and
magnitude compared to recent SOTA methods according to our comprehensive and
real-world experiments: 11 victims (including 2 commercial models) on
MNIST/CIFAR10/ImageNet, allowing only 8-bit image queries, and no access to the
victim's training data.
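The craft-then-select loop described in the abstract can be sketched in a few lines. This is a minimal toy, not the paper's implementation: the "victim" and "surrogates" are hypothetical linear classifiers, `craft_ae` stands in for a gradient-based attack (GS), and the surrogate vote stands in for the PS-based transferability evaluation; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n_classes, n_surrogates = 8, 3, 3
victim_w = rng.normal(size=(dim, n_classes))          # black-box victim (toy)
surrogate_ws = [victim_w + 0.3 * rng.normal(size=victim_w.shape)
                for _ in range(n_surrogates)]         # imperfect surrogates

def predict(w, x):
    return int(np.argmax(x @ w))

def craft_ae(x, y, w, eps=0.5):
    # GS: an FGSM-like step on a surrogate that lowers the true-class logit
    return x - eps * np.sign(w[:, y])

def transfer_score(ae, y):
    # PS proxy: how many surrogates the candidate AE already fools
    return sum(predict(w, ae) != y for w in surrogate_ws)

x = rng.normal(size=dim)
y = predict(victim_w, x)                              # victim's clean label

# Each surrogate crafts a candidate AE; the surrogates then select the most
# promising one, and only that single AE is sent to query the victim.
candidates = [craft_ae(x, y, w) for w in surrogate_ws]
scores = [transfer_score(ae, y) for ae in candidates]
best = candidates[int(np.argmax(scores))]
success = predict(victim_w, best) != y
print("victim queried once; attack success:", success)
```

In QueryNet the victim's answer to that single query is additionally used to update the surrogates themselves; the toy omits that feedback step.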
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks attract significant interest for black-box applications.
Existing works essentially directly optimize the single-level objective w.r.t. the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
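The upper-level/lower-level split can be illustrated with a toy quadratic problem. This is a hedged sketch of the bilevel idea only, not the paper's method: the "surrogate" and "pseudo-victim" losses are invented quadratics, and the upper level simply searches over initializations of the lower-level attack.

```python
import numpy as np

rng = np.random.default_rng(1)

surrogate_opt = np.array([1.0, 0.0])   # LL (surrogate) loss minimizer (toy)
victim_opt = np.array([0.8, 0.3])      # UL (pseudo-victim) minimizer (toy)

def lower_level(init, steps=5, lr=0.1):
    """LL surrogate attacker: a few gradient steps from `init`."""
    delta = init.copy()
    for _ in range(steps):
        delta -= lr * 2 * (delta - surrogate_opt)
    return delta

def victim_loss(delta):
    return float(np.sum((delta - victim_opt) ** 2))

# Single-level baseline: one fixed initialization, surrogate-only optimization.
single_level = victim_loss(lower_level(np.zeros(2)))

# Bilevel UL attacker: search over initializations, judged by the pseudo-victim.
inits = [np.zeros(2)] + [rng.normal(size=2) for _ in range(15)]
best_loss = min(victim_loss(lower_level(i)) for i in inits)
print(f"bilevel {best_loss:.3f} <= single-level {single_level:.3f}")
```

Because the upper level evaluates each lower-level result against the pseudo-victim, it can never do worse than the single fixed initialization it also considers.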
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- Query Provenance Analysis: Efficient and Robust Defense against Query-based Black-box Attacks [11.32992178606254]
We propose a novel approach, Query Provenance Analysis (QPA), for more robust and efficient Stateful Defense Models (SDMs).
QPA encapsulates the historical relationships among queries as the sequence feature to capture the fundamental difference between benign and adversarial query sequences.
We evaluate QPA compared with two baselines, BlackLight and PIHA, on four widely used datasets with six query-based black-box attack algorithms.
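The core intuition behind stateful defenses of this kind can be sketched in a few lines. This toy detector is our assumption of the general mechanism (QPA's actual sequence features are richer): it flags a query that lands too close to a recent one, since query-based attacks probe many near-duplicates of a single image.

```python
import numpy as np

rng = np.random.default_rng(2)

class StatefulDetector:
    """Toy stateful defense: flag a query whose distance to any recent
    query falls below a threshold (illustrative, not QPA itself)."""
    def __init__(self, threshold=0.5, window=100):
        self.threshold = threshold
        self.window = window
        self.history = []

    def check(self, query):
        flagged = any(np.linalg.norm(query - h) < self.threshold
                      for h in self.history)
        self.history = (self.history + [query])[-self.window:]
        return flagged

detector = StatefulDetector()

# Benign queries: independent random inputs -> far apart, rarely flagged.
benign_flags = [detector.check(rng.normal(size=32)) for _ in range(20)]

# Attack queries: tiny perturbations of one input -> clustered, flagged.
base = rng.normal(size=32)
attack_flags = [detector.check(base + 0.01 * rng.normal(size=32))
                for _ in range(20)]
print(sum(benign_flags), "benign flags vs", sum(attack_flags), "attack flags")
```

QPA's contribution is to replace this per-pair distance test with provenance relationships across the whole query sequence, which is harder for adaptive attackers to evade.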
arXiv Detail & Related papers (2024-05-31T06:56:54Z)
- DTA: Distribution Transform-based Attack for Query-Limited Scenario [11.874670564015789]
In generating adversarial examples, conventional black-box attack methods rely on abundant feedback from the target models.
This paper proposes a hard-label attack for the scenario where the attacker is permitted only a limited number of queries.
Experiments validate the effectiveness of the proposed idea and the superiority of DTA over the state-of-the-art.
arXiv Detail & Related papers (2023-12-12T13:21:03Z)
- Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly [79.07074710460012]
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention.
An increasing number of transfer-based methods have been developed to fool black-box DNN models.
We establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods.
arXiv Detail & Related papers (2023-11-02T15:35:58Z)
- Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition [99.29804193431823]
Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses these shortcomings by generating perturbations.
Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
arXiv Detail & Related papers (2022-11-23T17:47:49Z)
- Attacking deep networks with surrogate-based adversarial black-box methods is easy [7.804269142923776]
A recent line of work on black-box adversarial attacks has revived the use of transfer from surrogate models.
Here, we provide a short and simple algorithm which achieves state-of-the-art results through a search.
The guiding assumption of the algorithm is that the studied networks are in a fundamental sense learning similar functions.
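A surrogate-guided search of this flavor is easy to illustrate. The following toy is our hedged reading of the idea, not the paper's algorithm: a linear "victim" and a similar linear "surrogate" stand in for the networks, and each iteration first tries the surrogate's gradient direction, falling back to a random direction only if the queried victim loss does not improve.

```python
import numpy as np

rng = np.random.default_rng(3)

dim, eps = 16, 0.1
victim_w = rng.normal(size=dim)
surrogate_w = victim_w + 0.2 * rng.normal(size=dim)   # "similar function"

def victim_loss(x):
    # One black-box query; lower is more adversarial in this toy.
    return float(victim_w @ x)

x = rng.normal(size=dim)
queries, losses = 0, [victim_loss(x)]
for _ in range(50):
    # Try the surrogate's gradient direction first, then a random fallback.
    for direction in (surrogate_w, rng.normal(size=dim)):
        step = eps * direction / np.linalg.norm(direction)
        queries += 1
        if victim_loss(x - step) < losses[-1]:
            x = x - step
            break
    losses.append(victim_loss(x))
print(f"{queries} queries, loss {losses[0]:.2f} -> {losses[-1]:.2f}")
```

When the surrogate really does learn a similar function, its direction is accepted almost every round, so query cost stays close to one query per step.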
arXiv Detail & Related papers (2022-03-16T16:17:18Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Going Far Boosts Attack Transferability, but Do Not Do It [16.901240544106948]
We investigate the impacts of optimization on attack transferability by comprehensive experiments concerning 7 optimization algorithms, 4 surrogates, and 9 black-box models.
We surprisingly find that the varied transferability of AEs from optimization algorithms is strongly related to the Root Mean Square Error (RMSE) from their original samples.
Although LARA significantly improves transferability by 20%, it is insufficient to exploit the vulnerability of DNNs.
arXiv Detail & Related papers (2021-02-20T13:19:31Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.