Query Efficient Cross-Dataset Transferable Black-Box Attack on Action
Recognition
- URL: http://arxiv.org/abs/2211.13171v1
- Date: Wed, 23 Nov 2022 17:47:49 GMT
- Title: Query Efficient Cross-Dataset Transferable Black-Box Attack on Action
Recognition
- Authors: Rohit Gupta, Naveed Akhtar, Gaurav Kumar Nayak, Ajmal Mian and Mubarak
Shah
- Abstract summary: Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses these shortcomings by generating perturbations that disrupt the features learned by a pre-trained substitute model.
Our method achieves 8% and 12% higher deception rates compared to state-of-the-art query-based and transfer-based attacks, respectively.
- Score: 99.29804193431823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box adversarial attacks present a realistic threat to action
recognition systems. Existing black-box attacks follow either a query-based
approach where an attack is optimized by querying the target model, or a
transfer-based approach where attacks are generated using a substitute model.
While these methods can achieve decent fooling rates, the former tends to be
highly query-inefficient while the latter assumes extensive knowledge of the
black-box model's training data. In this paper, we propose a new attack on
action recognition that addresses these shortcomings by generating
perturbations to disrupt the features learned by a pre-trained substitute model
to reduce the number of queries. By using a nearly disjoint dataset to train
the substitute model, our method removes the requirement that the substitute
model be trained using the same dataset as the target model, and leverages
queries to the target model to retain the fooling rate benefits provided by
query-based methods. This ultimately results in attacks which are more
transferable than conventional black-box attacks. Through extensive
experiments, we demonstrate highly query-efficient black-box attacks with the
proposed framework. Our method achieves 8% and 12% higher deception rates
compared to state-of-the-art query-based and transfer-based attacks,
respectively.
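The abstract's core idea — perturbing an input so that a substitute model's learned features are disrupted, within a bounded perturbation budget — can be illustrated with a minimal sketch. Everything below is hypothetical (a toy linear "substitute model", finite-difference gradients instead of backpropagation, arbitrary sizes), not the paper's actual implementation:

```python
import random

def features(x):
    # Stand-in "substitute model": a fixed random linear feature extractor.
    rng = random.Random(0)
    W = [[rng.uniform(-1, 1) for _ in range(len(x))] for _ in range(4)]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def feature_distortion(x, x_clean):
    # Squared distance between perturbed and clean features.
    f, f0 = features(x), features(x_clean)
    return sum((a - b) ** 2 for a, b in zip(f, f0))

def attack(x_clean, eps=0.1, step=0.02, iters=20):
    # Signed-gradient ascent on feature distortion inside an L-inf ball.
    x = list(x_clean)
    best, best_d = list(x), 0.0
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += 1e-4  # finite-difference probe for coordinate i
            grad.append((feature_distortion(xp, x_clean)
                         - feature_distortion(x, x_clean)) / 1e-4)
        # Take a signed step, then project back into the eps-ball.
        x = [xi + step * (1.0 if g >= 0 else -1.0) for xi, g in zip(x, grad)]
        x = [min(max(xi, c - eps), c + eps) for xi, c in zip(x, x_clean)]
        d = feature_distortion(x, x_clean)
        if d > best_d:
            best, best_d = list(x), d
    return best

x0 = [0.5, -0.2, 0.1, 0.8]
x_adv = attack(x0)
```

In the paper's setting the perturbation found this way against the substitute model is then refined with a small number of queries to the target model; that query step is omitted here.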
Related papers
- Hard-label based Small Query Black-box Adversarial Attack [2.041108289731398]
We propose a new practical setting of hard label based attack with an optimisation process guided by a pretrained surrogate model.
We find the proposed method achieves approximately 5 times higher attack success rate compared to the benchmarks.
arXiv Detail & Related papers (2024-03-09T21:26:22Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- Generalizable Black-Box Adversarial Attack with Meta Learning [54.196613395045595]
In black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful perturbation based on query feedback under a query budget.
We propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability.
The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance.
arXiv Detail & Related papers (2023-01-01T07:24:12Z)
- T-SEA: Transfer-based Self-Ensemble Attack on Object Detection [9.794192858806905]
We propose a single-model transfer-based black-box attack on object detection, utilizing only one model to achieve a high-transferability adversarial attack on multiple black-box detectors.
We analogize patch optimization with regular model optimization, proposing a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch.
arXiv Detail & Related papers (2022-11-16T10:27:06Z)
- Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior [50.393092185611536]
We consider the black-box adversarial setting, where the adversary needs to craft adversarial examples without access to the gradients of a target model.
Previous methods attempted to approximate the true gradient either by using the transfer gradient of a surrogate white-box model or based on the feedback of model queries.
We propose two prior-guided random gradient-free (PRGF) algorithms based on biased sampling and gradient averaging.
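The prior-guided idea summarized above — estimating a black-box gradient by averaging finite differences along random directions biased toward a surrogate model's transfer gradient — can be sketched in a few lines. The loss, surrogate, and parameter choices below are hypothetical stand-ins, not the paper's PRGF implementation:

```python
import math
import random

random.seed(1)

def loss(x):
    # Black-box target loss: the attacker can only query its value.
    return sum((xi - 1.0) ** 2 for xi in x)

def surrogate_grad(x):
    # Noisy "transfer prior": an imperfect gradient from a surrogate model.
    return [2 * (xi - 1.0) + random.gauss(0, 0.1) for xi in x]

def normalize(v):
    n = math.sqrt(sum(vi * vi for vi in v)) or 1.0
    return [vi / n for vi in v]

def prior_guided_estimate(x, lam=0.5, n_samples=20, delta=1e-3):
    # Biased sampling: probe directions mix the prior with random noise,
    # then finite differences along each probe are averaged into an estimate.
    prior = normalize(surrogate_grad(x))
    g = [0.0] * len(x)
    for _ in range(n_samples):
        noise = normalize([random.gauss(0, 1) for _ in x])
        u = normalize([math.sqrt(lam) * p + math.sqrt(1 - lam) * z
                       for p, z in zip(prior, noise)])
        d = (loss([xi + delta * ui for xi, ui in zip(x, u)]) - loss(x)) / delta
        g = [gi + d * ui / n_samples for gi, ui in zip(g, u)]
    return g

x = [0.0, 0.5, 2.0]
g_est = prior_guided_estimate(x)
g_true = [2 * (xi - 1.0) for xi in x]
```

When the prior is informative, a larger mixing weight `lam` concentrates queries near the useful direction and cuts the number of queries needed for a usable gradient estimate.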
arXiv Detail & Related papers (2022-03-13T04:06:27Z)
- Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? [36.96777303738315]
We design blackbox transfer-based targeted adversarial attacks for an environment where the attacker's source model and the target blackbox model may have disjoint label spaces and training datasets.
Our methodology begins with the construction of a class correspondence matrix between the whitebox and blackbox label sets.
We show that our transfer attacks serve as powerful adversarial priors when integrated with query-based methods.
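A class correspondence matrix of the kind described above can be built, in the simplest case, by correlating the two models' scores on shared probe inputs. This is a toy sketch under assumed setups (two whitebox classes, three blackbox classes, made-up score vectors), not the paper's construction:

```python
def correspondence(white_scores, black_scores):
    # white_scores: per-probe score vectors over the whitebox label set.
    # black_scores: per-probe score vectors over the blackbox label set.
    n_w, n_b = len(white_scores[0]), len(black_scores[0])
    C = [[0.0] * n_b for _ in range(n_w)]
    for ws, bs in zip(white_scores, black_scores):
        for i, w in enumerate(ws):
            for j, b in enumerate(bs):
                C[i][j] += w * b  # accumulate co-activation across probes
    return C

def best_match(C, i):
    # Blackbox class that co-activates most with whitebox class i.
    row = C[i]
    return max(range(len(row)), key=lambda j: row[j])

# Three probe inputs scored by both models (hypothetical values).
white = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
black = [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8], [0.6, 0.2, 0.2]]
C = correspondence(white, black)
```

The resulting matrix lets a targeted attack on a whitebox class be redirected at its best-matching blackbox class even though the label sets are disjoint.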
arXiv Detail & Related papers (2021-03-17T21:21:44Z)
- Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable are the adversarial examples produced by a source model to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.