Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation
- URL: http://arxiv.org/abs/2406.02064v1
- Date: Tue, 4 Jun 2024 07:45:27 GMT
- Title: Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation
- Authors: Yaohua Liu, Jiaxin Gao, Xuan Liu, Xianghao Jiao, Xin Fan, Risheng Liu
- Abstract summary: Transfer attacks have attracted significant interest for real-world black-box applications.
Existing works essentially optimize a single-level objective w.r.t. the surrogate model.
We propose a bilevel optimization paradigm that explicitly reformulates the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
- Score: 49.480978190805125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer attacks have attracted significant interest for real-world black-box applications, as they craft transferable adversarial examples through surrogate models. However, existing works essentially optimize a single-level objective w.r.t. the surrogate model, which often leads to poor interpretability of the attack mechanism and limited generalization to unknown victim models. In this work, we propose the \textbf{B}il\textbf{E}vel \textbf{T}ransfer \textbf{A}ttac\textbf{K} (BETAK) framework, which establishes an initialization-derived bilevel optimization paradigm that explicitly reformulates the nested constraint relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker. Algorithmically, we introduce Hyper Gradient Response (HGR) estimation as effective feedback on transferability to pseudo-victim attackers, and propose the Dynamic Sequence Truncation (DST) technique to dynamically adjust the back-propagation path for HGR while reducing computational overhead. We also provide a detailed algorithmic analysis and a convergence guarantee that accommodates the non-convexity of the LL surrogate attacker. Extensive evaluations demonstrate substantial improvements by BETAK (e.g., a $\mathbf{53.41}$\% increase in attack success rate against IncRes-v$2_{ens}$) against different victims and defense methods in both targeted and untargeted attack scenarios. The source code is available at https://github.com/callous-youth/BETAK.
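The nested UL/LL structure and the effect of truncating the back-propagation path can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration only: the names (`surrogate`, `pseudo_victims`, `delta_init`, `inner_steps`, `truncate_k`), the fixed truncation length, and the PGD-style inner attack are not taken from the released BETAK code, and the sketch omits how DST would choose the truncation length dynamically.

```python
import torch
import torch.nn.functional as F

def truncated_hyper_gradient(x, y, surrogate, pseudo_victims, delta_init,
                             eps=8 / 255, alpha=1 / 255,
                             inner_steps=10, truncate_k=3):
    """Hyper-gradient of the pseudo-victim (UL) loss through the last
    `truncate_k` steps of the lower-level surrogate attack (a stand-in
    for HGR with a fixed, rather than dynamic, truncation length)."""
    truncate_k = min(truncate_k, inner_steps)
    delta = delta_init.detach().clamp(-eps, eps)

    # Lower level, phase 1: plain PGD steps, excluded from the
    # back-propagation path (this is what truncation discards).
    for _ in range(inner_steps - truncate_k):
        delta.requires_grad_(True)
        loss = F.cross_entropy(surrogate(torch.clamp(x + delta, 0, 1)), y)
        g = torch.autograd.grad(loss, delta)[0]
        delta = torch.clamp(delta.detach() + alpha * g.sign(), -eps, eps)

    # Lower level, phase 2: the last `truncate_k` steps are unrolled differentiably.
    anchor = delta.detach().requires_grad_(True)   # hyper-gradient taken here
    delta = anchor
    for _ in range(truncate_k):
        loss = F.cross_entropy(surrogate(torch.clamp(x + delta, 0, 1)), y)
        g = torch.autograd.grad(loss, delta, create_graph=True)[0]
        # A smooth (un-signed) step keeps the unrolled path differentiable.
        delta = torch.clamp(delta + alpha * g, -eps, eps)

    # Upper level: transferability feedback from pseudo-victim models.
    x_adv = torch.clamp(x + delta, 0, 1)
    ul_loss = sum(F.cross_entropy(v(x_adv), y) for v in pseudo_victims)
    hgr = torch.autograd.grad(ul_loss / len(pseudo_victims), anchor)[0]
    # Treating the truncated prefix as an identity map, `hgr` can serve as an
    # approximate ascent direction for updating delta_init at the upper level.
    return hgr, x_adv.detach()
```

The returned direction plays the role of the HGR feedback in this sketch: it is back-propagated only through the retained suffix of the inner attack sequence, which is what keeps the hyper-gradient computation cheap.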
Related papers
- Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z) - LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate [8.248964912483912]
The transferability of adversarial examples is of central importance to transfer-based black-box adversarial attacks.
We propose Lipschitz Regularized Surrogate (LRS) for transfer-based black-box attacks.
We evaluate our proposed LRS approach by attacking state-of-the-art standard deep neural networks and defense models.
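As a rough illustration of the Lipschitz-regularization idea above, one simple way to encourage a smoother surrogate loss landscape is to penalize the input-gradient norm while fine-tuning the surrogate. The sketch below assumes this simple penalty; `lam` and the penalty form are illustrative and not necessarily the exact LRS objective.

```python
import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy plus an input-gradient-norm penalty: one simple proxy for
    keeping the local Lipschitz constant of the loss surface small (assumed
    form for illustration, not necessarily the LRS objective)."""
    x = x.detach().clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.flatten(1).norm(dim=1).mean()
    return ce + lam * penalty
```

Minimizing this loss during surrogate fine-tuning flattens the input gradients, which is the kind of smoothness property that transfer-based attacks tend to benefit from.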
arXiv Detail & Related papers (2023-12-20T15:37:50Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
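Read literally, the GRO objective above is a min-max trade-off; a hedged rendering with illustrative notation (not taken from the paper) is
$$\min_{\theta}\;\; \mathcal{L}_{\mathrm{rec}}\big(f_{\theta}\big)\;-\;\lambda\,\mathcal{L}_{\mathrm{rec}}\big(g_{\phi^\star(\theta)}\big), \qquad \phi^\star(\theta)\in\arg\min_{\phi}\ \mathcal{L}_{\mathrm{distill}}\big(g_{\phi}\,;\,f_{\theta}\big),$$
where $f_{\theta}$ is the protected recommender, $g_{\phi}$ is the attacker's surrogate trained to imitate $f_{\theta}$'s outputs, and $\lambda$ trades recommendation quality against extraction difficulty.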
arXiv Detail & Related papers (2023-10-25T03:30:42Z) - Backpropagation Path Search On Adversarial Transferability [35.71353415348786]
Transfer-based attackers craft adversarial examples against surrogate models and transfer them to victim models.
Structure-based attackers adjust the backpropagation path to avoid the attack from overfitting the surrogate model.
Existing structure-based attackers fail to explore the convolution module in CNNs and modify the backpropagation graph only heuristically.
arXiv Detail & Related papers (2023-08-15T08:21:20Z) - Transferable Attack for Semantic Segmentation [59.17710830038692]
We study adversarial attacks on semantic segmentation models and observe that adversarial examples generated from a source model fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
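A minimal sketch of the ensemble idea: an I-FGSM-style update whose loss is the per-pixel cross-entropy averaged over several surrogate segmentation models. The function name, step sizes, and update rule are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def ensemble_seg_attack_step(x, y_mask, surrogates, delta,
                             eps=8 / 255, alpha=2 / 255):
    """One untargeted I-FGSM-style step using the per-pixel cross-entropy
    averaged over an ensemble of surrogate segmentation models
    (illustrative sketch, not the paper's exact method)."""
    delta = delta.detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(torch.clamp(x + delta, 0, 1)), y_mask)
               for m in surrogates) / len(surrogates)
    g = torch.autograd.grad(loss, delta)[0]
    return torch.clamp(delta + alpha * g.sign(), -eps, eps).detach()
```

Iterating this step from `delta = torch.zeros_like(x)` gives a basic ensemble transfer attack; the paper's actual method presumably adds segmentation-specific components beyond this sketch.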
arXiv Detail & Related papers (2023-07-31T11:05:55Z) - Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation [32.81400759291457]
Adversarial examples can induce erroneous predictions by injecting imperceptible perturbations into benign inputs.
In this work, we study the transferability of adversarial examples, which is significant due to its threat to real-world applications.
We propose a novel attack method, dubbed reverse adversarial perturbation (RAP).
arXiv Detail & Related papers (2022-10-12T07:17:33Z) - Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters [1.8570591025615453]
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness.
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem.
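One hedged way to write down an attack that accounts for the defender's hyperparameter re-tuning, with illustrative notation not copied from the paper ($D_p$ poisoned points, $\lambda$ the regularization hyperparameter, $\Omega$ the regularizer):
$$\max_{D_p}\;\mathcal{L}_{\mathrm{val}}\big(\theta^\star(\lambda^\star, D_p)\big)
\quad\text{s.t.}\quad
\lambda^\star \in \arg\min_{\lambda}\ \mathcal{L}_{\mathrm{val}}\big(\theta^\star(\lambda, D_p)\big),\qquad
\theta^\star(\lambda, D_p) \in \arg\min_{\theta}\ \mathcal{L}_{\mathrm{tr}}\big(\theta;\,\mathcal{D}\cup D_p\big) + \lambda\,\Omega(\theta).$$
Because the defender's re-tuning of $\lambda$ opposes the attacker's objective, the problem has the minimax bilevel structure the summary refers to; holding $\lambda$ fixed removes the inner response and overstates the attack's impact.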
arXiv Detail & Related papers (2021-05-23T14:34:47Z) - Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)