Backpropagation Path Search On Adversarial Transferability
- URL: http://arxiv.org/abs/2308.07625v1
- Date: Tue, 15 Aug 2023 08:21:20 GMT
- Title: Backpropagation Path Search On Adversarial Transferability
- Authors: Zhuoer Xu, Zhangxuan Gu, Jianping Zhang, Shiwen Cui, Changhua Meng,
Weiqiang Wang
- Abstract summary: Transfer-based attackers craft adversarial examples against surrogate models and transfer them to victim models.
Structure-based attackers adjust the backpropagation path to avoid the attack from overfitting the surrogate model.
Existing structure-based attackers fail to explore the convolution module in CNNs and only modify the backpropagation graph heuristically.
- Score: 35.71353415348786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are vulnerable to adversarial examples, making
it imperative to test a model's robustness before deployment. Transfer-based
attackers craft adversarial examples against surrogate models and transfer them
to victim models deployed in a black-box setting. To enhance adversarial
transferability, structure-based attackers adjust the backpropagation path to
prevent the attack from overfitting the surrogate model.
However, existing structure-based attackers fail to explore the convolution
module in CNNs and only modify the backpropagation graph heuristically, leading
to limited effectiveness. In this paper, we propose backPropagation pAth Search
(PAS) to solve both problems. We first propose SkipConv, which adjusts the
backpropagation path of convolution by structural reparameterization (a minimal
sketch is given after the abstract). To overcome the drawback of heuristically designed
backpropagation paths, we further construct a DAG-based search space, utilize
one-step approximation for path evaluation and employ Bayesian Optimization to
search for the optimal path (a search sketch is also given below). We conduct
comprehensive experiments across a wide range of transfer settings, showing that
PAS improves the attack success rate by a large margin for both normally trained
and defended models.
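Below is a minimal, illustrative sketch of the SkipConv idea from the abstract: a
convolution kernel is reparameterized into an identity ("skip") kernel plus a
residual kernel, so the forward output is unchanged while the gradient routed
through the residual branch is decayed by a factor gamma. The class name, the
decay trick, and the restriction to equal-channel, groups=1 convolutions are
assumptions for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkipConvSketch(nn.Module):
    """Illustrative SkipConv-style convolution (an assumption, not the paper's code).

    The forward value equals the wrapped conv's output; during backpropagation,
    the gradient routed through the residual kernel is decayed by `gamma`, while
    the identity ("skip") kernel passes the gradient through untouched.
    """

    def __init__(self, conv: nn.Conv2d, gamma: float = 0.5):
        super().__init__()
        assert conv.in_channels == conv.out_channels and conv.groups == 1
        self.conv, self.gamma = conv, gamma
        # Identity kernel: a single 1 at the spatial centre of each channel's own filter.
        k = conv.kernel_size[0]
        w_skip = torch.zeros_like(conv.weight)
        idx = torch.arange(conv.out_channels)
        w_skip[idx, idx, k // 2, k // 2] = 1.0
        self.register_buffer("w_skip", w_skip)

    def forward(self, x):
        w_res = self.conv.weight - self.w_skip  # residual kernel: W - I
        y_skip = F.conv2d(x, self.w_skip, None, self.conv.stride, self.conv.padding)
        y_res = F.conv2d(x, w_res, self.conv.bias, self.conv.stride, self.conv.padding)
        # Same forward value as the original conv(x); the gradient through the
        # residual branch is scaled by gamma, the skip branch is left intact.
        return y_skip + self.gamma * y_res + (1.0 - self.gamma) * y_res.detach()
```

Replacing the surrogate's convolutions with such modules and choosing gamma < 1
biases the attack gradient toward the skip path, mirroring how structure-based
attackers adjust the backpropagation path without changing the forward pass.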
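The path search itself can be sketched as black-box optimization over per-module
decay factors. The proxy objective below is an assumption for illustration: a
one-step attack is crafted on the surrogate under a candidate decay configuration
and scored against a held-out proxy model; `set_backprop_decays` is a hypothetical
helper that installs SkipConvSketch-style modules with the given factors. The
paper's DAG-based search space and one-step approximation are richer than this
sketch.

```python
import torch
import torch.nn.functional as F
from skopt import gp_minimize          # Bayesian Optimization (scikit-optimize)
from skopt.space import Real


def one_step_attack(surrogate, decays, x, y, eps=8 / 255):
    """One-step (FGSM-style) attack under a candidate backpropagation path."""
    set_backprop_decays(surrogate, decays)  # hypothetical helper: installs
                                            # SkipConvSketch-style modules with
                                            # the given per-module decay factors
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def objective(decays, surrogate, proxy, x, y):
    """Proxy-model accuracy on one-step adversarial examples (lower is better)."""
    x_adv = one_step_attack(surrogate, decays, x, y)
    with torch.no_grad():
        return (proxy(x_adv).argmax(1) == y).float().mean().item()


# gp_minimize fits a Gaussian process to the observed (path, score) pairs and
# proposes the next backpropagation path to evaluate; assumed usage:
# result = gp_minimize(
#     lambda d: objective(d, surrogate, proxy, x_val, y_val),
#     dimensions=[Real(0.0, 1.0)] * num_adjustable_modules,
#     n_calls=50, random_state=0)
# best_path = result.x
```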
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack against models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly w.r.t. the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reformulates the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Transferable Attack for Semantic Segmentation [59.17710830038692]
We study adversarial attacks on semantic segmentation models, and observe that the adversarial examples generated from a source model fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
arXiv Detail & Related papers (2023-07-31T11:05:55Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks that incur a lower computational burden while driving robust accuracy lower still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which automatically searches for near-optimal adversarial attacks against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z)
- Improving Bayesian Inference in Deep Neural Networks with Variational Structured Dropout [19.16094166903702]
We introduce a new variational structured approximation inspired by the interpretation of Dropout training as approximate inference in Bayesian networks.
We then propose a novel method called Variational Structured Dropout (VSD) to overcome the limitations of this approximation.
We conduct experiments on standard benchmarks to demonstrate the effectiveness of VSD over state-of-the-art methods on both predictive accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-02-16T02:33:43Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacks against a real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)