Feature Importance-aware Transferable Adversarial Attacks
- URL: http://arxiv.org/abs/2107.14185v1
- Date: Thu, 29 Jul 2021 17:13:29 GMT
- Title: Feature Importance-aware Transferable Adversarial Attacks
- Authors: Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren
- Abstract summary: Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features.
We argue that such brute-force degradation would introduce model-specific local optima into adversarial examples.
By contrast, we propose the Feature Importance-aware Attack (FIA), which disrupts important object-aware features.
- Score: 46.12026564065764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferability of adversarial examples is of central importance for
attacking an unknown model, which facilitates adversarial attacks in more
practical scenarios, e.g., blackbox attacks. Existing transferable attacks tend
to craft adversarial examples by indiscriminately distorting features to
degrade prediction accuracy in a source model, without being aware of the intrinsic
features of objects in the images. We argue that such brute-force degradation
would introduce model-specific local optima into adversarial examples, thus
limiting the transferability. By contrast, we propose the Feature
Importance-aware Attack (FIA), which disrupts important object-aware features
that dominate model decisions consistently. More specifically, we obtain
feature importance by introducing the aggregate gradient, which averages the
gradients with respect to feature maps of the source model, computed on a batch
of random transforms of the original clean image. The gradients will be highly
correlated to objects of interest, and such correlation presents invariance
across different models. Besides, the random transforms will preserve intrinsic
features of objects and suppress model-specific information. Finally, the
feature importance guides the search for adversarial examples toward disrupting
critical features, achieving stronger transferability. Extensive experimental
evaluation demonstrates the effectiveness and superior performance of the
proposed FIA, i.e., improving the success rate by 8.4% against normally trained
models and 11.7% against defense models as compared to the state-of-the-art
transferable attacks. Code is available at: https://github.com/hcguoO0/FIA
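Since the abstract walks through a concrete procedure, a minimal PyTorch sketch of the two steps it describes may help: (1) estimate feature importance by averaging gradients of the true-class logit with respect to a chosen feature map over randomly masked copies of the clean image, and (2) perturb the input to suppress the features those weights mark as important. Everything below is an illustrative reading of the abstract, not the authors' reference implementation (that is in the linked repository): the layer choice, the pixel-dropout transform, `n_ens`, `drop_prob`, and the step schedule are all assumptions.

```python
import torch

def aggregate_gradient(model, layer, x, label, n_ens=30, drop_prob=0.3):
    """Average gradients w.r.t. a chosen feature map over randomly
    masked copies of x to estimate model-agnostic feature importance."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(map=o))
    agg = None
    for _ in range(n_ens):
        # Random transform (assumed here: pixel dropout) preserves object
        # features while suppressing model-specific information.
        mask = torch.bernoulli(torch.full_like(x, 1.0 - drop_prob))
        xt = (x.detach() * mask).requires_grad_(True)
        logits = model(xt)
        true_logit = logits.gather(1, label.view(-1, 1)).sum()
        grad = torch.autograd.grad(true_logit, feats["map"])[0]
        agg = grad if agg is None else agg + grad
    handle.remove()
    # L2-normalize per sample so the weights act as relative importance.
    return (agg / agg.flatten(1).norm(dim=1).view(-1, 1, 1, 1)).detach()

def fia_attack(model, layer, x, weights, eps=16/255, steps=10):
    """Iteratively perturb x to minimize the importance-weighted feature
    sum, i.e., to disrupt the features that dominate model decisions."""
    x = x.detach()
    alpha = eps / steps
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(map=o))
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)                               # populates feats["map"]
        loss = (weights * feats["map"]).sum()      # weighted-feature objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv - alpha * grad.sign()).detach()  # descend to suppress
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # epsilon-ball projection
        x_adv = x_adv.clamp(0, 1)                       # valid image range
    handle.remove()
    return x_adv
```

A call such as `weights = aggregate_gradient(model, model.layer3, x, y)` followed by `fia_attack(model, model.layer3, x, weights)` would run the sketch end to end; `model.layer3` stands in for whichever mid-level layer of the source model one chooses.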
Related papers
- SA-Attack: Improving Adversarial Transferability of Vision-Language
Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns.
We propose a targeted attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z)
- An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability [26.39964737311377]
We propose an adaptive ensemble attack, dubbed AdaEA, to adaptively control the fusion of the outputs from each model.
We achieve considerable improvement over the existing ensemble attacks on various datasets.
arXiv Detail & Related papers (2023-08-05T15:12:36Z)
- Rethinking Model Ensemble in Transfer-based Adversarial Attacks [46.82830479910875]
An effective strategy to improve the transferability is attacking an ensemble of models.
Previous works simply average the outputs of different models; a minimal sketch of that averaging baseline appears after this list.
We propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples.
arXiv Detail & Related papers (2023-03-16T06:37:16Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against adversarial samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- TREND: Transferability based Robust ENsemble Design [6.663641564969944]
We study the effect of network architecture, input, weight and activation quantization on transferability of adversarial samples.
We show that transferability is significantly hampered by input quantization between source and target.
We propose a new state-of-the-art ensemble attack to combat this.
arXiv Detail & Related papers (2020-08-04T13:38:14Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)
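As background for the AdaEA and CWA entries above, the sketch below shows the plain logit-averaging ensemble baseline they improve on: an iterative FGSM whose loss is taken on the uniform average of several source models' logits. The uniform fusion is exactly the "simply average the outputs" strategy those papers criticize; AdaEA's adaptive fusion weights and CWA's common-weakness objective would replace it. The model list, step count, and perturbation budget here are illustrative assumptions, not settings from either paper.

```python
import torch
import torch.nn.functional as F

def ensemble_ifgsm(models, x, label, eps=16/255, steps=10):
    """I-FGSM on the uniform average of several source models' logits."""
    x = x.detach()
    alpha = eps / steps
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Baseline fusion: simply average the outputs of different models.
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        loss = F.cross_entropy(logits, label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()  # untargeted ascent
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # epsilon-ball projection
        x_adv = x_adv.clamp(0, 1)                       # valid image range
    return x_adv
```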
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.