Towards Transferable Adversarial Attacks with Centralized Perturbation
- URL: http://arxiv.org/abs/2312.06199v2
- Date: Sat, 23 Dec 2023 06:35:54 GMT
- Title: Towards Transferable Adversarial Attacks with Centralized Perturbation
- Authors: Shangbo Wu, Yu-an Tan, Yajie Wang, Ruinan Ma, Wencong Ma and Yuanzhang
Li
- Abstract summary: Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs).
Current transferable attacks create adversarial perturbation over the entire image, resulting in excessive noise that overfits the source model.
We propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation.
- Score: 4.689122927344728
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Adversarial transferability enables black-box attacks on unknown victim deep
neural networks (DNNs), rendering attacks viable in real-world scenarios.
Current transferable attacks create adversarial perturbation over the entire
image, resulting in excessive noise that overfits the source model.
Concentrating perturbation on dominant image regions that are model-agnostic is
crucial to improving adversarial efficacy. However, limiting perturbation to
local regions in the spatial domain proves inadequate in augmenting
transferability. To this end, we propose a transferable adversarial attack with
fine-grained perturbation optimization in the frequency domain, creating
centralized perturbation. We devise a systematic pipeline to dynamically
constrain perturbation optimization to dominant frequency coefficients. The
constraint is optimized in parallel at each iteration, ensuring the directional
alignment of perturbation optimization with model prediction. Our approach
allows us to centralize perturbation towards sample-specific important
frequency features, which are shared by DNNs, effectively mitigating source
model overfitting. Experiments demonstrate that by dynamically centralizing
perturbation on dominating frequency coefficients, crafted adversarial
examples exhibit stronger transferability and bypass various defenses.
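To make the core idea concrete, here is a minimal sketch of centralizing a perturbation onto its dominant DCT coefficients inside a PGD-style loop. This is not the authors' exact pipeline (the paper optimizes its frequency constraint dynamically and in parallel at each iteration); the fixed top-k mask, the hypothetical `grad_fn`, and all hyperparameters are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def centralize(delta: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Project a perturbation onto its largest-magnitude DCT coefficients."""
    coeffs = dctn(delta, norm="ortho")                 # full-image DCT of the noise
    k = max(1, int(coeffs.size * keep_ratio))
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    mask = np.abs(coeffs) >= thresh                    # keep dominant coefficients only
    return idctn(coeffs * mask, norm="ortho")

def centralized_pgd(x, grad_fn, eps=8/255, alpha=2/255, steps=10, keep_ratio=0.1):
    """PGD-style loop; `grad_fn(x_adv)` is a hypothetical callable returning
    the loss gradient w.r.t. the input on the surrogate model."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        delta = np.clip(x_adv + alpha * np.sign(g) - x, -eps, eps)
        delta = np.clip(centralize(delta, keep_ratio), -eps, eps)  # centralize noise
        x_adv = np.clip(x + delta, 0.0, 1.0)
    return x_adv
```

The fixed `keep_ratio` here stands in for the paper's per-sample, per-iteration optimization of which frequency coefficients dominate.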
Related papers
- A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Multiplicative Gamma noise removal is a critical research area in synthetic aperture radar (SAR) imaging applications.
We propose a tunable, regularized neural network that unrolls a denoising unit and a regularization unit into a single network for end-to-end training.
arXiv Detail & Related papers (2024-11-24T17:08:43Z) - Improving Transferable Targeted Attacks with Feature Tuning Mixup [12.707753562907534]
Deep neural networks exhibit vulnerability to adversarial examples that can transfer across different models.
We propose Feature Tuning Mixup (FTM) to enhance targeted attack transferability.
Our method achieves significant improvements over state-of-the-art methods while maintaining low computational cost.
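The summary gives no implementation detail, so the following is only one speculative reading of feature-level mixing, sketched with PyTorch forward hooks; the layer name, mixing weight `lam`, and the `clean_feats` cache are all hypothetical:

```python
import torch

def make_feature_mixup_hook(clean_feats: dict, key: str, lam: float = 0.2):
    """Forward hook that blends stored clean-image features into the current
    (adversarial) forward pass, discouraging the attack from overfitting the
    surrogate's exact feature response."""
    def hook(module, inputs, output):
        # Returning a tensor from a forward hook replaces the module output.
        return (1.0 - lam) * output + lam * clean_feats[key]
    return hook

# Hypothetical usage: cache clean features once, then attack as usual.
# handle = model.layer3.register_forward_hook(
#     make_feature_mixup_hook(clean_feats, "layer3"))
```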
arXiv Detail & Related papers (2024-11-23T13:18:25Z) - FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks [42.18755809782401]
Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples.
We propose a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both cross-domain and cross-model settings.
We demonstrate strong transferability of our generated adversarial perturbations through extensive cross-domain and cross-model experiments.
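As a rough illustration only, a generic feature-contrastive attack objective might look like the following; the cosine similarity, temperature, and the source of the `f_other` anchor features are assumptions, not FACL-Attack's actual loss:

```python
import torch
import torch.nn.functional as F

def contrastive_attack_loss(f_adv, f_clean, f_other, temp=0.1):
    """Repel the adversarial features from the clean ones and attract them
    toward other-class anchor features (generic contrastive form, assumed)."""
    sim_clean = F.cosine_similarity(f_adv, f_clean, dim=-1) / temp
    sim_other = F.cosine_similarity(f_adv, f_other, dim=-1) / temp
    return sim_clean.mean() - sim_other.mean()  # minimized during the attack
```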
arXiv Detail & Related papers (2024-07-30T08:50:06Z) - Improving Adversarial Transferability by Stable Diffusion [36.97548018603747]
Deep neural networks (DNNs) are susceptible to adversarial examples, which introduce imperceptible perturbations to benign samples, deceiving predictions.
We introduce a novel attack method called Stable Diffusion Attack Method (SDAM), which incorporates samples generated by Stable Diffusion to augment input images.
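A minimal sketch of the augmentation idea described above: average the attack gradient over blends of the input with diffusion-generated variants. The `variants` list (e.g., produced offline by an img2img pipeline) and the 0.5 mixing weight are illustrative assumptions:

```python
import torch

def augmented_gradient(model, loss_fn, x_adv, y, variants):
    """Average gradients over the input and its generated variants, using
    each blend's gradient as a surrogate for the gradient at x_adv."""
    grads = []
    for v in [x_adv] + list(variants):
        mixed = (0.5 * x_adv + 0.5 * v).detach().requires_grad_(True)
        loss = loss_fn(model(mixed), y)
        grads.append(torch.autograd.grad(loss, mixed)[0])
    return torch.stack(grads).mean(dim=0)
```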
arXiv Detail & Related papers (2023-11-18T09:10:07Z) - Improving Adversarial Transferability via Intermediate-level
Perturbation Decay [79.07074710460012]
We develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization.
Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models.
arXiv Detail & Related papers (2023-04-26T09:49:55Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method builds on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation on different images, optimizing it on different regions of a single image achieves self-universality without requiring extra data.
With the feature similarity loss, our method makes the features of the adversarial perturbation more dominant than those of the benign image.
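A hedged sketch of how such an objective could be assembled; `feats` (an intermediate-feature extractor) and the crop handling are assumed details, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def self_universal_loss(model, feats, x_adv, target, crop):
    """`feats(x)` is an assumed intermediate-feature extractor; `crop` is a
    random local crop of x_adv resized back to the input resolution."""
    cls_loss = F.cross_entropy(model(x_adv), target)    # fool the full image
    local_loss = F.cross_entropy(model(crop), target)   # ...and the local region
    sim = F.cosine_similarity(feats(x_adv).flatten(1),
                              feats(crop).flatten(1)).mean()
    return cls_loss + local_loss - sim  # minimizing also maximizes similarity
```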
arXiv Detail & Related papers (2022-09-08T11:21:26Z) - Diverse Generative Adversarial Perturbations on Attention Space for
Transferable Adversarial Attacks [29.034390810078172]
Adversarial attacks with improved transferability have recently received much attention due to their practicality.
Existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface.
We propose Attentive-Diversity Attack (ADA), which disrupts diverse salient features in a manner to improve transferability.
arXiv Detail & Related papers (2022-08-11T06:00:40Z) - Exploring Transferable and Robust Adversarial Perturbation Generation
from the Perspective of Network Hierarchy [52.153866313879924]
The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.
We propose a transferable and robust adversarial generation (TRAP) method.
Our TRAP achieves impressive transferability and high robustness against certain interferences.
arXiv Detail & Related papers (2021-08-16T11:52:41Z) - Removing Adversarial Noise in Class Activation Feature Space [160.78488162713498]
We propose to remove adversarial noise by implementing a self-supervised adversarial training mechanism in a class activation feature space.
We train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
Empirical evaluations demonstrate that our method could significantly enhance adversarial robustness in comparison to previous state-of-the-art approaches.
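In code, the training objective described above might reduce to a feature-space reconstruction loss; `classifier_feats` (a frozen class-activation feature extractor, e.g. activations before the final pooling) is an assumption standing in for the paper's specific feature space:

```python
import torch
import torch.nn.functional as F

def cafd_loss(denoiser, classifier_feats, x_adv, x_nat):
    """Train the denoiser so the cleaned adversarial input matches the
    natural input in a class-activation feature space (simplified reading)."""
    x_denoised = denoiser(x_adv)
    f_den = classifier_feats(x_denoised)
    f_nat = classifier_feats(x_nat).detach()  # target features, no gradient
    return F.mse_loss(f_den, f_nat)
```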
arXiv Detail & Related papers (2021-04-19T10:42:24Z) - Perturbing Across the Feature Hierarchy to Improve Standard and Strict
Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
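A simplified sketch of a multi-layer (feature-hierarchy) targeted objective: drive the adversarial example's features toward a target image's features at several depths at once. The list of feature extractors and the uniform weights are illustrative, not the paper's exact formulation:

```python
import torch

def multilayer_targeted_loss(feat_extractors, x_adv, x_target, weights=None):
    """`feat_extractors` is an assumed list of callables returning
    intermediate activations at different depths of the surrogate model."""
    weights = weights or [1.0] * len(feat_extractors)
    loss = x_adv.new_zeros(())
    for w, f in zip(weights, feat_extractors):
        loss = loss + w * (f(x_adv) - f(x_target).detach()).pow(2).mean()
    return loss
```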
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.