Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability
- URL: http://arxiv.org/abs/2501.00707v1
- Date: Wed, 01 Jan 2025 03:06:03 GMT
- Title: Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability
- Authors: Hui Zeng, Sanshuai Cui, Biwei Chen, Anjie Peng
- Abstract summary: We propose an everywhere scheme to boost targeted transferability.
We aim to optimize 'an army of targets' in every local image region.
Our approach is method-agnostic, which means it can be easily combined with existing transferable attacks.
- Score: 20.46894437876869
- Abstract: The transferability of adversarial examples (AEs) refers to the phenomenon that AEs crafted with one surrogate model can also fool other models. Notwithstanding remarkable progress in untargeted transferability, its targeted counterpart remains challenging. This paper proposes an everywhere scheme to boost targeted transferability. Our idea is to attack a victim image both globally and locally. We aim to optimize 'an army of targets' in every local image region, instead of previous works that optimize a single high-confidence target in the image. Specifically, we split a victim image into non-overlapping blocks and jointly mount a targeted attack on each block. Such a strategy mitigates transfer failures caused by attention inconsistency between surrogate and victim models and thus results in stronger transferability. Our approach is method-agnostic, which means it can be easily combined with existing transferable attacks for even higher transferability. Extensive experiments on ImageNet demonstrate that the proposed approach universally improves the state-of-the-art targeted attacks by a clear margin, e.g., the transferability of the widely adopted Logit attack can be improved by 28.8%-300%. We also evaluate the crafted AEs on a real-world platform: Google Cloud Vision. Results further support the superiority of the proposed method.
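To make the block-wise objective concrete, below is a minimal PyTorch-style sketch. The 3x3 grid, the bilinear resizing of each block back to the surrogate's input size, and the logit-style per-block loss are assumptions of this illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def everywhere_targeted_loss(model, x_adv, target_class, grid=3):
    # Split the adversarial image into non-overlapping blocks and sum a
    # targeted loss over every block, pushing each local region toward the
    # target class ("an army of targets") rather than optimizing a single
    # high-confidence target for the whole image.
    _, _, h, w = x_adv.shape
    bh, bw = h // grid, w // grid
    loss = x_adv.new_zeros(())
    for i in range(grid):
        for j in range(grid):
            block = x_adv[:, :, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # Resize the block to the surrogate's input size so it can be
            # classified on its own (an assumption of this sketch).
            block_up = F.interpolate(block, size=(h, w), mode="bilinear",
                                     align_corners=False)
            logits = model(block_up)
            loss = loss + logits[:, target_class].sum()
    return loss  # maximize by gradient ascent on x_adv
```

Because the objective is simply a sum over regions, it can replace the usual single global loss inside any iterative transfer attack, which is what makes the scheme method-agnostic.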
Related papers
- AIM: Additional Image Guided Generation of Transferable Adversarial Attacks [72.24101555828256]
Transferable adversarial examples highlight the vulnerability of deep neural networks (DNNs) to imperceptible perturbations across various real-world applications.
In this work, we focus on generative approaches for targeted transferable attacks.
We introduce a novel plug-and-play module into the general generator architecture to enhance adversarial transferability.
arXiv Detail & Related papers (2025-01-02T07:06:49Z)
- Transferable Attack for Semantic Segmentation [59.17710830038692]
We study transferable adversarial attacks and observe that adversarial examples generated from a source model often fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability (a generic ensembling sketch follows this entry).
arXiv Detail & Related papers (2023-07-31T11:05:55Z)
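As a generic illustration of ensembling surrogate models (a sketch under assumptions, not this paper's exact formulation), the attack loss can be averaged over several source segmentation networks so the perturbation does not overfit any single one:

```python
import torch
import torch.nn.functional as F

def ensemble_seg_loss(models, x_adv, y):
    # Average per-pixel cross-entropy over an ensemble of surrogate
    # segmentation models, each assumed to map [N, 3, H, W] images to
    # logits of shape [N, C, H, W]; y holds per-pixel labels [N, H, W].
    losses = [F.cross_entropy(m(x_adv), y) for m in models]
    return torch.stack(losses).mean()
```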
- Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration [85.71545080119026]
The Cross-Entropy (CE) loss function is insufficient for learning transferable targeted adversarial examples.
We propose two simple and effective logit calibration methods, achieved by downscaling the logits with a temperature factor and with an adaptive margin (the temperature variant is sketched after this entry).
Experiments conducted on the ImageNet dataset validate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2023-03-07T06:42:52Z)
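The temperature variant is essentially a one-line change: dividing the logits by a temperature T > 1 before the cross-entropy keeps the softmax from saturating, so the attack continues to receive useful gradient after the target class already dominates. T=5.0 below is an illustrative value, not the paper's tuned setting.

```python
import torch.nn.functional as F

def temperature_calibrated_ce(logits, target, T=5.0):
    # Downscale logits by a temperature factor before cross-entropy to
    # widen the logit margin the optimization must achieve.
    return F.cross_entropy(logits / T, target)
```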
- Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition [10.03652348636603]
We introduce a new pipeline of Generalized Manifold Adversarial Attack (GMAA) to achieve better attack performance.
GMAA expands the target to be attacked from one instance to many, encouraging good generalization ability in the generated adversarial examples.
We demonstrate the effectiveness of our method through extensive experiments, and reveal that GMAA promises a semantically continuous adversarial space with higher generalization ability and visual quality.
arXiv Detail & Related papers (2022-12-19T02:57:55Z)
- Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization [23.13302900115702]
Adversarial examples are crafted by adding human-imperceptible perturbations to benign inputs.
Adversarial examples exhibit transferability across models, enabling practical black-box attacks.
We introduce Global Momentum Initialization (GI), providing global momentum knowledge to mitigate gradient elimination.
GI seamlessly integrates with existing transfer methods, significantly improving the success rate of transfer attacks by an average of 6.4% (a rough warm-up sketch follows this entry).
arXiv Detail & Related papers (2022-11-21T07:59:22Z)
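A rough sketch of the warm-up idea, assuming GI accumulates momentum over a few large exploratory steps and then restarts the attack from the clean input with that momentum (the step size, warm-up count, and L1 gradient normalization are illustrative choices):

```python
import torch

def global_momentum_init(loss_fn, x, warmup=5, mu=1.0, alpha=8 / 255):
    # Accumulate a gradient momentum over a few exploratory steps, then
    # discard the perturbation and return only the momentum, which seeds an
    # MI-FGSM-style attack restarted from the original input x.
    g = torch.zeros_like(x)
    x_tmp = x.clone().detach()
    for _ in range(warmup):
        x_tmp.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(x_tmp), x_tmp)
        g = mu * g + grad / grad.abs().mean()
        x_tmp = (x_tmp.detach() + alpha * g.sign()).clamp(0, 1)
    return g
```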
- Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective [80.02256726279451]
Adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years.
We propose a novel method that crafts adversarial examples by manipulating the distribution of the image.
Our method can significantly improve the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios.
arXiv Detail & Related papers (2022-10-09T09:58:51Z)
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
We propose a new attack method based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation on different images, we optimize it on different regions of a single image to achieve self-universality, which removes the need for extra data.
With a feature similarity loss, our method makes the features of adversarial perturbations more dominant than those of benign images (a rough sketch follows this entry).
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
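The region-plus-similarity idea might look roughly as follows; the random-crop size, the resizing, the cosine similarity on flattened features, and the helper names feats and target_loss_fn are all assumptions of this sketch, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def self_universality_loss(feats, target_loss_fn, x_adv, crop=0.5):
    # Optimize one perturbation on the full image and on a random local
    # region of it, so the perturbation works everywhere ("self-universal").
    _, _, h, w = x_adv.shape
    ch, cw = int(h * crop), int(w * crop)
    i = torch.randint(0, h - ch + 1, (1,)).item()
    j = torch.randint(0, w - cw + 1, (1,)).item()
    local = F.interpolate(x_adv[:, :, i:i + ch, j:j + cw], size=(h, w),
                          mode="bilinear", align_corners=False)
    f_g = feats(x_adv).flatten(1)   # features of the global adversarial image
    f_l = feats(local).flatten(1)   # features of the resized local region
    sim = F.cosine_similarity(f_g, f_l).mean()
    # All three terms are maximized by gradient ascent on x_adv: targeted
    # losses on both views plus the global-local feature similarity.
    return target_loss_fn(x_adv) + target_loss_fn(local) + sim
```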
- On Generating Transferable Targeted Perturbations [102.3506210331038]
We propose a new generative approach for highly transferable targeted perturbations.
Our approach matches the distribution of perturbed images with that of the target class, leading to high targeted transferability rates.
arXiv Detail & Related papers (2021-03-26T17:55:28Z)
- On Success and Simplicity: A Second Look at Transferable Targeted Attacks [6.276791657895803]
We show that transferable targeted attacks converge slowly to optimal transferability and improve considerably when given more iterations.
An attack that simply maximizes the target logit performs surprisingly well, surpassing more complex losses and even achieving performance comparable to the state of the art (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-12-21T09:41:29Z)
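The simple logit-maximization baseline described above is the Logit attack that the everywhere scheme at the top of this page reports improving by 28.8%-300%. A minimal sketch:

```python
import torch

def logit_loss(model, x_adv, target):
    # Maximize the target-class logit directly instead of minimizing
    # cross-entropy, which avoids softmax saturation at high confidence;
    # target is a LongTensor of shape [N].
    logits = model(x_adv)
    return logits.gather(1, target.view(-1, 1)).sum()
```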
This list is automatically generated from the titles and abstracts of the papers on this site.