Simultaneously Optimizing Perturbations and Positions for Black-box
Adversarial Patch Attacks
- URL: http://arxiv.org/abs/2212.12995v1
- Date: Mon, 26 Dec 2022 02:48:37 GMT
- Title: Simultaneously Optimizing Perturbations and Positions for Black-box
Adversarial Patch Attacks
- Authors: Xingxing Wei, Ying Guo, Jie Yu, Bo Zhang
- Abstract summary: The adversarial patch is an important form of real-world adversarial attack that poses serious risks to the robustness of deep neural networks.
Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content.
We propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting.
- Score: 13.19708582519833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adversarial patch is an important form of real-world adversarial
attack that poses serious risks to the robustness of deep neural networks. Previous
methods generate adversarial patches by either optimizing their perturbation
values while fixing the pasting position or manipulating the position while
fixing the patch's content. This reveals that the positions and perturbations
are both important to the adversarial attack. For that, in this paper, we
propose a novel method to simultaneously optimize the position and perturbation
for an adversarial patch, and thus obtain a high attack success rate in the
black-box setting. Technically, we regard the patch's position and the
pre-designed hyper-parameters that determine the patch's perturbations as the
variables, and utilize a reinforcement learning framework to simultaneously
solve for the optimal solution based on the rewards obtained from the target
model with a small number of queries. Extensive experiments are conducted on
the Face Recognition (FR) task, and results on four representative FR models
show that our method can significantly improve the attack success rate and
query efficiency. Besides, experiments on the commercial FR service and
physical environments confirm its practical application value. We also extend
our method to the traffic sign recognition task to verify its generalization
ability.
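To make the query-based joint search concrete, below is a minimal policy-gradient (REINFORCE) sketch that samples a patch position and one perturbation hyper-parameter from a Gaussian policy and updates the policy from query rewards. This is an illustrative assumption of the setup, not the authors' code: `query_model`, `GaussianPolicy`, and all constants are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_model(position, hyper):
    """Hypothetical stand-in for one black-box query to the target model.

    Returns a scalar reward, e.g. the drop in the true-class confidence
    after pasting the patch. Replace with real queries in an actual attack.
    """
    x, y = position
    # Toy reward surface so the policy has something to learn.
    return -((x - 32.0) ** 2 + (y - 32.0) ** 2) / 1024.0 - (hyper - 0.5) ** 2

class GaussianPolicy:
    """Diagonal Gaussian over (x, y, hyper-parameter), updated by REINFORCE."""

    def __init__(self, dim, lr=0.05):
        self.mean = np.zeros(dim)
        self.log_std = np.zeros(dim)
        self.lr = lr

    def sample(self):
        eps = rng.standard_normal(self.mean.shape[0])
        return self.mean + np.exp(self.log_std) * eps, eps

    def update(self, eps_batch, rewards):
        adv = rewards - rewards.mean()  # mean-reward baseline
        std = np.exp(self.log_std)
        # REINFORCE gradients of the Gaussian log-density w.r.t. mean/log-std.
        self.mean += self.lr * (adv[:, None] * eps_batch / std).mean(axis=0)
        self.log_std += self.lr * (adv[:, None] * (eps_batch ** 2 - 1.0)).mean(axis=0)

policy = GaussianPolicy(dim=3)
for step in range(100):                      # each inner sample costs one query
    eps_list, rewards = [], []
    for _ in range(8):
        theta, eps = policy.sample()
        position = (32.0 + 16.0 * theta[0], 32.0 + 16.0 * theta[1])
        hyper = 0.5 + 0.25 * theta[2]        # maps to the perturbation knob
        rewards.append(query_model(position, hyper))
        eps_list.append(eps)
    policy.update(np.array(eps_list), np.array(rewards))

print("learned mean (x, y, hyper):", policy.mean)
```

In the paper's setting, the reward would come from the target FR model's response to the patched face, and the hyper-parameters would index the pre-designed perturbation generator; the loop above only shows the shape of such a query-efficient search.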
Related papers
- Real-world Adversarial Defense against Patch Attacks based on Diffusion Model [34.86098237949215]
This paper introduces DIFFender, a novel DIFfusion-based DeFender framework to counter adversarial patch attacks.
At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon.
DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework.
arXiv Detail & Related papers (2024-09-14T10:38:35Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches
on Face Recognition [13.618387142029663]
Face recognition systems powered by deep learning are vulnerable to adversarial attacks.
We propose RADAP, a robust and adaptive defense mechanism against diverse adversarial patches.
We conduct comprehensive experiments to validate the effectiveness of RADAP.
arXiv Detail & Related papers (2023-11-29T03:37:14Z) - Distributional Modeling for Location-Aware Adversarial Patches [28.466804363780557]
Distribution-Optimized Adversarial Patch (DOPatch) is a novel method that optimizes a multimodal distribution of adversarial locations.
DOPatch can generate diverse adversarial samples by characterizing the distribution of adversarial locations.
We evaluate DOPatch on various face recognition and image recognition tasks and demonstrate its superiority and efficiency over existing methods.
arXiv Detail & Related papers (2023-06-28T12:01:50Z) - DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks [34.86098237949214]
- DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks [34.86098237949214]
Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models.
This paper introduces DIFFender, a novel defense framework that harnesses the capabilities of a text-guided diffusion model to combat patch attacks.
DIFFender integrates dual tasks of patch localization and restoration within a single diffusion model framework.
arXiv Detail & Related papers (2023-06-15T13:33:27Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Jacks of All Trades, Masters Of None: Addressing Distributional Shift
and Obtrusiveness via Transparent Patch Attacks [16.61388475767519]
We focus on the development of effective adversarial patch attacks.
We jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
arXiv Detail & Related papers (2020-05-01T23:50:37Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.