Distributional Modeling for Location-Aware Adversarial Patches
- URL: http://arxiv.org/abs/2306.16131v1
- Date: Wed, 28 Jun 2023 12:01:50 GMT
- Title: Distributional Modeling for Location-Aware Adversarial Patches
- Authors: Xingxing Wei, Shouwei Ruan, Yinpeng Dong and Hang Su
- Abstract summary: Distribution-Optimized Adversarial Patch (DOPatch) is a novel method that optimizes a multimodal distribution of adversarial locations.
DOPatch can generate diverse adversarial samples by characterizing the distribution of adversarial locations.
We evaluate DOPatch on various face recognition and image recognition tasks and demonstrate its superiority and efficiency over existing methods.
- Score: 28.466804363780557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patches are an important means of performing adversarial
attacks in the physical world. To improve the naturalness and aggressiveness of
existing adversarial patches, location-aware patches are proposed, where the
patch's location on the target object is integrated into the optimization
process to perform attacks. Although it is effective, efficiently finding the
optimal location for placing the patches is challenging, especially under the
black-box attack settings. In this paper, we propose the Distribution-Optimized
Adversarial Patch (DOPatch), a novel method that optimizes a multimodal
distribution of adversarial locations instead of individual ones. DOPatch has
several benefits: Firstly, we find that the locations' distributions across
different models are pretty similar, and thus we can achieve efficient
query-based attacks to unseen models using a distributional prior optimized on
a surrogate model. Secondly, DOPatch can generate diverse adversarial samples
by characterizing the distribution of adversarial locations. Thus we can
improve the model's robustness to location-aware patches via carefully designed
Distributional-Modeling Adversarial Training (DOP-DMAT). We evaluate DOPatch on
various face recognition and image recognition tasks and demonstrate its
superiority and efficiency over existing methods. We also conduct extensive
ablation studies and analyses to validate the effectiveness of our method and
provide insights into the distribution of adversarial locations.
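The core idea in the abstract, optimizing a multimodal distribution over patch locations rather than a single location, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the victim model is replaced by a synthetic black-box loss over patch coordinates with two hypothetical "vulnerable" regions, and each mode of a Gaussian mixture is fitted with a simple cross-entropy-style elite search (a stand-in, not the paper's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for querying a black-box model: adversarial loss as a
# function of patch location (x, y) on a 32x32 image, with two
# hypothetical vulnerable regions ("peaks"). A real attack would query
# the victim model here instead.
def adversarial_loss(locs):
    peaks = np.array([[8.0, 8.0], [24.0, 20.0]])
    d = np.linalg.norm(locs[:, None, :] - peaks[None, :, :], axis=2)
    return np.exp(-(d ** 2) / 80.0).sum(axis=1)

# Multimodal location distribution: K isotropic Gaussian components.
K, sigma = 2, 3.0
means = rng.uniform(0.0, 32.0, size=(K, 2))

# Query-based search: refit each component's mean to the elite
# (highest-loss) sampled locations, annealing the exploration scale.
for step in range(100):
    for k in range(K):
        cand = np.clip(means[k] + sigma * rng.standard_normal((32, 2)), 0.0, 32.0)
        elite = cand[np.argsort(adversarial_loss(cand))[-8:]]  # top-8 of 32
        means[k] = elite.mean(axis=0)
    sigma = max(1.0, sigma * 0.95)

# Sampling the optimized distribution yields diverse high-loss locations,
# mirroring the "diverse adversarial samples" claim in the abstract.
locations = np.clip(means[rng.integers(K, size=5)]
                    + sigma * rng.standard_normal((5, 2)), 0.0, 32.0)
```

Because only loss values are consumed, the loop doubles as a query-based black-box scheme; swapping the synthetic loss for queries to a surrogate model would give the distributional prior described above.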
Related papers
- Real-world Adversarial Defense against Patch Attacks based on Diffusion Model [34.86098237949215]
This paper introduces DIFFender, a novel DIFfusion-based DeFender framework to counter adversarial patch attacks.
At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon.
DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework.
arXiv Detail & Related papers (2024-09-14T10:38:35Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Boosting Adversarial Transferability with Learnable Patch-wise Masks [16.46210182214551]
Adversarial examples have attracted widespread attention in security-critical applications because of their transferability across different models.
In this paper, we argue that the model-specific discriminative regions are a key factor causing overfitting to the source model, and thus reducing the transferability to the target model.
To accurately localize these regions, we present a learnable approach to automatically optimize the mask.
arXiv Detail & Related papers (2023-06-28T05:32:22Z)
- DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks [34.86098237949214]
Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models.
This paper introduces DIFFender, a novel defense framework that harnesses the capabilities of a text-guided diffusion model to combat patch attacks.
DIFFender integrates dual tasks of patch localization and restoration within a single diffusion model framework.
arXiv Detail & Related papers (2023-06-15T13:33:27Z)
- Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks [13.19708582519833]
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks.
Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content.
We propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting.
arXiv Detail & Related papers (2022-12-26T02:48:37Z)
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z)
- Defensive Patches for Robust Recognition in the Physical World [111.46724655123813]
Data-end defense improves robustness by operations on input data instead of modifying models.
Previous data-end defenses show low generalization against diverse noises and weak transferability across multiple models.
We propose a defensive patch generation framework to address these problems by helping models better exploit these features.
arXiv Detail & Related papers (2022-04-13T07:34:51Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
Adversarial patches are clearly visible but adversarially crafted rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z)
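The last entry's recipe, which DOPatch's DOP-DMAT also builds on, is a min-max loop: an inner step finds the patch location that hurts the model most, and an outer step trains on images patched at that location. Below is a self-contained toy sketch; the linear classifier, synthetic data, 2x2 occluding patch, and exhaustive location search are all illustrative stand-ins (the cited papers optimize the location rather than enumerating it):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 8x8 "images" with a label that depends on the whole image,
# so a small occluding patch cannot erase the signal entirely.
X = rng.standard_normal((256, 8, 8))
y = (X.sum(axis=(1, 2)) > 0).astype(float)
w = np.zeros(64)                              # linear (logistic) classifier

def logistic_loss(w, imgs, labels):
    z = imgs.reshape(len(imgs), -1) @ w
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(labels * np.log(p + 1e-9)
                    + (1 - labels) * np.log(1 - p + 1e-9))
    grad = imgs.reshape(len(imgs), -1).T @ (p - labels) / len(imgs)
    return loss, grad

def paste(imgs, r, c):
    out = imgs.copy()
    out[:, r:r + 2, c:c + 2] = 0.0            # a 2x2 occluding "patch"
    return out

for epoch in range(200):
    # Inner maximization: the location that maximizes the current loss
    # (exhaustive grid search stands in for location optimization).
    worst = max(((r, c) for r in range(7) for c in range(7)),
                key=lambda rc: logistic_loss(w, paste(X, *rc), y)[0])
    # Outer minimization: gradient step on images patched at that location.
    _, g = logistic_loss(w, paste(X, *worst), y)
    w -= 0.5 * g

clean_acc = ((X.reshape(256, -1) @ w > 0) == (y > 0.5)).mean()
```

The design choice worth noting is that the inner search runs against the *current* model each epoch, so the worst-case location can shift as training hardens previously vulnerable regions.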
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.