Brightness-Restricted Adversarial Attack Patch
- URL: http://arxiv.org/abs/2307.00421v1
- Date: Sat, 1 Jul 2023 20:08:55 GMT
- Title: Brightness-Restricted Adversarial Attack Patch
- Authors: Mingzhen Shao
- Abstract summary: We introduce a brightness-restricted patch (BrPatch) that uses optical characteristics to reduce conspicuousness while preserving image independence.
Our experiments show that attack patches exhibit strong redundancy to brightness and are resistant to color transfer and noise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attack patches have gained increasing attention due to their
practical applicability in physical-world scenarios. However, the bright colors
used in attack patches represent a significant drawback, as they can be easily
identified by human observers. Moreover, even though these attacks have been
highly successful at deceiving target networks, it remains unknown which
specific features of the attack patch contribute to that success. Our paper introduces
a brightness-restricted patch (BrPatch) that uses optical characteristics to
effectively reduce conspicuousness while preserving image independence. We also
conducted an analysis of the impact of various image features (such as color,
texture, noise, and size) on the effectiveness of an attack patch in
physical-world deployment. Our experiments show that attack patches exhibit
strong redundancy to brightness and are resistant to color transfer and noise.
Based on our findings, we propose some additional methods to further reduce the
conspicuousness of BrPatch. Our findings also explain the robustness of attack
patches observed in physical-world scenarios.
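As a concrete illustration of the core idea, here is a minimal sketch that optimizes an adversarial patch while clamping its pixel intensities below a fixed brightness cap. The stock classifier, targeted loss, fixed patch placement, and the `brightness_cap` value are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: adversarial patch optimization under a brightness cap.
# Assumptions (not from the paper): a stock classifier, a targeted
# cross-entropy objective, a fixed patch location, and a simple clamp
# as the brightness restriction.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

brightness_cap = 0.4                       # hypothetical intensity cap in [0, 1]
patch = (torch.rand(3, 50, 50, device=device) * brightness_cap).requires_grad_(True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, y0=10, x0=10):
    """Paste the patch onto a batch of images at a fixed location."""
    patched = images.clone()
    patched[:, :, y0:y0 + patch.shape[1], x0:x0 + patch.shape[2]] = patch
    return patched

images = torch.rand(8, 3, 224, 224, device=device)               # stand-in for real data
target = torch.full((8,), 859, dtype=torch.long, device=device)  # hypothetical target class

for step in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(apply_patch(images, patch)), target)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, brightness_cap)  # keep the patch below the brightness cap
```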
Related papers
- CapGen: An Environment-Adaptive Generator of Adversarial Patches [12.042510965650205]
Adversarial patches, often used to provide physical stealth protection for critical assets, usually neglect the need for visual harmony with the background environment.
We introduce the Camouflaged Adversarial Pattern Generator (CAPGen), a novel approach that leverages specific base colors from the surrounding environment.
This paper is the first to comprehensively examine the roles played by patterns and colors in the context of adversarial patches.
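A minimal sketch of the base-color idea, assuming (as the summary suggests) that dominant colors are extracted from the environment and the patch palette is restricted to them; the k-means extraction and nearest-color quantization are illustrative stand-ins, not CAPGen itself.

```python
# Minimal sketch: restrict a patch's palette to base colors extracted
# from the surrounding environment. The k-means palette and nearest-color
# quantization are illustrative stand-ins, not the CAPGen method.
import numpy as np
from sklearn.cluster import KMeans

def environment_palette(background, k=5):
    """Cluster background pixels into k representative base colors."""
    pixels = background.reshape(-1, 3).astype(np.float32)
    return KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_

def quantize_to_palette(patch, palette):
    """Snap every patch pixel to its nearest environment base color."""
    flat = patch.reshape(-1, 3).astype(np.float32)
    dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)
    return palette[dists.argmin(axis=1)].reshape(patch.shape)

background = np.random.randint(0, 256, (256, 256, 3))  # stand-in environment image
patch = np.random.randint(0, 256, (50, 50, 3))         # stand-in adversarial patch
camouflaged = quantize_to_palette(patch, environment_palette(background))
```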
arXiv Detail & Related papers (2024-12-10T07:24:24Z) - DiffPatch: Generating Customizable Adversarial Patches using Diffusion Models [89.39483815957236]
We propose DiffPatch, a novel diffusion-based framework for generating naturalistic adversarial patches.
Our approach allows users to start from a reference image and incorporates masks to create patches of various shapes, not limited to squares.
Our method achieves attack performance comparable to state-of-the-art non-naturalistic patches while maintaining a natural appearance.
arXiv Detail & Related papers (2024-12-02T12:30:35Z) - CBA: Contextual Background Attack against Optical Aerial Detection in
the Physical World [8.826711009649133]
Patch-based physical attacks have raised increasing concern.
Most existing methods focus on obscuring targets captured on the ground, and some of these methods are simply extended to deceive aerial detectors.
We propose Contextual Background Attack (CBA), a novel physical attack framework against aerial detection, which can achieve strong attack efficacy and transferability in the physical world even without smudging the objects of interest at all.
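A minimal sketch of the masking logic the summary implies: the adversarial pattern is composited only into background pixels, leaving the object of interest untouched. The mask, pattern, and full-background replacement are illustrative placeholders.

```python
# Minimal sketch: place an adversarial pattern in the background only,
# leaving the object pixels untouched. Mask and pattern are placeholders.
import numpy as np

def composite_background(image, pattern, object_mask):
    """Keep the object, replace background pixels with the pattern."""
    mask = object_mask[..., None].astype(np.float32)   # 1 on the object
    return image * mask + pattern * (1.0 - mask)

image = np.random.rand(256, 256, 3)
pattern = np.random.rand(256, 256, 3)                  # adversarial background texture
object_mask = np.zeros((256, 256))
object_mask[100:150, 100:150] = 1.0                    # stand-in object region
attacked = composite_background(image, pattern, object_mask)
```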
arXiv Detail & Related papers (2023-02-27T05:10:27Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in the benign performance.
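The name and summary suggest detecting likely-adversarial pixels and zeroing them out before the task model runs; here is a minimal sketch under that assumption, with a crude heuristic standing in for a trained pixel-level detector.

```python
# Minimal sketch of a zero-out style defense: flag suspected adversarial
# pixels, then set them to zero before the task model sees the input.
# The saturation heuristic is a crude stand-in for a trained detector.
import torch

def naive_patch_detector(images, threshold=0.9):
    """Placeholder detector: flag pixels with unusually high saturation."""
    saturation = images.max(dim=1).values - images.min(dim=1).values
    return (saturation > threshold).unsqueeze(1)       # (N, 1, H, W) bool mask

def zero_out(images, mask):
    """Remove suspected adversarial pixels by zeroing them."""
    return images * (~mask).float()

images = torch.rand(4, 3, 224, 224)                    # stand-in input batch
defended = zero_out(images, naive_patch_detector(images))
```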
arXiv Detail & Related papers (2022-07-05T03:49:08Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
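A minimal sketch of the kind of perturbation involved: darkening a polygonal region to imitate a cast shadow. The fixed triangle vertices and darkening factor are illustrative; the paper instead optimizes the shadow region for attack success.

```python
# Minimal sketch: darken a polygonal region to imitate a cast shadow.
# The fixed triangle and darkness factor are illustrative; the actual
# attack optimizes the shadow's shape and placement.
import numpy as np
import cv2

def add_shadow(image, vertices, darkness=0.5):
    """Scale pixel intensities down inside a polygonal shadow region."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [vertices.astype(np.int32)], 1)
    shadowed = image.astype(np.float32)
    shadowed[mask == 1] *= darkness
    return shadowed.clip(0, 255).astype(image.dtype)

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
triangle = np.array([[30, 40], [180, 60], [90, 200]])
shadowed = add_shadow(image, triangle)
```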
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
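A minimal sketch of the segment-then-remove flow the summary describes: a placeholder segmenter yields a rough patch mask, a morphological dilation stands in for the completion step, and the completed region is blacked out before detection.

```python
# Minimal sketch: segment a rough patch mask, "complete" it (here via
# dilation as a stand-in for shape completion), and black the region out.
import torch
import torch.nn.functional as F

def complete_mask(rough_mask, kernel_size=7):
    """Dilate the rough mask so it covers the whole patch region."""
    pad = kernel_size // 2
    return F.max_pool2d(rough_mask, kernel_size, stride=1, padding=pad)

def remove_patch(images, mask):
    """Black out the completed patch region before running the detector."""
    return images * (1.0 - mask)

images = torch.rand(2, 3, 416, 416)
rough = (torch.rand(2, 1, 416, 416) > 0.999).float()   # placeholder segmenter output
defended = remove_patch(images, complete_mask(rough))
```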
arXiv Detail & Related papers (2021-12-08T19:18:48Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches with one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - PatchGuard: A Provably Robust Defense against Adversarial Patches via
Small Receptive Fields and Masking [46.03749650789915]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image.
We propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.
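A minimal sketch of the aggregation idea: when each spatial cell of the feature map depends only on a small receptive field, masking the highest-evidence window before aggregation bounds how much a localized patch can sway the prediction. The sizes and single-window masking are simplifying assumptions.

```python
# Minimal sketch of robust masking over small-receptive-field features:
# zero out the spatial window with the largest total class evidence, then
# aggregate. Single-window masking is a simplification of the real defense.
import torch

def masked_aggregate(local_logits, window=2):
    """local_logits: (C, H, W) class evidence, each cell from a small region."""
    evidence = local_logits.sum(dim=0)                       # (H, W) total evidence
    sums = evidence.unfold(0, window, 1).unfold(1, window, 1).sum(dim=(-1, -2))
    idx = int(torch.argmax(sums))
    y, x = idx // sums.shape[1], idx % sums.shape[1]
    masked = local_logits.clone()
    masked[:, y:y + window, x:x + window] = 0.0              # suspected patch window
    return masked.sum(dim=(1, 2))                            # aggregated class scores

scores = masked_aggregate(torch.rand(10, 14, 14))            # stand-in feature map
```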
arXiv Detail & Related papers (2020-05-17T03:38:34Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
We consider adversarial patches: clearly visible, but adversarially crafted, rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
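A minimal sketch of such a training loop, using random-placement search as a cheap stand-in for the paper's location optimization; the toy model, fixed patch, and random data are illustrative only.

```python
# Minimal sketch: adversarial training against a placed patch, with random
# location search standing in for the paper's location optimization.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
patch = torch.rand(3, 8, 8)                                      # fixed adversarial patch

def worst_location(images, labels, trials=8):
    """Among random placements, keep the patched batch with the highest loss."""
    (H, W), (ph, pw) = images.shape[-2:], patch.shape[-2:]
    best_loss, best = -1.0, images
    for _ in range(trials):
        y0 = torch.randint(0, H - ph + 1, (1,)).item()
        x0 = torch.randint(0, W - pw + 1, (1,)).item()
        patched = images.clone()
        patched[:, :, y0:y0 + ph, x0:x0 + pw] = patch
        with torch.no_grad():
            loss = F.cross_entropy(model(patched), labels).item()
        if loss > best_loss:
            best_loss, best = loss, patched
    return best

images, labels = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
for step in range(10):                                           # adversarial training
    loss = F.cross_entropy(model(worst_location(images, labels)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```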
arXiv Detail & Related papers (2020-05-05T16:17:00Z) - PatchAttack: A Black-box Texture-based Attack with Reinforcement
Learning [31.255179167694887]
Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification.
Our proposed PatchAttack is query efficient and can break models for both targeted and non-targeted attacks.
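A minimal sketch of a query-based black-box patch attack, with greedy random search in place of the paper's reinforcement-learning agent; the toy model, patch size, and query budget are illustrative.

```python
# Minimal sketch: black-box patch attack via greedy random search (a crude
# stand-in for the paper's RL agent). Keep any patch placement/texture
# that lowers the model's confidence in the true class.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()  # black box
image, label = torch.rand(1, 3, 32, 32), 3

def true_class_confidence(x):
    with torch.no_grad():
        return torch.softmax(model(x), dim=1)[0, label].item()

best_conf, best_image = true_class_confidence(image), image
for query in range(200):                               # query budget
    y0, x0 = torch.randint(0, 24, (2,)).tolist()       # 32 - 8 = 24 valid offsets
    candidate = best_image.clone()
    candidate[:, :, y0:y0 + 8, x0:x0 + 8] = torch.rand(3, 8, 8)  # random texture
    conf = true_class_confidence(candidate)
    if conf < best_conf:                               # greedily keep improvements
        best_conf, best_image = conf, candidate
```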
arXiv Detail & Related papers (2020-04-12T19:31:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.