Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models
- URL: http://arxiv.org/abs/2206.12725v1
- Date: Sat, 25 Jun 2022 20:05:11 GMT
- Title: Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models
- Authors: Gavin S. Hartnett, Li Ang Zhang, Caolionn O'Connell, Andrew J. Lohn,
Jair Aguirre
- Abstract summary: Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
- Score: 2.2588953434934416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patches are images designed to fool otherwise well-performing
neural network-based computer vision models. Although these attacks were
initially conceived and studied digitally, with the raw pixel values of the
image perturbed directly, recent work has demonstrated that they can
successfully transfer to the physical world. This is accomplished by printing
out the patch and inserting it into scenes of newly captured images or video
footage. In this work we further test the efficacy of adversarial patch
attacks in the physical world under more challenging conditions. We consider
object detection models trained on overhead imagery acquired through aerial or
satellite cameras, and we test physical adversarial patches inserted into
scenes of a desert environment. Our main finding is that adversarial patch
attacks are far more difficult to implement successfully under these
conditions than under those considered in prior work. This has important
implications for AI safety, as the real-world threat posed by adversarial
examples may be overstated.
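
The attack pipeline the abstract describes starts with a purely digital optimization: the patch's pixels are trained against the detector, with random placements and transformations applied so the patch has a chance of surviving printing and re-capture. Below is a minimal sketch of such a loop in PyTorch. The detector interface (a `model` returning a scalar detection confidence per image), the patch size, the transform ranges, and the hyperparameters are all illustrative assumptions, not the setup used in the paper.

```python
# Hedged sketch of digital adversarial patch optimization with a crude
# Expectation-over-Transformation step. `model` is an assumed interface:
# it maps a (1, 3, H, W) batch to a scalar detection confidence.
import torch
import torchvision.transforms.functional as TF

def apply_patch(image, patch, x, y):
    """Paste `patch` into `image` with its top-left corner at (x, y)."""
    out = image.clone()
    _, ph, pw = patch.shape
    out[:, y:y + ph, x:x + pw] = patch
    return out

def optimize_patch(model, images, patch_size=50, steps=200, lr=0.03):
    """Optimize a patch that suppresses the detector's confidence."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        losses = []
        for img in images:
            _, h, w = img.shape
            # Random placement plus a mild rotation, so the patch does
            # not overfit to one position or orientation.
            x = int(torch.randint(0, w - patch_size, (1,)))
            y = int(torch.randint(0, h - patch_size, (1,)))
            angle = float(torch.empty(1).uniform_(-15.0, 15.0))
            p = TF.rotate(patch.clamp(0, 1), angle)
            losses.append(model(apply_patch(img, p, x, y).unsqueeze(0)))
        loss = torch.stack(losses).mean()  # minimize mean detection confidence
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep pixel values printable / in valid range
    return patch.detach()
```

An attacker would then print the optimized patch at physical scale and place it in the scene; that digital-to-physical transfer step is exactly what the paper finds difficult for overhead imagery of a desert environment.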
Related papers
- Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors [0.0]
Researchers are exploring patch-based physical attacks, yet traditional approaches, while effective, often result in conspicuous patches covering target objects.
Recent camera-based physical attacks have emerged, leveraging camera patches to execute stealthy attacks.
We propose an Adversarial Camera Patch (ADCP) to address this issue.
arXiv Detail & Related papers (2023-12-11T06:56:50Z)
- AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems [17.03646903905082]
Adversarial attacks, which attempt to digitally deceive the learning strategy of a recognition system, have gained traction.
This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios.
We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art presentation attack detection (PAD) systems.
arXiv Detail & Related papers (2023-11-20T13:28:42Z)
- CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World [8.826711009649133]
Patch-based physical attacks have raised increasing concern.
Most existing methods focus on obscuring targets captured on the ground, and some of these methods have simply been extended to deceive aerial detectors.
We propose Contextual Background Attack (CBA), a novel physical attack framework against aerial detection that achieves strong attack efficacy and transferability in the physical world without altering the objects of interest at all.
arXiv Detail & Related papers (2023-02-27T05:10:27Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial example in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve attackers' ability to induce pixel misclassification.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks; a minimal segment-then-mask sketch in this spirit appears after this list.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Physical Adversarial Attacks on an Aerial Imagery Object Detector [32.99554861896277]
In this work, we present one of the first studies of physical adversarial attacks on aerial imagery.
We devised novel experiments and metrics to evaluate the efficacy of physical adversarial attacks against object detectors in aerial scenes.
Our results indicate a palpable threat posed by physical adversarial attacks to deep neural networks that process satellite imagery.
arXiv Detail & Related papers (2021-08-26T12:53:41Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack capability.
We propose an approach to generate adversarial patches from a single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
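
On the defense side, the Segment and Complete (SAC) entry above proposes detecting patches and removing them before detection. The sketch below illustrates that segment-then-mask idea under stated assumptions: `patch_segmenter` is a hypothetical per-pixel patch classifier, and simple thresholding stands in for SAC's robust patch detection and shape-completion steps, which are described in the paper itself.

```python
# Hedged sketch of a segment-then-mask patch defense in the spirit of SAC.
# The segmenter, threshold, and detector interfaces are illustrative
# assumptions, not the actual SAC pipeline.
import torch

def defend_then_detect(detector, patch_segmenter, image, threshold=0.5):
    """Mask suspected patch pixels before running the detector.

    `patch_segmenter` is assumed to map a (1, 3, H, W) batch to a
    (1, 1, H, W) per-pixel patch probability map; `detector` is any
    object detector taking a (1, 3, H, W) image batch.
    """
    with torch.no_grad():
        patch_prob = patch_segmenter(image.unsqueeze(0)).squeeze(0)  # (1, H, W)
    keep = (patch_prob < threshold).float()  # 1 = keep pixel, 0 = suspected patch
    cleaned = image * keep                   # mask broadcasts over RGB channels
    return detector(cleaned.unsqueeze(0))
```

Zeroing out the suspected region is the crudest possible removal and only illustrates the structure; as the entry's title notes, the defense hinges on the patch detection itself being robust.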
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.