REAP: A Large-Scale Realistic Adversarial Patch Benchmark
- URL: http://arxiv.org/abs/2212.05680v2
- Date: Fri, 18 Aug 2023 10:46:35 GMT
- Title: REAP: A Large-Scale Realistic Adversarial Patch Benchmark
- Authors: Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner
- Abstract summary: Adversarial patch attacks present a critical threat to cyber-physical systems that rely on cameras, such as autonomous cars.
We propose the REAP benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images.
Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs.
- Score: 14.957616218042594
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning models are known to be susceptible to adversarial
perturbation. One famous attack is the adversarial patch, a sticker with a
particularly crafted pattern that makes the model incorrectly predict the
object it is placed on. This attack presents a critical threat to
cyber-physical systems that rely on cameras such as autonomous cars. Despite
the significance of the problem, conducting research in this setting has been
difficult; evaluating attacks and defenses in the real world is exceptionally
costly while synthetic data are unrealistic. In this work, we propose the REAP
(REalistic Adversarial Patch) benchmark, a digital benchmark that allows the
user to evaluate patch attacks on real images, and under real-world conditions.
Built on top of the Mapillary Vistas dataset, our benchmark contains over
14,000 traffic signs. Each sign is augmented with a pair of geometric and
lighting transformations, which can be used to apply a digitally generated
patch realistically onto the sign. Using our benchmark, we perform the first
large-scale assessments of adversarial patch attacks under realistic
conditions. Our experiments suggest that adversarial patch attacks may present
a smaller threat than previously believed and that the success rate of an
attack on simpler digital simulations is not predictive of its actual
effectiveness in practice. We release our benchmark publicly at
https://github.com/wagner-group/reap-benchmark.
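To make the mechanism concrete, the sketch below illustrates the kind of per-sign rendering the abstract describes: a geometric transform warps a digitally generated patch onto the sign as it appears in the photograph, and a lighting transform adjusts the patch's colors to match the scene. This is a minimal sketch under stated assumptions; the function and parameter names (apply_patch, corners, alpha, beta) are hypothetical, not the benchmark's actual API, and for simplicity the patch is assumed to cover the whole sign face. See the repository above for the real interface.
```python
# Hypothetical sketch of REAP-style realistic patch application.
# Assumed per-sign annotations (illustrative, not the benchmark's API):
#   corners      - (4, 2) pixel corners of the sign in the scene image,
#                  ordered top-left, top-right, bottom-right, bottom-left
#   alpha, beta  - scalar relighting parameters (affine color transform)
import cv2
import numpy as np

def apply_patch(scene, patch, corners, alpha, beta):
    """Warp a canonical patch onto a photographed sign and relight it."""
    ph, pw = patch.shape[:2]
    sh, sw = scene.shape[:2]
    # Geometric transform: homography from the flat patch/sign frame to
    # the sign's pose in the scene image.
    src = np.float32([[0, 0], [pw, 0], [pw, ph], [0, ph]])
    H = cv2.getPerspectiveTransform(src, np.float32(corners))
    warped = cv2.warpPerspective(patch.astype(np.float32), H, (sw, sh))
    mask = cv2.warpPerspective(np.ones((ph, pw), np.float32), H, (sw, sh))
    mask = mask[..., None]  # broadcast over color channels
    # Lighting transform: affine brightness/contrast adjustment so the
    # digital patch matches the illumination of the real sign.
    relit = np.clip(alpha * warped + beta, 0, 255)
    composite = mask * relit + (1 - mask) * scene.astype(np.float32)
    return composite.astype(np.uint8)
```
A "simpler digital simulation", by contrast, would paste the patch axis-aligned at a fixed location and scale; the gap between the two rendering pipelines is one plausible reason the paper finds naive digital success rates unpredictive of real-world effectiveness.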
Related papers
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models [2.2588953434934416]
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
arXiv Detail & Related papers (2022-06-25T20:05:11Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks known as adversarial attacks.
Even with minimal computation, an attacker can generate adversarial examples (inputs that the model consistently misclassifies as belonging to another class) and undermine the foundations of such algorithms.
We examine two effective defense techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.