Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices
- URL: http://arxiv.org/abs/2106.15202v1
- Date: Tue, 29 Jun 2021 09:39:34 GMT
- Title: Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices
- Authors: Tao Bai, Jinqi Luo, Jun Zhao
- Abstract summary: A variant of adversarial examples, the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches with one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
- Score: 8.437172062224034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based image recognition systems have been widely deployed on
mobile devices in today's world. Recent studies, however, have shown that deep
learning models are vulnerable to adversarial examples. One variant of
adversarial examples, the adversarial patch, has drawn researchers' attention
due to its strong attack ability. Although adversarial patches achieve high
attack success rates, they are easily detected because of the visual
inconsistency between the patches and the original images. In addition,
adversarial patch generation in the literature usually requires a large amount
of data, which is computationally expensive and time-consuming. To tackle these challenges, we
propose an approach to generate inconspicuous adversarial patches with one
single image. In our approach, we first select the patch locations based on the
perceptual sensitivity of victim models, then produce adversarial patches in a
coarse-to-fine manner using multi-scale generators and discriminators.
Adversarial training encourages the patches to be consistent with the
background images while preserving strong attack ability. Extensive experiments
on various models with different architectures and training methods show that
our approach has strong attack ability in white-box settings and excellent
transferability in black-box settings. Compared to other adversarial patches,
ours carry the lowest risk of being detected and can evade human observation,
as supported by saliency-map illustrations and user-evaluation results. Lastly,
we show that our adversarial patches can be applied in the physical world.
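To make the two-stage recipe more concrete, here is a minimal sketch assuming a PyTorch victim classifier and a single (C, H, W) image in [0, 1]. It picks the most gradient-sensitive window as the patch location and then optimizes the patch pixels directly, with a simple L2 term standing in for the paper's GAN-based background-consistency objective; the actual method trains multi-scale generators and discriminators rather than doing plain pixel optimization, so all names and hyperparameters below are illustrative.

```python
import torch
import torch.nn.functional as F

def sensitivity_location(model, image, label, patch=32):
    """Pick the patch x patch window where input gradients are largest."""
    image = image.detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    grad, = torch.autograd.grad(loss, image)
    sal = grad.abs().sum(dim=0, keepdim=True).unsqueeze(0)   # (1, 1, H, W)
    scores = F.avg_pool2d(sal, patch, stride=1)              # mean saliency per window
    idx = scores.flatten().argmax()
    w = scores.shape[-1]
    return int(idx // w), int(idx % w)                       # top-left (row, col)

def make_patch(model, image, label, patch=32, steps=300, lr=0.03, beta=1.0):
    """Optimize a patch at the most sensitive location; beta trades attack vs. consistency."""
    for p in model.parameters():                             # the victim model is frozen
        p.requires_grad_(False)
    y, x = sensitivity_location(model, image, label, patch)
    background = image[:, y:y+patch, x:x+patch].clone()
    delta = torch.zeros_like(background, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        patched = image.clone()
        patched[:, y:y+patch, x:x+patch] = (background + delta).clamp(0, 1)
        logits = model(patched.unsqueeze(0))
        # untargeted attack: push the true class down while keeping the patch
        # close to the background it covers
        loss = -F.cross_entropy(logits, label.unsqueeze(0)) + beta * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patched.detach(), (y, x)
```

In the method described above, a cascade of generators and discriminators refines the patch from coarse to fine scales, and the discriminators, trained adversarially against the generators, play the role of the simple consistency term in this sketch.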
Related papers
- Towards Robust Image Stitching: An Adaptive Resistance Learning against
Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z) - Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z) - Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models [2.2588953434934416]
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
arXiv Detail & Related papers (2022-06-25T20:05:11Z) - On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for
Object Detectors [12.946967210071032]
Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch, which attacks an object detector with texture-consistent adversarial patches.
arXiv Detail & Related papers (2021-09-30T14:47:29Z) - Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attack ability in both white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
This work studies adversarial patches: clearly visible, yet adversarially crafted, rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.