Unified Adversarial Patch for Cross-modal Attacks in the Physical World
- URL: http://arxiv.org/abs/2307.07859v2
- Date: Wed, 19 Jul 2023 03:04:50 GMT
- Title: Unified Adversarial Patch for Cross-modal Attacks in the Physical World
- Authors: Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu
- Abstract summary: We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
- Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% against YOLOv3 and 69.17% against Faster RCNN.
- Score: 11.24237636482709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, physical adversarial attacks have been presented to evade
DNN-based object detectors. To ensure security, many scenes are now monitored
by visible and infrared sensors simultaneously, causing these single-modal
physical attacks to fail. To expose the potential risks in such settings, we
propose a unified adversarial patch that performs cross-modal physical
attacks, i.e., fools visible and infrared object detectors at the same time
with a single patch. Since visible and infrared sensors rely on different
imaging mechanisms, our work focuses on modeling the shape of the adversarial
patch, a property whose changes are captured in both modalities. To this end,
we design a novel boundary-limited shape optimization that yields compact,
smooth shapes that can be easily fabricated in the physical world. In
addition, to balance the fooling degree between the visible and infrared
detectors during optimization, we propose a score-aware iterative evaluation
that guides the adversarial patch to iteratively reduce the predicted scores
in both modalities. We finally test our method against a one-stage detector
(YOLOv3) and a two-stage detector (Faster RCNN). Results show that our unified
patch achieves an Attack Success Rate (ASR) of 73.33% against YOLOv3 and
69.17% against Faster RCNN. More importantly, we verify that the attacks
remain effective in the physical world when visible and infrared sensors
capture the objects under various settings, including different angles,
distances, postures, and scenes.
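The score-aware iterative evaluation lends itself to a short sketch. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: `render_patch`, `vis_detector`, and `ir_detector` are hypothetical stand-ins, `shape_params` abstracts the boundary-limited shape parameterization, and "score-aware" is read here as descending on whichever modality's predicted score is currently higher.

```python
# Hypothetical sketch of one score-aware optimization step; all helper
# functions are assumed placeholders, not the paper's code.
import torch

def score_aware_step(shape_params, scene_vis, scene_ir,
                     render_patch, vis_detector, ir_detector, lr=0.01):
    """Attack whichever modality currently resists more, so the fooling
    degree stays balanced between the visible and infrared detectors."""
    shape_params = shape_params.detach().requires_grad_(True)

    # Apply the same physical patch shape to both modalities.
    patched_vis = render_patch(scene_vis, shape_params, modality="visible")
    patched_ir = render_patch(scene_ir, shape_params, modality="infrared")

    # Highest remaining confidence for the target object in each modality.
    s_vis = vis_detector(patched_vis).max()
    s_ir = ir_detector(patched_ir).max()

    # Score-aware choice: descend on the larger score, which iteratively
    # pushes the predicted scores of both sensors down together.
    loss = torch.maximum(s_vis, s_ir)
    loss.backward()
    with torch.no_grad():
        shape_params -= lr * shape_params.grad
    return shape_params.detach(), s_vis.item(), s_ir.item()
```

Repeating this step until both scores fall below the detectors' confidence thresholds is one plausible realization of the iterative reduction described above.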
Related papers
- DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation [12.995762461474856]
We introduce the concept of energy and treat adversarial patch generation as an optimization that minimizes the total energy of the "person" category.
By adopting adversarial training, we construct a dynamically optimized ensemble model.
We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models.
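Read as pseudocode, the energy view admits a compact sketch. The fragment below is an assumption-laden illustration, not the paper's implementation: the "total energy" of the "person" category is modeled as the summed person-class confidence over the detector ensemble, and `apply_patch` plus the detector output shape are hypothetical.

```python
# Hypothetical sketch of an energy-style objective over a detector
# ensemble; names and tensor shapes are assumptions for illustration.
import torch

def total_person_energy(patch, image, ensemble, apply_patch, person_idx=0):
    """Summed person-class confidence across the ensemble; the patch is
    optimized to minimize this "total energy"."""
    x = apply_patch(image, patch)  # composite the patch onto the image
    scores = [det(x)[:, person_idx].max() for det in ensemble]
    return torch.stack(scores).sum()
```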
arXiv Detail & Related papers (2023-12-28T08:58:13Z)
- Two-stage optimized unified adversarial patch for attacking visible-infrared cross-modal detectors in the physical world [0.0]
This work introduces the Two-stage Optimized Unified Adversarial Patch (TOUAP) designed for performing attacks against visible-infrared cross-modal detectors in real-world, black-box settings.
arXiv Detail & Related papers (2023-12-04T10:25:34Z)
- Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World [11.24237636482709]
We design a unified adversarial patch that can perform cross-modal physical attacks, achieving evasion in both modalities simultaneously with a single patch.
We propose a novel boundary-limited shape optimization approach that aims to achieve compact and smooth shapes for the adversarial patch.
Our method is evaluated against several state-of-the-art object detectors, achieving an Attack Success Rate (ASR) of over 80%.
arXiv Detail & Related papers (2023-07-27T08:14:22Z)
- Physically Adversarial Infrared Patches with Learnable Shapes and Locations [1.1172382217477126]
We propose a physically feasible infrared attack method called "adversarial infrared patches".
Since infrared cameras image objects by capturing their thermal radiation, adversarial infrared patches attack by attaching a patch of thermal-insulation material to the target object to manipulate its thermal distribution.
We verify adversarial infrared patches in different object detection tasks with various object detectors.
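A minimal sketch of how a patch's shape and location could be made learnable follows; the helper names are assumptions, not the paper's code.

```python
# Hypothetical sketch: a sigmoid over per-pixel logits yields a soft
# patch shape (near-binary after optimization) pasted at a learnable
# offset; both are trained to suppress the infrared detector's top score.
import torch

def ir_patch_loss(mask_logits, offset, ir_image, ir_detector, paste):
    mask = torch.sigmoid(mask_logits)        # soft shape mask in [0, 1]
    patched = paste(ir_image, mask, offset)  # assumed warp-and-blend helper
    return ir_detector(patched).max()        # objective: minimize this score
```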
arXiv Detail & Related papers (2023-03-24T09:11:36Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior on the object contour.
We show that ASC can corrupt the predictions of 9 modern detectors with different architectures.
We conclude with a caution that the contour is a common weakness of object detectors across various architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images [55.09446477517365]
Advanced Patch Attacks (PAs) on object detection in natural images have pointed out the great safety vulnerability in methods based on deep neural networks.
We propose a more Threatening PA, dubbed TPA, that does not sacrifice visual quality.
To the best of our knowledge, this is the first attempt to study PAs on object detection in O-RSIs, and we hope this work can interest readers in studying this topic.
arXiv Detail & Related papers (2023-02-13T02:35:49Z)
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
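One plausible way to make a patch adaptive to physical dynamics and varying scales is expectation-over-transformation-style averaging; the sketch below is an illustration under that assumption, not the AP-PA implementation.

```python
# Hypothetical EOT-style loss: average the detector's top score over
# random transformations so the patch survives varying capture
# conditions. `random_transform` is an assumed helper that pastes the
# patch with a random scale, rotation, and position.
import torch

def adaptive_patch_loss(patch, image, detector, random_transform, n=8):
    losses = [detector(random_transform(image, patch)).max() for _ in range(n)]
    return torch.stack(losses).mean()
```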
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
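The defense pipeline can be sketched as segment-then-remove. The fragment below is a simplified illustration: the helper names are assumptions, and SAC's shape-completion step is reduced here to plain masking.

```python
# Hypothetical simplification of Segment and Complete: segment the
# suspected patch region, mask it out, then run the protected detector.
import torch

def sac_defend(image, patch_segmenter, detector, threshold=0.5):
    patch_mask = (patch_segmenter(image) > threshold).float()  # 1 where patch
    cleaned = image * (1.0 - patch_mask)  # completion simplified to masking
    return detector(cleaned)
```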
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with diffused patches of asteroid or grid shape.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)