Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection
- URL: http://arxiv.org/abs/2405.07595v1
- Date: Mon, 13 May 2024 09:56:57 GMT
- Title: Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection
- Authors: Dehong Kong, Siyuan Liang, Wenqi Ren,
- Abstract summary: Object detection techniques for Unmanned Aerial Vehicles rely on Deep Neural Networks (DNNs), which are vulnerable to adversarial attacks.
Adversarial patches generated by existing algorithms in the UAV domain pay very little attention to the naturalness of the patches.
We propose a new method named Environmental Matching Attack (EMA) to address the issue of optimizing the adversarial patch under color constraints.
- Score: 37.77615360932841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection techniques for Unmanned Aerial Vehicles (UAVs) rely on Deep Neural Networks (DNNs), which are vulnerable to adversarial attacks. Nonetheless, adversarial patches generated by existing algorithms in the UAV domain pay very little attention to the naturalness of adversarial patches. Moreover, imposing constraints directly on adversarial patches makes it difficult to generate patches that appear natural to the human eye while ensuring a high attack success rate. We notice that patches look natural when their overall color is consistent with the environment. Therefore, we propose a new method named Environmental Matching Attack (EMA) to address the issue of optimizing the adversarial patch under color constraints. To the best of our knowledge, this paper is the first to consider natural patches in the domain of UAVs. The EMA method exploits the strong prior knowledge of a pretrained Stable Diffusion model to guide the optimization direction of the adversarial patch, where the text guidance can restrict the color of the patch. To better match the environment, the contrast and brightness of the patch are appropriately adjusted. Instead of optimizing the adversarial patch itself, we optimize an adversarial perturbation patch initialized to zero, so that the model can better trade off attack performance and naturalness. Experiments conducted on the DroneVehicle and CARPK datasets have shown that our work can reach nearly the same attack performance in the digital attack (no greater than 2 in mAP$\%$), surpass the baseline method in specific physical scenarios, and exhibit a significant advantage in terms of naturalness in visualization and color difference with the environment.
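The trade-off described above can be illustrated with a toy, pure-Python sketch (names and losses are illustrative stand-ins, not the paper's code): a perturbation `delta` initialized to zero is optimized on top of a base patch, balancing a stand-in "attack" loss against a color-matching penalty toward the environment's mean color.

```python
# Toy sketch of the EMA idea: optimize a zero-initialized perturbation
# `delta` over a base patch, trading off attack strength against
# color consistency with the environment. Illustrative only.

def attack_loss(patch):
    # Stand-in for the detector's confidence on the patch; here simply
    # the mean pixel value (lower = stronger "attack").
    return sum(patch) / len(patch)

def color_loss(patch, env_color):
    # Penalize deviation of the patch's mean intensity from the environment.
    mean = sum(patch) / len(patch)
    return (mean - env_color) ** 2

def optimize_delta(base, env_color, lam=1.0, lr=0.05, steps=200, bound=0.2):
    delta = [0.0] * len(base)  # perturbation patch initialized to zero
    eps = 1e-4
    for _ in range(steps):
        for i in range(len(delta)):
            def total(d_i):
                patch = [b + d for b, d in zip(base, delta)]
                patch[i] = base[i] + d_i
                return attack_loss(patch) + lam * color_loss(patch, env_color)
            # finite-difference gradient on coordinate i
            g = (total(delta[i] + eps) - total(delta[i] - eps)) / (2 * eps)
            # keep the perturbation small so the patch stays natural-looking
            delta[i] = max(-bound, min(bound, delta[i] - lr * g))
    return delta

base = [0.8, 0.8, 0.8, 0.8]              # bright base patch
delta = optimize_delta(base, env_color=0.5)
adv = [b + d for b, d in zip(base, delta)]
```

Because the perturbation (not the patch itself) is optimized and bounded, the result stays near the base patch while both the attack loss and the color mismatch with the environment shrink.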
Related papers
- Towards Robust Image Stitching: An Adaptive Resistance Learning against
Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system tend to disrupt correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z) - Random Position Adversarial Patch for Vision Transformers [0.0]
This paper proposes a novel method for generating an adversarial patch (G-Patch).
Instead of directly optimizing the patch using gradients, we employ a GAN-like structure to generate the adversarial patch.
Experiments show the effectiveness of the adversarial patch in achieving universal attacks on vision transformers, both in digital and physical-world scenarios.
arXiv Detail & Related papers (2023-07-09T00:08:34Z) - AdvART: Adversarial Art for Camouflaged Object Detection Attacks [7.7889972735711925]
We propose a novel approach to generate naturalistic and inconspicuous adversarial patches.
Our technique is based on directly manipulating the pixel values in the patch, which gives higher flexibility and larger space.
Our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed in smart cameras at the edge.
arXiv Detail & Related papers (2023-03-03T06:28:05Z) - Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z) - Feasibility of Inconspicuous GAN-generated Adversarial Patches against
Object Detection [3.395452700023097]
In this work, we have evaluated the existing approaches to generate inconspicuous patches.
We have evaluated two approaches to generate naturalistic patches: by incorporating patch generation into the GAN training process and by using the pretrained GAN.
Our experiments have shown that using a pre-trained GAN helps to obtain realistic-looking patches while preserving performance similar to conventional adversarial patches.
arXiv Detail & Related papers (2022-07-15T08:48:40Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs)
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine the sparsity constraint with an additional $l_\infty$ bound on perturbation magnitudes.
We propose a homotopy algorithm to tackle the sparsity constraint and the perturbation bound jointly in one framework.
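A minimal illustrative variant of this setting (a greedy sketch, not the paper's homotopy algorithm) perturbs only the k pixels with the largest gradient magnitude, with each change bounded by eps in the $l_\infty$ norm:

```python
# Greedy sparse attack sketch: perturb only the k highest-gradient
# pixels, each by at most eps. Illustrative, not the homotopy method.

def sparse_attack(x, grad, k=2, eps=0.1):
    # choose the k coordinates where the loss gradient is largest
    idx = sorted(range(len(x)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    adv = list(x)
    for i in idx:
        # step each chosen pixel by eps in the sign of its gradient
        adv[i] += eps if grad[i] > 0 else -eps
    return adv

adv = sparse_attack([0.5, 0.5, 0.5, 0.5], [0.3, -0.9, 0.1, 0.7])
```

Here exactly two pixels change, each by eps, which captures the joint sparsity (few pixels) and magnitude ($l_\infty$) constraints the paper addresses in a unified way.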
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
Adversarial patches are clearly visible, but adversarially crafted, rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z)
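The location-optimization step above can be sketched in a toy 1-D setting (all names and the stand-in loss are illustrative): try the patch at every position, keep the location that maximizes the training loss, then use that worst-case patched input for the adversarial training update.

```python
# Toy sketch of the location-search step in location-optimized patch
# adversarial training: exhaustively place the patch and keep the
# position that maximizes the loss. Illustrative only.

def worst_location(weights, x, patch_val, loss_fn):
    best_loss, best_pos = None, None
    for pos in range(len(x)):
        patched = list(x)
        patched[pos] = patch_val   # overwrite one position with the patch
        loss = loss_fn(weights, patched)
        if best_loss is None or loss > best_loss:
            best_loss, best_pos = loss, pos
    return best_pos, best_loss

# squared error of a linear score against target 0, as a stand-in loss
loss_fn = lambda w, x: sum(wi * xi for wi, xi in zip(w, x)) ** 2
pos, loss = worst_location([1.0, 1.0, 1.0], [0.1, 0.2, 0.3], 1.0, loss_fn)
```

Adversarial training then takes a gradient step on the model using the patched input at `pos`, so the model is trained against the most damaging placement rather than a fixed one.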
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.