The Weaknesses of Adversarial Camouflage in Overhead Imagery
- URL: http://arxiv.org/abs/2207.02963v1
- Date: Wed, 6 Jul 2022 20:39:21 GMT
- Title: The Weaknesses of Adversarial Camouflage in Overhead Imagery
- Authors: Adam Van Etten
- Abstract summary: We build a library of 24 adversarial patches to disguise four different object classes: bus, car, truck, van.
We show that while adversarial patches may fool object detectors, the presence of such patches is often easily uncovered.
This raises the question of whether such patches truly constitute camouflage.
- Score: 7.724233098666892
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning is increasingly critical for analysis of the ever-growing
corpora of overhead imagery. Advanced computer vision object detection
techniques have demonstrated great success in identifying objects of interest
such as ships, automobiles, and aircraft from satellite and drone imagery. Yet
relying on computer vision opens up significant vulnerabilities, namely, the
susceptibility of object detection algorithms to adversarial attacks. In this
paper we explore the efficacy and drawbacks of adversarial camouflage in an
overhead imagery context. While a number of recent papers have demonstrated the
ability to reliably fool deep learning classifiers and object detectors with
adversarial patches, most of this work has been performed on relatively uniform
datasets and only a single class of objects. In this work we utilize the
VisDrone dataset, which has a large range of perspectives and object sizes. We
explore four different object classes: bus, car, truck, van. We build a library
of 24 adversarial patches to disguise these objects, and introduce a patch
translucency variable to our patches. The translucency (or alpha value) of the
patches is highly correlated with their efficacy. Further, we show that while
adversarial patches may fool object detectors, the presence of such patches is
often easily uncovered, with patches on average 24% more detectable than the
objects the patches were meant to hide. This raises the question of whether
such patches truly constitute camouflage. Source code is available at
https://github.com/IQTLabs/camolo.
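As a minimal sketch of the translucency idea, assuming the patch is simply alpha-blended onto the pixels it covers (the function name apply_patch and its arguments are hypothetical illustrations, not taken from the camolo code):
```python
import numpy as np

def apply_patch(image, patch, x, y, alpha=0.8):
    """Alpha-blend an adversarial patch onto `image` at top-left (x, y).

    `alpha` is the patch translucency: 1.0 pastes the patch opaquely,
    0.0 leaves the underlying image untouched. Both arrays are HxWxC
    uint8, and the patch must fit inside the image at (x, y).
    """
    h, w = patch.shape[:2]
    region = image[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * patch.astype(np.float32) + (1.0 - alpha) * region
    image[y:y + h, x:x + w] = np.clip(blended, 0, 255).astype(np.uint8)
    return image
```
One plausible reading of the paper's finding is the trade-off this blending exposes: a higher alpha injects more of the adversarial signal, but also makes the patch itself more conspicuous.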
Related papers
- Network transferability of adversarial patches in real-time object detection [3.237380113935023]
Adversarial patches in computer vision can be used to fool deep neural networks and manipulate their decision-making process.
This paper investigates the transferability of adversarial patches across numerous object detector architectures.
arXiv Detail & Related papers (2024-08-28T14:47:34Z) - Towards Deeper Understanding of Camouflaged Object Detection [64.81987999832032]
We argue that the binary segmentation setting fails to fully capture the concept of camouflage.
We present the first triple-task learning framework to simultaneously localize, segment and rank camouflaged objects.
arXiv Detail & Related papers (2022-05-23T14:26:18Z) - Developing Imperceptible Adversarial Patches to Camouflage Military
Assets From Computer Vision Enabled Technologies [0.0]
Convolutional neural networks (CNNs) have demonstrated rapid progress and a high level of success in object detection.
Recent evidence has highlighted their vulnerability to adversarial attacks.
We present a unique method that produces imperceptible patches capable of camouflaging large military assets.
arXiv Detail & Related papers (2022-02-17T20:31:51Z) - ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding
Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z) - Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z) - Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z) - You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for
Object Detectors [12.946967210071032]
Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch, which attacks an object detector with texture-consistent adversarial patches.
arXiv Detail & Related papers (2021-09-30T14:47:29Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches from a single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - The Translucent Patch: A Physical and Universal Attack on Object
Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z) - DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with asteroid-shaped or grid-shaped diffused patches.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z) - Adversarial Patch Camouflage against Aerial Detection [2.3268622345249796]
Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage.
In this work, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance.
Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities.
arXiv Detail & Related papers (2020-08-31T15:21:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.