Developing Imperceptible Adversarial Patches to Camouflage Military
Assets From Computer Vision Enabled Technologies
- URL: http://arxiv.org/abs/2202.08892v1
- Date: Thu, 17 Feb 2022 20:31:51 GMT
- Title: Developing Imperceptible Adversarial Patches to Camouflage Military
Assets From Computer Vision Enabled Technologies
- Authors: Christopher Wise, Jo Plested
- Abstract summary: Convolutional neural networks (CNNs) have demonstrated rapid progress and a high level of success in object detection.
Recent evidence has highlighted their vulnerability to adversarial attacks.
We present a unique method that produces imperceptible patches capable of camouflaging large military assets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have demonstrated rapid progress and a
high level of success in object detection. However, recent evidence has
highlighted their vulnerability to adversarial attacks. These attacks are
calculated image perturbations or adversarial patches that result in object
misclassification or detection suppression. Traditional camouflage methods are
impractical when applied to disguise aircraft and other large mobile assets
from autonomous detection in intelligence, surveillance and reconnaissance
technologies and fifth generation missiles. In this paper we present a unique
method that produces imperceptible patches capable of camouflaging large
military assets from computer vision-enabled technologies. We developed these
patches by maximising object detection loss whilst limiting the patch's colour
perceptibility. This work also aims to further the understanding of adversarial
examples and their effects on object detection algorithms.
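The core of the method, as described above, is a two-term optimisation: ascend the detector's loss so the asset is missed, while penalising colours that would make the patch conspicuous. Below is a minimal PyTorch-style sketch of that idea; the `detection_loss_fn` callable, the fixed patch placement, and the squared-distance-from-background perceptibility proxy are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def apply_patch(image, patch, top=0, left=0):
    """Paste the patch onto a copy of the image at a fixed location (illustrative)."""
    out = image.clone()
    h, w = patch.shape[-2:]
    out[..., top:top + h, left:left + w] = patch
    return out

def optimise_patch(detection_loss_fn, image, patch_shape, background_colour,
                   steps=500, lr=0.01, perceptibility_weight=0.1):
    """Maximise detection loss while keeping the patch close to the background colour."""
    patch = torch.rand(patch_shape, requires_grad=True)  # e.g. (3, h, w) in [0, 1]
    optimiser = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(image, patch)
        # Negate the detection loss: minimising this term maximises detector error
        loss = -detection_loss_fn(patched)
        # Perceptibility proxy (assumption): squared colour distance from the asset
        loss = loss + perceptibility_weight * ((patch - background_colour) ** 2).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return patch.detach()
```

Tuning `perceptibility_weight` trades off attack strength against how visible the patch is, which is the central tension the paper explores.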
Related papers
- TPatch: A Triggered Physical Adversarial Patch
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid arousing the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen the patch.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors
Researchers are exploring patch-based physical attacks, yet traditional approaches, while effective, often result in conspicuous patches covering target objects.
Recent camera-based physical attacks have emerged, leveraging camera patches to execute stealthy attacks.
We propose an Adversarial Camera Patch (ADCP) to address this issue.
arXiv Detail & Related papers (2023-12-11T06:56:50Z)
- Adversarially-Aware Robust Object Detector
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- The Weaknesses of Adversarial Camouflage in Overhead Imagery
We build a library of 24 adversarial patches to disguise four different object classes: bus, car, truck, van.
We show that while adversarial patches may fool object detectors, the presence of such patches is often easily uncovered.
This raises the question of whether such patches truly constitute camouflage.
arXiv Detail & Related papers (2022-07-06T20:39:21Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
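A hedged sketch of the patch-agnostic masking idea behind the ObjectSeeker entry above: run the base detector on copies of the image with strips masked out, then pool the surviving detections, since any mask that fully covers the patch neutralises it. The masking grid, pooling rule, and `detect` callable are assumptions for illustration; the paper's certification analysis is considerably more involved.

```python
import torch

def masked_detections(detect, image, num_strips=4):
    """Pool detections from strip-masked copies of `image` (shape C, H, W)."""
    _, H, W = image.shape
    pooled = []
    for i in range(num_strips):
        # Mask one horizontal strip
        masked = image.clone()
        masked[:, i * H // num_strips:(i + 1) * H // num_strips, :] = 0.0
        pooled.extend(detect(masked))
        # Mask one vertical strip
        masked = image.clone()
        masked[:, :, i * W // num_strips:(i + 1) * W // num_strips] = 0.0
        pooled.extend(detect(masked))
    # An object hidden by a patch reappears in any copy whose mask covers
    # the patch, so the pooled set defeats patch hiding attacks
    return pooled
```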
- Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks
We develop a novel defense method using temporal shuffling of input videos against adversarial attacks for action recognition models.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
arXiv Detail & Related papers (2021-12-15T06:57:01Z)
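The temporal-shuffling defence described in the entry above lends itself to a very small sketch: permute the frame order of the input clip before inference, with no retraining of the model. The random permutation policy and tensor layout below are illustrative assumptions.

```python
import torch

def temporally_shuffled_predict(model, clip):
    """clip: tensor of shape (C, T, H, W); model: a 3D-CNN action classifier."""
    T = clip.shape[1]
    perm = torch.randperm(T)            # random temporal permutation
    shuffled = clip[:, perm, :, :]      # reorder frames along the time axis
    with torch.no_grad():
        return model(shuffled.unsqueeze(0))  # add a batch dimension
```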
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
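As a rough illustration of the adversarial-training remedy the entry above reports, here is a standard PGD-style training step (the feature-denoising module and multi-sensor details are omitted; `model`, `loss_fn`, and the hyperparameters are generic assumptions, not the authors' setup).

```python
import torch

def pgd_adversarial_step(model, loss_fn, optimiser, x, y,
                         eps=8 / 255, alpha=2 / 255, pgd_steps=7):
    # Inner maximisation: search for a worst-case perturbation in an L-inf ball
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):
        loss_fn(model(x + delta), y).backward()
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    # Outer minimisation: update the model on the adversarial example
    optimiser.zero_grad()
    loss_fn(model((x + delta).detach()), y).backward()
    optimiser.step()
```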
- Adversarial Patch Camouflage against Aerial Detection
Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage.
In this work, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance.
Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities.
arXiv Detail & Related papers (2020-08-31T15:21:50Z)
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection
We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
In this paper, we use an evolutionary search strategy and adversarial machine learning in interactions with a photo-realistic simulated environment.
The proposed camouflages are validated as effective against most state-of-the-art object detectors.
arXiv Detail & Related papers (2020-08-19T06:16:10Z)
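A minimal sketch of the evolutionary search described in the entry above, assuming a `render_scene` stand-in for the photo-realistic simulator and a `detector_score` callable returning the detector's confidence (a float) on the camouflaged asset; the mutation and selection scheme is a generic choice, not the paper's exact algorithm.

```python
import torch

def evolve_camouflage(render_scene, detector_score, texture_shape,
                      pop_size=16, generations=100, sigma=0.05):
    population = [torch.rand(texture_shape) for _ in range(pop_size)]
    for _ in range(generations):
        # Lower detector confidence on the rendered scene = fitter camouflage
        fitness = [-detector_score(render_scene(t)) for t in population]
        ranked = [t for _, t in sorted(zip(fitness, population),
                                       key=lambda pair: pair[0], reverse=True)]
        parents = ranked[:pop_size // 2]
        # Refill the population with Gaussian mutations of the best textures
        children = [(p + sigma * torch.randn_like(p)).clamp(0.0, 1.0)
                    for p in parents]
        population = parents + children
    return population[0]  # best texture from the last selection step
```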
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks on such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.