Adversarial Camouflage: Hiding Physical-World Attacks with Natural
Styles
- URL: http://arxiv.org/abs/2003.08757v2
- Date: Mon, 22 Jun 2020 05:15:12 GMT
- Title: Adversarial Camouflage: Hiding Physical-World Attacks with Natural
Styles
- Authors: Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang
- Abstract summary: We propose a novel approach to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers.
AdvCam can also be used to protect private information from being detected by deep learning systems.
- Score: 40.57099683047126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples. Existing works have mostly focused on either digital adversarial
examples created via small and imperceptible perturbations, or physical-world
adversarial examples created with large and less realistic distortions that are
easily identified by human observers. In this paper, we propose a novel
approach, called Adversarial Camouflage (\emph{AdvCam}), to craft and
camouflage physical-world adversarial examples into natural styles that appear
legitimate to human observers. Specifically, \emph{AdvCam} transfers large
adversarial perturbations into customized styles, which are then "hidden" on a
target object or in an off-target background. Experimental evaluation shows that,
in both digital and physical-world scenarios, adversarial examples crafted by
\emph{AdvCam} are well camouflaged and highly stealthy, while remaining
effective in fooling state-of-the-art DNN image classifiers. Hence,
\emph{AdvCam} is a flexible approach that can help craft stealthy attacks to
evaluate the robustness of DNNs. \emph{AdvCam} can also be used to protect
private information from being detected by deep learning systems.
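The abstract above describes the AdvCam mechanism only at a high level: a large adversarial perturbation is transferred into a customized style so that it blends into the target object or its background. The sketch below is a minimal, hedged approximation of that idea in PyTorch, not the authors' implementation: it assumes a Gram-matrix style loss over VGG16 features combined with an untargeted adversarial loss, and the layer indices, loss weight, single-step update, and omission of ImageNet normalization and physical-world transforms are all our own simplifications.

```python
# Minimal sketch (NOT the authors' code) of an AdvCam-style objective: an
# adversarial term pushes the classifier away from the true label while a
# Gram-matrix style term keeps the perturbation looking like a chosen texture.
# ImageNet normalization and physical-world (EOT) transforms are omitted.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
classifier = models.resnet50(weights="IMAGENET1K_V1").to(device).eval()
vgg = models.vgg16(weights="IMAGENET1K_V1").features.to(device).eval()
STYLE_LAYERS = [3, 8, 15, 22]           # illustrative VGG16 layer indices (assumption)

def style_features(x):
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in STYLE_LAYERS:
            feats.append(h)
    return feats

def gram(f):
    b, c, hh, ww = f.shape
    f = f.reshape(b, c, hh * ww)
    return f @ f.transpose(1, 2) / (c * hh * ww)

def advcam_like_step(x_adv, target_grams, y_true, lr=0.01, lam_style=1e4):
    """One signed-gradient step on: adversarial loss + lam_style * style loss."""
    x_adv = x_adv.clone().requires_grad_(True)
    adv_loss = -F.cross_entropy(classifier(x_adv), y_true)          # untargeted attack
    style_loss = sum(F.mse_loss(gram(f), g)
                     for f, g in zip(style_features(x_adv), target_grams))
    (adv_loss + lam_style * style_loss).backward()
    return (x_adv - lr * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage: precompute Gram targets from a style (camouflage) image, then iterate.
x = torch.rand(1, 3, 224, 224, device=device)        # stand-in for the object image
x_style = torch.rand(1, 3, 224, 224, device=device)  # stand-in for the camouflage texture
y = torch.tensor([285], device=device)               # arbitrary ImageNet label
with torch.no_grad():
    target_grams = [gram(f) for f in style_features(x_style)]
for _ in range(10):
    x = advcam_like_step(x, target_grams, y)
```

In practice one would initialize x from the region to be camouflaged, supply the chosen camouflage texture as the style image, and iterate until the classifier's prediction flips while the styled region still looks natural.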
Related papers
- Imperceptible Adversarial Examples in the Physical World [10.981325924844167]
We make adversarial examples imperceptible in the physical world using a straight-through estimator (STE, a.k.a. BPDA); a minimal sketch of the STE trick appears after this list.
Our differentiable rendering extension to STE also enables imperceptible adversarial patches in the physical world.
To the best of our knowledge, this is the first work demonstrating imperceptible adversarial examples bounded by small norms in the physical world.
arXiv Detail & Related papers (2024-11-25T18:02:23Z) - F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of
Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features of natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z) - Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of
Perturbation and AI Techniques [1.0718756132502771]
Adversarial examples are subtle perturbations artfully injected into clean images or videos.
Deepfakes have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures.
This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms.
arXiv Detail & Related papers (2023-02-22T23:48:19Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial example in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the attacker's ability to induce pixel misclassification.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - Preemptive Image Robustification for Protecting Users against
Man-in-the-Middle Adversarial Attacks [16.017328736786922]
A Man-in-the-Middle adversary maliciously intercepts and perturbs images web users upload online.
This type of attack can raise severe ethical concerns on top of simple performance degradation.
We devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations; a hedged sketch of this bi-level idea appears after this list.
arXiv Detail & Related papers (2021-12-10T16:06:03Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations
with Perceptual Similarity [5.03315505352304]
Adversarial examples are malicious images with visually imperceptible perturbations.
We propose the Demiguise Attack, crafting "unrestricted" perturbations with Perceptual Similarity; a simplified sketch of this idea appears after this list.
We extend widely used attacks with our approach, substantially enhancing adversarial effectiveness while preserving imperceptibility.
arXiv Detail & Related papers (2021-07-03T10:14:01Z) - Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter the argument that deep visual representations are misaligned with human perception by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z) - Dual Attention Suppression Attack: Generate Adversarial Camouflage in
Physical World [33.63565658548095]
Motivated by the viewpoint that attention reflects the intrinsic characteristics of the recognition process, this paper proposes the Dual Attention Suppression (DAS) attack.
We generate transferable adversarial camouflages by diverting the attention patterns shared across models from the target to non-target regions; a loose attention-suppression sketch appears after this list.
Based on the fact that human visual attention focuses on salient items, we evade human bottom-up attention to generate visually natural camouflages.
arXiv Detail & Related papers (2021-03-01T14:46:43Z)
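The sketches below loosely illustrate a few of the techniques named in the related papers above; each is our own simplified approximation under stated assumptions, not the respective authors' code. First, the straight-through estimator (STE/BPDA) mentioned in "Imperceptible Adversarial Examples in the Physical World": the forward pass applies a non-differentiable operation (here, 8-bit quantization as an assumed stand-in for the camera/rendering pipeline) while the backward pass treats it as the identity.

```python
# Hedged sketch of a straight-through estimator (BPDA-style): the forward pass
# applies a non-differentiable transform; the backward pass pretends it was the
# identity, so gradients still flow to the input being attacked.
import torch

class QuantizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.round(x * 255.0) / 255.0   # stand-in for a non-differentiable pipeline step

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                       # straight-through: identity gradient

x = torch.rand(1, 3, 224, 224, requires_grad=True)
loss = QuantizeSTE.apply(x).sum()
loss.backward()
print(x.grad.unique())                           # all ones, although round() has zero gradient almost everywhere
```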
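Next, a rough sketch of the bi-level idea described in "Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks": an inner loop searches for a worst-case perturbation around the current candidate, and an outer loop moves the candidate so that this worst case is still classified correctly while staying close to the original image. Step sizes, budgets, and the first-order treatment of the inner solution are our assumptions, not the paper's algorithm.

```python
# Hedged sketch of a bi-level "preemptive robustification" loop (our simplification):
# the inner loop finds a worst-case L_inf perturbation around the candidate x_r,
# the outer loop nudges x_r so that this worst case is classified correctly,
# while x_r is kept within a small budget of the original image x.
import torch
import torch.nn.functional as F

def inner_pgd(model, x_r, y, eps=8 / 255, steps=5, alpha=2 / 255):
    delta = torch.zeros_like(x_r)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x_r + delta).clamp(0, 1)), y)
        g = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() + alpha * g.sign()).clamp(-eps, eps)   # ascend: worst case
    return delta

def robustify(model, x, y, outer_steps=10, outer_lr=1 / 255, budget=8 / 255):
    """x: [B,3,H,W] in [0,1] with requires_grad=False; y: true labels."""
    x_r = x.clone()
    for _ in range(outer_steps):
        delta = inner_pgd(model, x_r, y)                               # inner maximization
        x_r.requires_grad_(True)
        loss = F.cross_entropy(model((x_r + delta).clamp(0, 1)), y)
        g = torch.autograd.grad(loss, x_r)[0]                          # treat delta as fixed (first-order)
        x_r = x_r.detach() - outer_lr * g.sign()                       # descend: make worst case correct
        x_r = (x + (x_r - x).clamp(-budget, budget)).clamp(0, 1)       # stay near the original image
    return x_r
```

Here `model` stands for any differentiable classifier over [0,1] images; the robustified image would be published in place of the original so that later man-in-the-middle perturbations are less effective.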
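For the Demiguise Attack entry, the stated ingredient is a perceptual-similarity constraint on otherwise unrestricted perturbations. As a crude stand-in for the perceptual metric, the sketch below uses a single-window (global) SSIM term added to an untargeted adversarial loss; the paper's actual similarity measure and optimization are not reproduced here.

```python
# Rough sketch (our simplification): maximize classification loss while keeping a
# global SSIM term high, so the perturbation stays perceptually close to the original.
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image (a crude perceptual-similarity proxy)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def perceptual_attack_step(model, x_adv, x_orig, y, lr=0.005, lam=10.0):
    """model: any classifier over [B,3,H,W] images in [0,1]; y: true labels."""
    x_adv = x_adv.clone().requires_grad_(True)
    loss = -F.cross_entropy(model(x_adv), y) - lam * global_ssim(x_adv, x_orig)
    loss.backward()
    return (x_adv - lr * x_adv.grad.sign()).clamp(0, 1).detach()
```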
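Finally, for the Dual Attention Suppression (DAS) entry, the description amounts to pushing the model's attention off the target region. The sketch below is a loose illustration only: it uses the channel-mean of ResNet-50's last convolutional feature map as an attention proxy (the paper's attention mechanism, transferability treatment, and camouflage rendering are not reproduced) and penalizes the attention mass that falls inside a binary object mask.

```python
# Loose illustration (our simplification, not the DAS implementation): treat the
# channel-mean of an intermediate feature map as an "attention" map and penalize
# the fraction of attention that falls inside the object mask, so optimization
# drives the model's attention from the target region to the background.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1").eval()
feature_net = torch.nn.Sequential(*list(backbone.children())[:-2])   # output: [B, 2048, h, w]

def on_target_attention(x, mask):
    """x: [B,3,H,W] image in [0,1]; mask: [B,1,H,W] binary mask of the target object."""
    feat = feature_net(x)
    attn = feat.relu().mean(dim=1, keepdim=True)                      # crude attention proxy
    attn = attn / (attn.sum(dim=(2, 3), keepdim=True) + 1e-8)         # normalize to a distribution
    mask_small = F.interpolate(mask, size=attn.shape[-2:], mode="nearest")
    return (attn * mask_small).sum()                                  # attention mass on the target

x = torch.rand(1, 3, 224, 224, requires_grad=True)
mask = torch.zeros(1, 1, 224, 224)
mask[:, :, 64:160, 64:160] = 1.0                                      # hypothetical object region
loss = on_target_attention(x, mask)
loss.backward()                                                       # descending this loss moves attention off the target
```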