Adversarial Color Film: Effective Physical-World Attack to DNNs
- URL: http://arxiv.org/abs/2209.02430v2
- Date: Tue, 23 May 2023 12:29:21 GMT
- Title: Adversarial Color Film: Effective Physical-World Attack to DNNs
- Authors: Chengyin Hu, Weiwen Shi
- Abstract summary: We propose a camera-based physical attack called Adversarial Color Film (AdvCF).
Experiments show the effectiveness of the proposed method in both digital and physical environments.
We examine AdvCF's threat to future vision-based systems and outline promising directions for camera-based physical attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well known that the performance of deep neural networks (DNNs) is
susceptible to subtle interference. To date, camera-based physical adversarial
attacks have received little attention, leaving a gap in physical attack
research. In this paper, we propose a simple and efficient camera-based physical
attack called Adversarial Color Film (AdvCF), which manipulates the physical
parameters of color film to perform attacks. Carefully designed experiments
show the effectiveness of the proposed method in both digital and physical
environments. In addition, experimental results show that the adversarial
samples generated by AdvCF transfer well across models, enabling AdvCF to mount
effective black-box attacks. We also provide guidance on defending against
AdvCF through adversarial training. Finally, we examine the threat AdvCF poses
to future vision-based systems and outline promising directions for
camera-based physical attacks.
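The abstract does not describe how the color film is parameterized or optimized, so the following is only a minimal, hypothetical Python sketch of the general idea behind a camera-based color attack: the film is modeled as a per-channel transmittance multiplied into the captured image, and a simple random search looks for parameters that lower a classifier's confidence in the true label. The ResNet-50 model, the multiplicative film model, the random-search optimizer, the class index 285, and the input file example.jpg are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a color-film-style attack: a per-channel transmittance
# (r, g, b) is multiplied into the image to simulate shooting through a colored
# film, and its parameters are found by random search. Not the AdvCF algorithm.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def apply_color_film(img, transmittance):
    """Simulate a color film over the lens: scale each RGB channel and clamp."""
    return torch.clamp(img * transmittance.view(3, 1, 1), 0.0, 1.0)

def random_search_attack(model, img, true_label, n_trials=300, strength=0.5):
    """Sample film parameters and keep the ones that most reduce the model's
    confidence in the true label (a stand-in for the paper's optimizer)."""
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    best_params, best_conf = None, 1.0
    for _ in range(n_trials):
        # Transmittance close to 1.0 keeps the image recognizable to humans.
        t = 1.0 - strength * torch.rand(3)
        filmed = apply_color_film(img, t)
        with torch.no_grad():
            probs = model(normalize(filmed).unsqueeze(0)).softmax(dim=1)
        conf = probs[0, true_label].item()
        if conf < best_conf:
            best_conf, best_params = conf, t
    return best_params, best_conf

if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    img = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])(
        Image.open("example.jpg").convert("RGB"))  # hypothetical input image
    params, conf = random_search_attack(model, img, true_label=285)  # assumed label
    print(f"best film transmittance {params}, remaining confidence {conf:.3f}")
```

A physical deployment would additionally need to optimize such parameters over varying camera poses and lighting before producing an actual film, but those steps are outside this sketch.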
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything framework.
arXiv Detail & Related papers (2024-08-17T12:46:53Z) - The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks [91.56314751983133]
$A5$ is a framework for crafting a defensive perturbation that guarantees any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z) - Visually Adversarial Attacks and Defenses in the Physical World: A
Survey [27.40548512511512]
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - A Survey on Physical Adversarial Attack in Computer Vision [7.053905447737444]
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted with small malicious perturbations.
With the increasing deployment of DNN-based systems in the real world, strengthening their robustness has become an urgent need.
arXiv Detail & Related papers (2022-09-28T17:23:52Z) - Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z) - Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs [0.0]
In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL).
AdvZL uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object.
In a digital environment, we construct a dataset based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs.
In the physical environment, we manipulate the zoom lens to zoom in and out of the target object, and generate adversarial samples.
arXiv Detail & Related papers (2022-06-23T13:03:08Z) - Practical No-box Adversarial Attacks with Training-free Hybrid Image
Transformation [123.33816363589506]
We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components.
Our method is even competitive to mainstream transfer-based black-box attacks.
arXiv Detail & Related papers (2022-03-09T09:51:00Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Adversarial training may be a double-edged sword [50.09831237090801]
We show that some geometric consequences of adversarial training on the decision boundary of deep networks give an edge to certain types of black-box attacks.
In particular, we define a metric called robustness gain to show that while adversarial training is an effective method to dramatically improve the robustness in white-box scenarios, it may not provide such a good robustness gain against the more realistic decision-based black-box attacks.
arXiv Detail & Related papers (2021-07-24T19:09:16Z) - Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.