Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs
- URL: http://arxiv.org/abs/2206.01034v2
- Date: Tue, 23 May 2023 09:39:05 GMT
- Title: Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs
- Authors: Chengyin Hu, Yilong Wang, Kalibinuer Tiliwalidi, Wen Li
- Abstract summary: We propose a light-based physical attack, called adversarial laser spot (AdvLS)
It optimizes the physical parameters of laser spots through a genetic algorithm to perform physical attacks.
It is the first light-based physical attack that performs physical attacks in the daytime.
- Score: 15.620269826381437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing deep neural networks (DNNs) are easily disturbed by slight
noise. However, there has been little research on physical attacks that deploy
lighting equipment. Light-based physical attacks have excellent covertness,
which brings great security risks to many vision-based applications (such as
self-driving). Therefore, we propose a light-based physical attack, called
adversarial laser spot (AdvLS), which optimizes the physical parameters of
laser spots through a genetic algorithm to perform physical attacks. It realizes
robust and covert physical attacks using low-cost laser equipment. As far as
we know, AdvLS is the first light-based physical attack that performs physical
attacks in the daytime. A large number of experiments in digital and
physical environments show that AdvLS has excellent robustness and covertness.
In addition, through in-depth analysis of the experimental data, we find that
the adversarial perturbations generated by AdvLS have superior attack
transferability. The experimental results show that AdvLS imposes serious
interference on advanced DNNs; we therefore call for attention to the threat posed by AdvLS.
The code of AdvLS is available at: https://github.com/ChengYinHu/AdvLS
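The abstract describes optimizing the physical parameters of a laser spot with a genetic algorithm. Below is a minimal, illustrative sketch of that idea: a spot parameterized by position, radius, color, and intensity is rendered onto an image, and a simple genetic algorithm evolves those parameters to lower a classifier's confidence in the true label. The parameterization, bounds, fitness function, and the predict_fn placeholder are assumptions made for illustration, not the authors' released implementation (see the AdvLS repository linked above for that).

import numpy as np

rng = np.random.default_rng(0)

def render_spot(image, params):
    # Additively blend a circular "laser spot" described by
    # (x, y, radius, r, g, b, intensity) onto an HxWx3 uint8 image.
    x, y, radius, r, g, b, intensity = params
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    out = image.astype(np.float32)
    out[mask] = np.clip(out[mask] + intensity * np.array([r, g, b]), 0, 255)
    return out.astype(np.uint8)

def fitness(params, image, label, predict_fn):
    # Untargeted attack: the lower the true-class probability, the fitter.
    # predict_fn is a placeholder returning class probabilities for an image.
    return -predict_fn(render_spot(image, params))[label]

def genetic_attack(image, label, predict_fn, pop=20, gens=50):
    # Assumed search bounds for x, y, radius, r, g, b, intensity.
    h, w, _ = image.shape
    lo = np.array([0, 0, 5, 0, 0, 0, 0.1])
    hi = np.array([w - 1, h - 1, min(h, w) / 2, 255, 255, 255, 1.0])
    population = rng.uniform(lo, hi, size=(pop, 7))
    for _ in range(gens):
        scores = np.array([fitness(p, image, label, predict_fn) for p in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]        # keep the fittest half
        pairs = rng.integers(0, len(parents), size=(pop - len(parents), 2))
        cross = rng.random((len(pairs), 7)) < 0.5                         # uniform crossover
        children = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        children += rng.normal(0.0, 0.05, children.shape) * (hi - lo)     # Gaussian mutation
        population = np.clip(np.vstack([parents, children]), lo, hi)
    best = max(population, key=lambda p: fitness(p, image, label, predict_fn))
    return best, render_spot(image, best)

In practice the genetic operators, parameter ranges, and fitness would follow the AdvLS paper and repository; this sketch only conveys the overall structure of the black-box search.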
Related papers
- The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks [91.56314751983133]
$A5$ is a framework that crafts a defensive perturbation to guarantee that any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs [0.0]
In this study, we introduce a novel physical attack, adversarial catoptric light (AdvCL), where adversarial perturbations are generated using a common natural phenomenon, catoptric light.
We evaluate the proposed method in three aspects: effectiveness, stealthiness, and robustness.
We achieve an attack success rate of 83.5%, surpassing the baseline.
arXiv Detail & Related papers (2022-09-19T12:33:46Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP)
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Adversarial Color Film: Effective Physical-World Attack to DNNs [0.0]
We propose a camera-based physical attack called Adversarial Color Film (AdvCF)
Experiments show the effectiveness of the proposed method in both digital and physical environments.
We look into AdvCF's threat to future vision-based systems and propose some promising directions for camera-based physical attacks.
arXiv Detail & Related papers (2022-09-02T08:22:32Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarial Neon Beam: A Light-based Physical Attack to DNNs [17.555617901536404]
In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB)
Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness.
By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, enabling physical samples to appear more natural.
arXiv Detail & Related papers (2022-04-02T12:57:00Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink [15.54571899946818]
We show by simply using a laser beam that deep neural networks (DNNs) are easily fooled.
We propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which enables manipulation of the laser beam's physical parameters to perform adversarial attacks.
arXiv Detail & Related papers (2021-03-11T07:03:21Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- RayS: A Ray Searching Method for Hard-label Adversarial Attack [99.72117609513589]
We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
RayS attack can also be used as a sanity check for possible "falsely robust" models.
arXiv Detail & Related papers (2020-06-23T07:01:50Z)