RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical
World
- URL: http://arxiv.org/abs/2307.07653v1
- Date: Fri, 14 Jul 2023 23:10:56 GMT
- Title: RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical
World
- Authors: Donghua Wang, Wen Yao, Tingsong Jiang, Chao Li, Xiaoqian Chen
- Abstract summary: We propose a novel Reflected Light Attack (RFLA) against deep neural networks (DNNs).
RFLA is implemented by placing a colored transparent plastic sheet and a paper cut-out of a specific shape in front of a mirror to create differently colored geometries on the target object.
Experimental results suggest that the proposed method achieves a success rate of over 99% on different datasets and models in the digital world.
- Score: 6.347998407798736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical adversarial attacks against deep neural networks (DNNs) have
recently gained increasing attention. The current mainstream physical attacks
use printed adversarial patches or camouflage to alter the appearance of the
target object. However, these approaches generate conspicuous adversarial
patterns that show poor stealthiness. Another physically deployable attack is the
optical attack, which offers stealthiness but performs weakly in daytime
sunlight. In this paper, we propose a novel Reflected Light Attack (RFLA) that
is effective and stealthy in both the digital and physical worlds; it is
implemented by placing a colored transparent plastic sheet and a paper cut-out
of a specific shape in front of a mirror to project differently colored
geometries onto the target object. To achieve these goals, we devise a general
circle-based framework to model the reflected light on the target object.
Specifically, we optimize a circle (parameterized by a center coordinate and a
radius) that carries various geometric shapes determined by an optimized angle;
the fill color of the shape and its transparency are also optimized (a minimal
code sketch of this parameterization follows the abstract). We extensively
evaluate the effectiveness of RFLA on different datasets and models.
Experimental results suggest that the proposed method achieves a success rate
of over 99% on different datasets and models in the digital world.
Additionally, we verify the effectiveness of the proposed method in different
physical environments by using sunlight or a flashlight.
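The following is a minimal, illustrative sketch (not the authors' released code) of the circle-based parameterization described in the abstract: a reflected-light patch defined by a circle center and radius, an angle that selects the colored geometry (rendered here as a pie sector for simplicity), a fill color, and a transparency value, blended onto a target image. The function and parameter names, and the use of Pillow for rendering, are assumptions for illustration; in the paper these variables would be searched by an optimizer against the target model.

```python
from PIL import Image, ImageDraw

def apply_reflected_light(image, cx, cy, radius, start_angle, sweep_angle,
                          color=(255, 0, 0), alpha=0.5):
    """Blend a semi-transparent colored sector onto `image`, mimicking the
    circle-carried reflected-light geometry described in the abstract."""
    base = image.convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # The circle is defined by its center (cx, cy) and radius; the angle pair
    # selects which part of the circle carries the colored geometry.
    bbox = [cx - radius, cy - radius, cx + radius, cy + radius]
    draw.pieslice(bbox, start=start_angle, end=start_angle + sweep_angle,
                  fill=color + (int(255 * alpha),))
    return Image.alpha_composite(base, overlay).convert("RGB")

# Hypothetical usage: these values are the decision variables an attacker would
# optimize (e.g., with a black-box search) against the target classifier.
if __name__ == "__main__":
    img = Image.new("RGB", (224, 224), (128, 128, 128))  # placeholder target image
    attacked = apply_reflected_light(img, cx=112, cy=112, radius=60,
                                     start_angle=30, sweep_angle=120,
                                     color=(255, 32, 32), alpha=0.4)
    attacked.save("rfla_sketch.png")
```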
Related papers
- Hard-Label Black-Box Attacks on 3D Point Clouds [66.52447238776482]
We introduce a novel 3D attack method based on a new spectrum-aware decision boundary algorithm to generate high-quality adversarial samples.
Experiments demonstrate that our attack competitively outperforms existing white/black-box attackers in terms of attack performance and adversary quality.
arXiv Detail & Related papers (2024-11-30T09:05:02Z)
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible and vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of attacking anything of the proposed method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World [11.24237636482709]
We design a unified adversarial patch that can perform cross-modal physical attacks, achieving evasion in both modalities simultaneously with a single patch.
We propose a novel boundary-limited shape optimization approach that aims to achieve compact and smooth shapes for the adversarial patch.
Our method is evaluated against several state-of-the-art object detectors, achieving an Attack Success Rate (ASR) of over 80%.
arXiv Detail & Related papers (2023-07-27T08:14:22Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves Attack Success Rates (ASR) of 73.33% and 69.17% in the two modalities, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Patch of Invisibility: Naturalistic Physical Black-Box Adversarial Attacks on Object Detectors [0.0]
We propose a direct, black-box, gradient-free method to generate naturalistic physical adversarial patches for object detectors.
To our knowledge, this is the first and only method that performs black-box physical attacks directly on object-detection models.
arXiv Detail & Related papers (2023-03-07T21:03:48Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Decision-based Universal Adversarial Attack [55.76371274622313]
In the black-box setting, current universal adversarial attack methods utilize substitute models to generate the perturbation.
We propose an efficient Decision-based Universal Attack (DUAttack).
The effectiveness of DUAttack is validated through comparisons with other state-of-the-art attacks.
arXiv Detail & Related papers (2020-09-15T12:49:03Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm, a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)