Optical Adversarial Attack
- URL: http://arxiv.org/abs/2108.06247v2
- Date: Mon, 16 Aug 2021 02:50:24 GMT
- Title: Optical Adversarial Attack
- Authors: Abhiram Gnanasambandam, Alex M. Sherman, Stanley H. Chan
- Abstract summary: OPtical ADversarial attack (OPAD) is an adversarial attack in the physical space aiming to fool image classifiers without physically touching the objects.
The proposed solution incorporates the projector-camera model into the adversarial attack optimization, where a new attack formulation is derived.
It is demonstrated that OPAD can optically attack a real 3D object in the presence of background lighting for white-box, black-box, targeted, and untargeted attacks.
- Score: 18.709597361380727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce OPtical ADversarial attack (OPAD). OPAD is an adversarial attack
in the physical space aiming to fool image classifiers without physically
touching the objects (e.g., moving or painting the objects). The principle of
OPAD is to use structured illumination to alter the appearance of the target
objects. The system consists of a low-cost projector, a camera, and a computer.
The challenge of the problem is the non-linearity of the radiometric response
of the projector and the spatially varying spectral response of the scene.
Attacks generated in a conventional approach do not work in this setting unless
they are calibrated to compensate for such a projector-camera model. The
proposed solution incorporates the projector-camera model into the adversarial
attack optimization, where a new attack formulation is derived. Experimental
results prove the validity of the solution. It is demonstrated that OPAD can
optically attack a real 3D object in the presence of background lighting for
white-box, black-box, targeted, and untargeted attacks. Theoretical analysis is
presented to quantify the fundamental performance limit of the system.
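For intuition, the core idea can be sketched in a few lines of PyTorch: instead of perturbing image pixels directly, optimize the projected illumination pattern by differentiating through a projector-camera model. The `projector_camera` function below, with its `gamma` and `spectral_gain` parameters, is a toy stand-in, not the calibrated radiometric model from the paper; the classifier interface is likewise assumed.

```python
import torch
import torch.nn.functional as F

def projector_camera(pattern, scene, gamma=2.2, spectral_gain=0.8):
    # Toy radiometric model: a nonlinear projector response added to the
    # ambient scene. OPAD calibrates this model per setup; this stand-in
    # only illustrates why uncompensated attacks fail in this setting.
    projected = pattern.clamp(0.0, 1.0) ** gamma
    return (scene + spectral_gain * projected).clamp(0.0, 1.0)

def opad_style_attack(classifier, scene, target, steps=200, lr=1e-2):
    # Targeted attack: optimize the illumination pattern so the image the
    # camera captures is classified as `target` (a shape-[1] long tensor).
    pattern = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        captured = projector_camera(pattern, scene)  # what the camera sees
        loss = F.cross_entropy(classifier(captured.unsqueeze(0)), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pattern.detach()
```

In the physical system, the pattern must additionally respect the projector's achievable intensity range and the spatially varying spectral response of the scene; the paper's theoretical analysis quantifies how such constraints bound attack strength.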
Related papers
- Optical Lens Attack on Monocular Depth Estimation for Autonomous Driving [12.302132670292316]
We present LensAttack, a physical attack that strategically places optical lenses on the camera of an autonomous vehicle to manipulate the perceived object depths.
We first develop a mathematical model that outlines the parameters of the attack, followed by simulations and real-world evaluations to assess its efficacy on state-of-the-art MDE models.
The results reveal that LensAttack can significantly disrupt the depth estimation processes in AD systems, posing a serious threat to their reliability and safety.
arXiv Detail & Related papers (2024-10-31T20:23:27Z)
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers [7.858656052565242]
An adversarial attack perturbs SAR images of on-ground targets such that the classifiers are misled into making incorrect predictions.
We propose the On-Target Scatterer Attack (OTSA), a scatterer-based physical adversarial attack.
We show that our attack obtains significantly higher success rates under the positioning constraint compared with the existing method.
arXiv Detail & Related papers (2023-12-05T17:36:34Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate from a Bayesian formulation of sparse attack with a deceived prior on object contours.
We show that ASC can corrupt the predictions of 9 modern detectors with different architectures.
We conclude with the caution that contours are a common weakness of object detectors across architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection comprise targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- A Physical-World Adversarial Attack Against 3D Face Recognition [10.577749566854626]
Structured light imaging is a common method for measuring 3D shape.
This method could be easily attacked, leading to inaccurate 3D face recognition.
We propose a novel, physically-achievable attack on the fringe structured light system, named structured light attack.
arXiv Detail & Related papers (2022-05-26T15:06:14Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
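As a rough illustration of the surrogate idea behind SPAA (hypothetical names and architecture, not the released PCNet code), one can fit a small network to mimic the physical project-and-capture loop and then backpropagate an adversarial pattern through it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectCaptureSurrogate(nn.Module):
    # Tiny convolutional stand-in for a PCNet-like model: maps a projector
    # pattern (N, 3, H, W) to a predicted camera capture of the lit scene.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pattern):
        return self.net(pattern)

def attack_through_surrogate(surrogate, classifier, pattern, target, steps=100):
    # Optimize the projector pattern against the surrogate so the predicted
    # capture fools the classifier; `target` is a shape-[N] long tensor.
    pattern = pattern.clone().requires_grad_(True)
    opt = torch.optim.Adam([pattern], lr=1e-2)
    for _ in range(steps):
        capture = surrogate(pattern)
        loss = F.cross_entropy(classifier(capture), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pattern.detach()
```

The surrogate would first be trained on paired pattern-and-capture samples from the real setup; attacking through it avoids performing a physical projection at every optimization step.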