SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image
Classifiers
- URL: http://arxiv.org/abs/2012.05858v1
- Date: Thu, 10 Dec 2020 18:14:03 GMT
- Title: SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image
Classifiers
- Authors: Bingyao Huang, Haibin Ling
- Abstract summary: A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
- Score: 82.19722134082645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light-based adversarial attacks aim to fool deep learning-based image
classifiers by altering the physical light condition using a controllable light
source, e.g., a projector. Compared with physical attacks that place carefully
designed stickers or printed adversarial objects, projector-based ones obviate
modifying the physical entities. Moreover, projector-based attacks can be
performed transiently and dynamically by altering the projection pattern.
However, existing approaches focus on projecting adversarial patterns that
result in clearly perceptible camera-captured perturbations, while the more
interesting yet challenging goal, stealthy projector-based attack, remains an
open problem. In this paper, for the first time, we formulate this problem as
an end-to-end differentiable process and propose Stealthy Projector-based
Adversarial Attack (SPAA). In SPAA, we approximate the real project-and-capture
operation using a deep neural network named PCNet, then we include PCNet in the
optimization of projector-based attacks such that the generated adversarial
projection is physically plausible. Finally, to generate robust and stealthy
adversarial projections, we propose an optimization algorithm that uses minimum
perturbation and adversarial confidence thresholds to alternate between the
adversarial loss and stealthiness loss optimization. Our experimental
evaluations show that the proposed SPAA clearly outperforms other methods by
achieving higher attack success rates and meanwhile being stealthier.
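To make the alternation described in the abstract concrete, below is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the authors' implementation: `pcnet` stands in for the paper's PCNet surrogate of the project-and-capture process, `classifier` is any pretrained image classifier, and the thresholds `d_thr` (adversarial confidence) and `p_thr` (minimum perturbation), the step budget, and the exact switching rule are hypothetical choices made here for readability.

```python
# Hedged sketch of a projector-based attack optimized through a surrogate
# project-and-capture model, alternating between an adversarial loss and a
# stealthiness loss. Names and thresholds are illustrative assumptions.
import torch
import torch.nn.functional as F

def spaa_like_attack(pcnet, classifier, base_pattern, target_class,
                     steps=200, lr=0.01, d_thr=0.9, p_thr=2.0):
    """Optimize an adversarial projector pattern so that the *captured*
    scene fools the classifier while the projection stays subtle."""
    delta = torch.zeros_like(base_pattern, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Simulate what the camera would capture if (base_pattern + delta)
        # were projected onto the physical scene.
        captured = pcnet(base_pattern + delta)
        logits = classifier(captured)
        target_conf = F.softmax(logits, dim=1)[0, target_class]
        pert_norm = delta.norm()  # proxy for how visible the projection is

        if target_conf < d_thr or pert_norm < p_thr:
            # Not yet confidently adversarial (or perturbation below the
            # floor): optimize the adversarial loss toward the target class.
            loss = F.cross_entropy(logits, torch.tensor([target_class]))
        else:
            # Confidently adversarial: optimize stealthiness by shrinking
            # the projected perturbation.
            loss = pert_norm

        opt.zero_grad()
        loss.backward()
        opt.step()
    # Keep the final pattern in a displayable range (assuming inputs in [0, 1]).
    return (base_pattern + delta).clamp(0, 1).detach()
```

Because the surrogate `pcnet` is differentiable, gradients flow from the classifier's output all the way back to the projector pattern, which is what makes the attack end-to-end optimizable in this sketch.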
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Realistic Scatterer Based Adversarial Attacks on SAR Image Classifiers [7.858656052565242]
An adversarial attack perturbs SAR images of on-ground targets such that the classifiers are misled into making incorrect predictions.
We propose the On-Target Scatterer Attack (OTSA), a scatterer-based physical adversarial attack.
We show that our attack obtains significantly higher success rates under the positioning constraint compared with the existing method.
arXiv Detail & Related papers (2023-12-05T17:36:34Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Optical Adversarial Attack [18.709597361380727]
OPtical ADversarial attack (OPAD) is an adversarial attack in the physical space aiming to fool image classifiers without physically touching the objects.
The proposed solution incorporates the projector-camera model into the adversarial attack optimization, where a new attack formulation is derived.
It is demonstrated that OPAD can optically attack a real 3D object in the presence of background lighting for white-box, black-box, targeted, and untargeted attacks.
arXiv Detail & Related papers (2021-08-13T13:55:33Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost in clean accuracy; a minimal sketch of the idea appears after this list.
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world adversarial examples (AEs) by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z)
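The mechanism summarized under "Combating Adversaries with Anti-Adversaries" above lends itself to a short sketch: before classifying, perturb the input so as to reinforce the model's own prediction, i.e. step in the opposite direction of a typical attack. This is a hedged illustration of that summarized idea, not the paper's code; the step size `alpha` and iteration count `k` are assumed values.

```python
# Illustrative "anti-adversary" pre-processing step (assumed hyperparameters).
import torch
import torch.nn.functional as F

def anti_adversary_classify(model, x, k=2, alpha=0.15):
    """Classify x after adding a small perturbation that increases the
    model's confidence in its initial prediction (a counter-attack step)."""
    with torch.no_grad():
        pseudo_label = model(x).argmax(dim=1)  # model's own prediction
    gamma = torch.zeros_like(x, requires_grad=True)
    for _ in range(k):
        loss = F.cross_entropy(model(x + gamma), pseudo_label)
        grad, = torch.autograd.grad(loss, gamma)
        # Descend the loss on the pseudo-label: the opposite of an attack step.
        gamma = (gamma - alpha * grad.sign()).detach().requires_grad_(True)
    return model(x + gamma).argmax(dim=1)
```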
This list is automatically generated from the titles and abstracts of the papers on this site.