SLAP: Improving Physical Adversarial Examples with Short-Lived
Adversarial Perturbations
- URL: http://arxiv.org/abs/2007.04137v3
- Date: Wed, 6 Jan 2021 16:17:39 GMT
- Title: SLAP: Improving Physical Adversarial Examples with Short-Lived
Adversarial Perturbations
- Authors: Giulio Lovisotto, Henry Turner, Ivo Sluganovic, Martin Strohmeier,
Ivan Martinovic
- Abstract summary: Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world AE by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks.
- Score: 19.14079118174123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research into adversarial examples (AE) has developed rapidly, yet static
adversarial patches are still the main technique for conducting attacks in the
real world, despite being obvious, semi-permanent and unmodifiable once
deployed.
In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a
novel technique that allows adversaries to realize physically robust real-world
AE by using a light projector. Attackers can project a specifically crafted
adversarial perturbation onto a real-world object, transforming it into an AE.
This allows the adversary greater control over the attack compared to
adversarial patches: (i) projections can be dynamically turned on and off or
modified at will, (ii) projections do not suffer from the locality constraint
imposed by patches, making them harder to detect.
We study the feasibility of SLAP in the self-driving scenario, targeting both
object detection and traffic sign recognition tasks, focusing on the detection
of stop signs. We conduct experiments in a variety of ambient light conditions,
including outdoors, showing how in non-bright settings the proposed method
generates AE that are extremely robust, causing misclassifications on
state-of-the-art networks with up to 99% success rate for a variety of angles
and distances. We also demonstrate that SLAP-generated AE do not present the
detectable behaviours seen in adversarial patches and therefore bypass
SentiNet, a physical AE detection method. We evaluate other defences,
including an adaptive defender that uses adversarial learning and can reduce
the attack's effectiveness by up to 80% even in favourable attacker
conditions.
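To make the mechanism concrete, the following is a minimal sketch of the optimization loop a projector-based attack of this kind can use: a projection image is optimized so that, when additively composited onto the sign under several sampled lighting gains, the classifier outputs an attacker-chosen label. The additive-light model, the sampled gains, and the helper names (`project`, `craft_slap`) are illustrative simplifications of ours, not the paper's method, which fits a per-scene projection model instead.

```python
import torch
import torch.nn.functional as F

def project(sign, proj, gain):
    # Crude projector model: projected light adds to the surface radiance,
    # scaled by `gain` (a lower gain approximates brighter ambient light).
    return (sign + gain * proj).clamp(0.0, 1.0)

def craft_slap(model, sign, target, steps=200, lr=0.01):
    # sign: (C, H, W) image of the object in [0, 1]; target: int class id.
    proj = torch.zeros_like(sign, requires_grad=True)  # projected image
    opt = torch.optim.Adam([proj], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        # Expectation over transformations: sample ambient-light gains so
        # the projection keeps working as lighting changes.
        for gain in (0.4, 0.7, 1.0):
            logits = model(project(sign, proj.clamp(0, 1), gain).unsqueeze(0))
            loss = loss + F.cross_entropy(logits, torch.tensor([target]))
        loss.backward()
        opt.step()
    return proj.detach().clamp(0, 1)
```

Averaging the loss over several gains is a simple form of expectation over transformations, which is what gives the projection robustness across ambient light levels.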
Related papers
- Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We further propose employing benign resampling and benign rigid transformations to enhance physical adversarial strength with little sacrifice of imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
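As a rough illustration of a two-stage region search of the kind described above, the sketch below ranks points by gradient saliency and weights them by a local-roughness proxy for imperceptibility (perturbations in smooth areas are easier to spot). The scores, the k-NN roughness proxy, and their product combination are invented for illustration and do not reproduce HiT-ADV's formulation.

```python
import torch

def select_attack_regions(points, grads, k=16, n_regions=8):
    # points: (N, 3) point cloud; grads: (N, 3) gradient of the attack
    # loss w.r.t. each point.
    saliency = grads.norm(dim=1)                       # stage 1: saliency score
    d = torch.cdist(points, points)                    # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).indices[:, 1:]  # k nearest neighbours
    # Stage 2: imperceptibility ~ local roughness (variance of neighbours);
    # perturbations hide better in rough regions.
    rough = points[knn].var(dim=1).sum(dim=1)
    score = saliency * rough                           # prefer salient, rough spots
    return score.topk(n_regions).indices               # region centre indices
```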
- Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World [11.24237636482709]
We design a unified adversarial patch that can perform cross-modal physical attacks, achieving evasion in both modalities simultaneously with a single patch.
We propose a novel boundary-limited shape optimization approach that aims to achieve compact and smooth shapes for the adversarial patch.
Our method is evaluated against several state-of-the-art object detectors, achieving an Attack Success Rate (ASR) of over 80%.
arXiv Detail & Related papers (2023-07-27T08:14:22Z)
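The following hedged sketch shows one way to encourage compact, smooth patch shapes: optimize a soft mask under a total-variation penalty (smooth boundary) and an area penalty (limited size). The penalties and coefficients are stand-ins for the paper's boundary-limited formulation.

```python
import torch

def shape_penalties(logit_mask, area_budget=0.05):
    # logit_mask: (H, W) unconstrained parameters defining a soft patch mask.
    mask = torch.sigmoid(logit_mask)                   # soft mask in (0, 1)
    # Total variation: penalize ragged boundaries so the shape stays smooth.
    tv = (mask[:, 1:] - mask[:, :-1]).abs().mean() + \
         (mask[1:, :] - mask[:-1, :]).abs().mean()
    # Area term: penalize masks exceeding the size budget (compact shape).
    area = torch.relu(mask.mean() - area_budget)
    return tv + 10.0 * area                            # added to the attack loss
```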
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
Standard adversarial robustness methods defend against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
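As a loose illustration of using metric learning for adversarial regularization, the sketch below pulls the embedding of a perturbed input toward its clean counterpart and pushes it away from another class. Note the swap: the paper casts the regularizer as an optimal transport problem, while a triplet margin loss is used here only as a simple proxy; `embed` and the inputs are hypothetical.

```python
import torch.nn.functional as F

def metric_adv_regularizer(embed, x_clean, x_adv, x_other, margin=1.0):
    # embed: network mapping inputs to embeddings; x_other: a batch drawn
    # from a different class than x_clean.
    anchor, positive, negative = embed(x_adv), embed(x_clean), embed(x_other)
    # Pull the adversarial embedding toward its clean counterpart and
    # push it away from the other class.
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)
```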
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
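One ingredient of scale-adaptive patch training can be sketched as follows: at each optimization step the patch is pasted at a random scale and position, so the optimized patch stays effective across object sizes. The sampling ranges and the helper name are invented; AP-PA's modelling of physical dynamics goes well beyond this.

```python
import random
import torch
import torch.nn.functional as F

def paste_random_scale(image, patch):
    # image: (C, H, W), patch: (C, h, w), both in [0, 1].
    _, H, W = image.shape
    s = random.uniform(0.1, 0.3)                       # patch-to-image scale
    h, w = max(2, int(H * s)), max(2, int(W * s))
    p = F.interpolate(patch.unsqueeze(0), size=(h, w),
                      mode="bilinear", align_corners=False)[0]
    y, x = random.randint(0, H - h), random.randint(0, W - w)
    out = image.clone()
    out[:, y:y + h, x:x + w] = p                       # paste at random location
    return out
```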
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
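A rough sketch of a gradient-consistency feature in this spirit: take a few small gradient-sign steps from the input and measure how aligned successive input gradients remain. The step size, number of steps, and cosine statistic are illustrative choices, not the paper's exact ARC construction.

```python
import torch
import torch.nn.functional as F

def grad_consistency(model, x, y, steps=4, eps=1e-2):
    # x: (N, C, H, W) inputs; y: (N,) labels predicted by the model.
    sims, g_prev = [], None
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        g, = torch.autograd.grad(loss, x)              # input gradient
        if g_prev is not None:
            sims.append(F.cosine_similarity(
                g.flatten(1), g_prev.flatten(1)).mean())
        g_prev = g
        x = x + eps * g.sign()                         # small FGSM-like step
    return torch.stack(sims)                           # consistency feature
```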
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
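The core perturbation is simple enough to sketch: darken the pixels inside a polygon (a triangle here) by a constant factor, mimicking a cast shadow. The attack then searches over the polygon's vertices; that search is omitted below, and the darkening factor is invented.

```python
import numpy as np

def apply_shadow(image, tri, factor=0.55):
    # image: (H, W, 3) float array in [0, 1]; tri: three (x, y) vertices.
    H, W, _ = image.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    def side(p, q):
        # Sign of the cross product: which side of edge p->q each pixel is on.
        return (q[0] - p[0]) * (ys - p[1]) - (q[1] - p[1]) * (xs - p[0])
    s1, s2, s3 = side(tri[0], tri[1]), side(tri[1], tri[2]), side(tri[2], tri[0])
    inside = ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) | \
             ((s1 <= 0) & (s2 <= 0) & (s3 <= 0))       # half-plane test
    out = image.copy()
    out[inside] *= factor                              # darken shadowed pixels
    return out
```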
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
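A minimal sketch of the kind of evaluation loop this implies: paste a patch into each frame and measure how much of the predicted segmentation changes relative to the clean prediction. The pixel-flip metric and names below are our own shorthand; the paper reports standard mIoU-based metrics.

```python
import torch

@torch.no_grad()
def patched_flip_rate(seg_model, images, patch, y0=20, x0=20):
    # seg_model: returns (1, num_classes, H, W) logits; images: list of
    # (C, H, W) frames; patch: (C, h, w) adversarial patch.
    flipped = 0.0
    for img in images:
        clean = seg_model(img.unsqueeze(0)).argmax(1)  # (1, H, W) labels
        adv_img = img.clone()
        _, h, w = patch.shape
        adv_img[:, y0:y0 + h, x0:x0 + w] = patch       # paste the patch
        adv = seg_model(adv_img.unsqueeze(0)).argmax(1)
        flipped += (adv != clean).float().mean().item()
    return flipped / len(images)                       # avg fraction of pixels changed
```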
- MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis [14.313178290347293]
We propose a multilayer defense-in-depth framework for AE detection, namely MixDefense.
We leverage the noise features extracted from the inputs to discover the statistical difference between natural images and tampered ones for AE detection.
We show that the proposed MixDefense solution outperforms the existing AE detection techniques by a considerable margin.
arXiv Detail & Related papers (2021-04-20T15:57:07Z)
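One way to read "noise features" is sketched below: take the residual between an image and a cheaply denoised copy, then summarize it with a few statistics that a downstream detector can threshold or classify. The box-blur denoiser and the chosen statistics are stand-ins for MixDefense's statistical analysis, not its actual pipeline.

```python
import torch
import torch.nn.functional as F

def noise_features(x):
    # x: (N, C, H, W) images in [0, 1]; a 3x3 box blur acts as a cheap denoiser.
    k = torch.ones(x.shape[1], 1, 3, 3) / 9.0
    blurred = F.conv2d(x, k, padding=1, groups=x.shape[1])
    residual = x - blurred                             # high-frequency "noise"
    flat = residual.flatten(1)
    # Summary statistics of the residual, one feature vector per image.
    return torch.stack([flat.abs().mean(1), flat.std(1),
                        flat.abs().amax(1)], dim=1)    # (N, 3)
```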
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
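The role of such a surrogate can be sketched with a toy network: a small CNN maps the projector input plus a captured scene image to a predicted camera capture, making the project-and-capture step differentiable so an adversarial projector image can be optimized end to end. The architecture below is a toy stand-in, not the actual PCNet.

```python
import torch
import torch.nn as nn

class TinyPCNet(nn.Module):
    # Maps (projector image, captured scene) -> predicted camera capture.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, projected, scene):
        # Concatenate the two 3-channel inputs along the channel axis.
        return self.net(torch.cat([projected, scene], dim=1))

# With such a surrogate, the attack reduces to optimizing `projected` so that
# classifier(TinyPCNet(projected, scene)) yields the attacker's target label.
```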