Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks
- URL: http://arxiv.org/abs/2108.06179v1
- Date: Fri, 13 Aug 2021 11:49:09 GMT
- Title: Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks
- Authors: Federico Nesti, Giulio Rossolini, Saasha Nair, Alessandro Biondi,
Giorgio Buttazzo
- Abstract summary: In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
- Score: 62.87459235819762
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning and convolutional neural networks enable impressive
performance in computer vision tasks, such as object detection and semantic
segmentation (SS). However, recent studies have shown evident weaknesses of
such models against adversarial perturbations. In a real-world scenario such
as autonomous driving, more attention should instead be devoted to
real-world adversarial examples (RWAEs), which are physical objects (e.g.,
billboards and printable patches) optimized to be adversarial to the entire
perception pipeline. This paper presents an in-depth evaluation of the
robustness of popular SS models by testing the effects of both digital and
real-world adversarial patches. These patches are crafted with powerful attacks
enriched with a novel loss function. First, an investigation of the
Cityscapes dataset is conducted by extending the Expectation Over
Transformation (EOT) paradigm to cope with SS. Then, a novel attack
optimization, called scene-specific attack, is proposed. Such an attack
leverages the CARLA driving simulator to improve the transferability of the
proposed EOT-based attack to a real 3D environment. Finally, a printed physical
billboard containing an adversarial patch was tested in an outdoor driving
scenario to assess the feasibility of the studied attacks in the real world.
Exhaustive experiments revealed that the proposed attack formulations
outperform previous work to craft both digital and real-world adversarial
patches for SS. At the same time, the experimental results showed that these
attacks are notably less effective in the real world, hence questioning the
practical relevance of adversarial attacks to SS models for autonomous/assisted
driving.
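To make the EOT-based patch optimization concrete, the following is a minimal
sketch that samples a random transformation at each gradient step. It assumes a
PyTorch setup; the model (a torchvision DeepLabV3), the transformation ranges,
the patch placement, and the untargeted cross-entropy loss are illustrative
assumptions, not the paper's exact pipeline or its novel loss function.

```python
# Hedged sketch of EOT-style adversarial patch optimization for semantic
# segmentation. All concrete choices (model, transforms, loss) are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None).eval()  # pretrained weights in practice
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(1, 3, 100, 100, requires_grad=True)  # RGB patch in [0, 1]
optimizer = torch.optim.Adam([patch], lr=1e-2)

def random_transform(p):
    # One sampled transformation per step approximates the expectation in EOT.
    size = int(100 * torch.empty(1).uniform_(0.8, 1.2).item())   # random scale
    p = F.interpolate(p, size=(size, size), mode="bilinear", align_corners=False)
    brightness = torch.empty(1).uniform_(0.9, 1.1).item()        # random lighting
    return (p * brightness).clamp(0, 1)

def apply_patch(image, p, top=50, left=50):
    # Paste the transformed patch onto a copy of the scene image.
    out = image.clone()
    h, w = p.shape[-2:]
    out[..., top:top + h, left:left + w] = p
    return out

image = torch.rand(1, 3, 512, 1024)          # stand-in for a Cityscapes frame
with torch.no_grad():
    target = model(image)["out"].argmax(1)   # clean per-pixel prediction

for step in range(200):
    optimizer.zero_grad()
    patched = apply_patch(image, random_transform(patch))
    logits = model(patched)["out"]
    # Untargeted objective: push per-pixel predictions away from the clean ones.
    loss = -F.cross_entropy(logits, target)
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)                  # keep the patch printable
```

In practice, the expectation would be taken over richer transformations
(rotation, perspective, color jitter), and the scene-specific variant described
above would render the patch inside a 3D simulator such as CARLA rather than
pasting it in image space.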
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Rethinking Targeted Adversarial Attacks For Neural Machine Translation [56.10484905098989]
This paper presents a new setting for NMT targeted adversarial attacks that could lead to reliable attacking results.
Under the new setting, it then proposes a Targeted Word Gradient adversarial Attack (TWGA) method to craft adversarial examples.
Experimental results demonstrate that our proposed setting could provide faithful attacking results for targeted adversarial attacks on NMT systems.
arXiv Detail & Related papers (2024-07-07T10:16:06Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid arousing the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen the attack.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- REAP: A Large-Scale Realistic Adversarial Patch Benchmark [14.957616218042594]
Adversarial patch attacks present a critical threat to cyber-physical systems that rely on cameras, such as autonomous cars.
We propose the REAP benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images.
Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs.
arXiv Detail & Related papers (2022-12-12T03:35:05Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is an effective strategy for improving the robustness of convolutional networks against adversarial attacks.
The presented defense relies on a Z-score analysis of the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image; a simplified sketch of this idea appears after this list.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world adversarial examples (AEs) using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z)
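As a concrete illustration of the internal over-activation analysis behind
defenses such as Z-Mask (see the corresponding entry above), here is a minimal
sketch that flags spatially anomalous activations in an intermediate feature
map. The layer choice, channel aggregation, and Z-score threshold are
illustrative assumptions rather than the published method.

```python
# Hedged sketch of a Z-score-based over-activation mask; all thresholds and
# aggregation choices are assumptions for illustration.
import torch
import torch.nn.functional as F

def zscore_mask(features, image_size, z_threshold=3.0):
    # features: (B, C, h, w) feature map captured with a forward hook.
    # Returns a (B, 1, H, W) mask: 0 where activations are anomalously high
    # (candidate adversarial pixels), 1 elsewhere.
    energy = features.abs().mean(dim=1, keepdim=True)  # spatial activation energy
    mu = energy.mean(dim=(2, 3), keepdim=True)
    sigma = energy.std(dim=(2, 3), keepdim=True) + 1e-8
    z = (energy - mu) / sigma                          # per-location Z-score
    mask = (z < z_threshold).float()                   # 0 on over-activated outliers
    return F.interpolate(mask, size=image_size, mode="nearest")

# Usage: suppress suspicious pixels, then re-run the model on the masked input.
image = torch.rand(1, 3, 256, 256)
features = torch.randn(1, 64, 32, 32)  # stand-in for a hooked feature map
masked_input = image * zscore_mask(features, tuple(image.shape[-2:]))
```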
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.