On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving
- URL: http://arxiv.org/abs/2201.01850v1
- Date: Wed, 5 Jan 2022 22:33:43 GMT
- Title: On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving
- Authors: Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair,
Alessandro Biondi and Giorgio Buttazzo
- Abstract summary: The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
- Score: 59.33715889581687
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The existence of real-world adversarial examples (commonly in the form of
patches) poses a serious threat to the use of deep learning models in
safety-critical computer vision tasks such as visual perception in autonomous
driving. This paper presents an extensive evaluation of the robustness of
semantic segmentation models when attacked with different types of adversarial
patches, including digital, simulated, and physical ones. A novel loss function
is proposed to improve the capabilities of attackers in inducing a
misclassification of pixels. Also, a novel attack strategy is presented to
improve the Expectation Over Transformation method for placing a patch in the
scene. Finally, a state-of-the-art method for detecting adversarial patches is
first extended to cope with semantic segmentation models, then improved to
obtain real-time performance, and eventually evaluated in real-world scenarios.
Experimental results reveal that, even though the adversarial effect is visible
with both digital and real-world attacks, its impact is often spatially
confined to areas of the image around the patch. This raises further
questions about the spatial robustness of real-time semantic segmentation
models.
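As a rough illustration of this style of attack, the sketch below optimizes a patch over random placements in the spirit of Expectation Over Transformation. It assumes a PyTorch segmentation model returning per-pixel logits; the loss is a plain untargeted cross-entropy, not the loss proposed in the paper, and the random placement helper is a simplified stand-in for the paper's placement strategy.

```python
import torch
import torch.nn.functional as F

def random_placement(images, patch):
    """Paste the patch at a random location (simplified stand-in for the
    paper's placement strategy; assumes images are larger than the patch)."""
    _, _, H, W = images.shape
    ph, pw = patch.shape[-2:]
    top = torch.randint(0, H - ph + 1, (1,)).item()
    left = torch.randint(0, W - pw + 1, (1,)).item()
    patched = images.clone()
    patched[..., top:top + ph, left:left + pw] = patch
    return patched

def eot_patch_attack(model, images, labels, patch_size=(3, 100, 100),
                     steps=200, lr=0.01, samples_per_step=4):
    """Optimize a patch that maximizes the per-pixel cross-entropy of a
    segmentation model, averaged over random placements (EOT-style)."""
    patch = torch.rand(1, *patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples_per_step):
            adv = random_placement(images, patch.clamp(0, 1))
            logits = model(adv)                  # (N, C, H, W) per-pixel logits
            # untargeted: push pixels away from their ground-truth classes
            loss = loss - F.cross_entropy(logits, labels, ignore_index=255)
        optimizer.zero_grad()
        (loss / samples_per_step).backward()
        optimizer.step()
        patch.data.clamp_(0, 1)                  # keep the patch a valid image
    return patch.detach()
```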
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
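A minimal sketch of such a T2I-based check is given below, assuming abstract callables for the target VLM's captioning, the text-to-image model, and an image encoder; the similarity threshold is illustrative, not a value from the paper.

```python
import torch.nn.functional as F

def mirrorcheck_style_score(image, caption_fn, t2i_fn, embed_fn):
    """Caption the input with the target VLM, regenerate an image from that
    caption with a text-to-image model, and measure how similar the two
    images are in an embedding space (e.g. a CLIP image encoder).

    caption_fn: image -> str          (the target VLM, assumed interface)
    t2i_fn:     str   -> image tensor (any text-to-image model)
    embed_fn:   image -> 1-D feature  (any image encoder)
    """
    caption = caption_fn(image)
    regenerated = t2i_fn(caption)
    return F.cosine_similarity(embed_fn(image), embed_fn(regenerated), dim=0).item()

def is_adversarial(image, caption_fn, t2i_fn, embed_fn, threshold=0.6):
    # low similarity suggests the caption was steered away from the image content;
    # the threshold is illustrative and would be calibrated on clean data
    return mirrorcheck_style_score(image, caption_fn, t2i_fn, embed_fn) < threshold
```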
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Generating Visually Realistic Adversarial Patch [5.41648734119775]
A high-quality adversarial patch should be realistic, position irrelevant, and printable to be deployed in the physical world.
We propose an effective attack called VRAP, to generate visually realistic adversarial patches.
VRAP constrains the patch to the neighborhood of a real image to ensure visual realism, optimizes the patch at the worst-performing position for position irrelevance, and adopts a Total Variation loss as well as gamma transformation to make the generated patch printable without losing information.
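The printability-oriented ingredients above can be sketched roughly as follows; the L_inf radius and gamma range are illustrative assumptions, and the worst-position search is omitted.

```python
import torch

def total_variation(patch):
    """Total Variation loss: penalize abrupt neighboring-pixel changes so the
    patch prints without losing information."""
    dh = (patch[..., 1:, :] - patch[..., :-1, :]).abs().mean()
    dw = (patch[..., :, 1:] - patch[..., :, :-1]).abs().mean()
    return dh + dw

def random_gamma(patch, gamma_range=(0.7, 1.5)):
    """Random gamma transformation, simulating print/camera brightness shifts
    (the range is an illustrative assumption)."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    return patch.clamp(1e-6, 1.0) ** gamma

def project_to_neighborhood(patch, reference, epsilon=0.1):
    """Keep the patch within an L_inf ball around a real reference image so it
    stays visually realistic (epsilon is an illustrative assumption)."""
    return (reference + (patch - reference).clamp(-epsilon, epsilon)).clamp(0, 1)
```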
arXiv Detail & Related papers (2023-12-05T11:07:39Z)
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) methods attempt to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images, zooming in and out on camouflaged objects.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
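The zooming intuition can be illustrated with a generic multi-scale inference fusion; this is not the ZoomNeXt architecture, only a simplified stand-in for aggregating predictions across scales.

```python
import torch.nn.functional as F

def multiscale_predict(model, image, scales=(0.5, 1.0, 1.5)):
    """Run the same model on rescaled copies of the input and average the
    resized logits -- a generic 'zoom in and out' fusion, not the actual
    ZoomNeXt architecture."""
    _, _, H, W = image.shape
    fused = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        logits = model(scaled)
        fused = fused + F.interpolate(logits, size=(H, W), mode='bilinear',
                                      align_corners=False)
    return fused / len(scales)
```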
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
- Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection [0.0]
Interpretability is as essential as robustness when we deploy the models to the real world.
Standard models, compared to robust ones, are more susceptible to adversarial attacks, and their learned representations are less meaningful to humans.
arXiv Detail & Related papers (2023-07-04T13:51:55Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through a discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
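As a loose illustration of frequency-domain detection, the sketch below substitutes a plain FFT log-magnitude spectrum for the paper's Krawtchouk decomposition; it only conveys how detector features can be built in the spatial-frequency domain.

```python
import torch
import torch.nn.functional as F

def spatial_frequency_features(image, pool=8):
    """Rough stand-in for a spatial-frequency decomposition: log-magnitude FFT
    spectrum of each channel, average-pooled to a compact feature vector.
    (The paper uses Krawtchouk moments; this only conveys the idea of building
    detector features in the frequency domain.)"""
    spectrum = torch.fft.fft2(image)          # (C, H, W), complex
    logmag = torch.log1p(spectrum.abs())
    return F.adaptive_avg_pool2d(logmag, pool).flatten()

# a detector is then a small binary classifier (clean vs. adversarial)
# trained on these feature vectors
```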
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples.
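A simplified sketch of the noise-inconsistency idea follows: it measures how much the model's prediction shifts under small random input noise, omitting the paper's saliency-map component; the noise level and KL-based score are illustrative choices.

```python
import torch
import torch.nn.functional as F

def inconsistency_score(model, image, noise_std=0.03, n_samples=8):
    """Average KL divergence between the prediction on the original input and
    predictions on randomly noised copies.  Adversarial examples tend to be
    brittle, so their predictions shift more under small input noise.
    The noise level and the KL-based score are illustrative choices."""
    with torch.no_grad():
        clean = F.softmax(model(image), dim=1)
        score = 0.0
        for _ in range(n_samples):
            noisy = (image + noise_std * torch.randn_like(image)).clamp(0, 1)
            score = score + F.kl_div(F.log_softmax(model(noisy), dim=1),
                                     clean, reduction='batchmean')
        return (score / n_samples).item()

# flag inputs whose score exceeds a threshold calibrated on clean data
```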
arXiv Detail & Related papers (2020-09-06T13:57:17Z)