The Vulnerability of Semantic Segmentation Networks to Adversarial
Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
- URL: http://arxiv.org/abs/2101.03924v2
- Date: Wed, 13 Jan 2021 08:42:00 GMT
- Title: The Vulnerability of Semantic Segmentation Networks to Adversarial
Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
- Authors: Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin J. Varghese,
  Fabian Hüger, Peter Schlicht, Tim Fingscheidt
- Abstract summary: This article aims to illuminate the vulnerability aspects of CNNs used for semantic segmentation with respect to adversarial attacks.
We aim to clarify the advantages and disadvantages associated with applying CNNs for environment perception in autonomous driving.
- Score: 25.354929620151367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enabling autonomous driving (AD) can be considered one of the biggest
challenges in today's technology. AD is a complex task accomplished by several
functionalities, with environment perception being one of its core functions.
Environment perception is usually performed by combining the semantic
information captured by several sensors, e.g., lidar or camera. The semantic
information from the respective sensor can be extracted using convolutional
neural networks (CNNs) for dense prediction. In the past, CNNs have consistently
shown state-of-the-art performance on several vision-related tasks, such as
semantic segmentation of traffic scenes using nothing but the red-green-blue
(RGB) images provided by a camera. Although CNNs obtain state-of-the-art
performance on clean images, almost imperceptible changes to the input,
referred to as adversarial perturbations, may lead to fatal deception. The goal
of this article is to illuminate the vulnerability aspects of CNNs used for
semantic segmentation with respect to adversarial attacks, and share insights
into some of the existing known adversarial defense strategies. We aim to
clarify the advantages and disadvantages associated with applying CNNs for
environment perception in AD to serve as a motivation for future research in
this field.
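To make the adversarial perturbations described above concrete, the following PyTorch-style sketch applies a single FGSM-like signed-gradient step to the input of a semantic segmentation CNN. This is a minimal illustration, not the specific attack analyzed in the article; `model`, `image`, and `label_map` are hypothetical placeholders for a segmentation network, an RGB input scaled to [0, 1], and the per-pixel ground-truth labels.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label_map, epsilon=2.0 / 255.0):
    # One signed-gradient ascent step on the pixel-wise cross-entropy loss;
    # the perturbation magnitude is bounded by epsilon per pixel.
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                      # (N, C, H, W) per-pixel class scores
    loss = F.cross_entropy(logits, label_map)  # label_map: (N, H, W) class indices
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()        # stay in the valid image range

With epsilon on the order of a few intensity levels (e.g., 2/255), the perturbed image is visually indistinguishable from the original, yet, as the abstract notes, such perturbations may lead to fatal deception of the segmentation output.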
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z)
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) attempts to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague or camouflaged images, i.e., zooming in and out.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples; a minimal transfer sketch is given after this related-papers list.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle noise degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving [0.5735035463793008]
Adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection.
Experiments consider both white and black box attacks for targeted and un-targeted cases, while attacking a task and inspecting the effect on all the others.
We conclude this paper by comparing and discussing the experimental results, proposing insights and future work.
arXiv Detail & Related papers (2021-07-15T16:53:48Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems [2.081492937901262]
We show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks.
We also expose an extreme adversarial attack case in which it is possible to change the prediction made by the radar-based CNNs.
arXiv Detail & Related papers (2021-01-26T05:16:16Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
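As a concrete counterpart to the transferability survey entry above, the sketch below illustrates the black-box transfer setting: a perturbation is crafted with white-box access to a surrogate model and then evaluated, unchanged, on a different target model. It reuses the hypothetical `fgsm_perturb` helper from the earlier sketch; `surrogate` and `target` are assumed segmentation models and are not APIs from any of the cited papers.

import torch

@torch.no_grad()
def pixel_accuracy(model, image, label_map):
    # Fraction of pixels classified correctly.
    pred = model(image).argmax(dim=1)
    return (pred == label_map).float().mean().item()

def transfer_attack(surrogate, target, image, label_map, epsilon=2.0 / 255.0):
    # Craft the perturbation on the surrogate (white-box access) ...
    adv = fgsm_perturb(surrogate, image, label_map, epsilon)
    # ... and apply it to the target without using its gradients or parameters.
    return {
        "target_clean_acc": pixel_accuracy(target, image, label_map),
        "target_adv_acc": pixel_accuracy(target, adv, label_map),
    }

A drop of target_adv_acc below target_clean_acc indicates that the adversarial example transfers to the unseen target model.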