Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
- URL: http://arxiv.org/abs/2001.01506v1
- Date: Mon, 6 Jan 2020 11:51:04 GMT
- Title: Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
- Authors: Lin Wang, Wonjune Cho, and Kuk-Jin Yoon
- Abstract summary: This paper examines different types of adversarial perturbations that can fool Im2Im frameworks for autonomous driving purposes.
We propose both quasi-physical and digital adversarial perturbations that can make Im2Im models yield unexpected results.
- Score: 30.280424503644486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have achieved impressive performance on
computer vision problems; however, they have been found to be vulnerable to
adversarial examples. For this reason, adversarial perturbations have recently
been studied from several angles. Most previous work, however, has focused on
image classification, and adversarial perturbations have not yet been studied
for image-to-image (Im2Im) translation tasks, which have shown great success on
paired and unpaired mapping problems in autonomous driving and robotics. This
paper examines different types of adversarial perturbations that can fool Im2Im
frameworks for autonomous driving purposes. We propose both quasi-physical and
digital adversarial perturbations that can make Im2Im models yield unexpected
results. We then empirically analyze these perturbations and show that they
generalize well under both paired settings (image synthesis) and unpaired
settings (style transfer). We also validate that there exist perturbation
thresholds beyond which the Im2Im mapping is disrupted or becomes impossible.
The existence of these perturbations reveals crucial weaknesses in Im2Im
models. Lastly, we show how the proposed perturbations affect output quality,
paving the way toward improving the robustness of current state-of-the-art
networks for autonomous driving.
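To make the digital-perturbation idea concrete, below is a minimal sketch of a PGD-style attack on a generic image-to-image generator. The generator handle `netG`, the input tensor `x`, and the distortion loss used here are illustrative assumptions, not the exact objective used in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact attack):
# a PGD-style digital perturbation that pushes an Im2Im generator's output
# G(x + delta) away from its clean output G(x), under an L_inf budget.
import torch
import torch.nn.functional as F

def perturb_im2im(netG, x, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Return an adversarially perturbed copy of x for generator netG."""
    netG.eval()
    for p in netG.parameters():          # freeze the generator; only delta is optimized
        p.requires_grad_(False)

    with torch.no_grad():
        y_clean = netG(x)                # reference translation of the clean input

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        y_adv = netG((x + delta).clamp(0, 1))
        # Gradient *ascent* on the output distortion: larger loss = more disrupted mapping.
        loss = F.mse_loss(y_adv, y_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)   # keep the perturbation quasi-imperceptible
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```

Sweeping `epsilon` with a sketch like this is one simple way to probe the perturbation thresholds mentioned in the abstract, beyond which the translated output degrades into unusable artifacts.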
Related papers
- On Inherent Adversarial Robustness of Active Vision Systems [7.803487547944363]
We show that two active vision methods - GFNet and FALcon - achieve 2-3 times greater robustness than a standard passive convolutional network under state-of-the-art adversarial attacks.
More importantly, we provide illustrative and interpretable visualization analysis that demonstrates how performing inference from distinct fixation points makes active vision methods less vulnerable to malicious inputs.
arXiv Detail & Related papers (2024-03-29T22:51:45Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples across deep neural networks.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show that our method can generate the image attack in real time while using state estimates to monitor when the attacker is proficient.
arXiv Detail & Related papers (2022-12-28T02:36:58Z)
- Detecting Adversarial Perturbations in Multi-Task Perception [32.9951531295576]
We propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks.
Adversarial perturbations are detected by inconsistencies between the extracted edges of the input image, the depth output, and the segmentation output (see the sketch after this list).
We show that, at a 5% false positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation.
arXiv Detail & Related papers (2022-03-02T15:25:17Z)
- Robust SleepNets [7.23389716633927]
In this study, we investigate eye closedness detection to prevent vehicle accidents related to driver disengagements and driver drowsiness.
We develop two models to detect eye closedness: a first model operating on eye images and a second on face images.
We adversarially attack both models with the Projected Gradient Descent, Fast Gradient Sign, and DeepFool methods and report the adversarial success rates.
arXiv Detail & Related papers (2021-02-24T20:48:13Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated that fools a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
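As a companion to the "Detecting Adversarial Perturbations in Multi-Task Perception" entry above, here is a minimal sketch of the edge-consistency idea: an input is flagged as adversarial when the edges of the image no longer agree with the edges of the predicted depth and segmentation maps. The `depth_net` and `seg_net` handles, the Sobel edge extractor, and the agreement threshold `tau` are illustrative assumptions, not that paper's published configuration.

```python
# Minimal sketch (assumed models and thresholds) of detecting adversarial
# inputs via edge consistency across an image, its predicted depth, and its
# predicted segmentation, in the spirit of the multi-task detection entry above.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Edge magnitude of a (B, 1, H, W) tensor via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def flag_adversarial(x, depth_net, seg_net, tau=0.25):
    """Return a boolean per-image mask: True where edge consistency breaks down."""
    with torch.no_grad():
        gray = x.mean(dim=1, keepdim=True)                    # (B, 1, H, W) intensity image
        depth = depth_net(x)                                   # assumed (B, 1, H, W) depth map
        seg = seg_net(x).argmax(dim=1, keepdim=True).float()   # class-index map as an image

        e_img, e_depth, e_seg = map(sobel_edges, (gray, depth, seg))

        def agree(a, b):
            # Cosine similarity between flattened edge maps as a crude agreement score.
            return F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1)

        score = torch.minimum(agree(e_img, e_depth), agree(e_img, e_seg))
    return score < tau
```

In practice, `tau` would be calibrated on clean data to hit a target false positive rate, analogous to the 5% operating point quoted above.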
This list is automatically generated from the titles and abstracts of the papers on this site.