Adversarial Attacks on Monocular Depth Estimation
- URL: http://arxiv.org/abs/2003.10315v1
- Date: Mon, 23 Mar 2020 15:04:30 GMT
- Title: Adversarial Attacks on Monocular Depth Estimation
- Authors: Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo
- Abstract summary: We present the first systematic study of adversarial attacks on monocular depth estimation.
In this paper, we first define a taxonomy of different attack scenarios for depth estimation.
We then adapt several state-of-the-art attack methods designed for classification to the field of depth estimation.
- Score: 27.657287164064687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances of deep learning have brought exceptional performance on many
computer vision tasks such as semantic segmentation and depth estimation.
However, the vulnerability of deep neural networks to adversarial examples
has caused grave concerns for real-world deployment. In this paper, we present
to the best of our knowledge the first systematic study of adversarial attacks
on monocular depth estimation, an important task of 3D scene understanding in
scenarios such as autonomous driving and robot navigation. In order to
understand the impact of adversarial attacks on depth estimation, we first
define a taxonomy of different attack scenarios for depth estimation, including
non-targeted attacks, targeted attacks and universal attacks. We then adapt
several state-of-the-art attack methods designed for classification to the field of
depth estimation. In addition, multi-task attacks are introduced to further improve
the attack performance for universal attacks. Experimental results show that it
is possible to generate significant errors on depth estimation. In particular,
we demonstrate that our methods can conduct targeted attacks on given objects
(such as a car), resulting in depth estimates 3-4x the ground-truth value
(e.g., from 20m to 80m).
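To make the attack setting concrete, the following is a minimal sketch of how a PGD-style perturbation attack from the classification literature could be adapted to depth estimation, in the spirit of the taxonomy above. It is not the paper's implementation: the depth network `depth_net`, the assumption of images in [0, 1], the L-inf budget `eps`, the step size `alpha`, and the `mask`/`target_depth` arguments are all illustrative. The non-targeted branch pushes predictions away from the clean depth map; the targeted branch pulls the predicted depth inside a given object mask toward a chosen value (e.g., 80m instead of 20m).

```python
# Hedged sketch: a PGD-style attack adapted from classification to monocular
# depth estimation. `depth_net` maps an image tensor to a per-pixel depth map;
# all hyperparameters below are illustrative assumptions, not the paper's.
import torch

def pgd_depth_attack(depth_net, image, eps=8/255, alpha=2/255, steps=40,
                     target_depth=None, mask=None):
    """Return an adversarial image within an L-inf ball of radius eps."""
    depth_net.eval()
    with torch.no_grad():
        clean_depth = depth_net(image)            # reference prediction

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        pred = depth_net(adv)
        if target_depth is None:
            # Non-targeted: maximize deviation from the clean prediction.
            loss = torch.nn.functional.l1_loss(pred, clean_depth)
            sign = 1.0
        else:
            # Targeted: pull the masked region (e.g., a car) toward target_depth.
            region = mask if mask is not None else torch.ones_like(pred)
            loss = (region * (pred - target_depth).abs()).sum() / region.sum()
            sign = -1.0
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + sign * alpha * grad.sign()
            adv = image + (adv - image).clamp(-eps, eps)  # stay in the eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep a valid image
        adv = adv.detach()
    return adv
```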
Related papers
- Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Deviations in Representations Induced by Adversarial Attacks [0.0]
Research has shown that deep learning models are vulnerable to adversarial attacks.
This finding brought about a new direction in research, whereby algorithms were developed to attack and defend vulnerable networks.
We present a method for measuring and analyzing the deviations in representations induced by adversarial attacks.
arXiv Detail & Related papers (2022-11-07T17:40:08Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - The Space of Adversarial Strategies [6.295859509997257]
Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade.
We propose a systematic approach to characterize worst-case (i.e., optimal) adversaries.
arXiv Detail & Related papers (2022-09-09T20:53:11Z) - Adversarial Attacks on Monocular Pose Estimation [13.7258515433446]
We study the relation between adversarial perturbations targeting monocular depth and pose estimation networks.
Our experiments show how the generated perturbations lead to notable errors in relative rotation and translation predictions.
arXiv Detail & Related papers (2022-07-14T16:12:31Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Geometry Uncertainty Projection Network for Monocular 3D Object
Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
arXiv Detail & Related papers (2021-07-29T06:59:07Z) - Adversarial Patch Attacks on Monocular Depth Estimation Networks [7.089737454146505]
We propose a method of adversarial patch attack on monocular depth estimation.
We generate artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed.
Our method can be implemented in the real world by physically placing the printed patterns in real scenes; a rough optimization sketch follows this list.
arXiv Detail & Related papers (2020-10-06T22:56:22Z) - Adversarial Attacks and Detection on Reinforcement Learning-Based
Interactive Recommender Systems [47.70973322193384]
Adversarial attacks are challenging to detect at an early stage.
We propose attack-agnostic detection on reinforcement learning-based interactive recommendation systems.
We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks.
arXiv Detail & Related papers (2020-06-14T15:41:47Z) - Monocular Depth Estimators: Vulnerabilities and Attacks [6.821598757786515]
Recent advancements in neural networks have led to reliable monocular depth estimation.
Deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection and segmentation.
In this paper, we investigate the vulnerability of the most state-of-the-art monocular depth estimation networks to adversarial attacks.
arXiv Detail & Related papers (2020-05-28T21:25:21Z)
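As referenced in the adversarial-patch entry above, the sketch below illustrates, under stated assumptions rather than the cited paper's actual procedure, how such a patch could be optimized: a learnable patch is pasted into a fixed image region and updated so that the predicted depth under the patch drifts toward a chosen target value. The `depth_net`, the patch location, the output resolution matching the input, and the 80m target are all hypothetical.

```python
# Hedged sketch of adversarial patch optimization against a monocular depth
# network. All names and values are illustrative assumptions.
import torch

def optimize_depth_patch(depth_net, images, patch_size=64, target_depth=80.0,
                         steps=200, lr=0.01, top=100, left=100):
    """Optimize a square patch that skews depth predictions in its region.

    `images` is an iterable of (1, 3, H, W) tensors in [0, 1]; `depth_net`
    is assumed to map such an image to a (1, 1, H, W) depth map.
    """
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    depth_net.eval()
    for _ in range(steps):
        for img in images:
            x = img.clone()
            # Paste the (clamped) patch into a fixed region of the image.
            x[:, :, top:top + patch_size, left:left + patch_size] = patch.clamp(0, 1)
            pred = depth_net(x)
            region = pred[:, :, top:top + patch_size, left:left + patch_size]
            # Pull the predicted depth under the patch toward the target value.
            loss = (region - target_depth).abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```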
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.