Monocular Depth Estimators: Vulnerabilities and Attacks
- URL: http://arxiv.org/abs/2005.14302v1
- Date: Thu, 28 May 2020 21:25:21 GMT
- Title: Monocular Depth Estimators: Vulnerabilities and Attacks
- Authors: Alwyn Mathew, Aditya Prakash Patra, Jimson Mathew
- Abstract summary: Recent advancements in neural networks have led to reliable monocular depth estimation.
Deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection, and segmentation.
In this paper, we investigate the robustness of state-of-the-art monocular depth estimation networks against adversarial attacks.
- Score: 6.821598757786515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in neural networks have led to reliable monocular depth
estimation. Monocular depth estimation techniques have the upper hand over traditional
depth estimation techniques, as they only need one image during inference. Depth
estimation is one of the essential tasks in robotics, and monocular depth estimation has
a wide variety of safety-critical applications, such as self-driving cars and surgical
devices. Thus, the robustness of such techniques is crucial. Recent works have shown that
these deep neural networks are highly vulnerable to adversarial samples for tasks like
classification, detection, and segmentation. These adversarial samples can completely
ruin the output of the system, making their credibility in real-time deployment
questionable. In this paper, we investigate the robustness of state-of-the-art monocular
depth estimation networks against adversarial attacks. Our experiments show that tiny
perturbations on an image that are invisible to the naked eye (perturbation attack) and
corruption of less than about 1% of an image (patch attack) can degrade the depth
estimation drastically. We introduce a novel deep feature annihilation loss that corrupts
the hidden feature space representation, forcing the decoder of the network to output
poor depth maps. White-box and black-box tests confirm the effectiveness of the proposed
attack. We also perform adversarial example transferability tests, mainly cross-data
transferability.
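The abstract does not spell out the exact form of the deep feature annihilation loss, but the overall idea of the perturbation attack can be illustrated with a hypothetical PGD-style sketch: under a small L-infinity budget, a perturbation is optimized to push the encoder's hidden features away from their clean values, so the decoder receives corrupted representations and outputs a poor depth map. The `model.encoder` interface and all hyperparameters below are illustrative assumptions, not the authors' implementation; restricting the optimized region to a small rectangle of the image would give a rough analogue of the patch attack.

```python
import torch
import torch.nn.functional as F

def feature_annihilation_attack(model, image, epsilon=0.01, alpha=0.002, steps=20):
    """PGD-style perturbation attack that corrupts the encoder's hidden
    features so the decoder produces a poor depth map.

    `model.encoder(x)` is assumed to return a list of feature tensors;
    this interface and all hyperparameters are illustrative assumptions.
    """
    # Hidden features of the clean image, used as the reference to destroy.
    with torch.no_grad():
        clean_feats = [f.detach() for f in model.encoder(image)]

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_feats = model.encoder(image + delta)
        # Annihilation-style objective: drive the adversarial features away
        # from the clean features at every encoder stage.
        loss = -sum(F.mse_loss(a, c) for a, c in zip(adv_feats, clean_feats))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                  # step that increases feature distance
            delta.clamp_(-epsilon, epsilon)                     # keep the perturbation imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)    # keep the perturbed image in valid range
        delta.grad.zero_()
    return (image + delta).detach()
```

In a white-box setting the attacker computes these gradients on the target network directly; the black-box and cross-data transferability tests mentioned above instead apply perturbations crafted on one network or dataset to another.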
Related papers
- Uncertainty and Self-Supervision in Single-View Depth [0.8158530638728501]
Single-view depth estimation is an ill-posed problem because there are multiple solutions that explain 3D geometry from a single view.
Deep neural networks have been shown to be effective at capturing depth from a single view, but the majority of current methodologies are deterministic in nature.
We address this problem by quantifying the uncertainty of supervised single-view depth with Bayesian deep neural networks.
arXiv Detail & Related papers (2024-06-20T11:46:17Z) - Robust Adversarial Attacks Detection for Deep Learning based Relative
Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z) - A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures convergence time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Detecting Adversarial Perturbations in Multi-Task Perception [32.9951531295576]
We propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks.
Adversarial perturbations are detected through inconsistencies between the extracted edges of the input image, the depth output, and the segmentation output (a minimal sketch of this consistency check appears after this list).
We show that, under an assumption of a 5% false positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation.
arXiv Detail & Related papers (2022-03-02T15:25:17Z) - Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study on this problem and observes that the current monocular 3D detection problem can be simplified as an instance depth estimation problem.
arXiv Detail & Related papers (2021-07-29T16:30:33Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Adversarial Patch Attacks on Monocular Depth Estimation Networks [7.089737454146505]
We propose a method of adversarial patch attack on monocular depth estimation.
We generate artificial patterns that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed.
Our method can be implemented in the real world by physically placing the printed patterns in real scenes.
arXiv Detail & Related papers (2020-10-06T22:56:22Z) - Calibrating Self-supervised Monocular Depth Estimation [77.77696851397539]
In recent years, many methods have demonstrated the ability of neural networks to learn depth and pose changes in a sequence of images, using only self-supervision as the training signal.
We show that by incorporating prior information about the camera configuration and the environment, we can remove the scale ambiguity and predict depth directly, still using the self-supervised formulation and not relying on any additional sensors.
arXiv Detail & Related papers (2020-09-16T14:35:45Z) - Adversarial Attacks on Monocular Depth Estimation [27.657287164064687]
We present the first systematic study of adversarial attacks on monocular depth estimation.
In this paper, we first define a taxonomy of different attack scenarios for depth estimation.
We then adapt several state-of-the-art attack methods designed for classification to the field of depth estimation.
arXiv Detail & Related papers (2020-03-23T15:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.