APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation
- URL: http://arxiv.org/abs/2303.01351v3
- Date: Mon, 5 Aug 2024 16:37:37 GMT
- Title: APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation
- Authors: Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
- Abstract summary: Monocular depth estimation (MDE) has experienced significant advancements in performance, largely attributed to the integration of innovative architectures such as convolutional neural networks (CNNs) and Transformers.
The susceptibility of these models to adversarial attacks has emerged as a noteworthy concern, especially in domains where safety and security are paramount.
This concern holds particular weight for MDE due to its critical role in applications like autonomous driving and robotic navigation, where accurate scene understanding is pivotal.
- Score: 8.187375378049353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent times, monocular depth estimation (MDE) has experienced significant advancements in performance, largely attributed to the integration of innovative architectures such as convolutional neural networks (CNNs) and Transformers. Nevertheless, the susceptibility of these models to adversarial attacks has emerged as a noteworthy concern, especially in domains where safety and security are paramount. This concern holds particular weight for MDE due to its critical role in applications like autonomous driving and robotic navigation, where accurate scene understanding is pivotal. To assess the vulnerability of CNN-based depth prediction methods, recent work has attempted to design adversarial patches against MDE. However, the existing approaches fall short of inducing a comprehensive and substantially disruptive impact on the vision system; their influence is partial and confined to specific local regions. These methods lead to erroneous depth predictions only within the region where the patch overlaps the input image, without considering the characteristics of the target object, such as its size, shape, and position. In this paper, we introduce a novel adversarial patch named APARATE. This patch can selectively undermine MDE in two distinct ways: by distorting the estimated distances or by creating the illusion of an object disappearing from the perspective of the autonomous system. Notably, APARATE is designed to be sensitive to the shape and scale of the target object, and its influence extends beyond the patch's immediate proximity. APARATE results in a mean depth estimation error surpassing $0.5$, significantly impacting as much as $99\%$ of the targeted region when applied to CNN-based MDE models. Furthermore, it yields a significant error of $0.34$ and exerts substantial influence over $94\%$ of the target region in the context of Transformer-based MDE.
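The paper itself includes no code; purely as a hedged illustration of the general patch-optimization idea it describes, the PyTorch sketch below maximizes the depth error inside a target mask. `depth_model`, the data loader, and all hyperparameters are assumptions, and APARATE's actual shape- and scale-sensitive losses are more elaborate than this.

```python
# Minimal sketch of adversarial patch optimization against an MDE model.
# `depth_model` maps (B,3,H,W) images to (B,1,H,W) depth; `loader` yields
# (image, mask) pairs where the binary mask marks the target object region.
import torch
import torch.nn.functional as F

def apply_patch(image, patch, mask):
    """Paste `patch` into `image` wherever `mask` is 1."""
    return image * (1 - mask) + patch * mask

def optimize_patch(depth_model, loader, steps=1000, lr=0.01, device="cpu"):
    patch = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (image, mask) in zip(range(steps), loader):
        image, mask = image.to(device), mask.to(device)
        # Stretch the 64x64 patch to full image size; the binary mask
        # decides where it is actually pasted (the target object region).
        scaled = F.interpolate(patch, size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
        with torch.no_grad():
            d_clean = depth_model(image)  # reference depth on the clean image
        d_adv = depth_model(apply_patch(image, scaled, mask))
        # Maximize mean absolute depth error inside the targeted region.
        loss = -((d_adv - d_clean).abs() * mask).sum() / mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep the patch a valid image
    return patch.detach()
```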
Related papers
- Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack [1.4272256806865107]
This paper presents a novel adversarial attack using practical patches that mimic manhole covers to deceive MDE and semantic segmentation (SS) models.
We use Depth Planar Mapping to precisely position these patches on road surfaces, enhancing the attack's effectiveness.
Our experiments show that these adversarial patches cause a 43% relative error in MDE and achieve a 96% attack success rate in SS.
arXiv Detail & Related papers (2024-08-27T08:48:21Z) - Physical Adversarial Attack on Monocular Depth Estimation via Shape-Varying Patches [8.544722337960359]
- Physical Adversarial Attack on Monocular Depth Estimation via Shape-Varying Patches [8.544722337960359]
We propose a physics-based adversarial attack on monocular depth estimation, employing a framework called Attack with Shape-Varying Patches (ASP).
We introduce various mask shapes, including quadrilateral, rectangular, and circular masks, to enhance the flexibility and efficiency of the attack.
Experimental results demonstrate that our attack method generates an average depth error of 18 meters on the target car with a patch area of 1/9, affecting over 98% of the target area.
arXiv Detail & Related papers (2024-07-24T14:29:05Z) - SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications [7.631454773779265]
- SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications [7.631454773779265]
We introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to disrupt monocular depth estimation (MDE) in autonomous navigation applications.
Our patch is crafted to selectively undermine MDE in two distinct ways: by distorting estimated distances or by creating the illusion of an object disappearing from the system's perspective.
Our approach induces a mean depth estimation error surpassing 0.5, impacting up to 99% of the targeted region for CNN-based MDE models.
arXiv Detail & Related papers (2024-03-18T07:01:21Z) - On Robustness and Generalization of ML-Based Congestion Predictors to
Valid and Imperceptible Perturbations [9.982978359852494]
Recent work has demonstrated that neural networks are generally vulnerable to small, carefully chosen perturbations of their input.
We show that state-of-the-art CNN and GNN-based congestion models exhibit brittleness to imperceptible perturbations.
Our work indicates that CAD engineers should be cautious when integrating neural network-based mechanisms in EDA flows.
arXiv Detail & Related papers (2024-02-29T20:11:47Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation [5.476763798688862]
- SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation [5.476763798688862]
We propose a novel Stealthy Adversarial Attack on MDE (SAAM).
It compromises MDE by either corrupting the estimated distance or causing an object to seamlessly blend into its surroundings.
We believe that this work sheds light on the threat of adversarial attacks in the context of MDE on edge devices.
arXiv Detail & Related papers (2023-08-06T13:29:42Z) - Learning Feature Decomposition for Domain Adaptive Monocular Depth
Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z) - On the Real-World Adversarial Robustness of Real-Time Semantic
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - Attribute-Guided Adversarial Training for Robustness to Natural
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Wasserstein Distances for Stereo Disparity Estimation [62.09272563885437]
Existing approaches to depth or disparity estimation output a distribution over a set of pre-defined discrete values.
This leads to inaccurate results when the true depth or disparity does not match any of these values.
We address these issues using a new neural network architecture that is capable of outputting arbitrary depth values.
arXiv Detail & Related papers (2020-07-06T21:37:50Z) - Hold me tight! Influence of discriminative features on deep network
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.