SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
- URL: http://arxiv.org/abs/2308.03108v2
- Date: Wed, 20 Dec 2023 07:32:44 GMT
- Title: SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
- Authors: Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
- Abstract summary: We propose a novel Stealthy Adversarial Attack on MDE (SAAM).
It compromises MDE by either corrupting the estimated distance or causing an object to seamlessly blend into its surroundings.
We believe that this work sheds light on the threat of adversarial attacks in the context of MDE on edge devices.
- Score: 5.476763798688862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the vulnerability of monocular
depth estimation (MDE) to adversarial patches. We propose a novel
\underline{S}tealthy \underline{A}dversarial \underline{A}ttack on
\underline{M}DE (SAAM) that compromises MDE by either corrupting the
estimated distance or causing an object to seamlessly blend into its
surroundings. Our experiments demonstrate that the designed stealthy patch
successfully causes a DNN-based MDE model to misestimate the depth of
objects. In fact, our proposed adversarial patch achieves a significant 60\%
depth error while affecting 99\% of the target region. Importantly, despite its adversarial
nature, the patch maintains a naturalistic appearance, making it inconspicuous
to human observers. We believe that this work sheds light on the threat of
adversarial attacks in the context of MDE on edge devices. We hope it raises
awareness within the community about the potential real-life harm of such
attacks and encourages further research into developing more robust and
adaptive defense mechanisms.
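The abstract does not spell out the optimization, but patch attacks of this family generally follow the same template: composite the patch into the scene, then ascend the gradient of a depth-error objective while regularizing for stealthiness. Below is a minimal PyTorch sketch of that template; `mde`, `images`, `patch`, and `mask` are hypothetical placeholders, the total-variation term only gestures at SAAM's naturalness constraint, and none of this is the authors' actual code.

```python
import torch

def optimize_patch(mde, images, patch, mask, steps=1000, lr=1e-2, tv_weight=0.1):
    """Generic adversarial-patch loop (not SAAM's released code): maximize
    depth error inside the patched region while keeping the patch smooth."""
    with torch.no_grad():
        depth_clean = mde(images)               # reference prediction on clean input
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = images * (1 - mask) + patch.clamp(0, 1) * mask
        depth_adv = mde(patched)
        # Push predictions away from the clean depth in the affected region.
        attack_loss = -(mask * (depth_adv - depth_clean)).abs().mean()
        # Total-variation term as a crude stand-in for SAAM's naturalness goal.
        tv = (patch[..., 1:, :] - patch[..., :-1, :]).abs().mean() \
           + (patch[..., :, 1:] - patch[..., :, :-1]).abs().mean()
        loss = attack_loss + tv_weight * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

Physical patches of this kind are typically also trained under random scalings, rotations, and lighting changes (an expectation over transformations) so that they remain effective once printed and re-captured by a camera.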
Related papers
- Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack [1.4272256806865107]
This paper presents a novel adversarial attack using practical patches that mimic manhole covers to deceive MDE and SS models.
We use Depth Planar Mapping to precisely position these patches on road surfaces, enhancing the attack's effectiveness.
Our experiments show that these adversarial patches cause a 43% relative error in MDE and achieve a 96% attack success rate in SS.
arXiv Detail & Related papers (2024-08-27T08:48:21Z)
- Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks [36.16206095819624]
Monocular Depth Estimation plays a vital role in applications such as autonomous driving.
Traditional adversarial training methods, which require ground-truth labels, are not directly applicable to MDE models that lack ground-truth depth.
We introduce a novel self-supervised adversarial training approach for MDE models, leveraging view synthesis without the need for ground-truth depth.
arXiv Detail & Related papers (2024-06-09T17:02:28Z)
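The paper's code is not reproduced here; the sketch below only illustrates the core idea of view-synthesis supervision under attack, assuming a hypothetical differentiable reprojection op `warp_to_target` and known pose and intrinsics.

```python
import torch
import torch.nn.functional as F

def self_sup_adv_step(mde, warp_to_target, tgt, src, pose, K, eps=4/255, pgd_steps=3):
    """One adversarial-training step without ground-truth depth: the
    supervision signal is the photometric error between the target view
    and the source view warped with the predicted depth.
    `warp_to_target` is a hypothetical differentiable reprojection op."""
    # Inner loop: find a perturbation that maximizes the view-synthesis loss.
    delta = torch.zeros_like(tgt).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(pgd_steps):
        depth = mde((tgt + delta).clamp(0, 1))
        recon = warp_to_target(src, depth, pose, K)
        loss = F.l1_loss(recon, tgt)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eps / 2 * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer step: train the model to keep the loss low under that perturbation.
    depth = mde((tgt + delta.detach()).clamp(0, 1))
    recon = warp_to_target(src, depth, pose, K)
    return F.l1_loss(recon, tgt)  # backprop this through mde's parameters
```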
- SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications [7.631454773779265]
We introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to disrupt monocular depth estimation (MDE) in autonomous navigation applications.
Our patch is crafted to selectively undermine MDE in two distinct ways: by distorting estimated distances or by creating the illusion of an object disappearing from the system's perspective.
Our approach induces a mean depth estimation error surpassing 0.5, impacting up to 99% of the targeted region for CNN-based MDE models.
arXiv Detail & Related papers (2024-03-18T07:01:21Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation [8.187375378049353]
Monocular depth estimation (MDE) has experienced significant advancements in performance, largely attributed to the integration of innovative architectures such as convolutional neural networks (CNNs) and Transformers.
The susceptibility of these models to adversarial attacks has emerged as a noteworthy concern, especially in domains where safety and security are paramount.
This concern holds particular weight for MDE due to its critical role in applications like autonomous driving and robotic navigation, where accurate scene understanding is pivotal.
arXiv Detail & Related papers (2023-03-02T15:31:53Z)
- Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks [17.28712660119884]
We propose a novel adversarial training method for self-supervised MDE models based on view synthesis, without using ground-truth depth.
We improve adversarial robustness against physical-world attacks using L0-norm-bounded perturbation in training.
Results on two representative MDE networks show that we achieve better robustness against various adversarial attacks with nearly no benign performance degradation.
arXiv Detail & Related papers (2023-01-31T09:12:16Z)
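As a rough illustration of what an L0-bounded perturbation generator used during such training could look like (perturb only the k highest-gradient pixel locations), not the paper's exact procedure:

```python
import torch

def l0_perturb(model, loss_fn, x, target, k=500, step=0.1, iters=10):
    """Perturb at most k pixel locations: mask out everything except the
    pixels with the largest gradient magnitude. Illustrative only."""
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Rank pixel locations by gradient energy summed over channels.
        energy = grad.pow(2).sum(dim=1, keepdim=True)          # (B,1,H,W)
        thresh = energy.flatten(1).topk(k, dim=1).values[:, -1]  # k-th largest
        mask = (energy >= thresh.view(-1, 1, 1, 1)).float()
        x_adv = (x_adv + step * grad.sign() * mask).clamp(0, 1).detach()
    return x_adv
```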
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint of adversarial vulnerability: the cause is the confounder ubiquitously existing in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
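The summary names the method but not the update rule; a plausible MI-FGSM-style momentum update on the patch pixels looks like the following sketch, with all names and defaults illustrative:

```python
import torch

def momentum_patch_attack(model, loss_fn, images, target, patch, mask,
                          steps=200, alpha=0.05, mu=0.9):
    """Adversarial patch attack with momentum, sketched as an MI-FGSM-style
    update on the patch pixels. Not the paper's actual implementation."""
    patch = patch.clone()
    velocity = torch.zeros_like(patch)
    for _ in range(steps):
        patch.requires_grad_(True)
        patched = images * (1 - mask) + patch * mask
        loss = loss_fn(model(patched), target)   # e.g. error in predicted count
        grad, = torch.autograd.grad(loss, patch)
        # Momentum: accumulate L1-normalized gradients to stabilize direction.
        velocity = mu * velocity + grad / grad.abs().mean().clamp_min(1e-12)
        patch = (patch + alpha * velocity.sign()).clamp(0, 1).detach()
    return patch
```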
- Perceptual Adversarial Robustness: Defense Against Unseen Threat Models [58.47179090632039]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception.
Under the neural perceptual threat model (NPTM), we develop novel perceptual adversarial attacks and defenses.
Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks.
arXiv Detail & Related papers (2020-06-22T22:40:46Z)
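The paper's PPGD/LPA algorithms are not described in this summary; the sketch below substitutes a simple Lagrangian relaxation with LPIPS as the perceptual distance (the `lpips` package and every parameter here are assumptions, not the paper's method):

```python
import torch
import lpips  # pip install lpips; LPIPS as a proxy perceptual distance

def perceptual_attack(model, ce, x, y, bound=0.05, lam=10.0, steps=40, lr=1e-2):
    """Maximize classification loss while softly penalizing LPIPS distance
    beyond `bound` (a Lagrangian relaxation, illustrative only)."""
    dist_fn = lpips.LPIPS(net='alex').to(x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        # LPIPS expects inputs scaled to [-1, 1].
        dist = dist_fn(2 * x_adv - 1, 2 * x - 1).mean()
        loss = -ce(model(x_adv), y) + lam * torch.relu(dist - bound)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta.detach()).clamp(0, 1)
```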
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in deep neural network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.