Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches
- URL: http://arxiv.org/abs/2207.04718v1
- Date: Mon, 11 Jul 2022 08:59:09 GMT
- Title: Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches
- Authors: Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu and Xiangyu Zhang
- Abstract summary: We develop an attack against learning-based Monocular Depth Estimation (MDE).
We balance the stealth and effectiveness of our attack with object-oriented adversarial design, sensitive region localization, and natural style camouflage.
Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models.
- Score: 18.58673451901394
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has substantially boosted the performance of Monocular Depth
Estimation (MDE), a critical component in fully vision-based autonomous driving
(AD) systems (e.g., Tesla and Toyota). In this work, we develop an attack
against learning-based MDE. In particular, we use an optimization-based method
to systematically generate stealthy physical-object-oriented adversarial
patches to attack depth estimation. We balance the stealth and effectiveness of
our attack with object-oriented adversarial design, sensitive region
localization, and natural style camouflage. Using real-world driving scenarios,
we evaluate our attack on concurrent MDE models and a representative downstream
task for AD (i.e., 3D object detection). Experimental results show that our
method can generate stealthy, effective, and robust adversarial patches for
different target objects and models, achieving a mean depth estimation error of
more than 6 meters and a 93% attack success rate (ASR) in object detection with
a patch covering 1/9 of the vehicle's rear area. Field tests on three different
driving routes with a real vehicle indicate that our attack causes a mean depth
estimation error of over 6 meters and reduces the object detection rate from
90.70% to 5.16% in continuous video frames.
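To make the optimization concrete, below is a minimal sketch of gradient-based patch generation against an MDE model, assuming white-box access to gradients. The stub network, image sizes, patch region, and the single "push depth far away" loss are illustrative assumptions; the paper's object-oriented design, sensitive-region localization, and natural style camouflage are not reproduced here.

```python
import torch

def mde_stub(img: torch.Tensor) -> torch.Tensor:
    # Stand-in for a pretrained MDE network; returns a fake (B,1,H,W) depth map.
    return img.mean(dim=1, keepdim=True)

img = torch.rand(1, 3, 192, 640)                  # stand-in driving frame
mask = torch.zeros(1, 1, 192, 640)
mask[..., 120:160, 280:360] = 1.0                 # assumed patch region on the target vehicle
patch = torch.rand(1, 3, 192, 640, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    adv = img * (1 - mask) + patch.clamp(0, 1) * mask  # paste patch into the scene
    depth = mde_stub(adv)
    # Maximize the predicted depth inside the patched region ("push it far away").
    loss = -(depth * mask).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A physically robust version would additionally average the loss over sampled transformations (viewpoint, scale, lighting), which the paper's real-world evaluation implies but which is omitted here for brevity.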
Related papers
- Optical Lens Attack on Monocular Depth Estimation for Autonomous Driving [12.302132670292316]
We present LensAttack, a physical attack that strategically places optical lenses on the camera of an autonomous vehicle to manipulate the perceived object depths.
We first develop a mathematical model that outlines the parameters of the attack, followed by simulations and real-world evaluations to assess its efficacy on state-of-the-art MDE models.
The results reveal that LensAttack can significantly disrupt the depth estimation processes in AD systems, posing a serious threat to their reliability and safety.
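The paper's mathematical model is not reproduced in the abstract; as rough intuition, the toy calculation below applies the standard thin-lens equation and assumes, purely for illustration, that a monocular network's depth estimate scales inversely with the object's apparent size, so a lens that minifies by a factor m makes the object look roughly 1/m times farther.

```python
# Toy thin-lens calculation (illustration only; not the paper's model).
def image_distance(f: float, d_o: float) -> float:
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(f: float, d_o: float) -> float:
    return abs(image_distance(f, d_o) / d_o)

true_depth = 20.0   # meters to the target object
f_lens = 0.5        # hypothetical focal length of the attacker's lens (meters)
m = magnification(f_lens, true_depth)
print(f"magnification {m:.4f} -> perceived depth ~{true_depth / m:.0f} m")
```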
arXiv Detail & Related papers (2024-10-31T20:23:27Z) - Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
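The abstract does not spell out the loss; one common instantiation of an evidential regression loss is the Normal-Inverse-Gamma negative log-likelihood from deep evidential regression (Amini et al., 2020), sketched below. The BEV-specific wiring and the regularizer weight are assumptions.

```python
import math
import torch

def nig_nll(y, gamma, nu, alpha, beta):
    # NLL of a Normal-Inverse-Gamma evidential head (gamma, nu, alpha, beta).
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))

def nig_regularizer(y, gamma, nu, alpha):
    # Penalize evidence on errors so confidence shrinks off-distribution.
    return (y - gamma).abs() * (2.0 * nu + alpha)

y = torch.randn(8)                    # regression targets (stand-in)
gamma = torch.randn(8)                # predicted mean
nu = torch.rand(8) + 0.1              # evidence parameters from softplus heads
alpha = torch.rand(8) + 1.1           # alpha > 1 keeps uncertainties finite
beta = torch.rand(8) + 0.1
loss = (nig_nll(y, gamma, nu, alpha, beta)
        + 0.01 * nig_regularizer(y, gamma, nu, alpha)).mean()
```

Under this parameterization, epistemic uncertainty is beta / (nu * (alpha - 1)) and aleatoric uncertainty is beta / (alpha - 1), which is what flags out-of-distribution scenes and poorly localized objects.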
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z) - Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack [1.4272256806865107]
This paper presents a novel adversarial attack using practical patches that mimic manhole covers to deceive MDE and SS models.
We use Depth Planar Mapping to precisely position these patches on road surfaces, enhancing the attack's effectiveness.
Our experiments show that these adversarial patches cause a 43% relative error in MDE and achieve a 96% attack success rate in SS.
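"Depth Planar Mapping" is not specified in the abstract; the sketch below shows one standard way to place a patch on the road plane: project its corners with assumed camera intrinsics and extrinsics, then warp the texture into the frame with a homography. All camera parameters, the file name, and the patch pose are hypothetical.

```python
import numpy as np
import cv2

# Assumed camera: 800 px focal length, 640x480 frame, mounted 1.5 m above the
# road and pitched 10 degrees down. All values are illustrative.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pitch = np.deg2rad(10.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(pitch), -np.sin(pitch)],
              [0.0, np.sin(pitch), np.cos(pitch)]])
t = np.array([0.0, 1.5, 0.0])

def project(p_world: np.ndarray) -> np.ndarray:
    """Pinhole projection of a world point (X, Y, Z); road plane is Y = 0."""
    p_cam = R @ p_world + t
    uv = K @ p_cam
    return uv[:2] / uv[2]

# Corners of a 1 m x 1 m patch lying flat on the road, 8-9 m ahead.
corners = [np.array([x, 0.0, z]) for x, z in
           [(-0.5, 9.0), (0.5, 9.0), (0.5, 8.0), (-0.5, 8.0)]]
quad = np.float32([project(p) for p in corners])

patch = cv2.imread("adv_manhole.png")        # hypothetical optimized texture
h, w = patch.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
H = cv2.getPerspectiveTransform(src, quad)

frame = np.zeros((480, 640, 3), np.uint8)    # stand-in for the camera frame
warped = cv2.warpPerspective(patch, H, (640, 480))
mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, (640, 480))
frame[mask > 0] = warped[mask > 0]           # composite the patch into the scene
```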
arXiv Detail & Related papers (2024-08-27T08:48:21Z) - ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
Prior work proposed a digital hijacking attack that can cause dangerous driving scenarios.
We introduce a novel physical-world adversarial patch attack, ControlLoc, designed to exploit hijacking vulnerabilities across the entire Autonomous Driving (AD) visual perception pipeline.
arXiv Detail & Related papers (2024-06-09T14:53:50Z) - SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications [7.631454773779265]
We introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to disrupt monocular depth estimation (MDE) in autonomous navigation applications.
Our patch is crafted to selectively undermine MDE in two distinct ways: by distorting estimated distances or by creating the illusion of an object disappearing from the system's perspective.
Our approach induces a mean depth estimation error surpassing 0.5, impacting up to 99% of the targeted region for CNN-based MDE models.
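The two attack modes map naturally onto two patch-optimization objectives. The sketch below is an assumed formulation (the paper's exact losses are not given in the abstract): one loss maximizes depth error in the patched region, the other pulls the region's depth toward the background so the object appears to vanish.

```python
import torch

def loss_distort(pred_depth, gt_depth, mask):
    # Mode 1: maximize depth error inside the targeted region.
    return -(((pred_depth - gt_depth) ** 2) * mask).sum() / mask.sum()

def loss_vanish(pred_depth, bg_depth, mask):
    # Mode 2: match the surrounding background depth so the object "disappears".
    return (((pred_depth - bg_depth) ** 2) * mask).sum() / mask.sum()
```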
arXiv Detail & Related papers (2024-03-18T07:01:21Z) - AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware
Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
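DART3D's exact formulation is not given in the abstract; as a generic reference point, the sketch below shows plain PGD adversarial training with a stand-in classifier head. The depth-aware, uncertainty-weighted parts of the method are not modeled here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in head
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def pgd(x, y, eps=8 / 255, alpha=2 / 255, steps=5):
    # Projected gradient descent inside an L-infinity ball around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
for _ in range(10):                   # adversarial training loop
    adv = pgd(x, y)
    opt.zero_grad()
    loss_fn(model(adv), y).backward()
    opt.step()
```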
arXiv Detail & Related papers (2023-09-03T07:05:32Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object
Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Geometry Uncertainty Projection Network for Monocular 3D Object
Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
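The error amplification GUP Net addresses follows from the pinhole geometry commonly used in monocular 3D detection; a hedged illustration (symbols assumed here: focal length f, physical object height H_3D, pixel height h_2D):

```latex
d = \frac{f\,H_{3D}}{h_{2D}}, \qquad
\left|\frac{\partial d}{\partial h_{2D}}\right| = \frac{f\,H_{3D}}{h_{2D}^{2}} = \frac{d}{h_{2D}}, \qquad
\sigma_d \approx \frac{1}{h_{2D}}\sqrt{f^{2}\sigma_{H}^{2} + d^{2}\sigma_{h}^{2}}
```

A small error in the estimated pixel height is thus amplified by d / h_2D, which grows with distance; the expression for sigma_d is first-order propagation of assumed Gaussian errors, in the spirit of the geometry-guided uncertainty described above.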
arXiv Detail & Related papers (2021-07-29T06:59:07Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer to safer self-driving under unseen conditions with limited training data.
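As a schematic of the optimization only (not the paper's pipeline), the sketch below perturbs a small rooftop point set to minimize a detector confidence score; the differentiable stand-in score replaces a real LiDAR detector and renderer, both of which the actual method requires.

```python
import torch

# Stand-in for a detector's confidence on the target vehicle (hypothetical).
detector_score = lambda pts: torch.sigmoid(pts[:, 2].mean())

roof_pts = torch.rand(64, 3)                 # hypothetical rooftop object points
offsets = torch.zeros_like(roof_pts, requires_grad=True)
opt = torch.optim.Adam([offsets], lr=0.01)

for _ in range(100):
    shaped = roof_pts + offsets.clamp(-0.1, 0.1)  # keep the object physically small
    loss = detector_score(shaped)                 # minimize detection confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```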
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.