On the Adversarial Robustness of Camera-based 3D Object Detection
- URL: http://arxiv.org/abs/2301.10766v2
- Date: Fri, 19 Jan 2024 01:51:45 GMT
- Title: On the Adversarial Robustness of Camera-based 3D Object Detection
- Authors: Shaoyuan Xie, Zichao Li, Zeyu Wang, Cihang Xie
- Abstract summary: We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions. We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks, that depth-estimation-free approaches have the potential to show stronger robustness, and that incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
- Score: 21.091078268929667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, camera-based 3D object detection has gained widespread
attention for its ability to achieve high performance with low computational
cost. However, the robustness of these methods to adversarial attacks has not
been thoroughly examined, especially when considering their deployment in
safety-critical domains like autonomous driving. In this study, we conduct the
first comprehensive investigation of the robustness of leading camera-based 3D
object detection approaches under various adversarial conditions. We
systematically analyze the resilience of these models under two attack
settings, white-box and black-box, focusing on two primary objectives:
classification and localization. Additionally, we delve into two types of
adversarial attack techniques: pixel-based and patch-based. Our experiments
yield four interesting findings: (a) bird's-eye-view-based representations
exhibit stronger robustness against localization attacks; (b)
depth-estimation-free approaches have the potential to show stronger
robustness; (c) accurate depth estimation effectively improves robustness for
depth-estimation-based methods; (d) incorporating multi-frame benign inputs can
effectively mitigate adversarial attacks. We hope our findings can steer the
development of future camera-based object detection models with enhanced
adversarial robustness.
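As a minimal illustration of the pixel-based, white-box attack setting studied above, the following sketches L-infinity projected gradient descent (PGD) against a toy linear scorer. The linear model and the `pgd_pixel_attack` helper are illustrative assumptions for exposition, not the paper's actual attack code, which targets full 3D detectors.

```python
import numpy as np

def pgd_pixel_attack(x, w, b, y, eps=0.3, alpha=0.1, steps=10):
    """L-infinity PGD on a toy linear scorer f(x) = w @ x + b.

    Untargeted attack: drive the margin y * f(x) down by ascending
    its negative, projecting back into the eps-ball after each step.
    """
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        # d(-y * f)/dx = -y * w  (analytic, since f is linear)
        grad = -y * w
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

Against a real detector, `grad` would come from backpropagation through the network, and the loss would target either the classification scores or the box-regression outputs, matching the paper's two attack objectives.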
Related papers
- Instance-aware Multi-Camera 3D Object Detection with Structural Priors Mining and Self-Boosting Learning [93.71280187657831]
The camera-based bird's-eye-view (BEV) perception paradigm has made significant progress in the autonomous driving field.
We propose IA-BEV, which integrates image-plane instance awareness into the depth estimation process within a BEV-based detector.
arXiv Detail & Related papers (2023-12-13T09:24:42Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- Towards Domain Generalization for Multi-view 3D Object Detection in Bird-Eye-View [11.958753088613637]
We first analyze the causes of the domain gap for the MV3D-Det task.
To acquire a robust depth prediction, we propose to decouple the depth estimation from intrinsic parameters of the camera.
We modify the focal length values to create multiple pseudo-domains and construct an adversarial training loss to encourage the feature representation to be more domain-agnostic.
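The focal-length manipulation described above can be sketched as follows, assuming a standard pinhole camera model. The `make_pseudo_domain` helper is a hypothetical illustration of the idea, not the paper's code: scaling the focal length by `s` while rescaling metric depth by the same factor leaves pixel projections unchanged, which is what lets modified intrinsics act as pseudo-domains.

```python
import numpy as np

def make_pseudo_domain(K, scale):
    """Scale the focal length of a 3x3 pinhole intrinsic matrix K.

    Under the pinhole model, u = fx * X / Z + cx, so multiplying
    fx by `scale` and Z by the same factor keeps (u, v) fixed.
    Returns the modified intrinsics and that depth rescale factor.
    """
    K_new = K.astype(float).copy()
    K_new[0, 0] *= scale  # fx
    K_new[1, 1] *= scale  # fy
    depth_factor = scale  # Z' = Z * scale preserves projections
    return K_new, depth_factor
```

Training on several such pseudo-domains (e.g. `scale` drawn from a small range) with an adversarial domain classifier is one way to push the features toward domain-agnosticism, as the summary above describes.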
arXiv Detail & Related papers (2023-03-03T02:59:13Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.