Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving
- URL: http://arxiv.org/abs/2312.01468v2
- Date: Tue, 9 Jan 2024 06:36:23 GMT
- Title: Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving
- Authors: Bo Yang, Xiaoyu Ji, Zizhi Jin, Yushi Cheng, Wenyuan Xu
- Abstract summary: This study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection.
We introduce an attack technique that, by simply adding a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model.
- Score: 17.618527727914163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our study assesses the adversarial robustness of LiDAR-camera fusion models
in 3D object detection. We introduce an attack technique that, by simply adding
a limited number of physically constrained adversarial points above a car, can
make the car undetectable by the fusion model. Experimental results reveal that
even without changes to the image data channel, the fusion model can be
deceived solely by manipulating the LiDAR data channel. This finding raises
safety concerns in the field of autonomous driving. Further, we explore how the
quantity of adversarial points, the distance between the front-near car and the
LiDAR-equipped car, and various angular factors affect the attack success rate.
We believe our research can contribute to the understanding of multi-sensor
robustness, offering insights and guidance to enhance the safety of autonomous
driving.
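The attack described above perturbs only the LiDAR channel. As a rough illustration, the sketch below is not the authors' implementation; the function name, point budget, and constraint region are assumptions chosen to mirror the idea of appending a small number of physically constrained points just above the target car's roof while leaving the camera image untouched.

```python
# Minimal sketch (hypothetical, not the paper's code): inject a small budget of
# adversarial points into a constrained region above a target car's roof.
import numpy as np

def inject_points_above_car(point_cloud, car_center, car_dims,
                            num_points=20, max_height=0.5, rng=None):
    """Append `num_points` points inside a box hovering above the car roof.

    point_cloud: (N, 4) array of x, y, z, intensity in the LiDAR frame.
    car_center:  (x, y, z) center of the car's bounding box.
    car_dims:    (length, width, height) of the car in meters.
    max_height:  how far above the roof the points may float, standing in for
                 the physical constraint of a roof-mounted object.
    """
    rng = np.random.default_rng() if rng is None else rng
    length, width, height = car_dims
    roof_z = car_center[2] + height / 2.0

    # Sample points uniformly within the constrained region above the roof.
    xs = rng.uniform(car_center[0] - length / 2, car_center[0] + length / 2, num_points)
    ys = rng.uniform(car_center[1] - width / 2,  car_center[1] + width / 2,  num_points)
    zs = rng.uniform(roof_z, roof_z + max_height, num_points)
    intensity = rng.uniform(0.0, 1.0, num_points)

    adv_points = np.stack([xs, ys, zs, intensity], axis=1)
    return np.concatenate([point_cloud, adv_points], axis=0)

# Toy scene with one car roughly 10 m ahead of the ego vehicle.
scene = np.random.rand(1000, 4) * [40, 20, 3, 1]          # placeholder point cloud
attacked = inject_points_above_car(scene, car_center=(10.0, 0.0, 0.8),
                                   car_dims=(4.5, 1.8, 1.6))
# `attacked` would then be fed to the LiDAR branch of a fusion detector,
# with the camera image unchanged, to test whether the car is still detected.
```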
Related papers
- A Survey of Deep Learning Based Radar and Vision Fusion for 3D Object Detection in Autonomous Driving [9.962648957398923]
This paper focuses on a comprehensive survey of radar-vision (RV) fusion based on deep learning methods for 3D object detection in autonomous driving.
For end-to-end fusion, currently the most promising strategy, we provide a deeper classification of methods, including 3D bounding box prediction-based and BEV-based approaches.
arXiv Detail & Related papers (2024-06-02T11:37:50Z) - DRUformer: Enhancing the driving scene Important object detection with
driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to over 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z) - ShaSTA-Fuse: Camera-LiDAR Sensor Fusion to Model Shape and
Spatio-Temporal Affinities for 3D Multi-Object Tracking [26.976216624424385]
3D multi-object tracking (MOT) is essential for an autonomous mobile agent to safely navigate a scene.
We aim to develop a 3D MOT framework that fuses camera and LiDAR sensor information.
arXiv Detail & Related papers (2023-10-04T02:17:59Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Real-Time And Robust 3D Object Detection with Roadside LiDARs [20.10416681832639]
We design a 3D object detection model that detects traffic participants in roadside LiDAR data in real time.
Our model uses an existing 3D detector as a baseline and improves its accuracy.
We make a significant contribution with our LiDAR-based 3D detector that can be used for smart city applications.
arXiv Detail & Related papers (2022-07-11T21:33:42Z) - 3D Object Detection for Autonomous Driving: A Comprehensive Survey [48.30753402458884]
3D object detection, which intelligently predicts the locations, sizes, and categories of the critical 3D objects near an autonomous vehicle, is an important part of a perception system.
This paper reviews the advances in 3D object detection for autonomous driving.
arXiv Detail & Related papers (2022-06-19T19:43:11Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - MmWave Radar and Vision Fusion based Object Detection for Autonomous
Driving: A Survey [15.316597644398188]
Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection.
This article presents a detailed survey on mmWave radar and vision fusion based obstacle detection methods.
arXiv Detail & Related papers (2021-08-06T08:38:42Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention (a minimal attention-fusion sketch appears after this list).
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.