Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
- URL: http://arxiv.org/abs/2103.09448v1
- Date: Wed, 17 Mar 2021 05:24:48 GMT
- Title: Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
- Authors: Mazen Abdelfattah, Kaiwen Yuan, Z. Jane Wang, and Rabab Ward
- Abstract summary: Most autonomous vehicles rely on LiDAR and RGB camera sensors for perception.
Deep neural nets (DNNs) have achieved state-of-the-art performance in 3D detection.
We propose a universal and physically realizable adversarial attack for each type, and study and contrast their respective vulnerabilities to attacks.
- Score: 15.323682536206574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most autonomous vehicles (AVs) rely on LiDAR and RGB camera sensors for
perception. Using these point cloud and image data, perception models based on
deep neural nets (DNNs) have achieved state-of-the-art performance in 3D
detection. The vulnerability of DNNs to adversarial attacks has been heavily
investigated in the RGB image domain and more recently in the point cloud
domain, but rarely in both domains simultaneously. Multi-modal perception
systems used in AVs can be divided into two broad types: cascaded models which
use each modality independently, and fusion models which learn from different
modalities simultaneously. We propose a universal and physically realizable
adversarial attack for each type, and study and contrast their respective
vulnerabilities to attacks. We place a single adversarial object with specific
shape and texture on top of a car with the objective of making this car evade
detection. Evaluating on the popular KITTI benchmark, our adversarial object
made the host vehicle escape detection by each model type nearly 50% of the
time. The dense RGB input contributed more to the success of the adversarial
attacks on both cascaded and fusion models. We found that the fusion model was
relatively more robust to adversarial attacks than the cascaded model.
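As a rough illustration of the optimization such an attack implies, the sketch below perturbs a mesh's shape and texture to suppress the detector's confidence across many scenes, which is what makes the attack universal. It assumes a differentiable renderer and detector are available; `render_to_image`, `render_to_lidar`, and `detector` are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of a universal adversarial-object optimization loop.
# ASSUMPTIONS: `render_to_image`, `render_to_lidar`, and `detector` are
# hypothetical differentiable placeholders, not the paper's pipeline.
import torch

def attack_step(verts, tex, scenes, detector, render_to_image,
                render_to_lidar, optimizer, lambda_reg=0.1):
    """One step: lower the detector's confidence on the host car."""
    optimizer.zero_grad()
    loss = torch.zeros(())
    for scene in scenes:                          # averaging over scenes
        img = render_to_image(verts, tex, scene)  # paste object into RGB
        pts = render_to_lidar(verts, scene)       # simulated LiDAR returns
        scores = detector(img, pts)               # host-car confidences
        loss = loss + scores.max()                # suppress strongest box
    loss = loss + lambda_reg * verts.pow(2).mean()  # keep shape smooth
    loss.backward()                               # grads flow to verts, tex
    optimizer.step()
    return loss.item()
```

Here `verts` and `tex` would be tensors created with requires_grad=True and registered with the optimizer; averaging the loss over a batch of scenes is what pushes the object toward scene-independent (universal) behavior.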
Related papers
- Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection [33.0406308223244]
We propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks.
Our approach employs a two-stage optimization-based strategy that first identifies the image regions most vulnerable to adversarial attack, as sketched below.
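A minimal sketch of what such a two-stage strategy could look like, assuming a grid of candidate regions and a differentiable `fusion_detector` (a hypothetical stand-in for a camera-LiDAR model): stage one ranks grid cells by gradient saliency, stage two optimizes a patch in the most vulnerable cell.

```python
# Hedged sketch of a two-stage camera-only attack. Assumes image is (3, H, W)
# with H, W divisible by `grid`; `fusion_detector` is a hypothetical model.
import torch

def find_vulnerable_region(image, points, fusion_detector, grid=8):
    image = image.clone().requires_grad_(True)
    fusion_detector(image, points).max().backward()
    sal = image.grad.abs().sum(0)                        # (H, W) saliency
    H, W = sal.shape
    cells = sal.reshape(grid, H // grid, grid, W // grid).sum((1, 3))
    return divmod(cells.flatten().argmax().item(), grid) # (row, col) cell

def optimize_patch(image, points, fusion_detector, cell, grid=8, steps=200):
    H, W = image.shape[1:]
    h, w = H // grid, W // grid
    r, c = cell
    patch = torch.rand(3, h, w, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)
    for _ in range(steps):
        adv = image.clone()
        adv[:, r*h:(r+1)*h, c*w:(c+1)*w] = patch.clamp(0, 1)
        loss = fusion_detector(adv, points).max()        # suppress detection
        opt.zero_grad(); loss.backward(); opt.step()
    return patch.detach()
```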
arXiv Detail & Related papers (2023-04-28T03:39:00Z) - Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for
Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
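A minimal sketch of incremental fine-tuning under stated assumptions: copy the trained head's weights into a wider head with rows for the novel classes, freeze the backbone, and fine-tune only the head on the few novel samples. `detector.backbone` and `detector.cls_head` are assumed attribute names, and a classification-style head stands in for the full detection head.

```python
# Sketch of incremental fine-tuning for few-shot classes. The attribute
# names and the classification-style head are illustrative assumptions.
import torch
import torch.nn as nn

def extend_head(cls_head: nn.Linear, num_novel: int) -> nn.Linear:
    new_head = nn.Linear(cls_head.in_features,
                         cls_head.out_features + num_novel)
    with torch.no_grad():  # keep base-class weights intact
        new_head.weight[:cls_head.out_features] = cls_head.weight
        new_head.bias[:cls_head.out_features] = cls_head.bias
    return new_head

def incremental_finetune(detector, few_shot_loader, num_novel, epochs=10):
    detector.cls_head = extend_head(detector.cls_head, num_novel)
    for p in detector.backbone.parameters():
        p.requires_grad = False                  # freeze shared features
    opt = torch.optim.SGD(detector.cls_head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in few_shot_loader:
            logits = detector.cls_head(detector.backbone(feats))
            loss = loss_fn(logits, labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return detector
```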
arXiv Detail & Related papers (2023-02-08T07:11:36Z) - AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D
Object Detection [17.526914782562528]
We propose AutoAlignV2, a faster and stronger multi-modal 3D detection framework, built on top of AutoAlign.
Our best model reaches 72.4 NDS on nuScenes test leaderboard, achieving new state-of-the-art results.
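The deformable aggregation idea can be sketched, very loosely, as each 3D query projecting into the image and gathering camera features at a few learned offsets around the projection. This illustrates generic deformable cross-modal sampling, not AutoAlignV2's exact architecture.

```python
# Hedged sketch of deformable cross-modal feature sampling; shapes and the
# module design are illustrative assumptions, not AutoAlignV2's layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableImageSampler(nn.Module):
    def __init__(self, dim, num_offsets=4):
        super().__init__()
        self.offsets = nn.Linear(dim, num_offsets * 2)  # per-query offsets
        self.weights = nn.Linear(dim, num_offsets)
        self.k = num_offsets

    def forward(self, query, img_feat, proj_uv):
        # query: (N, C) 3D features; img_feat: (1, C, H, W);
        # proj_uv: (N, 2) projected coords in [-1, 1].
        off = self.offsets(query).view(-1, self.k, 2).tanh() * 0.1
        w = self.weights(query).softmax(-1)              # (N, k)
        loc = (proj_uv.unsqueeze(1) + off).clamp(-1, 1)  # (N, k, 2)
        grid = loc.view(1, -1, 1, 2)                     # grid_sample layout
        samp = F.grid_sample(img_feat, grid, align_corners=False)
        samp = samp.view(img_feat.shape[1], -1, self.k).permute(1, 2, 0)
        return (w.unsqueeze(-1) * samp).sum(1)           # (N, C) fused feature
```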
arXiv Detail & Related papers (2022-07-21T06:17:23Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness to out-of-distribution and long-tail samples is fundamental to avoiding dangerous failures.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
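The deformation idea above might be sketched as an FGSM-style, bounded worst-case shift of the points, applied as a training-time augmentation. This is a generic stand-in for the learned vector-field deformation, not 3D-VField itself.

```python
# Sketch of adversarial point-cloud deformation as training augmentation.
# An FGSM-style stand-in for the vector-field idea; not the paper's method.
import torch

def adversarial_deform(points, labels, detector, loss_fn, eps=0.05):
    delta = torch.zeros_like(points, requires_grad=True)
    loss = loss_fn(detector(points + delta), labels)
    grad = torch.autograd.grad(loss, delta)[0]
    return (points + eps * grad.sign()).detach()  # bounded worst-case shift

def train_step(points, labels, detector, loss_fn, optimizer):
    deformed = adversarial_deform(points, labels, detector, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(detector(deformed), labels)    # learn on deformed clouds
    loss.backward()
    optimizer.step()
    return loss.item()
```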
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Automating Defense Against Adversarial Attacks: Discovery of
Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed
Models [0.0]
We evaluate the use of multi-spectral image arrays and ensemble learners to combat adversarial attacks.
In rough analogy to defending cyber-networks, we combine techniques from both offensive ("red team") and defensive ("blue team") approaches.
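A minimal sketch of the ensemble side of such a defense, with one model per spectral band and a majority vote, so a perturbation crafted for one band is unlikely to flip the ensemble. The band names and the `models` mapping are illustrative assumptions.

```python
# Sketch of a majority-vote ensemble over multi-spectral inputs.
# Band names and the `models` dict are illustrative assumptions.
import torch
from collections import Counter

def ensemble_predict(bands: dict, models: dict) -> int:
    votes = []
    for name, image in bands.items():           # e.g. "rgb", "nir", "swir"
        with torch.no_grad():
            logits = models[name](image.unsqueeze(0))
        votes.append(int(logits.argmax(dim=1)))
    return Counter(votes).most_common(1)[0][0]  # majority class
```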
arXiv Detail & Related papers (2021-03-29T19:07:55Z) - Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object
Detection Models [16.7400223249581]
We propose a universal and physically realizable adversarial attack on a cascaded multi-modal deep neural network (DNN).
We show that the proposed universal multi-modal attack was successful in reducing the model's ability to detect a car by nearly 73%.
arXiv Detail & Related papers (2021-01-26T12:40:34Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
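A rough sketch of adversarial training combined with a feature-denoising block: a PGD-style perturbation is crafted per batch and the model is updated on it, with a simple residual smoothing layer standing in for the paper's denoising architecture (which may differ).

```python
# Sketch of adversarial training with a feature-denoising block. The
# denoiser is a simple stand-in to be inserted into the model's backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDenoise(nn.Module):
    """Residual mean-filter denoising on feature maps (a simple stand-in)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        smoothed = F.avg_pool2d(x, 3, stride=1, padding=1)  # local smoothing
        return x + self.proj(smoothed)                      # residual link

def adv_train_step(model, x, y, optimizer, eps=0.03, alpha=0.01, steps=5):
    x_adv = x.clone().detach()
    for _ in range(steps):                                  # PGD attack
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # eps-ball project
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()             # train on adv batch
    optimizer.step()
```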
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - PerMO: Perceiving More at Once from a Single Image for Autonomous
Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step towards safer self-driving under unseen conditions with limited training data.
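One way to see how a 3D adversarial object enters the LiDAR pipeline at all: the mesh must be converted into synthetic sensor returns, e.g. by casting rays against its triangles and keeping the nearest hits. The sketch below uses the standard Moller-Trumbore intersection test as a simplified stand-in for a real LiDAR simulator.

```python
# Sketch: turn a triangle mesh into LiDAR-like returns by ray casting.
# A simplified stand-in for a LiDAR simulator, not the paper's pipeline.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return hit distance t, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                       # ray parallel to triangle
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def lidar_returns(origins, directions, triangles):
    """Nearest hit point per ray against a triangle soup of shape (N, 3, 3)."""
    hits = []
    for o, d in zip(origins, directions):
        ts = [t for tri in triangles
              if (t := ray_triangle(o, d, *tri)) is not None]
        if ts:
            hits.append(o + min(ts) * d)  # closest surface intersection
    return np.array(hits)
```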
arXiv Detail & Related papers (2020-04-01T16:11:04Z)