Detecting and Identifying Optical Signal Attacks on Autonomous Driving
Systems
- URL: http://arxiv.org/abs/2110.10523v1
- Date: Wed, 20 Oct 2021 12:21:04 GMT
- Title: Detecting and Identifying Optical Signal Attacks on Autonomous Driving
Systems
- Authors: Jindi Zhang, Yifan Zhang, Kejie Lu, Jianping Wang, Kui Wu, Xiaohua
Jia, Bin Liu
- Abstract summary: We propose a framework to detect and identify sensors that are under attack.
Specifically, we first develop a new technique to detect attacks on a system that consists of three sensors.
In our study, we use real data sets and a state-of-the-art machine learning model to evaluate our attack detection scheme.
- Score: 25.32946739108013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For autonomous driving, an essential task is to detect surrounding objects
accurately. To this end, most existing systems use optical devices, including
cameras and light detection and ranging (LiDAR) sensors, to collect environment
data in real time. In recent years, many researchers have developed advanced
machine learning models to detect surrounding objects. Nevertheless, the
aforementioned optical devices are vulnerable to optical signal attacks, which
could compromise the accuracy of object detection. To address this critical
issue, we propose a framework to detect and identify sensors that are under
attack. Specifically, we first develop a new technique to detect attacks on a
system that consists of three sensors. Our main idea is to: 1) use data from
three sensors to obtain two versions of depth maps (i.e., disparity) and 2)
detect attacks by analyzing the distribution of disparity errors. In our study,
we use real data sets and a state-of-the-art machine learning model to
evaluate our attack detection scheme and the results confirm the effectiveness
of our detection method. Based on the detection scheme, we further develop an
identification model that is capable of identifying up to n-2 attacked sensors
in a system with one LiDAR and n cameras. We prove the correctness of our
identification scheme and conduct experiments to show the accuracy of our
identification method. Finally, we investigate the overall sensitivity of our
framework.
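The core detection idea in the abstract can be sketched in a few lines: derive two independent disparity (depth) estimates for the same scene from three sensors, and flag an attack when the distribution of per-pixel disparity errors deviates from the benign baseline. The sketch below is illustrative only; the function names, thresholds, and toy data are assumptions, not the paper's implementation.

```python
import numpy as np

def disparity_errors(disp_a, disp_b):
    """Per-pixel disparity error between two independent depth estimates."""
    return (disp_a - disp_b).ravel()

def detect_attack(disp_a, disp_b, mean_thresh=1.0, std_thresh=2.0):
    """Flag an attack if the error distribution deviates from a benign
    baseline (roughly zero-centered, low spread). Thresholds here are
    illustrative, not taken from the paper."""
    err = disparity_errors(disp_a, disp_b)
    return abs(err.mean()) > mean_thresh or err.std() > std_thresh

# Toy example: benign sensor noise vs. a spoofed region.
rng = np.random.default_rng(0)
benign = rng.normal(50.0, 5.0, size=(4, 4))
clean_view = benign + rng.normal(0.0, 0.1, size=(4, 4))  # sensor noise only
spoofed_view = benign.copy()
spoofed_view[:2, :] += 10.0                              # attack shifts disparity

print(detect_attack(clean_view, benign))    # False
print(detect_attack(spoofed_view, benign))  # True
```

Under this framing, identifying *which* sensor is attacked (up to n-2 of n cameras plus one LiDAR, per the abstract) amounts to checking which pairwise disparity comparisons are consistent with each other.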
Related papers
- Run-time Introspection of 2D Object Detection in Automated Driving
Systems Using Learning Representations [13.529124221397822]
We introduce a novel introspection solution for 2D object detection based on Deep Neural Networks (DNNs)
We implement several state-of-the-art (SOTA) introspection mechanisms for error detection in 2D object detection, using one-stage and two-stage object detectors evaluated on KITTI and BDD datasets.
Our performance evaluation shows that the proposed introspection solution outperforms SOTA methods, achieving an absolute reduction in the missed error ratio of 9% to 17% in the BDD dataset.
arXiv Detail & Related papers (2024-03-02T10:56:14Z) - AdvGPS: Adversarial GPS for Multi-Agent Perception Attack [47.59938285740803]
This study investigates whether specific GPS signals can easily mislead the multi-agent perception system.
We introduce AdvGPS, a method capable of generating adversarial GPS signals which are also stealthy for individual agents within the system.
Our experiments on the OPV2V dataset demonstrate that these attacks substantially undermine the performance of state-of-the-art methods.
arXiv Detail & Related papers (2024-01-30T23:13:41Z) - Joint object detection and re-identification for 3D obstacle
multi-camera systems [47.87501281561605]
This research paper introduces a novel modification to an object detection network that uses camera and lidar information.
It incorporates an additional branch designed for the task of re-identifying objects across adjacent cameras within the same vehicle.
The results underscore the superiority of this method over traditional Non-Maximum Suppression (NMS) techniques.
arXiv Detail & Related papers (2023-10-09T15:16:35Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Target-aware Dual Adversarial Learning and a Multi-scenario
Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse on the common space either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z) - Comparative study of 3D object detection frameworks based on LiDAR data
and sensor fusion techniques [0.0]
The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real-time.
Deep learning techniques transform the huge amount of data from the sensors into semantic information.
3D object detection methods, by utilizing additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object.
arXiv Detail & Related papers (2022-02-05T09:34:58Z) - Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection
Sensor Fusion Models [16.823829387723524]
We analyze the robustness of a high-performance, open source sensor fusion model architecture towards adversarial attacks.
We find that despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks.
arXiv Detail & Related papers (2021-09-13T23:38:42Z) - Radar Voxel Fusion for 3D Object Detection [0.0]
This paper develops a low-level sensor fusion network for 3D object detection.
The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes.
arXiv Detail & Related papers (2021-06-26T20:34:12Z) - On the Role of Sensor Fusion for Object Detection in Future Vehicular
Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z) - Self-Supervised Person Detection in 2D Range Data using a Calibrated
Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z) - No Need to Know Physics: Resilience of Process-based Model-free Anomaly
Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.