Spatial-Temporal Anomaly Detection for Sensor Attacks in Autonomous
Vehicles
- URL: http://arxiv.org/abs/2212.07757v1
- Date: Thu, 15 Dec 2022 12:21:27 GMT
- Title: Spatial-Temporal Anomaly Detection for Sensor Attacks in Autonomous
Vehicles
- Authors: Martin Higgins, Devki Jha, David Wallom
- Abstract summary: Time-of-flight (ToF) distance measurement devices are vulnerable to spoofing, triggering and false data injection attacks.
We propose a spatial-temporal anomaly detection model STAnDS which incorporates a residual error spatial detector, with a time-based expected change detection.
- Score: 1.7188280334580195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time-of-flight (ToF) distance measurement devices such as ultrasonics, LiDAR
and radar are widely used in autonomous vehicles for environmental perception,
navigation and assisted braking control. Despite their relative importance in
making safer driving decisions, these devices are vulnerable to multiple attack
types including spoofing, triggering and false data injection. When these
attacks are successful they can compromise the security of autonomous vehicles
leading to severe consequences for the driver, nearby vehicles and pedestrians.
To handle these attacks and protect the measurement devices, we propose a
spatial-temporal anomaly detection model \textit{STAnDS} which incorporates a
residual error spatial detector, with a time-based expected change detection.
This approach is evaluated using a simulated quantitative environment and the
results show that \textit{STAnDS} is effective at detecting multiple attack
types.
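As a rough illustration of the two ideas named in the abstract, the sketch below combines a residual-error spatial check with a time-based expected-change check on simulated ToF returns. The plane-fit spatial model, the speed bound, the thresholds and the voting rule are illustrative assumptions, not the authors' STAnDS implementation.

```python
# Minimal sketch of a residual-error spatial detector plus a time-based
# expected-change detector. All parameters below are illustrative assumptions.
import numpy as np


def spatial_residual_flags(points, threshold=0.5):
    """Flag returns whose residual from a least-squares plane fit is large."""
    # Fit z = a*x + b*y + c to one frame of (x, y, z) ToF returns.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = np.abs(points[:, 2] - A @ coeffs)
    return residuals > threshold


def temporal_change_flags(ranges_now, ranges_prev, dt, max_speed=40.0):
    """Flag range readings that change faster than any plausible relative motion."""
    return np.abs(ranges_now - ranges_prev) > max_speed * dt


def frame_is_anomalous(points, ranges_now, ranges_prev, dt, vote_fraction=0.2):
    """Declare a frame suspect if either detector flags a large share of readings."""
    spatial = spatial_residual_flags(points)
    temporal = temporal_change_flags(ranges_now, ranges_prev, dt)
    return spatial.mean() > vote_fraction or temporal.mean() > vote_fraction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-5.0, 5.0, size=(64, 2))
    z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(scale=0.05, size=64)
    points = np.c_[xy, z]                      # returns lying near a road-like plane
    prev = np.linalg.norm(points, axis=1)
    now = prev.copy()
    now[:32] += 10.0                           # crude false data injection on half the beams
    print(frame_is_anomalous(points, now, prev, dt=0.1))  # True
```

The voting rule is one simple way to fuse the two detectors; per-beam or per-object fusion would work equally well in this sketch.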
Related papers
- Experimental Validation of Sensor Fusion-based GNSS Spoofing Attack
Detection Framework for Autonomous Vehicles [5.624009710240032]
We present a sensor fusion-based spoofing attack detection framework for Autonomous Vehicles.
Experiments are conducted in Tuscaloosa, AL, mimicking urban road structures.
Results demonstrate the framework's ability to detect various sophisticated spoofing attacks, even including slow drifting attacks.
arXiv Detail & Related papers (2024-01-02T17:30:46Z)
- Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach [5.036807309572884]
More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
arXiv Detail & Related papers (2023-10-26T01:22:10Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each receiving data from a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Temporal Consistency Checks to Detect LiDAR Spoofing Attacks on Autonomous Vehicle Perception [4.092959254671909]
Recent work has demonstrated serious LiDAR spoofing attacks with alarming consequences.
In this work, we explore the use of motion as a physical invariant of genuine objects for detecting such attacks (an illustrative motion-consistency sketch appears after this list).
Preliminary design and implementation of a 3D-TC2 prototype demonstrates very promising performance.
arXiv Detail & Related papers (2021-06-15T01:36:40Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
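The temporal consistency check entry above uses motion as a physical invariant of genuine objects. The sketch below illustrates that idea under assumed parameters (a nearest-neighbour match radius derived from a maximum plausible speed); it is not the authors' 3D-TC2 prototype.

```python
# Illustrative motion-consistency check: genuine objects move smoothly between
# frames, whereas spoofed detections tend to appear far from anything previously
# tracked. The speed bound and matching rule are assumed values.
import numpy as np


def motion_consistency_flags(prev_centroids, cur_centroids, dt, max_speed=40.0):
    """Flag current detections whose nearest previous detection is farther away
    than any physically plausible displacement over dt seconds."""
    max_displacement = max_speed * dt
    flags = np.zeros(len(cur_centroids), dtype=bool)
    for i, c in enumerate(cur_centroids):
        dists = np.linalg.norm(prev_centroids - c, axis=1)
        flags[i] = dists.min() > max_displacement
    return flags


if __name__ == "__main__":
    prev = np.array([[10.0, 0.0, 0.0], [20.0, 5.0, 0.0]])
    # One genuine object that moved about 1 m and one injected object far from any prior track.
    cur = np.array([[11.0, 0.0, 0.0], [50.0, -20.0, 0.0]])
    print(motion_consistency_flags(prev, cur, dt=0.1))  # [False  True]
```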