Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach
- URL: http://arxiv.org/abs/2310.17091v1
- Date: Thu, 26 Oct 2023 01:22:10 GMT
- Title: Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach
- Authors: Tianyi Li, Mingfeng Shang, Shian Wang, Raphael Stern
- Abstract summary: More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
- Score: 5.036807309572884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of vehicles equipped with advanced driver-assistance systems,
such as adaptive cruise control (ACC) and other automated driving features, the
potential for cyberattacks on these automated vehicles (AVs) has emerged. While
overt attacks that force vehicles to collide may be easily identified, more
insidious attacks, which only slightly alter driving behavior, can result in
network-wide increases in congestion, fuel consumption, and even crash risk
without being easily detected. To address the detection of such attacks, we
first present a traffic model framework for three types of potential
cyberattacks: malicious manipulation of vehicle control commands, false data
injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
We then investigate the impacts of these attacks at both the individual vehicle
(micro) and traffic flow (macro) levels. A novel generative adversarial network
(GAN)-based anomaly detection model is proposed for real-time identification of
such attacks using vehicle trajectory data. We provide numerical evidence to
demonstrate the efficacy of our machine learning approach in detecting
cyberattacks on ACC-equipped vehicles. The proposed method is compared against
several recently proposed neural network models and is observed to achieve
higher accuracy in identifying anomalous driving behaviors of ACC vehicles.
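The three attack types above can be illustrated on a simple car-following controller. The sketch below is purely illustrative: the linear ACC law, gains, and attack magnitudes are assumptions for exposition, not the traffic model framework from the paper.
```python
# Illustrative sketch (not the paper's exact traffic model): a linear ACC
# car-following law perturbed by the three attack types in the abstract.
import numpy as np

K_S, K_V = 0.3, 0.5       # assumed feedback gains on spacing error and relative speed
TIME_GAP, S0 = 1.5, 2.0   # assumed desired time gap [s] and standstill spacing [m]
DT = 0.1                  # controller time step [s]

def acc_accel(spacing, ego_v, lead_v, attack=None, t=0.0):
    """Commanded acceleration of an ACC follower, optionally under attack."""
    if attack == "fdi":
        spacing = spacing + 5.0                # false data injection: inflate sensed gap
    if attack == "dos" and int(t / DT) % 20 < 10:
        return 0.0                             # denial of service: command intermittently dropped
    desired_spacing = S0 + TIME_GAP * ego_v
    a = K_S * (spacing - desired_spacing) + K_V * (lead_v - ego_v)
    if attack == "command":
        a = a + 0.3 * np.sin(2 * np.pi * 0.1 * t)   # slight manipulation of the control command
    return float(np.clip(a, -3.0, 2.0))
```
The detection idea, training a GAN on attack-free trajectory windows and flagging windows that the discriminator scores as unlikely, can be sketched as follows. This is a minimal sketch assuming small fully connected networks in PyTorch; the window length, feature set, and architecture are assumptions rather than the model proposed in the paper.
```python
# Hypothetical GAN-based anomaly detection on car-following trajectory windows.
import torch
import torch.nn as nn

WINDOW = 30   # time steps per trajectory window (assumed)
FEATURES = 3  # e.g. spacing, relative speed, ego speed (assumed)
LATENT = 16

class Generator(nn.Module):
    """Maps latent noise to a synthetic benign trajectory window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 64), nn.ReLU(),
            nn.Linear(64, WINDOW * FEATURES),
        )
    def forward(self, z):
        return self.net(z).view(-1, WINDOW, FEATURES)

class Discriminator(nn.Module):
    """Scores how 'benign-like' a trajectory window looks (0..1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(WINDOW * FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, real, opt_g, opt_d, bce=nn.BCELoss()):
    """One adversarial update on a batch of attack-free windows."""
    z = torch.randn(real.size(0), LATENT)
    fake = G(z)
    # Discriminator: benign windows -> 1, generated windows -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

def anomaly_score(D, window):
    """Low discriminator output => window deviates from learned benign behavior."""
    with torch.no_grad():
        return 1.0 - D(window.unsqueeze(0)).item()
```
In this discriminator-score formulation, a window drawn from benign driving should score near 1, so an attacked trajectory whose spacing or acceleration pattern shifts yields a higher anomaly score.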
Related papers
- Navigating Connected Car Cybersecurity: Location Anomaly Detection with RAN Data [2.147995542780459]
Cyber-attacks, including hijacking and spoofing, pose significant threats to connected cars.
This paper presents a novel approach for identifying potential attacks through Radio Access Network (RAN) event monitoring.
The major contribution of this paper is a location anomaly detection module that identifies devices that appear in multiple locations simultaneously.
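A minimal sketch of that simultaneity check follows; the event fields, coordinate units, and speed threshold are assumptions for illustration rather than the paper's RAN data schema.
```python
# Illustrative location-simultaneity check on RAN events (assumed schema).
from collections import defaultdict
from itertools import combinations

def flag_multi_location_devices(events, max_speed_mps=70.0):
    """Flag device IDs whose events place them in two locations that are
    unreachable in the elapsed time (i.e. 'simultaneous' presence).
    Each event is a dict: {'device_id', 'timestamp', 'x', 'y'} (assumed format)."""
    by_device = defaultdict(list)
    for e in events:
        by_device[e["device_id"]].append(e)
    suspicious = set()
    for dev, evs in by_device.items():
        for a, b in combinations(sorted(evs, key=lambda e: e["timestamp"]), 2):
            dt = abs(b["timestamp"] - a["timestamp"])
            dist = ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5
            if dist > max_speed_mps * dt + 1e-6:   # unreachable in the available time
                suspicious.add(dev)
                break
    return suspicious
```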
arXiv Detail & Related papers (2024-07-02T22:42:45Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Spatial-Temporal Anomaly Detection for Sensor Attacks in Autonomous Vehicles [1.7188280334580195]
Time-of-flight (ToF) distance measurement devices are vulnerable to spoofing, triggering and false data injection attacks.
We propose a spatial-temporal anomaly detection model, STAnDS, which incorporates a residual-error spatial detector with time-based expected change detection.
arXiv Detail & Related papers (2022-12-15T12:21:27Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's cyber-attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection that creates congestion on one or more intersection approaches.
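A heavily simplified, bandit-style sketch of that idea is given below; the action set, reward proxy, and placeholder simulator are assumptions standing in for the paper's RL formulation and traffic simulation.
```python
# Hedged sketch of the attack concept only: an agent choosing a sybil-vehicle
# injection rate to maximize congestion at an ATSC-controlled approach.
import random

ACTIONS = [0, 5, 10, 20]        # sybil vehicles injected per signal cycle (assumed)

def simulate_cycle(injection_rate):
    """Placeholder for a traffic/ATSC simulation step.
    Returns observed delay at the approach; replace with a real simulator."""
    return injection_rate * random.uniform(0.5, 1.5)   # toy proxy for added delay

def train_attack_policy(episodes=500, eps=0.1, alpha=0.2):
    """Single-state, epsilon-greedy value estimates over injection rates."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        reward = simulate_cycle(a)          # reward = induced delay
        q[a] += alpha * (reward - q[a])     # incremental value update
    return max(q, key=q.get), q
```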
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- LCCDE: A Decision-Based Ensemble Framework for Intrusion Detection in The Internet of Vehicles [7.795462813462946]
Intrusion Detection Systems (IDSs) that can identify malicious cyber-attacks have been developed.
We propose a novel ensemble IDS framework named Leader Class and Confidence Decision Ensemble (LCCDE).
LCCDE is constructed by determining the best-performing ML model among three advanced algorithms.
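A rough sketch of a per-class leader-model ensemble in the spirit of this description follows; the selection metric and tie-breaking rule are simplified assumptions, not the paper's exact decision logic.
```python
# Simplified per-class "leader model" ensemble (illustrative, not the full LCCDE rules).
import numpy as np
from sklearn.metrics import f1_score

def pick_leaders(models, X_val, y_val, classes):
    """For each class, pick the model with the best per-class F1 on validation data."""
    leaders = {}
    for c in classes:
        scores = [f1_score(y_val, m.predict(X_val), labels=[c], average="macro")
                  for m in models]
        leaders[c] = models[int(np.argmax(scores))]
    return leaders

def predict_one(models, leaders, x):
    """If the base models agree, return that class; otherwise defer to the leader
    model of the class predicted with the highest confidence."""
    x = np.asarray(x).reshape(1, -1)
    preds = [m.predict(x)[0] for m in models]
    if len(set(preds)) == 1:
        return preds[0]
    confidences = [m.predict_proba(x)[0].max() for m in models]
    candidate = preds[int(np.argmax(confidences))]
    return leaders[candidate].predict(x)[0]
```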
arXiv Detail & Related papers (2022-08-05T22:30:34Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals [48.813942331065206]
We propose a security hardening system for in-vehicle networks.
The proposed system includes two mechanisms that process deep features extracted from voltage signals measured on the CAN bus.
arXiv Detail & Related papers (2021-06-15T06:12:33Z)
- An Adversarial Attack Defending System for Securing In-Vehicle Networks [6.288673794889309]
We propose an Adversarial Attack Defending System (AADS) for securing an in-vehicle network.
Our experimental results demonstrate that adversaries can easily attack the LSTM-based detection model with a success rate of over 98%.
arXiv Detail & Related papers (2020-08-25T21:23:49Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)