Detecting Masquerade Attacks in Controller Area Networks Using Graph Machine Learning
- URL: http://arxiv.org/abs/2408.05427v1
- Date: Sat, 10 Aug 2024 04:17:58 GMT
- Title: Detecting Masquerade Attacks in Controller Area Networks Using Graph Machine Learning
- Authors: William Marfo, Pablo Moriano, Deepak K. Tosh, Shirley V. Moore
- Abstract summary: This paper introduces a novel framework for detecting masquerade attacks in the CAN bus using graph machine learning (ML).
We show that by representing CAN bus frames as message sequence graphs (MSGs) and enriching each node with contextual statistical attributes from time series, we can enhance detection capabilities.
Our method ensures a comprehensive and dynamic analysis of CAN frame interactions, improving robustness and efficiency.
- Score: 0.2812395851874055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern vehicles rely on a myriad of electronic control units (ECUs) interconnected via controller area networks (CANs) for critical operations. Despite their ubiquitous use and reliability, CANs are susceptible to sophisticated cyberattacks, particularly masquerade attacks, which inject false data that mimic legitimate messages at the expected frequency. These attacks pose severe risks such as unintended acceleration, brake deactivation, and rogue steering. Traditional intrusion detection systems (IDS) often struggle to detect these subtle intrusions due to their seamless integration into normal traffic. This paper introduces a novel framework for detecting masquerade attacks in the CAN bus using graph machine learning (ML). We hypothesize that the integration of shallow graph embeddings with time series features derived from CAN frames enhances the detection of masquerade attacks. We show that by representing CAN bus frames as message sequence graphs (MSGs) and enriching each node with contextual statistical attributes from time series, we can enhance detection capabilities across various attack patterns compared to using only graph-based features. Our method ensures a comprehensive and dynamic analysis of CAN frame interactions, improving robustness and efficiency. Extensive experiments on the ROAD dataset validate the effectiveness of our approach, demonstrating statistically significant improvements in the detection rates of masquerade attacks compared to a baseline that uses only graph-based features, as confirmed by Mann-Whitney U and Kolmogorov-Smirnov tests (p < 0.05).
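The abstract's core idea can be sketched in a few lines: represent a CAN capture as a message sequence graph (nodes are arbitration IDs, directed edges count immediate-succession frequencies) and attach time-series statistics to each node. The following is a minimal pure-Python sketch, not the paper's implementation; the specific node attributes chosen here (frame count, mean and standard deviation of inter-arrival gaps) are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_msg(frames):
    """Build a toy message sequence graph (MSG) from CAN frames.

    frames: list of (timestamp, arbitration_id) tuples in capture order.
    Nodes are arbitration IDs; a directed edge (u, v) counts how often a
    frame with ID v immediately follows one with ID u. Each node is
    enriched with simple inter-arrival-time statistics for its ID.
    """
    edges = defaultdict(int)
    arrivals = defaultdict(list)
    for t, aid in frames:
        arrivals[aid].append(t)
    # Count immediate-succession transitions between consecutive frames.
    for (_, u), (_, v) in zip(frames, frames[1:]):
        edges[(u, v)] += 1
    nodes = {}
    for aid, ts in arrivals.items():
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        nodes[aid] = {
            "count": len(ts),
            "mean_gap": mean(gaps) if gaps else 0.0,
            "std_gap": stdev(gaps) if len(gaps) > 1 else 0.0,
        }
    return nodes, dict(edges)
```

Because masquerade attacks preserve message frequency, the edge structure alone may look normal; the per-node time-series attributes are what give a downstream classifier (e.g., one fed shallow graph embeddings plus these features) extra signal.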
Related papers
- Benchmarking Unsupervised Online IDS for Masquerade Attacks in CAN [4.263056416993091]
Vehicular controller area networks (CANs) are susceptible to masquerade attacks by malicious adversaries.
We introduce a benchmark study of four different non-deep learning (DL)-based unsupervised online intrusion detection systems (IDS) for masquerade attacks in CAN.
We show that although benchmarked IDS are not effective at detecting every attack type, the method that relies on detecting changes at the hierarchical structure of clusters of time series produces the best results.
arXiv Detail & Related papers (2024-06-19T19:04:51Z)
- Real-Time Zero-Day Intrusion Detection System for Automotive Controller Area Network on FPGAs [13.581341206178525]
This paper presents an unsupervised-learning-based convolutional autoencoder architecture for detecting zero-day attacks.
We quantise the model using Vitis-AI tools from AMD/Xilinx targeting a resource-constrained Zynq Ultrascale platform.
The proposed model successfully achieves equal or higher classification accuracy (> 99.5%) on unseen DoS, fuzzing, and spoofing attacks.
arXiv Detail & Related papers (2024-01-19T14:36:01Z)
- Effective In-vehicle Intrusion Detection via Multi-view Statistical Graph Learning on CAN Messages [9.04771951523525]
In-vehicle network (IVN) is facing a wide variety of complex and changing external cyber-attacks.
Current mainstream intrusion detection mechanisms achieve only coarse-grained recognition.
We propose StatGraph, an effective multi-view statistical graph learning intrusion detection method.
arXiv Detail & Related papers (2023-11-13T03:49:55Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Detecting CAN Masquerade Attacks with Signal Clustering Similarity [2.2881898195409884]
Fabrication attacks are the easiest to administer and the easiest to detect because they disrupt frame frequency.
Masquerade attacks, by contrast, can be detected by computing time-series clustering similarity using hierarchical clustering on the vehicle's CAN signals.
We develop a forensic tool as a proof of concept to demonstrate the potential of the proposed approach for detecting CAN masquerade attacks.
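The clustering-similarity idea can be illustrated with a toy sketch: group correlated signals, then compare the cluster structure of a suspect capture against a benign baseline. Note the assumptions here: the paper uses hierarchical clustering and a proper clustering-similarity measure, whereas this stand-in uses threshold-based single-linkage grouping over pairwise Pearson correlations and a Jaccard similarity over the resulting cluster sets.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length signals (0.0 if either is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def cluster_signals(signals, threshold=0.8):
    """Group signals whose |correlation| exceeds a threshold
    (a crude single-linkage stand-in for hierarchical clustering)."""
    ids = list(signals)
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for a, b in combinations(ids, 2):
        if abs(pearson(signals[a], signals[b])) >= threshold:
            parent[find(a)] = find(b)
    clusters = {}
    for i in ids:
        clusters.setdefault(find(i), set()).add(i)
    return sorted(map(frozenset, clusters.values()), key=min)

def clustering_similarity(c1, c2):
    """Jaccard similarity between two sets of clusters; a drop below a
    tuned threshold would flag a possible masquerade attack."""
    s1, s2 = set(c1), set(c2)
    return len(s1 & s2) / len(s1 | s2)
```

Intuitively, a masqueraded signal breaks its physical correlation with related signals (e.g., wheel speed vs. engine RPM), so the suspect capture's clusters diverge from the benign ones and the similarity score falls.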
arXiv Detail & Related papers (2022-01-07T20:25:40Z)
- Detecting message modification attacks on the CAN bus with Temporal Convolutional Networks [0.3441021278275805]
We present a novel machine learning based intrusion detection method for CAN networks.
Our proposed temporal convolutional network-based solution can learn the normal behavior of CAN signals and differentiate them from malicious ones.
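The detection principle here (learn normal signal behavior, flag deviations) can be shown with a deliberately tiny stand-in: a one-step linear predictor fitted by least squares in place of the temporal convolutional network. This is an illustrative toy, not the paper's model.

```python
def fit_predictor(signal):
    """Fit a one-step linear predictor x[t] ~ a*x[t-1] + b by least squares
    on 'normal' traffic -- a toy stand-in for a temporal convolutional
    network that models normal CAN signal behavior."""
    xs, ys = signal[:-1], signal[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    den = sum((x - mx) ** 2 for x in xs) or 1e-9
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / den
    b = my - a * mx
    return a, b

def max_residual(signal, a, b):
    """Largest one-step prediction error; a large value suggests the
    signal deviates from learned normal behavior (possible modification)."""
    return max(abs(y - (a * x + b)) for x, y in zip(signal, signal[1:]))
```

A message-modification attack that injects an abrupt, physically implausible value produces a large prediction residual, which a calibrated threshold can flag.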
arXiv Detail & Related papers (2021-06-16T10:51:58Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
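The monitoring idea — fit a density estimator to activation statistics of in-distribution inputs and flag low-density inputs — can be sketched with a much simpler estimator than DAAIN's normalizing flow: an independent per-unit Gaussian. The estimator choice and threshold calibration below are illustrative assumptions.

```python
import math

def fit_gaussian(activations):
    """Fit an independent per-unit Gaussian to in-distribution activation
    vectors (a simple stand-in for the normalizing-flow density estimator)."""
    n = len(activations)
    d = len(activations[0])
    mu = [sum(a[j] for a in activations) / n for j in range(d)]
    var = [max(sum((a[j] - mu[j]) ** 2 for a in activations) / n, 1e-6)
           for j in range(d)]
    return mu, var

def log_density(x, mu, var):
    """Log-likelihood of an activation vector under the fitted Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

def is_anomalous(x, mu, var, threshold):
    """Flag inputs whose activation density falls below a threshold
    calibrated on held-out in-distribution data."""
    return log_density(x, mu, var) < threshold
```

OOD inputs and many adversarial examples drive the monitored activations into low-density regions, so their log-likelihood drops below the calibrated threshold.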
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian optimization.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.