Detecting message modification attacks on the CAN bus with Temporal
Convolutional Networks
- URL: http://arxiv.org/abs/2106.08692v1
- Date: Wed, 16 Jun 2021 10:51:58 GMT
- Title: Detecting message modification attacks on the CAN bus with Temporal
Convolutional Networks
- Authors: Irina Chiscop, András Gazdag, Joost Bosman, Gergely Biczók
- Abstract summary: We present a novel machine learning based intrusion detection method for CAN networks.
Our proposed temporal convolutional network-based solution can learn the normal behavior of CAN signals and differentiate them from malicious ones.
- Score: 0.3441021278275805
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multiple attacks have shown that in-vehicle networks have vulnerabilities
which can be exploited. Securing the Controller Area Network (CAN) for modern
vehicles has become a necessary task for car manufacturers. Some attacks inject
a potentially large number of fake messages into the CAN network; however, such
attacks are relatively easy to detect. In more sophisticated attacks, the
original messages are modified, making detection a more complex problem.
In this paper, we present a novel machine learning based intrusion detection
method for CAN networks. We focus on detecting message modification attacks,
which do not change the timing patterns of communications. Our proposed
temporal convolutional network-based solution can learn the normal behavior of
CAN signals and differentiate them from malicious ones. The method is evaluated
on multiple CAN-bus message IDs from two public datasets including different
types of attacks. Performance results show that our lightweight approach
compares favorably to the state-of-the-art unsupervised learning approach,
achieving similar or better accuracy for a wide range of scenarios with a
significantly lower false positive rate.
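The detection principle described in the abstract, learning the normal behavior of a CAN signal and flagging deviations from it, can be illustrated with a minimal sketch. The snippet below is not the authors' TCN; as a simplifying assumption it substitutes a least-squares autoregressive predictor for the learned model, which is enough to show how a message modification attack that preserves timing can still be caught through signal-value residuals.

```python
import numpy as np

def fit_ar_predictor(signal, order=8):
    """Fit a least-squares autoregressive predictor of the next sample
    from the previous `order` samples (a stand-in for the learned model)."""
    X = np.stack([signal[i:len(signal) - order + i] for i in range(order)], axis=1)
    y = signal[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def anomaly_scores(signal, coef):
    """Per-sample magnitude of the prediction residual."""
    order = len(coef)
    X = np.stack([signal[i:len(signal) - order + i] for i in range(order)], axis=1)
    return np.abs(X @ coef - signal[order:])

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic "normal" CAN signal: periodic behavior plus sensor noise
normal = np.sin(2 * np.pi * t / 50) + 0.01 * rng.standard_normal(t.size)

coef = fit_ar_predictor(normal[:1500])
threshold = anomaly_scores(normal[:1500], coef).max() * 3

# Modified trace: same message timing, but signal values altered in one window
attacked = normal.copy()
attacked[1700:1750] += 0.8

scores = anomaly_scores(attacked[1500:], coef)
alarms = scores > threshold
print("attack detected:", alarms.any())
```

With fresh normal traffic the residuals stay near the noise floor, while the value-modified window produces residuals far above the calibrated threshold, despite the timing pattern being untouched.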
Related papers
- Detecting Masquerade Attacks in Controller Area Networks Using Graph Machine Learning [0.2812395851874055]
This paper introduces a novel framework for detecting masquerade attacks in the CAN bus using graph machine learning (ML).
We show that by representing CAN bus frames as message sequence graphs (MSGs) and enriching each node with contextual statistical attributes from time series, we can enhance detection capabilities.
Our method ensures a comprehensive and dynamic analysis of CAN frame interactions, improving robustness and efficiency.
arXiv Detail & Related papers (2024-08-10T04:17:58Z)
- Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks [9.86830550255822]
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to an increasing range of security and privacy attacks.
We propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern.
Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs' privacy and minimizing the communication overhead.
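The federated training loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's system: it assumes a FedAvg-style server and swaps the deep auto-encoder for a linear one; all names, dimensions, and hyperparameters are illustrative.

```python
import numpy as np

def local_train(W, data, lr=0.01, epochs=50):
    """One client's local update: a linear autoencoder x -> x @ W.T @ W
    trained by gradient descent on reconstruction error over benign traffic."""
    W = W.copy()
    for _ in range(epochs):
        Z = data @ W.T          # encode
        E = Z @ W - data        # reconstruction error
        grad = (Z.T @ E + (E @ W.T).T @ data) / len(data)
        W -= lr * grad
    return W

def fedavg(W, client_datasets, rounds=10):
    """Server loop: broadcast weights, average the clients' local updates.
    Raw traffic never leaves the clients; only weights are exchanged."""
    for _ in range(rounds):
        updates = [local_train(W, d) for d in client_datasets]
        W = np.mean(updates, axis=0)
    return W

rng = np.random.default_rng(1)
basis = rng.standard_normal((2, 6))  # benign traffic lives on a 2-D subspace
clients = [rng.standard_normal((200, 2)) @ basis for _ in range(4)]

W = fedavg(rng.standard_normal((2, 6)) * 0.1, clients)

def recon_error(W, x):
    return np.linalg.norm(x @ W.T @ W - x)

benign = rng.standard_normal((1, 2)) @ basis
attack = rng.standard_normal((1, 6)) * 3      # off-pattern (zero-day) traffic
print("benign error:", recon_error(W, benign))
print("attack error:", recon_error(W, attack))
```

Because the model is trained only on benign patterns, unfamiliar (zero-day) traffic reconstructs poorly and can be flagged by thresholding the reconstruction error.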
arXiv Detail & Related papers (2024-07-03T12:42:31Z)
- Exploring Highly Quantised Neural Networks for Intrusion Detection in Automotive CAN [13.581341206178525]
Machine learning-based intrusion detection models have been shown to successfully detect multiple targeted attack vectors.
In this paper, we present a case for a custom-quantised multi-layer perceptron (CQMLP) as a multi-class classification model.
We show that the 2-bit CQMLP model, when integrated as the IDS, can detect malicious attack messages with a very high accuracy of 99.9%.
arXiv Detail & Related papers (2024-01-19T21:11:02Z)
- Real-Time Zero-Day Intrusion Detection System for Automotive Controller Area Network on FPGAs [13.581341206178525]
This paper presents an unsupervised-learning-based convolutional autoencoder architecture for detecting zero-day attacks.
We quantise the model using Vitis-AI tools from AMD/Xilinx targeting a resource-constrained Zynq Ultrascale platform.
The proposed model successfully achieves equal or higher classification accuracy (> 99.5%) on unseen DoS, fuzzing, and spoofing attacks.
arXiv Detail & Related papers (2024-01-19T14:36:01Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Detecting CAN Masquerade Attacks with Signal Clustering Similarity [2.2881898195409884]
Fabrication attacks are the easiest to administer and the easiest to detect because they disrupt frame frequency.
Masquerade attacks can be detected by computing time series clustering similarity, using hierarchical clustering on the vehicle's CAN signals.
We develop a forensic tool as a proof of concept to demonstrate the potential of the proposed approach for detecting CAN masquerade attacks.
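The clustering-similarity idea above can be sketched as follows. This is a simplified stand-in for the paper's method: instead of hierarchical clustering it groups signals by thresholded pairwise correlation (connected components) and flags a window whose cluster structure differs from the baseline; the signal names and threshold are purely illustrative.

```python
import numpy as np

def corr_clusters(X, tau=0.8):
    """Group signals (rows of X) whose pairwise |Pearson correlation|
    exceeds tau, via union-find connected components (a simplified
    stand-in for hierarchical clustering of the time series)."""
    n = X.shape[0]
    C = np.abs(np.corrcoef(X))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] > tau:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    return {frozenset(j for j in range(n) if roots[j] == r) for r in set(roots)}

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 500)
speed = np.sin(t) + 0.05 * rng.standard_normal(500)
rpm = 2.0 * speed + 0.05 * rng.standard_normal(500)  # physically coupled to speed
temp = rng.standard_normal(500)                      # unrelated signal

baseline = corr_clusters(np.stack([speed, rpm, temp]))

# Masquerade: attacker replaces rpm payloads while preserving message timing
spoofed = rng.standard_normal(500)
test = corr_clusters(np.stack([speed, spoofed, temp]))

print("cluster structure changed:", baseline != test)
```

Physically coupled signals cluster together under normal operation; a masqueraded signal loses its correlation with its peers, so the clustering of the suspect window diverges from the baseline even though frame frequency is undisturbed.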
arXiv Detail & Related papers (2022-01-07T20:25:40Z)
- CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals [48.813942331065206]
We propose a security hardening system for in-vehicle networks.
The proposed system includes two mechanisms that process deep features extracted from voltage signals measured on the CAN bus.
arXiv Detail & Related papers (2021-06-15T06:12:33Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.