CAN-BERT do it? Controller Area Network Intrusion Detection System based
on BERT Language Model
- URL: http://arxiv.org/abs/2210.09439v1
- Date: Mon, 17 Oct 2022 21:21:37 GMT
- Title: CAN-BERT do it? Controller Area Network Intrusion Detection System based
on BERT Language Model
- Authors: Natasha Alkhatib, Maria Mushtaq, Hadi Ghauch, Jean-Luc Danger
- Abstract summary: We propose "CAN-BERT", a deep learning-based network intrusion detection system.
We show that the BERT model can learn the sequence of arbitration identifiers (IDs) in the CAN bus for anomaly detection.
In addition to identifying in-vehicle intrusions in real time within 0.8 ms to 3 ms, depending on the CAN ID sequence length, it can also detect a wide variety of cyberattacks with an F1-score between 0.81 and 0.99.
- Score: 2.415997479508991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the rising number of sophisticated customer functionalities,
electronic control units (ECUs) are increasingly integrated into modern
automotive systems. However, the high connectivity between the in-vehicle and
the external networks paves the way for hackers who could exploit in-vehicle
network protocols' vulnerabilities. Among these protocols, the Controller Area
Network (CAN), known as the most widely used in-vehicle networking technology,
lacks encryption and authentication mechanisms, making the communications
delivered by distributed ECUs insecure. Inspired by the outstanding performance
of bidirectional encoder representations from transformers (BERT) for improving
many natural language processing tasks, we propose in this paper "CAN-BERT", a
deep learning-based network intrusion detection system, to detect cyberattacks
on the CAN bus protocol. We show that the BERT model can learn the sequence of
arbitration identifiers (IDs) in the CAN bus for anomaly detection using the
"masked language model" unsupervised training objective. The experimental
results on the "Car Hacking: Attack & Defense Challenge 2020" dataset show
that "CAN-BERT" outperforms state-of-the-art approaches. In addition to
identifying in-vehicle intrusions in real time within 0.8 ms to 3 ms, depending
on the CAN ID sequence length, it can also detect a wide variety of
cyberattacks with an F1-score between 0.81 and 0.99.
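
As a rough illustration of how a masked-language-model objective can be applied to CAN arbitration-ID sequences, the PyTorch sketch below trains a small BERT-style encoder on benign ID windows and scores new windows by their masked-prediction loss; windows whose loss exceeds a threshold calibrated on benign traffic would be flagged as anomalous. The vocabulary size, window length, model dimensions and masking ratio are placeholder assumptions, not values taken from the paper.

import torch
import torch.nn as nn

# Hypothetical settings: a vocabulary of CAN arbitration IDs plus a [MASK]
# token, and fixed-length ID windows. None of these values come from the paper.
NUM_IDS = 2048           # distinct arbitration IDs observed on the bus
MASK_ID = NUM_IDS        # extra token used for masking
VOCAB_SIZE = NUM_IDS + 1
SEQ_LEN, D_MODEL = 64, 128

class CanIdMLM(nn.Module):
    """Small BERT-style encoder over windows of CAN arbitration IDs."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=4, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, ids):
        h = self.encoder(self.embed(ids) + self.pos)
        return self.head(h)                      # (batch, seq, vocab)

def mask_tokens(ids, p=0.15):
    """Replace a random fraction of IDs with [MASK]; labels keep the originals."""
    ids, labels = ids.clone(), ids.clone()
    masked = torch.rand(ids.shape) < p
    labels[~masked] = -100                       # ignored by cross_entropy
    ids[masked] = MASK_ID
    return ids, labels

def anomaly_score(model, window):
    """Mean masked-prediction loss for one window; higher means more anomalous."""
    model.eval()
    with torch.no_grad():
        ids, labels = mask_tokens(window.unsqueeze(0))
        logits = model(ids)
        return nn.functional.cross_entropy(
            logits.view(-1, VOCAB_SIZE), labels.view(-1)).item()

# Unsupervised training on benign traffic only: minimise the MLM loss.
model = CanIdMLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
benign = torch.randint(0, NUM_IDS, (32, SEQ_LEN))    # placeholder ID windows
ids, labels = mask_tokens(benign)
loss = nn.functional.cross_entropy(
    model(ids).view(-1, VOCAB_SIZE), labels.view(-1))
loss.backward()
opt.step()
print(anomaly_score(model, benign[0]))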
Related papers
- A Robust Multi-Stage Intrusion Detection System for In-Vehicle Network Security using Hierarchical Federated Learning [0.0]
In-vehicle intrusion detection systems (IDSs) must detect seen attacks and provide a robust defense against new, unseen attacks.
Previous work has relied solely on the CAN ID feature or has used traditional machine learning (ML) approaches with manual feature extraction.
This paper introduces a novel, lightweight in-vehicle IDS that leverages a deep learning (DL) algorithm to address these limitations.
arXiv Detail & Related papers (2024-08-15T21:51:56Z) - SISSA: Real-time Monitoring of Hardware Functional Safety and
Cybersecurity with In-vehicle SOME/IP Ethernet Traffic [49.549771439609046]
We propose SISSA, a SOME/IP communication traffic-based approach for modeling and analyzing in-vehicle functional safety and cyber security.
Specifically, SISSA models hardware failures with the Weibull distribution and addresses five potential attacks on SOME/IP communication.
Extensive experimental results show the effectiveness and efficiency of SISSA.
arXiv Detail & Related papers (2024-02-21T03:31:40Z) - Exploring Highly Quantised Neural Networks for Intrusion Detection in
Automotive CAN [13.581341206178525]
Machine learning-based intrusion detection models have been shown to successfully detect multiple targeted attack vectors.
In this paper, we present a case for a custom-quantised multi-layer perceptron (CQMLP) as a multi-class classification model.
We show that the 2-bit CQMLP model, when integrated as the IDS, can detect malicious attack messages with a very high accuracy of 99.9%; a rough illustration of 2-bit quantisation appears after this list.
arXiv Detail & Related papers (2024-01-19T21:11:02Z) - A Lightweight Multi-Attack CAN Intrusion Detection System on Hybrid
FPGAs [13.581341206178525]
Intrusion detection and mitigation approaches have shown promising results in detecting multiple attack vectors on the Controller Area Network (CAN).
We present a lightweight multi-attack quantised machine learning model that is deployed using Xilinx's Deep Learning Processing Unit IP on a Zynq Ultrascale+ (XCZU3EG) FPGA.
The model detects denial-of-service and fuzzing attacks with an accuracy above 99% and a false positive rate of 0.07%, comparable to state-of-the-art techniques in the literature.
arXiv Detail & Related papers (2024-01-19T13:39:05Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - X-CANIDS: Signal-Aware Explainable Intrusion Detection System for Controller Area Network-Based In-Vehicle Network [6.68111081144141]
X-CANIDS dissects the payloads in CAN messages into human-understandable signals using a CAN database.
X-CANIDS can detect zero-day attacks because it does not require any labeled dataset in the training phase.
arXiv Detail & Related papers (2023-03-22T03:11:02Z) - Anomaly Detection in Intra-Vehicle Networks [0.0]
Modern vehicles are connected to a range of networks, including intra-vehicle networks and external networks.
Owing to loopholes in existing traditional protocols, cyber-attacks on vehicle networks are rising drastically.
This paper discusses the security issues of the CAN bus protocol and proposes an Intrusion Detection System (IDS) that detects known attacks.
arXiv Detail & Related papers (2022-05-07T03:38:26Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an
In-Vehicle CAN Bus Based on Deep Features of Voltage Signals [48.813942331065206]
We propose a security hardening system for in-vehicle networks.
The proposed system includes two mechanisms that process deep features extracted from voltage signals measured on the CAN bus.
arXiv Detail & Related papers (2021-06-15T06:12:33Z) - D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and
Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z) - Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
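
Regarding the custom-quantised MLP mentioned in the list above, the following minimal PyTorch sketch shows one common way a 2-bit quantised multi-layer perceptron can be trained, using a straight-through estimator. The quantisation scheme, layer sizes and class labels are assumptions for illustration only and are not taken from that paper.

import torch
import torch.nn as nn

def quantise_2bit(w):
    """Uniform 2-bit quantisation of a weight tensor to levels {-2,-1,0,1}*scale."""
    scale = w.abs().max().clamp(min=1e-8) / 2.0
    return torch.clamp(torch.round(w / scale), -2, 1) * scale

class TwoBitLinear(nn.Linear):
    """Linear layer trained quantisation-aware via a straight-through estimator."""
    def forward(self, x):
        wq = quantise_2bit(self.weight)
        # Forward uses quantised weights; backward treats quantisation as identity.
        w = self.weight + (wq - self.weight).detach()
        return nn.functional.linear(x, w, self.bias)

# A small multi-class classifier over fixed-size CAN feature windows.
# Layer sizes and the four classes are placeholders, not the paper's design.
model = nn.Sequential(
    TwoBitLinear(64, 128), nn.ReLU(),
    TwoBitLinear(128, 4),          # e.g. normal / DoS / fuzzing / spoofing
)
logits = model(torch.randn(8, 64))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss.backward()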