Change Point Models for Real-time Cyber Attack Detection in Connected Vehicle Environment
- URL: http://arxiv.org/abs/2003.04185v1
- Date: Thu, 5 Mar 2020 21:19:42 GMT
- Title: Change Point Models for Real-time Cyber Attack Detection in Connected Vehicle Environment
- Authors: Gurcan Comert, Mizanur Rahman, Mhafuzul Islam, and Mashrur Chowdhury
- Abstract summary: This study investigates the efficacy of two change point models, Expectation Maximization (EM) and two forms of Cumulative Summation (CUSUM) algorithms for real-time V2I cyber attack detection in a CV Environment.
- Score: 7.863458801839857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Connected vehicle (CV) systems are cognizant of potential cyber attacks
because of the increasing connectivity among their different components, such as
vehicles, roadside infrastructure, and traffic management centers. However,
detecting security threats in real time and developing effective countermeasures
for a CV system is challenging because of the dynamic behavior of such attacks,
the high computational power they demand, and the historical data required to
train detection models. To address these challenges, statistical models,
especially change point models, show potential for real-time anomaly detection.
Thus, the objective of this study is to investigate the efficacy of two change
point models, the Expectation Maximization (EM) algorithm and two forms of the
Cumulative Summation (CUSUM) algorithm (i.e., typical and adaptive), for
real-time V2I cyber attack detection in a CV environment. To demonstrate the
efficacy of these models, we evaluated them for three types of cyber attacks,
denial of service (DOS), impersonation, and false information, using basic
safety messages (BSMs) generated from CVs through simulation. Results from
numerical analysis revealed that EM, CUSUM, and adaptive CUSUM could detect
these cyber attacks, DOS, impersonation, and false information, with an
accuracy of (99%, 100%, 100%), (98%, 10%, 100%), and (100%, 98%, 100%),
respectively.
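
To make the monitoring idea concrete, the sketch below illustrates a typical and an adaptive CUSUM change point detector running over a simulated stream of BSM speed values. It is a minimal, hypothetical illustration only: the parameter values (allowance k, threshold h, EWMA rate alpha), the function names, and the simulated data are assumptions for exposition, not the paper's actual configuration or results.

```python
# Minimal sketch (not the paper's implementation): a typical and an adaptive
# CUSUM change point detector over a simulated stream of BSM speed values.
# All parameter values and the simulated data are illustrative assumptions.
import numpy as np


def cusum(x, mu0, sigma0, k=0.5, h=5.0):
    """Typical one-sided (upper) CUSUM with a fixed baseline mean/std.

    k is the allowance and h the decision threshold, both in units of
    standard deviations. Returns the index of the first alarm, or -1.
    """
    g = 0.0
    for t, xt in enumerate(x):
        z = (xt - mu0) / sigma0        # standardize with the known baseline
        g = max(0.0, g + z - k)        # accumulate only positive drift
        if g > h:
            return t                   # change point declared at sample t
    return -1


def adaptive_cusum(x, k=0.5, h=5.0, alpha=0.05, warmup=30):
    """Adaptive variant: the baseline mean/variance are tracked with an
    exponentially weighted moving average instead of being fixed a priori,
    so no historical training data is required."""
    mu, var = x[0], 1.0
    g = 0.0
    for t, xt in enumerate(x):
        sigma = max(np.sqrt(var), 1e-6)
        z = (xt - mu) / sigma
        if t >= warmup:                # monitor once the estimates settle
            g = max(0.0, g + z - k)
            if g > h:
                return t
        # update the running baseline estimates (EWMA of mean and variance)
        mu = (1 - alpha) * mu + alpha * xt
        var = (1 - alpha) * var + alpha * (xt - mu) ** 2
    return -1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated BSM speed feed (m/s): normal traffic, then a false-information
    # attack that inflates the reported speeds after sample 200.
    speeds = np.concatenate([rng.normal(14.0, 1.0, 200),
                             rng.normal(20.0, 1.0, 100)])
    print("typical CUSUM alarm at sample:", cusum(speeds, mu0=14.0, sigma0=1.0))
    print("adaptive CUSUM alarm at sample:", adaptive_cusum(speeds))
```

Both variants update a single cumulative statistic per incoming message, which is what makes this family of detectors cheap enough for the per-message, real-time monitoring the abstract emphasizes.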
Related papers
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- Evaluating Predictive Models in Cybersecurity: A Comparative Analysis of Machine and Deep Learning Techniques for Threat Detection [0.0]
This paper examines and compares various machine learning and deep learning models to identify the most suitable ones for detecting and mitigating cybersecurity risks.
Two datasets are used in the study to assess models such as Naive Bayes, SVM, Random Forest, and deep learning architectures such as VGG16, in terms of accuracy, precision, recall, and F1-score.
arXiv Detail & Related papers (2024-07-08T15:05:59Z)
- Secure Hierarchical Federated Learning in Vehicular Networks Using Dynamic Client Selection and Anomaly Detection [10.177917426690701]
Hierarchical Federated Learning (HFL) faces the challenge of adversarial or unreliable vehicles in vehicular networks.
Our study introduces a novel framework that integrates dynamic vehicle selection and robust anomaly detection mechanisms.
Our proposed algorithm demonstrates remarkable resilience even under intense attack conditions.
arXiv Detail & Related papers (2024-05-25T18:31:20Z)
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning (FL) for privacy-preserving cyber threat detection in terms of effectiveness, Byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve performance comparable to that of centrally trained counterparts.
Under a realistic threat model, FL proves resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z)
- A Dependable Hybrid Machine Learning Model for Network Intrusion Detection [1.222622290392729]
We propose a new hybrid model that combines machine learning and deep learning to increase detection rates while ensuring dependability.
Our method produces excellent results when tested on two datasets, KDDCUP'99 and CIC-MalMem-2022.
arXiv Detail & Related papers (2022-12-08T20:19:27Z)
- DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each receiving data from a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles [4.058429227214047]
Connected vehicles (CVs) are vulnerable to cyberattacks that can instantly compromise the safety of the vehicle itself and other connected vehicles and roadway infrastructure.
In this paper, we have evaluated three change point-based statistical models for cyberattack detection in the CV data.
We have used six AI models to detect false information attacks and compared their detection performance with that of our developed change point models.
arXiv Detail & Related papers (2021-08-02T18:50:12Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.