Efficacy of Statistical and Artificial Intelligence-based False
Information Cyberattack Detection Models for Connected Vehicles
- URL: http://arxiv.org/abs/2108.01124v1
- Date: Mon, 2 Aug 2021 18:50:12 GMT
- Title: Efficacy of Statistical and Artificial Intelligence-based False
Information Cyberattack Detection Models for Connected Vehicles
- Authors: Sakib Mahmud Khan, Gurcan Comert, Mashrur Chowdhury
- Abstract summary: Connected vehicles (CVs) are vulnerable to cyberattacks that can instantly compromise the safety of the vehicle itself and other connected vehicles and roadway infrastructure.
In this paper, we have evaluated three change point-based statistical models for cyberattack detection in the CV data.
We have used six AI models to detect false information attacks and compared the performance for detecting the attacks with our developed change point models.
- Score: 4.058429227214047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Connected vehicles (CVs), because of the external connectivity with other CVs
and connected infrastructure, are vulnerable to cyberattacks that can instantly
compromise the safety of the vehicle itself and other connected vehicles and
roadway infrastructure. One such cyberattack is the false information attack,
where an external attacker injects inaccurate information into the connected
vehicles and eventually can cause catastrophic consequences by compromising
safety-critical applications like the forward collision warning. The occurrence
and target of such attack events can be very dynamic, making real-time and
near-real-time detection challenging. Change point models can be used for
real-time detection of anomalies caused by the false information attack. In this
paper, we have evaluated three change point-based statistical models:
Expectation Maximization, Cumulative Summation, and Bayesian Online Change
Point Algorithms for cyberattack detection in the CV data. Also, data-driven
artificial intelligence (AI) models, which can be used to detect known and
unknown underlying patterns in the dataset, have the potential of detecting a
real-time anomaly in the CV data. We have used six AI models to detect false
information attacks and compared the performance for detecting the attacks with
our developed change point models. Our study shows that the change point models
outperformed the AI models in real-time false information attack detection.
Because change point models require no training, they are a feasible and
computationally efficient alternative to AI models for false information attack
detection in connected vehicles.
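The idea behind the change point approach can be illustrated with a minimal one-sided CUSUM detector: it accumulates deviations of a sensor stream above a reference mean and raises an alarm once the cumulative evidence crosses a threshold. The function below is a sketch only; the parameter names `mu0` (reference mean), `k` (slack), and `h` (decision threshold) and all values are illustrative assumptions, not taken from the paper.

```python
def cusum_detect(samples, mu0, k=0.5, h=5.0):
    """Return the index where an upward mean shift is first flagged, or None.

    One-sided CUSUM: s_i = max(0, s_{i-1} + (x_i - mu0 - k)); alarm when s_i > h.
    """
    s = 0.0
    for i, x in enumerate(samples):
        # Accumulate only deviations above the reference mean minus the slack k;
        # the max(0, ...) resets the statistic when the stream looks normal.
        s = max(0.0, s + (x - mu0 - k))
        if s > h:
            return i  # cumulative evidence of a shift exceeds the threshold
    return None

# Hypothetical CV speed stream: normal readings near 30, then injected
# false values near 45 starting at index 4.
stream = [30.1, 29.8, 30.3, 30.0, 45.2, 44.8, 45.5, 45.1]
print(cusum_detect(stream, mu0=30.0))  # → 4
```

Because the detector maintains only a single running statistic per stream, it needs no training phase and runs in constant time per sample, which is the computational advantage the abstract attributes to change point models.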
Related papers
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of learning for privacy-preserving cyber threat detection in terms of effectiveness, byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL turns out to be adversary-resistant to attacks of both data poisoning and model poisoning.
arXiv Detail & Related papers (2024-04-08T01:16:56Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of
Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z) - STC-IDS: Spatial-Temporal Correlation Feature Analyzing based Intrusion
Detection System for Intelligent Connected Vehicles [7.301018758489822]
We present a novel model for automotive intrusion detection based on spatial-temporal correlation features of in-vehicle communication traffic (STC-IDS).
Specifically, the proposed model exploits an encoding-detection architecture. In the encoder part, spatial and temporal relations are encoded simultaneously.
The encoded information is then passed to the detector for generating forceful spatial-temporal attention features and enabling anomaly classification.
arXiv Detail & Related papers (2022-04-23T04:22:58Z) - Are Your Sensitive Attributes Private? Novel Model Inversion Attribute
Inference Attacks on Classification Models [22.569705869469814]
We focus on model inversion attacks where the adversary knows non-sensitive attributes about records in the training data.
We devise a novel confidence score-based model inversion attribute inference attack that significantly outperforms the state-of-the-art.
We also extend our attacks to the scenario where some of the other (non-sensitive) attributes of a target record are unknown to the adversary.
arXiv Detail & Related papers (2022-01-23T21:27:20Z) - DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly
detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each getting data of a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z) - Black-box Model Inversion Attribute Inference Attacks on Classification
Models [32.757792981935815]
We focus on one kind of model inversion attacks, where the adversary knows non-sensitive attributes about instances in the training data.
We devise two novel model inversion attribute inference attacks -- confidence modeling-based attack and confidence score-based attack.
We evaluate our attacks on two types of machine learning models, decision tree and deep neural network, trained with two real datasets.
arXiv Detail & Related papers (2020-12-07T01:14:19Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Change Point Models for Real-time Cyber Attack Detection in Connected
Vehicle Environment [7.863458801839857]
This study investigates the efficacy of two change point models, Expectation Maximization (EM) and two forms of Cumulative Summation (CUSUM) algorithms, for real-time V2I cyber attack detection in a CV environment.
arXiv Detail & Related papers (2020-03-05T21:19:42Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.