A Survey on Machine Learning-based Misbehavior Detection Systems for 5G
and Beyond Vehicular Networks
- URL: http://arxiv.org/abs/2201.10500v1
- Date: Tue, 25 Jan 2022 17:48:57 GMT
- Title: A Survey on Machine Learning-based Misbehavior Detection Systems for 5G
and Beyond Vehicular Networks
- Authors: Abdelwahab Boualouache and Thomas Engel
- Abstract summary: Integrating V2X with 5G has enabled ultra-low latency and high-reliability V2X communications.
Attacks have become more aggressive, and attackers have become more strategic.
Many V2X Misbehavior Detection Systems (MDSs) have adopted this paradigm.
Yet, a systematic analysis of these systems is lacking, and developing effective ML-based MDSs is still an open issue.
- Score: 4.410803831098062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Significant progress has been made towards deploying Vehicle-to-Everything
(V2X) technology. Integrating V2X with 5G has enabled ultra-low latency and
high-reliability V2X communications. However, while communication performance
has improved, security and privacy issues have increased. Attacks have become
more aggressive, and attackers have become more strategic. Public Key
Infrastructure proposed by standardization bodies cannot solely defend against
these attacks. Thus, to complement it, sophisticated systems should be
designed to detect such attacks and attackers. Machine Learning (ML) has
recently emerged as a key enabler to secure our future roads. Many V2X
Misbehavior Detection Systems (MDSs) have adopted this paradigm. Yet, a
systematic analysis of these systems is lacking, and developing effective
ML-based MDSs is still an open issue. To this end, this paper presents a
comprehensive survey and
classification of ML-based MDSs. We analyze and discuss them from both security
and ML perspectives. Then, we give lessons learned and recommendations to
help in developing, validating, and deploying ML-based MDSs. Finally, we
highlight open research and standardization issues with some future directions.
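
To make the surveyed paradigm concrete, below is a minimal sketch of the kind of supervised ML-based MDS the paper classifies: a classifier over per-message plausibility features. The feature names, the input file, and the VeReMi-style labeling are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of a supervised ML-based MDS. The feature names and
# the CSV file are hypothetical placeholders (VeReMi-style), not the
# survey's own pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per received V2X message, labeled
# 0 (benign) or 1 (misbehaving).
df = pd.read_csv("v2x_messages.csv")
features = ["pos_error", "speed_delta", "heading_delta",
            "rssi_distance_gap", "msg_frequency"]
X, y = df[features].to_numpy(), df["label"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A deployed MDS would add feature extraction from the live V2X stack and a reaction policy (e.g., reporting suspected misbehavior to a misbehavior authority), which the survey discusses from the security side.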
Related papers
- Jailbreaking and Mitigation of Vulnerabilities in Large Language Models [4.564507064383306]
Large Language Models (LLMs) have transformed artificial intelligence by advancing natural language understanding and generation.
Despite these advancements, LLMs have shown considerable vulnerabilities, particularly to prompt injection and jailbreaking attacks.
This review analyzes the state of research on these vulnerabilities and presents available defense strategies.
arXiv Detail & Related papers (2024-10-20T00:00:56Z)
- A Life-long Learning Intrusion Detection System for 6G-Enabled IoV [3.2284427438223013]
6G technology will revolutionize the Internet of Vehicles (IoV) with ultra-high data rates and seamless network coverage.
6G will likely increase the IoV's susceptibility to a spectrum of emerging cyber threats.
This paper presents a novel intrusion detection system leveraging the paradigm of life-long (or continual) learning (a minimal rehearsal-based sketch follows this entry).
arXiv Detail & Related papers (2024-07-22T15:07:27Z)
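
The entry above hinges on continual learning; its exact method is not reproduced here. The sketch below shows one common rehearsal-based approach, experience replay with reservoir sampling over an incrementally trained detector, with all names and sizes as illustrative assumptions.

```python
# Rehearsal-based continual learning for an IDS: a generic sketch of
# experience replay, not the cited paper's actual method.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")   # incremental linear detector
buffer_X, buffer_y = [], []              # replay memory of past traffic
n_seen = 0
BUFFER_SIZE, REPLAY_PER_BATCH = 5000, 256

def learn_task(X_new, y_new, classes=(0, 1)):
    """Update the detector on a new traffic distribution (a new 'task'),
    replaying stored samples so earlier attack patterns are not forgotten."""
    global n_seen
    if buffer_X:
        k = min(REPLAY_PER_BATCH, len(buffer_X))
        idx = rng.choice(len(buffer_X), size=k, replace=False)
        X_mix = np.vstack([X_new, np.asarray(buffer_X)[idx]])
        y_mix = np.concatenate([y_new, np.asarray(buffer_y)[idx]])
    else:
        X_mix, y_mix = X_new, y_new
    model.partial_fit(X_mix, y_mix, classes=np.asarray(classes))
    # Reservoir sampling keeps a bounded, uniform sample of all traffic seen.
    for x, label in zip(X_new, y_new):
        n_seen += 1
        if len(buffer_X) < BUFFER_SIZE:
            buffer_X.append(x); buffer_y.append(label)
        else:
            j = rng.integers(n_seen)
            if j < BUFFER_SIZE:
                buffer_X[j], buffer_y[j] = x, label
```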
- A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends [78.3201480023907]
Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across a wide range of multimodal understanding and reasoning tasks.
The vulnerability of LVLMs is relatively underexplored, posing potential security risks in daily usage.
In this paper, we provide a comprehensive review of the various forms of existing LVLM attacks.
arXiv Detail & Related papers (2024-07-10T06:57:58Z)
- Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks [9.86830550255822]
Running Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to a growing range of security and privacy attacks.
We propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern.
Using federated learning, the proposed intrusion detection system can be trained on large and diverse benign network traffic while preserving the CAVs' privacy and minimizing communication overhead (both ingredients are sketched below).
arXiv Detail & Related papers (2024-07-03T12:42:31Z)
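
The two ingredients from the summary above, sketched with assumed shapes and hyperparameters: an auto-encoder trained only on benign traffic (attacks are flagged by high reconstruction error) and FedAvg-style aggregation of the locally trained weights. This is a generic illustration, not the paper's exact architecture.

```python
# Generic sketch: benign-only auto-encoder anomaly detection plus
# FedAvg weight aggregation. Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class TrafficAE(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                 nn.Linear(16, 8))
        self.dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                                 nn.Linear(16, n_features))
    def forward(self, x):
        return self.dec(self.enc(x))

def local_train(model, benign_batches, epochs=1, lr=1e-3):
    """One CAV's local round: fit the AE to its own benign traffic only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in benign_batches:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server side: plain FedAvg over the clients' weights."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0)
            for k in states[0]}

def is_attack(model, x, threshold):
    """Flag traffic whose reconstruction error exceeds the threshold."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold
```

The detection threshold would be calibrated on held-out benign traffic, e.g., as a high percentile of its reconstruction-error distribution.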
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
The growing integration of vehicles with external networks has led to a surge in attacks targeting their internal Controller Area Network (CAN) bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs (a generic adversarial-training sketch follows this entry).
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
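
CANEDERLI itself is not reproduced here; the sketch below illustrates the adversarial-training idea the paper studies, a standard FGSM-based loop over a CAN-feature classifier, with the model, epsilon, and data loader assumed.

```python
# Standard FGSM adversarial training for a CAN-traffic classifier.
# An illustration of the general technique, not CANEDERLI; `model` is
# any nn.Module classifier over CAN-derived feature vectors.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps, loss_fn):
    """Craft a one-step FGSM perturbation of the input features."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_train(model, loader, epochs=5, eps=0.05, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps, loss_fn)
            opt.zero_grad()
            # Train on a mix of clean and adversarial samples.
            loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
            loss.backward()
            opt.step()
    return model
```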
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security (an illustrative RNN-based sketch follows this entry).
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
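
A minimal sketch in the spirit of the paper's RNN variant: a GRU over windows of CAN-derived driving signals that predicts which enrolled driver produced each window. The feature count, window length, and layer sizes are illustrative assumptions.

```python
# Behavioral driver authentication with a GRU: a generic sketch, not
# the paper's exact architecture. Shapes are illustrative.
import torch
import torch.nn as nn

class DriverAuthRNN(nn.Module):
    def __init__(self, n_features=10, hidden=64, n_drivers=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_drivers)
    def forward(self, x):          # x: (batch, time, n_features)
        _, h = self.gru(x)         # h: (1, batch, hidden)
        return self.head(h[-1])    # per-driver logits

model = DriverAuthRNN()
windows = torch.randn(8, 120, 10)  # 8 windows of 120 timesteps each
driver = model(windows).argmax(dim=1)  # predicted driver per window
```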
- On Evaluating Adversarial Robustness of Large Vision-Language Models [64.66104342002882]
We evaluate the robustness of large vision-language models (VLMs) in the most realistic and high-risk setting.
In particular, we first craft targeted adversarial examples against pretrained models such as CLIP and BLIP.
Black-box queries on these VLMs can further improve the effectiveness of targeted evasion (a targeted PGD sketch follows this entry).
arXiv Detail & Related papers (2023-05-26T13:49:44Z)
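
The targeted-attack idea from the summary above, sketched generically: projected gradient descent that pushes an image's embedding toward a chosen target embedding, so a CLIP-style image encoder "sees" the target content. The encoder is any differentiable image-embedding model; epsilon, step size, and step count are assumptions.

```python
# Targeted PGD against an image-embedding model: a generic sketch of
# the attack idea, not the paper's pipeline. `encoder` maps images
# (batch, C, H, W) in [0, 1] to embedding vectors.
import torch

def targeted_pgd(encoder, image, target_emb, eps=8/255, alpha=1/255, steps=40):
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        emb = encoder(x_adv)
        # Ascend on cosine similarity to the target embedding.
        loss = torch.nn.functional.cosine_similarity(emb, target_emb).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, image - eps),
                                      image + eps), 0, 1)
    return x_adv
```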
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)