SoK: Stealing Cars Since Remote Keyless Entry Introduction and How to Defend From It
- URL: http://arxiv.org/abs/2505.02713v1
- Date: Mon, 05 May 2025 15:07:23 GMT
- Title: SoK: Stealing Cars Since Remote Keyless Entry Introduction and How to Defend From It
- Authors: Tommaso Bianchi, Alessandro Brighente, Mauro Conti, Edoardo Pavan
- Abstract summary: This paper provides a Systematization Of Knowledge (SOK) on Remote Keyless Entry (RKE) and Passive Keyless Entry and Start (PKES). To the best of our knowledge, this is the first comprehensive SOK on RKE systems, and we address specific research questions to understand the evolution and security status of such systems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Remote Keyless Entry (RKE) systems have been the target of thieves since their introduction in the automotive industry. Robberies targeting vehicles and their remote entry systems are booming again, without the industrial sector making significant advances to protect against them. Researchers and attackers continuously play cat and mouse, devising new methodologies to exploit weaknesses in RKEs and new defense strategies for them. In this fragmented landscape, different attacks and defenses have been discussed in research and industry without proper bridging. In this paper, we provide a Systematization Of Knowledge (SOK) on RKE and Passive Keyless Entry and Start (PKES), focusing on their history and current situation, ranging from legacy systems to modern web-based ones. We provide insight into vehicle manufacturers' technologies and the attacks and defense mechanisms involving them. To the best of our knowledge, this is the first comprehensive SOK on RKE systems, and we address specific research questions to understand the evolution and security status of such systems. By identifying the weaknesses RKE still faces, we provide future directions for security researchers and companies to find viable solutions to address old attacks, such as Relay and RollJam, as well as new ones, like API vulnerabilities.
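To ground the RollJam attack mentioned above, here is a minimal, purely illustrative sketch of the rolling-code scheme it exploits. This is not any manufacturer's actual protocol: the key, window size, and code derivation are assumptions for illustration. A receiver accepts a code whose counter lies within a look-ahead window past the last counter it saw; RollJam jams and records two consecutive button presses, replays the first so the victim is unaware, and keeps the second, still-unused code for later.

```python
import hmac
import hashlib

SECRET = b"shared-fob-secret"  # hypothetical pre-shared key between fob and car
WINDOW = 16                    # hypothetical look-ahead window for counter resync

def make_code(counter: int) -> bytes:
    """Fob side: derive a one-time code from the shared secret and a counter."""
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()

class Receiver:
    """Car side: accept a code only if its counter is ahead of the last one seen."""

    def __init__(self) -> None:
        self.last_counter = 0

    def accept(self, code: bytes) -> bool:
        # Try every counter in the window; on a match, resynchronize.
        for c in range(self.last_counter + 1, self.last_counter + 1 + WINDOW):
            if hmac.compare_digest(code, make_code(c)):
                self.last_counter = c
                return True
        return False

rx = Receiver()
code1, code2 = make_code(1), make_code(2)  # two presses captured by the attacker

assert rx.accept(code1)       # attacker replays press 1; victim's door opens
assert rx.accept(code2)       # press 2 is still inside the window: the flaw
assert not rx.accept(code1)   # old codes are rejected, so simple replay fails
```

The final assertion shows why plain replay fails and why RollJam instead banks a never-delivered code: anything at or behind `last_counter` is rejected, but a captured code whose counter is still ahead remains valid indefinitely until the window moves past it.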
Related papers
- Vehicular Intrusion Detection System for Controller Area Network: A Comprehensive Survey and Evaluation [23.68939806308534]
This paper examines existing vehicular attacks and defense strategies employed against the Controller Area Network (CAN) and specialized Electronic Control Units (ECUs). The findings of our investigation reveal that the examined VIDS primarily concentrate on particular categories of attacks, neglecting the broader spectrum of potential threats. We put forth several defense recommendations based on our study findings, aiming to inform and guide the future design of VIDS in the context of vehicular security.
arXiv Detail & Related papers (2025-05-22T20:42:57Z) - Modern DDoS Threats and Countermeasures: Insights into Emerging Attacks and Detection Strategies [49.57278643040602]
Distributed Denial of Service (DDoS) attacks persist as significant threats to online services and infrastructure. This paper offers a comprehensive survey of emerging DDoS attacks and detection strategies over the past decade.
arXiv Detail & Related papers (2025-02-27T11:22:25Z) - A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms. To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
arXiv Detail & Related papers (2024-12-16T02:27:54Z) - A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles [0.0]
This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
arXiv Detail & Related papers (2024-11-21T01:26:52Z) - SoK: What don't we know? Understanding Security Vulnerabilities in SNARKs [8.190612719134606]
Zero-knowledge proofs (ZKPs) have evolved from a theoretical concept providing privacy and verifiability to practical, real-world implementations, with SNARKs (Succinct Non-Interactive Arguments of Knowledge) emerging as one of the most significant innovations.
This paper focuses on assessing end-to-end security properties of real-life SNARK implementations.
arXiv Detail & Related papers (2024-02-23T12:41:28Z) - Robust Recommender System: A Survey and Future Directions [58.87305602959857]
We first present a taxonomy to organize current techniques for withstanding malicious attacks and natural noise. We then explore state-of-the-art methods in each category, including fraudster detection, adversarial training, and certifiable robust training for defending against malicious attacks. We discuss robustness across varying recommendation scenarios and its interplay with other properties like accuracy, interpretability, privacy, and fairness.
arXiv Detail & Related papers (2023-09-05T08:58:46Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.