Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers?
- URL: http://arxiv.org/abs/2405.08466v1
- Date: Tue, 14 May 2024 09:42:21 GMT
- Title: Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers?
- Authors: Francesco Marchiori, Alessandro Brighente, Mauro Conti
- Abstract summary: This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
- Score: 60.51287814584477
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous driving is a research direction that has gained enormous traction in the last few years thanks to advancements in Artificial Intelligence (AI). Depending on the level of independence from the human driver, several studies show that Autonomous Vehicles (AVs) can reduce the number of on-road crashes and decrease overall fuel emissions by improving efficiency. However, security research on this topic is mixed and presents some gaps. On one hand, these studies often neglect the intrinsic vulnerabilities of AI algorithms, which are known to compromise the security of these systems. On the other, the most prevalent attacks towards AI rely on unrealistic assumptions, such as access to the model parameters or the training dataset. As such, it is unclear if autonomous driving can still claim several advantages over human driving in real-world applications. This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs and establishing a pragmatic threat model. Through our analysis, we develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios. Our evaluation serves as a foundation for providing essential takeaway messages, guiding both researchers and practitioners at various stages of the automation pipeline. In doing so, we contribute valuable insights to advance the discourse on the security and viability of autonomous driving in real-world applications.
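The abstract's "pragmatic threat model" distinguishes realistic attacker capabilities (e.g. black-box, physically deployable attacks) from the unrealistic white-box assumptions it criticizes. As a minimal, hypothetical illustration of that distinction (the capability names and the `realism` scale are assumptions for this sketch, not taken from the paper):

```python
from dataclasses import dataclass
from enum import Enum

class Access(Enum):
    WHITE_BOX = "model parameters and training data"   # often unrealistic
    GRAY_BOX = "query access plus architecture knowledge"
    BLACK_BOX = "input/output queries only"            # most realistic on-road

@dataclass
class ThreatModel:
    access: Access
    physical: bool   # can the attacker alter the driving scene itself?
    realism: int     # 1 (lab-only) .. 5 (deployable today); illustrative scale

    def is_practical(self) -> bool:
        # A pragmatic threat model excludes white-box assumptions
        # and low-realism, lab-only attack settings.
        return self.access is not Access.WHITE_BOX and self.realism >= 3

# A physical adversarial-patch attack: black-box and deployable.
patch_attack = ThreatModel(Access.BLACK_BOX, physical=True, realism=4)
print(patch_attack.is_practical())  # True
```

Encoding the threat model as data makes the paper's core question concrete: which attacks survive once white-box access is ruled out.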
Related papers
- Attack End-to-End Autonomous Driving through Module-Wise Noise [4.281151553151594]
In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
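The module-wise noise injection described above perturbs intermediate features between pipeline modules rather than the raw input. A toy NumPy sketch of the idea, with stand-in modules and an assumed per-module L-infinity bound (the pipeline and bound are illustrative, not the paper's actual attack):

```python
import numpy as np

rng = np.random.default_rng(0)

def module(x, w):
    """Stand-in for one pipeline module (e.g. perception -> planning)."""
    return np.tanh(w @ x)

def run_pipeline(x, weights, noise=None):
    """Run modules in sequence, optionally injecting bounded noise
    into each module's output (the module-wise attack surface)."""
    h = x
    for i, w in enumerate(weights):
        h = module(h, w)
        if noise is not None:
            h = h + noise[i]  # perturb intermediate features
    return h

weights = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal(4)

clean = run_pipeline(x, weights)
eps = 0.1  # assumed L-inf bound on injected noise per module
noise = [eps * np.sign(rng.standard_normal(4)) for _ in weights]
attacked = run_pipeline(x, weights, noise)

# Small per-module perturbations accumulate into an output deviation.
print(float(np.abs(attacked - clean).max()))
```

In the real attack the noise would be optimized against the model's loss; here random sign noise only demonstrates the injection points.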
arXiv Detail & Related papers (2024-09-12T02:19:16Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Safety Implications of Explainable Artificial Intelligence in End-to-End Autonomous Driving [4.1636282808157254]
The end-to-end learning pipeline is gradually creating a paradigm shift in the ongoing development of highly autonomous vehicles.
A lack of interpretability in real-time decisions with contemporary learning methods impedes user trust and attenuates the widespread deployment and commercialization of such vehicles.
This survey seeks to answer the question: When and how can explanations improve safety of end-to-end autonomous driving?
arXiv Detail & Related papers (2024-03-18T18:49:20Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
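Behavioral driver authentication accepts or rejects a driver from driving-style features, and an evasion attack perturbs those features until the model accepts an impostor. A toy sketch of the evasion idea, using a linear score as a stand-in for the Random Forest / RNN models (all weights and features here are invented for illustration; the actual SMARTCAN and GANCAN attacks are far more sophisticated):

```python
import numpy as np

# Toy behavioral model: a linear score over driving features
# (e.g. mean speed, pedal variance, steering smoothness).
w = np.array([0.8, -0.5, 0.3])  # illustrative weights
b = -0.2

def authenticate(features):
    """Accept the driver if the score exceeds 0 (stand-in for RF/RNN)."""
    return float(w @ features + b) > 0

owner = np.array([1.0, 0.2, 0.5])    # accepted profile
thief = np.array([-0.5, 0.9, -0.1])  # rejected profile

# Evasion: nudge the impostor's features toward the accept region
# along the model's weight vector until authentication succeeds.
step = 0.5
evading = thief.copy()
while not authenticate(evading):
    evading += step * w / np.linalg.norm(w)

print(authenticate(evading))  # True: evasion succeeds
```

The sketch shows why behavioral biometrics alone are fragile: an attacker who can probe the decision boundary can steer crafted inputs across it.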
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions [8.012552653212687]
This study sheds light on the development of explainable artificial intelligence (XAI) approaches for autonomous driving.
First, we provide a thorough overview of the state-of-the-art and emerging approaches for XAI-based autonomous driving.
We then propose a conceptual framework that considers the essential elements for explainable end-to-end autonomous driving.
arXiv Detail & Related papers (2021-12-21T22:51:37Z)
- Towards Safe, Explainable, and Regulated Autonomous Driving [11.043966021881426]
We propose a framework that integrates autonomous control, explainable AI (XAI), and regulatory compliance.
We describe relevant XAI approaches that can help achieve the goals of the framework.
arXiv Detail & Related papers (2021-11-20T05:06:22Z)
- A Survey on Autonomous Vehicle Control in the Era of Mixed-Autonomy: From Physics-Based to AI-Guided Driving Policy Learning [7.881140597011731]
This paper serves as an introduction and overview of the potentially useful models and methodologies from artificial intelligence (AI) into the field of transportation engineering for autonomous vehicle (AV) control.
We will discuss state-of-the-art applications of AI-guided methods, identify opportunities and obstacles, raise open questions, and help suggest the building blocks and areas where AI could play a role in mixed autonomy.
arXiv Detail & Related papers (2020-07-10T04:27:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.