Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
- URL: http://arxiv.org/abs/2402.10086v2
- Date: Wed, 3 Jul 2024 08:31:45 GMT
- Title: Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
- Authors: Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
- Abstract summary: We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
- Score: 12.38351931894004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing challenge of safety assurance of AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
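As a rough illustration of one of the five contributions named above (interpretable monitoring combined with explanation delivery to users), the sketch below wraps an opaque planner with a rule-based monitor that can override an unsafe output and report a human-readable reason. All class names, the threshold, and the rule itself are hypothetical and invented for illustration; they are not taken from the SafeX framework.

```python
# Minimal, hypothetical sketch of interpretable monitoring around a black-box planner.
# Names (BlackBoxPlanner, RuleMonitor, Action) and the gap rule are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    acceleration: float  # m/s^2
    steering: float      # rad


class BlackBoxPlanner:
    """Stand-in for an opaque learned planning model."""

    def plan(self, speed: float, gap_to_lead: float) -> Action:
        # Hypothetical learned policy output.
        return Action(acceleration=1.5, steering=0.0)


class RuleMonitor:
    """Interpretable monitor: checks the planner's output against a
    human-readable rule and produces an explanation for its decision."""

    def __init__(self, min_gap: float = 10.0):
        self.min_gap = min_gap  # minimum allowed gap to the lead vehicle (m), illustrative value

    def check(self, action: Action, gap_to_lead: float) -> tuple[Action, str]:
        if gap_to_lead < self.min_gap and action.acceleration > 0.0:
            safe = Action(acceleration=-2.0, steering=action.steering)
            reason = (f"Override: gap {gap_to_lead:.1f} m is below the "
                      f"{self.min_gap:.1f} m threshold, so braking instead of accelerating.")
            return safe, reason
        return action, "Planner output accepted: no monitored rule was violated."


# Usage: the monitor both enforces the safety rule and yields a user-facing explanation.
planner, monitor = BlackBoxPlanner(), RuleMonitor()
action = planner.plan(speed=20.0, gap_to_lead=6.0)
final_action, explanation = monitor.check(action, gap_to_lead=6.0)
print(final_action, explanation)
```

The point of the sketch is only the structure: a transparent, rule-based layer sits between the opaque model and the vehicle, so the same component that enforces safety constraints can also generate the explanation shown to the user.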
Related papers
- The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis [4.119574613934122]
The black-box nature of machine learning models limits the use of conventional approaches to certifying complex technical systems.
As a potential solution, methods that provide insight into these black-box models could be used.
We find that XAI methods can be a helpful asset for safe AI development, but since certification relies on comprehensive and correct information about technical systems, their impact is expected to be limited.
arXiv Detail & Related papers (2024-07-22T16:08:21Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of GS AI (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence [9.844540637074836]
Responsible AI emphasizes the need to develop trustworthy AI systems.
XAI has been broadly considered a building block for responsible AI (RAI).
Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
arXiv Detail & Related papers (2023-12-04T00:54:04Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges across the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - SoK: On the Semantic AI Security in Autonomous Driving [42.15658768948801]
Autonomous Driving systems rely on AI components to make safe and correct driving decisions.
For AI component-level vulnerabilities to be semantically impactful at the system level, they must address non-trivial semantic gaps.
In this paper, we define such research space as semantic AI security as opposed to generic AI security.
arXiv Detail & Related papers (2022-03-10T12:00:34Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Towards Safe, Explainable, and Regulated Autonomous Driving [11.043966021881426]
We propose a framework that integrates autonomous control, explainable AI (XAI), and regulatory compliance.
We describe relevant XAI approaches that can help achieve the goals of the framework.
arXiv Detail & Related papers (2021-11-20T05:06:22Z) - Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI).
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-07-12T08:42:19Z)