Explainable Artificial Intelligence (XAI) for Increasing User Trust in
Deep Reinforcement Learning Driven Autonomous Systems
- URL: http://arxiv.org/abs/2106.03775v1
- Date: Mon, 7 Jun 2021 16:38:43 GMT
- Title: Explainable Artificial Intelligence (XAI) for Increasing User Trust in
Deep Reinforcement Learning Driven Autonomous Systems
- Authors: Jeff Druce, Michael Harradon, James Tittle
- Abstract summary: We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user interface for our XAI framework and evaluated its efficacy via a human-user experiment.
- Score: 0.8701566919381223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of providing users of deep Reinforcement Learning
(RL) based systems with a better understanding of when their output can be
trusted. We offer an explainable artificial intelligence (XAI) framework that
provides a three-fold explanation: a graphical depiction of the system's
generalization and performance in the current game state, how well the agent
would play in semantically similar environments, and a narrative explanation of
what the graphical information implies. We created a user interface for our XAI
framework and evaluated its efficacy via a human-user experiment. The results
demonstrate a statistically significant increase in user trust and acceptance
of the AI system with explanation, versus the AI system without explanation.
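
As a purely illustrative sketch (the paper does not publish an implementation, and every name below is hypothetical), the three-fold explanation could be modeled as a small data structure whose narrative component verbalizes the same quantities shown graphically:

from dataclasses import dataclass

@dataclass
class ThreeFoldExplanation:
    state_performance: float        # estimated success rate in the current game state
    similar_env_performance: float  # estimated success rate in semantically similar environments
    narrative: str                  # plain-language restatement of the graphical information

def build_explanation(state_perf: float, similar_perf: float) -> ThreeFoldExplanation:
    # The narrative mirrors the two performance estimates in plain language.
    narrative = (
        f"The agent is expected to succeed in about {state_perf:.0%} of states like "
        f"this one and in about {similar_perf:.0%} of semantically similar environments."
    )
    return ThreeFoldExplanation(state_perf, similar_perf, narrative)

print(build_explanation(0.85, 0.62).narrative)

The sketch keeps the three components together in one object, mirroring how the abstract presents them as a single combined explanation.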
Related papers
- Explainable AI does not provide the explanations end-users are asking
for [0.0]
We discuss XAI's limitations in deployment and conclude that transparency, alongside rigorous validation, is better suited to gaining trust in AI systems.
Users of many AI systems frequently require XAI techniques in order to understand complex models and their predictions, and to gain trust.
arXiv Detail & Related papers (2023-01-25T10:34:38Z)
- Towards Reconciling Usability and Usefulness of Explainable AI
Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Principles of Explanation in Human-AI Systems [0.7768952514701895]
Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems.
XAI systems are frequently algorithm-focused, starting and ending with an algorithm that implements a basic, untested idea about explainability.
We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems.
arXiv Detail & Related papers (2021-02-09T17:43:45Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.