Towards Reconciling Usability and Usefulness of Explainable AI
Methodologies
- URL: http://arxiv.org/abs/2301.05347v1
- Date: Fri, 13 Jan 2023 01:08:49 GMT
- Title: Towards Reconciling Usability and Usefulness of Explainable AI
Methodologies
- Authors: Pradyumna Tambwekar and Matthew Gombolay
- Abstract summary: Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
- Score: 2.715884199292287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interactive Artificial Intelligence (AI) agents are becoming increasingly
prevalent in society. However, application of such systems without
understanding them can be problematic. Black-box AI systems can lead to
liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and
end-users by offering insights into how an AI algorithm functions. Many modern
algorithms focus on making the AI model "transparent", i.e., unveiling the
inherent functionality of the agent in a simpler format. However, these
approaches do not cater to end-users of these systems, as users may not possess
the requisite knowledge to understand these explanations in a reasonable amount of time.
Therefore, to be able to develop suitable XAI methods, we need to understand
the factors which influence subjective perception and objective usability. In
this paper, we present a novel user study examining four differing XAI
modalities commonly employed in prior work for explaining AI behavior,
including Decision Trees, Text, and Programs. We study these XAI modalities in the context of
explaining the actions of a self-driving car on a highway, as driving is an
easily understandable real-world task and self-driving cars are an area of keen
interest within the AI community. Our findings highlight internal consistency
issues wherein participants perceived language explanations to be significantly
more usable; however, participants were better able to objectively understand
the car's decision-making process through a decision-tree explanation. Our work
also provides further evidence of the importance of integrating user-specific
and situational criteria into the design of XAI systems. Our findings show that
factors such as computer science experience and watching the car succeed or
fail can impact the perception and usefulness of the explanation.
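To make the modality contrast concrete, here is a minimal sketch (not taken from the paper; the feature names, thresholds, and actions are hypothetical) of how the same highway-driving decision might be rendered as a decision-tree explanation and as a text explanation:

```python
# Hypothetical sketch of two of the XAI modalities compared in the study:
# a decision-tree explanation and a text explanation of the same highway
# driving action. All feature names, thresholds, and actions are invented
# for illustration and do not come from the paper.

def tree_explanation(gap_ahead_m: float, rel_speed_mps: float) -> str:
    """Trace a tiny decision tree and return the chosen action."""
    if gap_ahead_m < 20:          # root node: is the lead car close?
        if rel_speed_mps < 0:     # closing in on the lead car
            return "brake"
        return "hold speed"
    return "accelerate"

def text_explanation(gap_ahead_m: float, rel_speed_mps: float) -> str:
    """Render the same decision as a natural-language explanation."""
    action = tree_explanation(gap_ahead_m, rel_speed_mps)
    return (f"The car chose to {action} because the gap ahead was "
            f"{gap_ahead_m:.0f} m and the relative speed was "
            f"{rel_speed_mps:+.1f} m/s.")

print(text_explanation(gap_ahead_m=15.0, rel_speed_mps=-2.5))
# -> The car chose to brake because the gap ahead was 15 m and the
#    relative speed was -2.5 m/s.
```

The paper's central tension maps onto this sketch: the text form reads as more usable, while the tree form exposes the branching structure participants needed in order to answer objective comprehension questions correctly.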
Related papers
- Explaining Explaining [0.882727051273924]
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems cannot explain their decisions because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
Lacking explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that in order to fully understand a decision, not only knowledge about relevant features is needed, but that the awareness of irrelevant information also highly contributes to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
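As a rough illustration of that distinction (the toy model and its features below are invented, not drawn from the paper), a counterfactual perturbs a relevant feature until the decision flips, whereas an alterfactual perturbs an irrelevant feature while the decision stays fixed:

```python
# Hypothetical sketch of the counterfactual vs. alterfactual distinction
# summarized above. The toy model and its features are invented for
# illustration only.

def loan_model(income: float, zip_code: str) -> str:
    """Toy classifier whose decision depends only on income."""
    return "approved" if income >= 50_000 else "denied"

original = {"income": 60_000, "zip_code": "30332"}

# Counterfactual: alter a RELEVANT feature so that the decision flips.
counterfactual = {**original, "income": 40_000}

# Alterfactual: alter an IRRELEVANT feature; the decision must not change,
# which shows the user that zip_code played no role in the outcome.
alterfactual = {**original, "zip_code": "90210"}

print(loan_model(**original))        # approved
print(loan_model(**counterfactual))  # denied   -> income was relevant
print(loan_model(**alterfactual))    # approved -> zip_code was irrelevant
```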
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI).
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-07-12T08:42:19Z)
- Explainable Goal-Driven Agents and Robots -- A Comprehensive Review [13.94373363822037]
The paper reviews approaches on explainable goal-driven intelligent agents and robots.
It focuses on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning.
It suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
arXiv Detail & Related papers (2020-04-21T01:41:20Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)