Questioning the AI: Informing Design Practices for Explainable AI User
Experiences
- URL: http://arxiv.org/abs/2001.02478v3
- Date: Fri, 3 Sep 2021 20:10:35 GMT
- Title: Questioning the AI: Informing Design Practices for Explainable AI User
Experiences
- Authors: Q. Vera Liao, Daniel Gruen, Sarah Miller
- Abstract summary: A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A surge of interest in explainable AI (XAI) has led to a vast collection of
algorithmic work on the topic. While many recognize the necessity to
incorporate explainability features in AI systems, how to address real-world
user needs for understanding AI remains an open question. By interviewing 20 UX
and design practitioners working on various AI products, we seek to identify
gaps between the current XAI algorithmic work and practices to create
explainable AI products. To do so, we develop an algorithm-informed XAI
question bank in which user needs for explainability are represented as
prototypical questions users might ask about the AI, and use it as a study
probe. Our work contributes insights into the design space of XAI, informs
efforts to support design practices in this space, and identifies opportunities
for future XAI work. We also provide an extended XAI question bank and discuss
how it can be used for creating user-centered XAI.
Related papers
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction [22.00514030715286]
We conducted a study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app.
We found that people express a need for practically useful information that can improve their collaboration with the AI system.
We also assessed end-users' perceptions of existing XAI approaches, finding that they prefer part-based explanations.
arXiv Detail & Related papers (2022-10-02T20:17:11Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Designer-User Communication for XAI: An epistemological approach to discuss XAI design [4.169915659794568]
We take the Signifying Message as our conceptual tool to structure and discuss XAI scenarios.
We experiment with its use for the discussion of a healthcare AI-System.
arXiv Detail & Related papers (2021-05-17T13:18:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.