A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction
- URL: http://arxiv.org/abs/2109.12912v1
- Date: Mon, 27 Sep 2021 09:56:23 GMT
- Title: A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction
- Authors: Marco Matarese, Francesco Rea, Alessandra Sciutti
- Abstract summary: We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
- Score: 70.11080854486953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art Artificial Intelligence (AI) techniques have reached an
impressive complexity. Consequently, researchers are discovering more and more
ways to apply them in real-world settings. However, the complexity of such
systems requires methods that make them transparent to the human user. The AI
community is addressing this problem through the field of Explainable AI (XAI),
which attempts to make AI algorithms less opaque. In recent years, however, it
has become clearer that XAI is much more than a computer science problem: since
it is about communication, XAI is also a Human-Agent Interaction problem.
Moreover, AI has moved out of the laboratory and into everyday life. This
implies the need for XAI solutions tailored to non-expert users. Hence, we
propose a user-centred framework for XAI that focuses on its social-interactive
aspect, taking inspiration from theories and findings in the cognitive and
social sciences. The framework aims to provide a structure for interactive XAI
solutions intended for non-expert users.
Related papers
- Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z)
- ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing [45.187790784934734]
This paper focuses on Conversational XAI for AI-assisted scientific writing tasks.
We identify four design rationales: "multifaceted", "controllability", "mixed-initiative", and "context-aware drill-down".
We incorporate them into an interactive prototype, ConvXAI, which facilitates heterogeneous AI explanations for scientific writing through dialogue.
arXiv Detail & Related papers (2023-05-16T19:48:49Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Designer-User Communication for XAI: An epistemological approach to discuss XAI design [4.169915659794568]
We take the Signifying Message as our conceptual tool to structure and discuss XAI scenarios.
We experiment with its use for the discussion of a healthcare AI-System.
arXiv Detail & Related papers (2021-05-17T13:18:57Z)
- Explainable Artificial Intelligence Approaches: A Survey [0.22940141855172028]
Lack of explainability of a decision from an Artificial Intelligence based "black box" system/model is a key stumbling block for adopting AI in high stakes applications.
We demonstrate popular Explainable Artificial Intelligence (XAI) methods with a mutual case study/task.
We analyze for competitive advantages from multiple perspectives.
We recommend paths towards responsible or human-centered AI using XAI as a medium.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.