Designer-User Communication for XAI: An epistemological approach to
discuss XAI design
- URL: http://arxiv.org/abs/2105.07804v1
- Date: Mon, 17 May 2021 13:18:57 GMT
- Authors: Juliana Jansen Ferreira and Mateus Monteiro
- Abstract summary: We take the Signifying Message as our conceptual tool to structure and discuss XAI scenarios.
We experiment with its use for the discussion of a healthcare AI-System.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence is becoming part of nearly every technology we use today.
When AI informs people's decisions, explaining the AI's outcomes, results, and
behavior becomes a necessary capability. However, discussing XAI features with
various stakeholders is not a trivial task. Most available XAI frameworks and
methods target data scientists and ML developers as users. Our research
concerns XAI for end-users of AI systems. We argue that XAI must be discussed
early in the AI-system design process and with all stakeholders. In this work,
we investigate how to operationalize the discussion of XAI scenarios and
opportunities among designers and developers of AI and its end-users. We take
the Signifying Message as our conceptual tool to structure and discuss XAI
scenarios, and we experiment with its use in the discussion of a healthcare
AI system.
Related papers
- XAI for All: Can Large Language Models Simplify Explainable AI?
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System
Not much of XAI is comprehensible to non-AI experts, who nonetheless are the primary audience and major stakeholders of deployed AI systems in practice.
We advocate that it is critical to develop XAI methods for non-technical audiences.
We then present a real-life case study, where AI experts provided non-technical explanations of AI decisions to non-technical stakeholders.
arXiv Detail & Related papers (2021-12-02T07:02:27Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.