Transcending XAI Algorithm Boundaries through End-User-Inspired Design
- URL: http://arxiv.org/abs/2208.08739v1
- Date: Thu, 18 Aug 2022 09:44:51 GMT
- Title: Transcending XAI Algorithm Boundaries through End-User-Inspired Design
- Authors: Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Xiaoxiao Li,
Ghassan Hamarneh
- Abstract summary: Lacking explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
- Score: 27.864338632191608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The boundaries of existing explainable artificial intelligence (XAI)
algorithms are confined to problems grounded in technical users' demand for
explainability. This research paradigm disproportionately ignores the larger
group of non-technical end users of XAI, who do not have technical knowledge
but need explanations in their AI-assisted critical decisions. Lacking
explainability-focused functional support for end users may hinder the safe and
responsible use of AI in high-stakes domains, such as healthcare, criminal
justice, finance, and autonomous driving systems. In this work, we explore how
designing XAI tailored to end users' critical tasks inspires the framing of new
technical problems. To elicit users' interpretations and requirements for XAI
algorithms, we first identify eight explanation forms as the communication tool
between AI researchers and end users, such as explaining using features,
examples, or rules. Using the explanation forms, we then conduct a user study
with 32 layperson participants in the context of achieving different
explanation goals (such as verifying AI decisions and improving users'
predicted outcomes) in four critical tasks. Based on the user study findings,
we identify and formulate novel XAI technical problems, and propose an
evaluation metric, verifiability, based on users' explanation goal of verifying
AI decisions. Our work shows that grounding the technical problem in end users'
use of XAI can inspire new research questions. Such end-user-inspired research
questions have the potential to promote social good by democratizing AI and
ensuring the responsible use of AI in critical domains.
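To make the proposed metric concrete, below is a minimal sketch of how a verifiability-style score might be computed, assuming it measures how reliably users can tell correct AI decisions from incorrect ones when given an explanation. The names (VerificationTrial, verifiability) and the exact formulation are illustrative assumptions, not the paper's implementation.

```python
# A hedged sketch of a "verifiability"-style metric: the fraction of trials
# in which a user's accept/reject verdict on an AI decision matches whether
# the decision was actually correct. This formulation is an assumption for
# illustration; the paper defines the metric in its own terms.
from dataclasses import dataclass


@dataclass
class VerificationTrial:
    """One user judgment of an AI decision, made with an explanation shown."""
    ai_is_correct: bool    # ground truth: was the AI's decision right?
    user_accepts_ai: bool  # did the user accept the AI's decision?


def verifiability(trials: list[VerificationTrial]) -> float:
    """Fraction of trials where the user's verdict matches reality:
    accepting correct AI decisions and rejecting incorrect ones."""
    if not trials:
        raise ValueError("need at least one trial")
    hits = sum(t.ai_is_correct == t.user_accepts_ai for t in trials)
    return hits / len(trials)


# Example: the user correctly verifies 3 of 4 AI decisions.
trials = [
    VerificationTrial(ai_is_correct=True, user_accepts_ai=True),
    VerificationTrial(ai_is_correct=False, user_accepts_ai=False),
    VerificationTrial(ai_is_correct=False, user_accepts_ai=True),  # missed error
    VerificationTrial(ai_is_correct=True, user_accepts_ai=True),
]
print(verifiability(trials))  # 0.75
```

Under this reading, a higher score means the explanation form helps users catch the AI's mistakes as well as confirm its correct decisions, which matches the verification goal the abstract describes.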
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals [19.268536451101912]
Non-technical end-users are the silent and invisible users of state-of-the-art explainable artificial intelligence (XAI) technologies.
Their demands and requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques.
This makes XAI techniques ineffective or even harmful in high-stakes applications, such as healthcare, criminal justice, finance, and autonomous driving systems.
arXiv Detail & Related papers (2023-02-10T19:35:57Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that better support user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)