Painting the black box white: experimental findings from applying XAI to an ECG reading setting
- URL: http://arxiv.org/abs/2210.15236v1
- Date: Thu, 27 Oct 2022 07:47:50 GMT
- Title: Painting the black box white: experimental findings from applying XAI to an ECG reading setting
- Authors: Federico Cabitza, Matteo Cameli, Andrea Campagner, Chiara Natali, and Luca Ronzio
- Abstract summary: The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI).
We focus on the cognitive dimension of users' perception of explanations and XAI systems.
- Score: 0.13124513975412253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI), i.e. approaches that aim to make black-box AI systems explainable to human decision makers, so as to render them more acceptable and more usable as tools and decision supports. However, we make the point that, rather than always making black boxes transparent, these approaches risk \emph{painting the black boxes white}, thus failing to provide a level of transparency that would actually increase the system's usability and comprehensibility, or even generating new errors, a phenomenon we term the \emph{white-box paradox}. To address these usability-related issues, in this work we focus on the cognitive dimension of users' perception of explanations and XAI systems. To this end, we designed and conducted a questionnaire-based experiment involving 44 cardiology residents and specialists in an AI-supported ECG reading task. We investigated several research questions concerning the relationship between users' characteristics (e.g. expertise) and their perception of AI and XAI systems, including their trust in the AI, the perceived quality of explanations, and their tendency to defer the decision process to automation (i.e. technology dominance), as well as the mutual relationships among these dimensions. Our findings contribute to the evaluation of AI-based decision support systems from a human-AI interaction perspective and lay the groundwork for further investigation of XAI and its effects on decision making and user experience.
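As a minimal illustration of the kind of analysis such a questionnaire-based design supports (not the authors' actual analysis code), the Python sketch below correlates a hypothetical expertise proxy with Likert-scale ratings of trust and perceived explanation quality. All column names and values are invented placeholders, and Spearman correlation is assumed here only because it is a common choice for ordinal ratings.

```python
# Hedged sketch, not the study's code: relating clinician characteristics to
# their ratings of an AI/XAI system. All names and numbers are placeholders.
import pandas as pd
from scipy.stats import spearmanr

# One row per participant (hypothetical questionnaire responses).
responses = pd.DataFrame({
    "years_of_experience": [2, 5, 1, 12, 8, 3],   # proxy for expertise
    "trust_in_ai":         [4, 3, 5, 2, 3, 4],    # 1-5 Likert item
    "explanation_quality": [4, 4, 5, 3, 3, 4],    # 1-5 Likert item
})

# Spearman rank correlation handles ordinal Likert data without assuming linearity.
for outcome in ("trust_in_ai", "explanation_quality"):
    rho, p_value = spearmanr(responses["years_of_experience"], responses[outcome])
    print(f"expertise vs {outcome}: rho = {rho:.2f}, p = {p_value:.3f}")
```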
Related papers
- Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z) - How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Towards Reconciling Usability and Usefulness of Explainable AI
Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Explainable Artificial Intelligence (XAI) for Increasing User Trust in
Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-06-07T16:38:43Z) - Explainable Artificial Intelligence (XAI): An Engineering Perspective [0.0]
XAI is a set of techniques and methods to convert the so-called black-box AI algorithms to white-box algorithms.
We discuss the stakeholders in XAI and describe the mathematical contours of XAI from an engineering perspective.
This work is an exploratory study to identify new avenues of research in the field of XAI.
arXiv Detail & Related papers (2021-01-10T19:49:12Z)