Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
- URL: http://arxiv.org/abs/2304.08861v1
- Date: Tue, 18 Apr 2023 09:52:09 GMT
- Title: Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
- Authors: Lukas-Valentin Herm
- Abstract summary: This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While the emerging research field of explainable artificial intelligence
(XAI) claims to address the lack of explainability in high-performance machine
learning models, in practice, XAI targets developers rather than actual
end-users. Unsurprisingly, end-users are often unwilling to use XAI-based
decision support systems. Moreover, there is little interdisciplinary research
on end-users' behavior when using XAI explanations, leaving it unknown how
explanations impact cognitive load and, in turn, end-user performance.
Therefore, we conducted an empirical study with 271 prospective
physicians, measuring their cognitive load, task performance, and task time for
distinct implementation-independent XAI explanation types using a COVID-19 use
case. We found that these explanation types strongly influence end-users'
cognitive load, task performance, and task time. Further, we contextualized a
mental efficiency metric, which ranked local XAI explanation types best, to
derive recommendations for future applications and implications for
sociotechnical XAI research.
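
The abstract names a mental efficiency metric without defining it. A common formulation in cognitive load research is Paas and van Merriënboer's efficiency measure, which combines standardized performance with standardized mental effort; the sketch below assumes that textbook formulation and hypothetical data, not the paper's exact contextualization.

```python
import statistics

def z_scores(values):
    """Standardize raw values to z-scores (mean 0, sd 1)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def mental_efficiency(performance, effort):
    """Mental efficiency per Paas & van Merrienboer (1993):
    E = (z_performance - z_effort) / sqrt(2).
    Higher is better: more task performance per unit of cognitive load.
    Assumption: the paper's contextualized metric may differ in detail.
    """
    zp, ze = z_scores(performance), z_scores(effort)
    return [(p - e) / 2 ** 0.5 for p, e in zip(zp, ze)]

# Hypothetical data: task scores and self-reported effort ratings
# (e.g., on a 9-point Paas scale) for five participants under one
# explanation type.
scores = [0.9, 0.7, 0.8, 0.6, 0.85]
effort = [3.0, 5.0, 4.0, 6.0, 3.5]
print(mental_efficiency(scores, effort))
```

Explanation types that yield high performance at low reported effort score highest, which is how such a study can rank explanation types by efficiency rather than by accuracy alone.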
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
Evidence indicates that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- A Meta-Analysis on the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making [0.0]
We present an initial synthesis of existing research on XAI studies using a statistical meta-analysis (a pooling sketch follows this list).
We observe a statistically positive impact of XAI on users' performance overall, yet find no effect of explanations on users' performance compared to sole AI predictions.
arXiv Detail & Related papers (2022-05-10T19:08:10Z)
- Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review [7.140215556873923]
The impact of artificial intelligence during the COVID-19 pandemic was greatly limited by a lack of model transparency.
We find that successful use of XAI can improve model performance, instill trust in the end-user, and provide the value needed to affect user decision-making.
arXiv Detail & Related papers (2021-12-23T16:55:27Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations (a basic counterfactual-search sketch follows this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
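
Two entries above rest on quantitative techniques that their summaries only name. For the meta-analysis entry, the core operation is pooling per-study effect sizes; below is a minimal sketch of textbook fixed-effect inverse-variance pooling, with made-up numbers rather than the statistics reported in that paper.

```python
import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's
    effect size by 1/variance, then return the pooled effect and its
    standard error. A textbook method; the meta-analysis above may use
    a different (e.g., random-effects) model."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-study effect sizes (e.g., standardized mean
# differences for XAI vs. baseline on task performance) and variances.
effects = [0.30, 0.12, 0.45, -0.05]
variances = [0.02, 0.04, 0.03, 0.05]
pooled, se = pool_fixed_effect(effects, variances)
print(f"pooled effect = {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}")
```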
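For the CEILS entry, the base technique is counterfactual search: find a small change to the input features that flips the model's decision. The sketch below shows only that basic idea, using a hypothetical classifier and a brute-force grid over allowed feature moves; CEILS itself goes further by intervening on a latent causal representation so that the suggested changes remain feasible.

```python
import itertools

def simple_counterfactual(predict, x, deltas, desired=1):
    """Brute-force counterfactual search: try every combination of the
    allowed per-feature perturbations and return the candidate closest
    to x (L1 distance on the deltas) that yields the desired label.
    Illustrative only; not the CEILS algorithm."""
    best, best_dist = None, float("inf")
    for combo in itertools.product(*deltas):
        candidate = [xi + d for xi, d in zip(x, combo)]
        dist = sum(abs(d) for d in combo)
        if predict(candidate) == desired and dist < best_dist:
            best, best_dist = candidate, dist
    return best

def predict(x):
    """Toy decision rule standing in for a trained classifier:
    approve (1) if income - debt > 2."""
    return int(x[0] - x[1] > 2)

x = [3.0, 2.0]  # currently rejected
deltas = [[0, 0.5, 1.0, 1.5],   # allowed income increases
          [0, -0.5, -1.0]]      # allowed debt reductions
print(simple_counterfactual(predict, x, deltas))  # -> [3.5, 1.0]
```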