A Meta-Analysis on the Utility of Explainable Artificial Intelligence in
Human-AI Decision-Making
- URL: http://arxiv.org/abs/2205.05126v1
- Date: Tue, 10 May 2022 19:08:10 GMT
- Title: A Meta-Analysis on the Utility of Explainable Artificial Intelligence in
Human-AI Decision-Making
- Authors: Max Schemmer and Patrick Hemmer and Maximilian Nitsche and Niklas Kühl and Michael Vössing
- Abstract summary: We present an initial synthesis of existing research on XAI studies using a statistical meta-analysis.
We observe a statistically significant positive impact of XAI on users' performance.
However, we find no effect of explanations on users' performance compared to sole AI predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Research in Artificial Intelligence (AI)-assisted decision-making is
experiencing tremendous growth with a constantly rising number of studies
evaluating the effect of AI with and without techniques from the field of
explainable AI (XAI) on human decision-making performance. However, as tasks
and experimental setups vary due to different objectives, some studies report
improved user decision-making performance through XAI, while others report only
negligible effects. Therefore, in this article, we present an initial synthesis
of existing research on XAI studies using a statistical meta-analysis to derive
implications across studies. We observe a statistically significant positive
impact of XAI on users' performance. Additionally, initial results might indicate
that human-AI decision-making yields better task performance on text data.
However, we find no effect of explanations on users' performance compared to
sole AI predictions. Our initial synthesis gives rise to future research to
investigate the underlying causes as well as contribute to further development
of algorithms that effectively benefit human decision-makers in the form of
explanations.
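The statistical meta-analysis the abstract refers to works by pooling per-study effect sizes into one overall estimate. As a minimal illustrative sketch (not the authors' actual analysis code), the following assumes hypothetical per-study effect sizes and variances and applies a standard DerSimonian-Laird random-effects model:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. Hedges' g) with a
    DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw  # fixed-effect mean
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)          # 95% interval
    return pooled, se, ci

# hypothetical per-study effects (positive = XAI improved performance)
effects = [0.30, 0.10, 0.45, -0.05, 0.20]
variances = [0.02, 0.03, 0.05, 0.04, 0.02]
pooled, se, ci = dersimonian_laird(effects, variances)
```

A pooled effect whose confidence interval excludes zero is what a finding like "a statistically significant positive impact of XAI" rests on; the random-effects model additionally allows true effects to differ across studies, which matters here given the varied tasks and setups.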
Related papers
- How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study [0.0]
This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
arXiv Detail & Related papers (2023-04-18T09:52:09Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
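To make the counterfactual-explanation idea in the entry above concrete: a counterfactual answers "what minimal change to the input would flip the model's decision?" The following is a minimal sketch of a greedy search against a toy linear classifier, with hypothetical weights and features; it illustrates the general technique, not the CEILS latent-space method itself:

```python
# toy linear classifier: approve if w.x + b > 0
w = [0.8, -0.5, 0.3]   # hypothetical weights: income, debt, tenure
b = -0.2

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def counterfactual(x, step=0.05, max_iter=200):
    """Greedily nudge the most influential feature until the
    decision crosses the boundary; returns the changed input."""
    want_positive = score(x) <= 0          # flip to the other side
    cf = list(x)
    for _ in range(max_iter):
        if (score(cf) > 0) == want_positive:
            return cf
        # feature whose unit change moves the score fastest
        i = max(range(len(w)), key=lambda j: abs(w[j]))
        cf[i] += step if (w[i] > 0) == want_positive else -step
    return None                            # no counterfactual found

x = [0.1, 0.6, 0.2]       # rejected applicant (score below 0)
cf = counterfactual(x)     # smallest greedy change that flips the decision
```

The feasibility concern the abstract raises is visible even in this sketch: the search freely edits whichever feature is most influential, whereas a realistic recourse would have to respect which features the user can actually change.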
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.