How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations
- URL: http://arxiv.org/abs/2312.04379v1
- Date: Thu, 7 Dec 2023 15:49:39 GMT
- Title: How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations
- Authors: Marco Matarese, Francesco Rea, Alessandra Sciutti
- Abstract summary: The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is an increasing consensus about the effectiveness of user-centred
approaches in the explainable artificial intelligence (XAI) field. Indeed, the
number and complexity of personalised and user-centred approaches to XAI have
grown rapidly in recent years. Often, these works have a two-fold objective:
(1) proposing novel XAI techniques able to take users into account and (2) assessing
the *goodness* of such techniques with respect to others. From these works, it
emerged that user-centred approaches to XAI positively affect the interaction
between users and systems. However, so far, the goodness of XAI systems has been
measured through indirect measures, such as performance. In this paper, we
propose an assessment task to objectively and quantitatively measure the
goodness of XAI systems in terms of their *information power*, which we intend
as the amount of information the system provides to the users during the
interaction. Moreover, we plan to use our task to objectively compare two XAI
techniques in a human-robot decision-making task, to understand more deeply
whether user-centred approaches are more informative than classical ones.
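The abstract does not give a formula for *information power*; one plausible, hedged operationalization is information-theoretic: the reduction in users' uncertainty about the correct decision after receiving explanations. A minimal sketch, where the entropy-based definition and all data are assumptions for illustration, not the paper's method:

```python
# Hypothetical sketch: operationalize "information power" as the reduction
# in users' uncertainty (Shannon entropy) about the correct decision after
# seeing an explanation. The paper gives no formula here; this entropy-based
# reading is an assumption for illustration only.
import math
from collections import Counter

def entropy(decisions):
    """Shannon entropy (bits) of a list of categorical decisions."""
    counts = Counter(decisions)
    total = len(decisions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_power(decisions_before, decisions_after):
    """Entropy reduction in users' decisions after receiving explanations."""
    return entropy(decisions_before) - entropy(decisions_after)

# Toy example: users' choices scatter before explanations, converge after.
before = ["A", "B", "C", "A", "B", "C"]   # near-uniform: high entropy
after  = ["A", "A", "A", "A", "B", "A"]   # converged:    low entropy
print(f"information power = {information_power(before, after):.2f} bits")
```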
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Measuring User Understanding in Dialogue-based XAI Systems
The state of the art in XAI is still characterized by one-shot, non-personalized, and one-way explanations.
In this paper, we measure users' understanding in three phases by asking them to simulate the predictions of the model they are learning about.
We analyze the data to reveal patterns in how the interaction differs between groups with high vs. low understanding gain.
arXiv Detail & Related papers (2024-08-13T15:17:03Z)
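As a rough illustration of this simulation-based measure (not the paper's actual protocol), understanding can be scored as the user's agreement with the model's predictions, and understanding gain as its change across phases:

```python
# Hypothetical sketch of a simulation-based understanding measure: users
# predict the model's output on held-out instances; understanding is their
# agreement with the model, and understanding gain is the change across
# phases. Variable names and data are illustrative, not from the paper.
def simulation_accuracy(user_predictions, model_predictions):
    """Fraction of instances where the user correctly simulates the model."""
    matches = sum(u == m for u, m in zip(user_predictions, model_predictions))
    return matches / len(model_predictions)

model_out      = [1, 0, 1, 1, 0, 1]
phase1_guesses = [0, 0, 1, 0, 1, 1]   # before the explanatory dialogue
phase3_guesses = [1, 0, 1, 1, 0, 0]   # after the explanatory dialogue

gain = (simulation_accuracy(phase3_guesses, model_out)
        - simulation_accuracy(phase1_guesses, model_out))
print(f"understanding gain: {gain:+.2f}")
```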
- How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- An Experimental Investigation into the Evaluation of Explainability Methods
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
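As a sketch of how such redundancy can be surfaced (the metric names and scores below are invented, not the paper's data), one can correlate metric scores across methods and flag highly correlated pairs:

```python
# Hypothetical sketch: flag potentially redundant evaluation metrics by
# correlating their scores across XAI methods. Metric names, method count,
# and scores are invented; the paper's actual data is not reproduced here.
import numpy as np

metrics = ["faithfulness", "robustness", "complexity"]
# rows = metrics, columns = scores of five hypothetical XAI methods
scores = np.array([
    [0.81, 0.62, 0.90, 0.55, 0.73],
    [0.79, 0.60, 0.88, 0.57, 0.70],   # tracks faithfulness closely
    [0.10, 0.95, 0.30, 0.80, 0.20],
])

corr = np.corrcoef(scores)            # pairwise Pearson correlations
for i in range(len(metrics)):
    for j in range(i + 1, len(metrics)):
        if abs(corr[i, j]) > 0.9:     # threshold is an arbitrary choice
            print(f"{metrics[i]} and {metrics[j]} look redundant "
                  f"(r = {corr[i, j]:.2f})")
```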
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
We evaluate two currently common techniques for evaluating XAI systems.
We show that evaluations with proxy tasks did not predict the results of the evaluations with the actual decision-making tasks.
Our results suggest that by employing misleading evaluation methods, our field may be inadvertently slowing its progress toward developing human+AI teams that can reliably perform better than humans or AIs alone.
arXiv Detail & Related papers (2020-01-22T22:14:28Z)
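To illustrate what "did not predict" means in practice (with invented numbers, not the paper's data), one can rank-correlate proxy-task scores with decision-task scores across XAI system variants:

```python
# Hypothetical sketch: do proxy-task scores predict actual decision-task
# scores across XAI system variants? All names and numbers are invented;
# a low or negative rank correlation would echo the paper's warning.
from scipy.stats import spearmanr

systems         = ["no-XAI", "saliency", "example-based", "counterfactual"]
proxy_scores    = [0.55, 0.80, 0.70, 0.90]   # e.g., simulatability accuracy
decision_scores = [0.72, 0.60, 0.75, 0.58]   # e.g., human+AI task accuracy

rho, pvalue = spearmanr(proxy_scores, decision_scores)
print(f"across {len(systems)} systems: Spearman rho = {rho:.2f} (p = {pvalue:.2f})")
# Here rho is strongly negative: the proxy evaluation would have ranked
# the systems almost opposite to the real decision-making task.
```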