A System's Approach Taxonomy for User-Centred XAI: A Survey
- URL: http://arxiv.org/abs/2303.02810v1
- Date: Mon, 6 Mar 2023 00:50:23 GMT
- Title: A System's Approach Taxonomy for User-Centred XAI: A Survey
- Authors: Ehsan Emamirad, Pouya Ghiasnezhad Omran, Armin Haller, Shirley Gregor
- Abstract summary: We propose a unified, inclusive and user-centred taxonomy for XAI based on the principles of General System's Theory.
This provides a basis for evaluating the appropriateness of XAI approaches for all user types, including both developers and end users.
- Score: 0.6882042556551609
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in AI have coincided with ever-increasing efforts in the
research community to investigate, classify and evaluate various methods aimed
at making AI models explainable. However, most existing attempts present a
method-centric view of eXplainable AI (XAI) which is typically meaningful only
for domain experts. There is an apparent lack of a robust qualitative and
quantitative performance framework that evaluates the suitability of
explanations for different types of users. We survey relevant efforts, and
then, propose a unified, inclusive and user-centred taxonomy for XAI based on
the principles of General System's Theory, which serves as a basis for
evaluating the appropriateness of XAI approaches for all user types, including
both developers and end users.
Related papers
- Dimensions of Generative AI Evaluation Design [51.541816010127256]
We propose a set of general dimensions that capture critical choices involved in GenAI evaluation design.
These dimensions include the evaluation setting, the task type, the input source, the interaction style, the duration, the metric type, and the scoring method.
arXiv Detail & Related papers (2024-11-19T18:25:30Z) - How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning [43.87507227859493]
This paper presents OpenHEXAI, an open-source framework for human-centered evaluation of XAI methods.
OpenHEXAI is the first large-scale infrastructural effort to facilitate human-centered benchmarks of XAI methods.
arXiv Detail & Related papers (2024-02-20T22:17:59Z) - How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI [1.7222662622390634]
We propose to adapt the User-Centric Evaluation Framework used in recommender systems.
We integrate explanation aspects, summarise explanation properties, indicate relations between them, and categorise metrics that measure these properties.
arXiv Detail & Related papers (2023-07-31T09:20:16Z) - An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z) - Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal [0.0]
This study conducts a thorough review of extant research in Explainable Machine Learning (XML).
Our main objective is to offer a classification of XAI methods within the realm of XML.
We propose a mapping function that takes into account users and their desired properties and suggests an XAI method to them.
arXiv Detail & Related papers (2023-02-07T01:06:38Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations [18.971689499890363]
We identify and analyze 97 core papers with human-based XAI evaluations over the past five years.
Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems.
We propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners.
arXiv Detail & Related papers (2022-10-20T20:53:00Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.