Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
- URL: http://arxiv.org/abs/2302.03180v2
- Date: Wed, 19 Jun 2024 06:58:30 GMT
- Title: Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
- Authors: Maryam Hashemi, Ali Darejeh, Francisco Cruz
- Abstract summary: This study conducts a thorough review of extant research in Explainable Machine Learning (XML).
Our main objective is to offer a classification of XAI methods within the realm of XML.
We propose a mapping function that takes into account users and their desired properties and suggests an XAI method to them.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing complexity of AI systems has led to the growth of the field of Explainable Artificial Intelligence (XAI), which aims to provide explanations and justifications for the outputs of AI algorithms. While there is considerable demand for XAI, there remains a scarcity of studies aimed at comprehensively understanding the practical distinctions among different methods, effectively aligning each method with users' individual needs, and, ideally, offering a mapping function that can map each user, with their specific needs, to a method of explainability. This study endeavors to bridge this gap by conducting a thorough review of extant research in XAI, with a specific focus on Explainable Machine Learning (XML) and a keen eye on user needs. Our main objective is to offer a classification of XAI methods within the realm of XML, categorizing current works into three distinct domains: philosophy, theory, and practice, and providing a critical review of each category. Moreover, our study seeks to facilitate the connection between XAI users and the methods most suitable for them, and to tailor explanations to their specific needs, by proposing a mapping function that takes into account users and their desired properties and suggests an XAI method to them. This entails an examination of prevalent XAI approaches and an evaluation of their properties. The primary outcome of this study is the formulation of a clear and concise strategy for selecting the optimal XAI method to achieve a given goal, while delivering personalized explanations tailored to individual users.
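The mapping-function idea in the abstract can be illustrated with a minimal sketch. Note this is a hypothetical interpretation, not the authors' actual function: the method names, explanation properties, and numeric profiles below are invented for illustration only.

```python
# Hypothetical sketch of a user-to-method mapping function: score each
# candidate XAI method against the explanation properties a user says
# they want, and suggest the best-scoring method. All property values
# here are illustrative placeholders, not taken from the paper.
METHOD_PROFILES = {
    "LIME":            {"fidelity": 0.6, "simplicity": 0.9, "global_scope": 0.2},
    "SHAP":            {"fidelity": 0.8, "simplicity": 0.6, "global_scope": 0.5},
    "counterfactuals": {"fidelity": 0.7, "simplicity": 0.8, "global_scope": 0.1},
}

def suggest_xai_method(desired: dict) -> str:
    """Return the XAI method whose property profile best matches the
    user's desired properties (a weighted sum over shared properties)."""
    def score(profile: dict) -> float:
        return sum(weight * profile.get(prop, 0.0)
                   for prop, weight in desired.items())
    return max(METHOD_PROFILES, key=lambda name: score(METHOD_PROFILES[name]))

# Example: a lay end-user who mainly values simple explanations.
print(suggest_xai_method({"simplicity": 1.0}))  # LIME
```

In practice such a function would also weigh user attributes (expertise, role, goal), but a property-weighted lookup table conveys the core selection logic.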
Related papers
- OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning [43.87507227859493]
This paper presents OpenHEXAI, an open-source framework for human-centered evaluation of XAI methods.
OpenHEXAI is the first large-scale infrastructural effort to facilitate human-centered benchmarks of XAI methods.
arXiv Detail & Related papers (2024-02-20T22:17:59Z) - XAI for All: Can Large Language Models Simplify Explainable AI? [0.0699049312989311]
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z) - How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Strategies to exploit XAI to improve classification systems [0.0]
XAI aims to provide insights into the decision-making process of AI models, allowing users to understand their results beyond their decisions.
Most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited to improve an AI system.
arXiv Detail & Related papers (2023-06-09T10:38:26Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals [19.268536451101912]
Non-technical end-users are silent and invisible users of the state-of-the-art explainable artificial intelligence (XAI) technologies.
Their demands and requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques.
This makes XAI techniques ineffective or even harmful in high-stakes applications, such as healthcare, criminal justice, finance, and autonomous driving systems.
arXiv Detail & Related papers (2023-02-10T19:35:57Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations [19.6851366307368]
We identify and analyze 97 core papers with human-based XAI evaluations over the past five years.
Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems.
We propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners.
arXiv Detail & Related papers (2022-10-20T20:53:00Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.